From Territorial to Functional Sovereignty: The Case of Amazon


Essay by Frank Pasquale: “…Who needs city housing regulators when AirBnB can use data-driven methods to effectively regulate room-letting, then house-letting, and eventually urban planning generally? Why not let Amazon have its own jurisdiction or charter city, or establish special judicial procedures for Foxconn? Some vanguardists of functional sovereignty believe online rating systems could replace state occupational licensure—so rather than having government boards credential workers, a platform like LinkedIn could collect star ratings on them.

In this and later posts, I want to explain how this shift from territorial to functional sovereignty is creating a new digital political economy. Amazon’s rise is instructive. As Lina Khan explains, “the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it.” The “everything store” may seem like just another service in the economy—a virtual mall. But when a firm combines tens of millions of customers with a “marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house…a hardware manufacturer, and a leading host of cloud server space,” as Khan observes, it’s not just another shopping option.

Digital political economy helps us understand how platforms accumulate power. With online platforms, it’s not a simple narrative of “best service wins.” Network effects have been on the cyberlaw (and digital economics) agenda for over twenty years. Amazon’s dominance demonstrates how network effects can be self-reinforcing. The more merchants there are selling on (or to) Amazon, the better shoppers can be assured that they are searching all possible vendors. The more shoppers there are, the more vendors consider Amazon a “must-have” venue. As crowds build on either side of the platform, the middleman becomes ever more indispensable. Oh, sure, a new platform can enter the market—but until it gets access to the 480 million items Amazon sells (often at deep discounts), why should the median consumer defect to it? If I want garbage bags, do I really want to go over to Target.com to re-enter all my credit card details, create a new log-in, read the small print about shipping, and hope that this retailer can negotiate a better deal with Glad? Or do I, à la Sunstein, want a predictive shopping purveyor that intimately knows my past purchase habits, with satisfaction just a click away?
As artificial intelligence improves, the tracking of shopping into the Amazon groove will tend to become ever more rational for both buyers and sellers. Like a path through a forest trod ever clearer of debris, it becomes the natural default. To examine just one of many centripetal forces sucking money, data, and commerce into online behemoths, consider, game-theoretically, how the possibility of online conflict redounds to Amazon’s favor. If you have a problem with a merchant online, do you want to pursue it as a one-off buyer? Or as someone whose reputation has been established over dozens or hundreds of transactions—and someone who can credibly threaten to deny Amazon hundreds or thousands of dollars of revenue each year? The same goes for merchants: The more tribute they can pay to Amazon, the more likely they are to achieve visibility in search results and attention (and perhaps even favor) when disputes come up. What Bruce Schneier said about security is increasingly true of commerce online: You want to be in the good graces of one of the neo-feudal giants who bring order to a lawless realm. Yet few pause to think about exactly how the digital lords might use their data advantages against those they ostensibly protect.
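The self-reinforcing feedback the essay describes — more sellers attract more buyers, who in turn attract more sellers — can be made concrete with a toy simulation. Everything here (growth rates, starting sizes) is invented purely for illustration; it is not a model of Amazon’s actual figures.

```python
# Toy model of two-sided network effects: each period, new buyers join
# in proportion to how many sellers are present, and vice versa.
# All parameters are invented for illustration.

def simulate(periods, pull=0.05, buyers=1000.0, sellers=100.0):
    history = []
    for _ in range(periods):
        new_buyers = pull * sellers        # more sellers -> more buyers join
        new_sellers = pull * buyers * 0.1  # more buyers -> more sellers join
        buyers += new_buyers
        sellers += new_sellers
        history.append((round(buyers), round(sellers)))
    return history

growth = simulate(10)
# Each side's gain enlarges the other's next gain, so growth compounds --
# the "centripetal force" pulling commerce toward the incumbent platform.
```

A rival platform entering with a smaller starting base faces the same dynamic working against it, which is why scale itself becomes the moat.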

Forward-thinking legal scholars are helping us grasp these dynamics. For example, Rory van Loo has described the status of the “corporation as courthouse”—that is, when platforms like Amazon run dispute resolution schemes to settle conflicts between buyers and sellers. Van Loo describes both the efficiency gains that an Amazon settlement process might have over small claims court, and the potential pitfalls for consumers (such as opaque standards for deciding cases). I believe that, on top of such economic considerations, we may want to consider the political economic origins of e-commerce feudalism. For example, as consumer rights shrivel, it’s rational for buyers to turn to Amazon (rather than overwhelmed small claims courts) to press their case. The evisceration of class actions, the rise of arbitration, boilerplate contracts—all these make the judicial system an increasingly vestigial organ in consumer disputes. Individuals rationally turn to online giants for the powers to impose order that libertarian legal doctrine stripped from the state. And in so doing, they reinforce the very dynamics that led to the state’s etiolation in the first place….(More)”.

Accountability of AI Under the Law: The Role of Explanation


Paper by Finale Doshi-Velez and Mason Kortz: “The ubiquity of systems using artificial intelligence or “AI” has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before—applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014]. How can we take advantage of what AI systems have to offer, while also holding them accountable?

In this work, we focus on one tool: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find great variation in when explanation is demanded, but also important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be addressed if we desire AI systems that can provide the kinds of explanations currently required of humans under the law. Contrary to the popular image of AI systems as indecipherable black boxes, we find that this level of explanation should generally be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard….(More)”
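The paper’s working notion of explanation — whether and how certain input factors affected the final decision — can be probed even when the model itself is opaque, by varying one factor and observing whether the decision changes. A minimal sketch of that idea, with the “model” and its factors invented for illustration (this is not an implementation from the paper):

```python
# Minimal input-influence probe: hold all factors fixed, change one,
# and see whether the decision flips. The "black box" below is an
# invented stand-in for a trained model.

def black_box(applicant):
    # Opaque scoring rule standing in for a learned decision system.
    score = 2 * applicant["income"] - 3 * applicant["debt"]
    return "approve" if score > 0 else "deny"

def factor_influence(model, example, factor, alternative):
    """Report whether changing `factor` to `alternative` flips the decision."""
    baseline = model(example)
    varied = model(dict(example, **{factor: alternative}))
    return baseline, varied, baseline != varied

applicant = {"income": 40, "debt": 30}
base, counter, flipped = factor_influence(black_box, applicant, "debt", 10)
# With debt=30 the score is 80-90=-10 ("deny"); with debt=10 it is
# 80-30=50 ("approve"), so the debt factor demonstrably affected the outcome.
```

This kind of counterfactual probe answers the "did this factor matter?" question without requiring access to the model's internals, which is one reason the authors argue such explanations are generally technically feasible.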

Victims of Sexual Harassment Have a New Resource: AI


MIT Technology Review (The Download): “If you have ever dealt with sexual harassment in the workplace, there is now a private online place for you to go for help. Botler AI, a startup based in Montreal, on Wednesday launched a system that provides free information and guidance to those who have been sexually harassed and are unsure of their legal rights.

Using deep learning, the AI system was trained on more than 300,000 U.S. and Canadian criminal court documents, including over 57,000 documents and complaints related to sexual harassment. Using this information, the software predicts whether the situation explained by the user qualifies as sexual harassment, and notes which laws may have been violated under the criminal code. It then generates an incident report that the user can hand over to relevant authorities….

The tool starts by asking simple questions that can guide the software, like what state you live in and when the incident occurred. Then, you explain your situation in plain language. The software then creates a report based on that account and what it has learned from the court cases on which it was trained.
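The excerpt doesn’t describe Botler’s architecture beyond “deep learning,” but the core step — labeling a plain-language account against categories learned from documents — can be sketched with a tiny bag-of-words Naive Bayes classifier. The training snippets and labels below are invented; a real system would train on something like the hundreds of thousands of court documents mentioned above.

```python
# Illustrative bag-of-words Naive Bayes text classifier (add-one
# smoothing). Not Botler's actual system; training data is invented.
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs -> per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    vocab = len(set(w for c in counts.values() for w in c))
    best, best_score = None, -math.inf
    for label, words in counts.items():
        total = sum(words.values())
        # Log-probability of the text under this label, with smoothing.
        score = sum(math.log((words[w] + 1) / (total + vocab))
                    for w in text.lower().split())
        if score > best_score:
            best, best_score = label, score
    return best

training = [
    ("supervisor made repeated unwanted advances", "harassment"),
    ("manager demanded dates in exchange for promotion", "harassment"),
    ("package arrived late and damaged", "other"),
    ("invoice was billed twice in error", "other"),
]
model = train(training)
prediction = classify(model, "unwanted advances from my manager")
```

The production system would also need the report-generation step and, crucially, far more careful handling of legal categories than any toy classifier can offer.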

The company’s ultimate goal is to provide free legal tools to help with a multitude of issues, not just sexual harassment. In this, Botler isn’t alone—a similar company called DoNotPay started as an automated way to fight parking tickets but has since expanded massively (see “This Chatbot Will Help You Sue Anyone”)….(More)”.

Blockchain: Unpacking the disruptive potential of blockchain technology for human development.


IDRC white paper: “In the scramble to harness new technologies to propel innovation, artificial intelligence, robotics, machine learning, and blockchain technologies are being explored and deployed in a wide variety of contexts around the world.

Although blockchain is one of the most hyped of these new technologies, it is also perhaps the least understood. Blockchain is the distributed ledger — a database that is shared across multiple sites or institutions to furnish a secure and transparent record of events occurring during the provision of a service or contract — that supports cryptocurrencies (digital assets designed to work as mediums of exchange).
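The ledger structure described above can be sketched in a few lines: each block commits to its predecessor’s hash, so altering any earlier record invalidates every later one. This shows only the data structure; a real blockchain adds distribution across many nodes, a consensus mechanism, and digital signatures.

```python
# Minimal hash-chained ledger, illustrating why a blockchain record is
# tamper-evident. Records are invented examples (e.g. a land registry).
import hashlib
import json

def make_block(record, prev_hash):
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev": block["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False                      # block contents were altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # chain link is broken
    return True

chain = [make_block("parcel 12 registered to A", "0" * 64)]
chain.append(make_block("parcel 12 transferred to B", chain[-1]["hash"]))
valid_before = verify(chain)
chain[0]["record"] = "parcel 12 registered to X"  # tamper with history
valid_after = verify(chain)
```

Because every block's hash depends on the previous block's hash, rewriting history requires recomputing the entire chain — and on a distributed network, convincing every other node to accept the rewrite.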

Blockchain is now underpinning applications such as land registries and identity services, but as its popularity grows, its relevance in addressing socio-economic gaps and supporting development targets like the globally recognized UN Sustainable Development Goals is critical to unpack. Moreover, for countries in the global South that want to be more than just end users or consumers, the complex infrastructure requirements and operating costs of blockchain could prove challenging. For the purposes of real development, we need to understand not only how blockchain works, but also who is able to harness it to foster social inclusion and promote democratic governance.

This white paper explores the potential of blockchain technology to support human development. It provides a non-technical overview, illustrates a range of applications, and offers a series of conclusions and recommendations for additional research and potential development programming….(More)”.

Stewardship in the “Age of Algorithms”


Clifford Lynch at First Monday: “This paper explores pragmatic approaches that might be employed to document the behavior of large, complex socio-technical systems (often today shorthanded as “algorithms”) that centrally involve some mixture of personalization, opaque rules, and machine learning components. Thinking rooted in traditional archival methodology — focusing on the preservation of physical and digital objects, and perhaps the accompanying preservation of their environments to permit subsequent interpretation or performance of the objects — has been a total failure for many reasons, and we must address this problem.

The approaches presented here are clearly imperfect, unproven, labor-intensive, and sensitive to the often hidden factors that the target systems use for decision-making (including personalization of results, where relevant); but they are a place to begin, and their limitations are at least outlined.

Numerous research questions must be explored before we can fully understand the strengths and limitations of what is proposed here. But it represents a way forward. This is essentially the first paper I am aware of that tries to make effective progress on the stewardship challenges facing our society in the so-called “Age of Algorithms;” the paper concludes with some discussion of the failure to address these challenges to date, and the implications for the roles of archivists as opposed to other players in the broader enterprise of stewardship — that is, the capture of a record of the present and the transmission of this record, and the records bequeathed by the past, into the future. It may well be that we see the emergence of a new group of creators of documentation, perhaps predominantly social scientists and humanists, taking the front lines in dealing with the “Age of Algorithms,” with their materials then destined for our memory organizations to be cared for into the future…(More)”.

Solving Public Problems with Data


Dinorah Cantú-Pedraza and Sam DeJohn at The GovLab: “….To serve the goal of more data-driven and evidence-based governing, The GovLab at NYU Tandon School of Engineering this week launched “Solving Public Problems with Data,” a new online course developed with support from the Laura and John Arnold Foundation.

This online lecture series helps those working for the public sector, or simply in the public interest, learn to use data to improve decision-making. Through real-world examples and case studies — captured in 10 video lectures from leading experts in the field — the new course outlines the fundamental principles of data science and explores ways practitioners can develop a data analytical mindset. Lectures in the series include:

  1. Introduction to evidence-based decision-making  (Quentin Palfrey, formerly of MIT)
  2. Data analytical thinking and methods, Part I (Julia Lane, NYU)
  3. Machine learning (Gideon Mann, Bloomberg LP)
  4. Discovering and collecting data (Carter Hewgley, Johns Hopkins University)
  5. Platforms and where to store data (Arnaud Sahuguet, Cornell Tech)
  6. Data analytical thinking and methods, Part II (Daniel Goroff, Alfred P. Sloan Foundation)
  7. Barriers to building a data practice (Beth Blauer, Johns Hopkins University and GovEx)
  8. Data collaboratives (Stefaan G. Verhulst, The GovLab)
  9. Strengthening a data analytic culture (Amen Ra Mashariki, ESRI)
  10. Data governance and sharing (Beth Simone Noveck, NYU Tandon/The GovLab)

The goal of the lecture series is to enable participants to define and leverage the value of data to achieve improved outcomes and equity, reduced costs, and increased efficiency in how public policies and services are created. No prior experience with computer science or statistics is necessary or assumed. In fact, the course is designed precisely to serve public professionals seeking an introduction to data science….(More)”.

SAM, the first A.I. politician on Messenger


Digital Trends: “It’s said that all politicians are the same, but it seems safe to assume that you’ve never seen a politician quite like this. Meet SAM, heralded as the politician of the future. Unfortunately, you can’t exactly shake this politician’s hand, or have her kiss your baby. Rather, SAM is the world’s first Virtual Politician (and a female presence at that), “driven by the desire to close the gap between what voters want and what politicians promise, and what they actually achieve.”

The artificially intelligent chatbot is currently live on Facebook Messenger, though she probably is most helpful to those in New Zealand. After all, the bot’s website notes, “SAM’s goal is to act as a representative for all New Zealanders, and evolves based on voter input.” Capable of being reached by anyone at just about any time from anywhere, this may just be the single most accessible politician we’ve ever seen. But more importantly, SAM purports to be a true representative, claiming to analyze “everyone’s views [and] opinions, and impact of potential decisions.” This, the bot notes, could make for better policy for everyone….(More)”.

Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors


Daniel Oberhaus at Motherboard: “…Sixteen years later, the free encyclopedia and fifth most popular website in the world is well on its way to this goal. Today, Wikipedia is home to 43 million articles in 285 languages and all of these articles are written and edited by an autonomous group of international volunteers.

Although the non-profit Wikimedia Foundation diligently keeps track of how editors and users interact with the site, until recently it was unclear how content production on Wikipedia was distributed among editors. According to the results of a recent study that looked at the 250 million edits made on Wikipedia during its first ten years, only about 1 percent of Wikipedia’s editors have generated 77 percent of the site’s content.

“Wikipedia is both an organization and a social movement,” Sorin Matei, the director of the Purdue University Data Storytelling Network and lead author of the study, told me on the phone. “The assumption is that it’s a creation of the crowd, but this couldn’t be further from the truth. Wikipedia wouldn’t have been possible without a dedicated leadership.”

At the time of writing, there are roughly 132,000 registered editors who have been active on Wikipedia in the last month (there are also an unknown number of unregistered Wikipedians who contribute to the site). So statistically speaking, only about 1,300 people are creating over three-quarters of the 600 new articles posted to Wikipedia every day.
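The article’s back-of-the-envelope arithmetic can be checked directly (the figures below come from the text itself; the rounding matches the article’s):

```python
# Reproducing the article's arithmetic on editor concentration.
active_editors = 132_000          # registered editors active in the last month
top_share = 0.01                  # the "1 percenters"
content_share = 0.77              # share of content they produced
new_articles_per_day = 600

top_editors = active_editors * top_share                  # 1,320, "about 1,300"
articles_by_top = new_articles_per_day * content_share    # 462, "over three-quarters"
```

Note the two figures come from different windows — the 77% is measured over Wikipedia's first decade of edits, while the 132,000 counts editors active in a single recent month — so the "1,300 people" estimate is an extrapolation, as the article's "statistically speaking" hedge acknowledges.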

Of course, these “1 percenters” have changed over the last decade and a half. According to Matei, roughly 40 percent of the top 1 percent of editors bow out about every five weeks. In the early days, when there were only a few hundred thousand people collaborating on Wikipedia, Matei said the content production was significantly more equitable. But as the encyclopedia grew, and the number of collaborators grew with it, a cadre of die-hard editors emerged that have accounted for the bulk of Wikipedia’s growth ever since.

Matei and his colleague Brian Britt, an assistant professor of journalism at South Dakota State University, used a machine learning algorithm to crawl the quarter of a billion publicly available edit logs from Wikipedia’s first decade of existence. The results of this research, published in September as a book, suggest that for all of Wikipedia’s pretension to being a site produced by a network of freely collaborating peers, “some peers are more equal than others,” according to Matei.

Matei and Britt argue that rather than being a decentralized, spontaneously evolving organization, Wikipedia is better described as an “adhocracy”—a stable hierarchical power structure which nevertheless allows for a high degree of individual mobility within that hierarchy….(More)”.

More Machine Learning About Congress’ Priorities


ProPublica: “We keep training machine learning models on Congress. Find out what this one learned about lawmakers’ top issues…

Speaker of the House Paul Ryan is a tax wonk ― and most observers of Congress know that. But knowing what interests the other 434 members of Congress is harder.

To make it easier to know what issues each lawmaker really focuses on, we’re launching a new feature in our Represent database called Policy Priorities. We had two goals in creating it: to help researchers and journalists understand what drives particular members of Congress, and to enable regular citizens to compare their representatives’ priorities to their own and their communities’.

We created Policy Priorities using some sophisticated computer algorithms (more on this in a second) to calculate interest based on what each congressperson talks ― and brags ― about in their press releases.

Voting and drafting legislation aren’t the only things members of Congress do with their time, but they’re often the main way we analyze congressional data, in part because they’re easily measured. But the job of a member of Congress goes well past voting. They go to committee meetings, discuss policy on the floor and in caucuses, raise funds and ― important for our purposes ― communicate with their constituents and journalists back home. They use press releases to talk about what they’ve accomplished and to demonstrate their commitment to their political ideals.

We’ve been gathering these press releases for a few years, and have a body of some 86,000 that we used for a kind of analysis called machine learning….(More)”.
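The excerpt doesn’t detail ProPublica’s algorithm, but a common way to surface what each member distinctively talks about is to compare a member’s word frequencies against the chamber-wide corpus, TF-IDF style. A toy sketch with invented press-release snippets (this is not ProPublica’s actual pipeline):

```python
# Toy "distinctive terms" scorer: rank each word by how much more often
# one member uses it than the corpus overall. Snippets are invented.
from collections import Counter

releases = {
    "Member A": "tax reform tax cuts budget tax relief",
    "Member B": "veterans healthcare veterans benefits budget",
}

corpus = Counter(" ".join(releases.values()).split())
corpus_total = sum(corpus.values())

def top_terms(member, n=2):
    words = Counter(releases[member].split())
    total = sum(words.values())
    # Ratio of within-member frequency to corpus-wide frequency:
    # high values mark terms this member uses disproportionately.
    scored = {w: (c / total) / (corpus[w] / corpus_total)
              for w, c in words.items()}
    return [w for w, _ in
            sorted(scored.items(), key=lambda kv: -kv[1])][:n]

priorities = top_terms("Member A")
# "budget" appears in both members' releases, so it scores low for
# Member A despite appearing in their text; "tax" dominates.
```

A production system would add stemming, stop-word removal, phrase detection, and far more data — 86,000 press releases rather than two strings — but the underlying idea of contrasting a member's language against the whole corpus is the same.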

Leveraging the disruptive power of artificial intelligence for fairer opportunities


Makada Henry-Nickie at Brookings: “According to President Obama’s Council of Economic Advisers (CEA), approximately 3.1 million jobs will be rendered obsolete or permanently altered as a consequence of artificial intelligence technologies. Artificial intelligence (AI) will, for the foreseeable future, have a significant disruptive impact on jobs. That said, this disruption can create new opportunities if policymakers choose to harness them—including some with the potential to help address long-standing social inequities. Investing in quality training programs that deliver premium skills, such as computational analysis and cognitive thinking, provides a real opportunity to leverage AI’s disruptive power.

AI’s disruption presents a clear challenge: traditional skilled workers now face competition from data scientists and code engineers, whose skills carry across industries and who can adapt quickly to new contexts. Data analytics has become an indispensable feature of successful companies across all industries. ….

Investing in high-quality education and training programs is one way that policymakers proactively attempt to address the workforce challenges presented by artificial intelligence. It is essential that we make affirmative, inclusive choices to ensure that marginalized communities participate equitably in these opportunities.

Policymakers should prioritize understanding the demographics of those most likely to lose jobs in the short-run. As opposed to obsessively assembling case studies, we need to proactively identify policy entrepreneurs who can conceive of training policies that equip workers with technical skills of “long-game” relevance. As IBM points out, “[d]ata democratization impacts every career path, so academia must strive to make data literacy an option, if not a requirement, for every student in any field of study.”

Machines are an equal-opportunity displacer, blind to color and socioeconomic status. Effective policy responses require collaborative data collection and coordination among key stakeholders—policymakers, employers, and educational institutions—to identify at-risk worker groups and to inform workforce development strategies. Machine substitution is purely an efficiency game in which workers overwhelmingly lose. Nevertheless, we can blunt these effects by identifying critical leverage points….

Policymakers can choose to harness AI’s disruptive power to address workforce challenges and redesign fair access to opportunity simultaneously. We should train our collective energies on identifying practical policies that update our current agrarian-based education model, which unfairly disadvantages children from economically segregated neighborhoods…(More)”