A Rule of Persons, Not Machines: The Limits of Legal Automation


Paper by Frank A. Pasquale: “For many legal futurists, attorneys’ work is a prime target for automation. They view the legal practice of most businesses as algorithmic: data (such as facts) are transformed into outputs (agreements or litigation stances) via application of set rules. These technophiles promote substituting computer code for contracts and descriptions of facts now written by humans. They point to early successes in legal automation as proof of concept. TurboTax has helped millions of Americans file taxes, and algorithms have taken over certain aspects of stock trading. Corporate efforts to “formalize legal code” may bring new efficiencies in areas of practice characterized by both legal and factual clarity.

However, legal automation can also elide or exclude important human values, necessary improvisations, and irreducibly deliberative governance. Due process, appeals, and narratively intelligible explanation from persons, for persons, depend on forms of communication that are not reducible to software. Language is constitutive of these aspects of law. To preserve accountability and a humane legal order, the reasons behind legal decisions must be expressed in language by a responsible person. This basic requirement for legitimacy limits legal automation in several contexts, including corporate compliance, property recordation, and contracting. A robust and ethical legal profession respects the flexibility and subtlety of legal language as a prerequisite for a just and accountable social order. It ensures a rule of persons, not machines…(More)”

Rational Inattention: A Disciplined Behavioral Model


Paper by Bartosz Mackowiak, Filip Matejka and Mirko Wiederholt: “This survey paper argues that rational inattention matters. It is likely to become an important part of economics, because it bridges a gap between classical economics and behavioral economics. Actions look behavioral, since agents cannot process all available information; yet agents optimize in the sense that they try to deal optimally with their cognitive limitations – hence the term “rational inattention.” We show how rational inattention describes how agents’ behavioral biases adapt to policy and other changes in the economic environment. Then, we survey the existing literature and discuss the unifying mechanisms behind the results in these papers. Finally, we lay out implications for policy, and propose what we believe are the most fruitful steps for future research in this area. Economics is about adjustments to scarcity.

Rational inattention studies adjustments to scarcity of attention. Understanding how people summarize, filter, and digest the abundant available information is key to understanding many phenomena in economics. Several crucial findings in economics, even some whole subfields, have been built around assumptions of imperfect or asymmetric information. Nowadays, however, new technologies make more information available than ever before, yet we are able to digest only a little of it. Which imperfect information we possess and act upon is thus largely determined not by which information is given to us, but by which information we choose to attend to….(More)”.
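
The mechanism the survey describes can be stated compactly. In the canonical rational-inattention problem (the formulation below is the standard Sims-style one, offered as an illustrative sketch rather than the paper's own equations), the agent chooses how its action depends on the state, subject to a cap on how much information it can process:

```latex
% Canonical rational-inattention problem (standard formulation; an
% illustrative sketch, not the paper's own notation). The agent picks the
% conditional distribution f(a|x) of actions a given states x to maximize
% expected utility, subject to a cap kappa on the Shannon mutual
% information between action and state -- the "attention" constraint.
\[
  \max_{f(a \mid x)} \; \mathbb{E}\!\left[\, U(a, x) \,\right]
  \quad \text{subject to} \quad
  I(a; x) \;=\; H(x) - H(x \mid a) \;\le\; \kappa
\]
```

Behavior looks behavioral because a small κ forces coarse or sluggish responses to the state; it remains optimizing because, within the constraint, the response is the best available, and it adapts when policy changes the payoffs.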

New Technologies Won’t Reduce Scarcity, but Here’s Something That Might


Vasilis Kostakis and Andreas Roos at the Harvard Business Review: “In a book titled Why Can’t We All Just Get Along?, MIT scientists Henry Lieberman and Christopher Fry discuss why we have wars, mass poverty, and other social ills. They argue that we cannot cooperate with each other to solve our major problems because our institutions and businesses are saturated with a competitive spirit. But Lieberman and Fry have some good news: modern technology can address the root of the problem. They believe that we compete when there is scarcity, and that recent technological advances, such as 3D printing and artificial intelligence, will end widespread scarcity. Thus, a post-scarcity world, premised on cooperation, would emerge.

But can we really end scarcity?

We believe that the post-scarcity vision of the future is problematic because it reflects an understanding of technology and the economy that could worsen the problems it seeks to address. This is the bad news. Here’s why:

New technologies come to consumers as finished products that can be exchanged for money. What consumers often don’t understand is that the monetary exchange hides the fact that many of these technologies exist at the expense of other humans and local environments elsewhere in the global economy….

The good news is that there are alternatives. The wide availability of networked computers has allowed new community-driven and open-source business models to emerge. For example, consider Wikipedia, a free and open encyclopedia that has displaced the Encyclopedia Britannica and Microsoft Encarta. Wikipedia is produced and maintained by a community of dispersed enthusiasts driven primarily by motives other than profit maximization. Furthermore, in the realm of software, see the case of GNU/Linux, on which the top 500 supercomputers and the majority of websites run, or the example of the Apache Web Server, the leading software in the web-server market. Wikipedia, Apache and GNU/Linux demonstrate how non-coercive cooperation around globally shared resources (i.e. a commons) can produce artifacts as innovative as, if not more innovative than, those produced by industrial capitalism.

In the same way, the emergence of networked micro-factories is giving rise to new open-source business models in the realm of design and manufacturing. Such spaces can be makerspaces, fab labs, or other co-working spaces, equipped with local manufacturing technologies such as 3D printing and CNC machines or traditional low-tech tools and crafts. Moreover, such spaces often offer collaborative environments where people can meet in person, socialize and co-create.

This is the context in which a new mode of production is emerging. This mode builds on the confluence of the digital commons of knowledge, software, and design with local manufacturing technologies. It can be codified as “design global, manufacture local” (DGML), following the logic that what is light (knowledge, design) becomes global, while what is heavy (machinery) stays local, and is ideally shared. DGML demonstrates how a technology project can leverage the digital commons to engage the global community in its development, celebrating new forms of cooperation. Unlike large-scale industrial manufacturing, the DGML model emphasizes applications that are small-scale, decentralized, resilient, and locally controlled. DGML could recognize the scarcities posed by finite resources and organize material activities accordingly. First, it minimizes the need to ship materials over long distances, because a considerable part of the manufacturing takes place locally. Second, local manufacturing makes maintenance easier and encourages manufacturers to design products to last as long as possible. Last, DGML optimizes the sharing of knowledge and design, as there are no patent costs to pay….(More)”

Sidewalks, Streets, and Tweets: Is Twitter a Public Forum?


Valerie C. Brannon at the Congressional Research Service: “On May 23, 2018, a federal district court in New York in Knight First Amendment Institute v. Trump held that the Free Speech Clause of the First Amendment prohibited President Trump from blocking Twitter users solely based on those users’ expression of their political views. In so doing, the court weighed in on the now-familiar but rapidly evolving debate over when an online forum qualifies as a “public forum” entitled to special consideration under the First Amendment. Significantly, the district court concluded that “the interactive space for replies and retweets created by each tweet sent by the @realDonaldTrump account” should be considered a “designated public forum” where the protections of the First Amendment apply. This ruling is limited to the @realDonaldTrump Twitter account but implicates a number of larger legal issues, including when a social media account is operated by the government rather than by a private citizen, and when the government has opened up that social media account as a forum for private speech. The ability of public officials to restrict private speech on Twitter may be of particular interest to Congress, given that almost all Members now have a Twitter account….(More)”.

The UK government’s imaginative use of evidence to make policy


Paul Cairney in British Politics: “It is easy to show that the UK Government rarely conducts ‘evidence-based policymaking’, but not to describe a politically feasible use of evidence in Westminster politics. Rather, we need to understand developments from a policymaker’s perspective before we can offer advice to which they will pay attention. ‘Policy-based evidence’ (PBE) is a dramatic political slogan, not a way to promote pragmatic discussion. We need to do more than declare PBE if we seek to influence the relationship between evidence and policymaking. To produce more meaningful categories we need clearer criteria which take into account the need to combine evidence, values, and political judgement. To that end, I synthesise policy theories to identify the limits to the use of evidence in policy, and case studies of ‘families policies’ to show how governments use evidence politically….(More)”.

The Slippery Math of Causation


Pradeep Mutalik for Quanta Magazine: “You often hear the admonition “correlation does not imply causation.” But what exactly is causation? Unlike correlation, which has a specific mathematical meaning, causation is a slippery concept that has been debated by philosophers for millennia. It seems to get conflated with our intuitions or preconceived notions about what it means to cause something to happen. One common-sense definition might be to say that causation is what connects one prior process or agent — the cause — with another process or state — the effect. This seems reasonable, except that it is useful only when the cause is a single factor, and the connection is clear. But reality is rarely so simple.

Although we tend to pin credit or blame on a single major cause, in nature and in science there are almost always multiple factors that have to be exactly right for an event to take place. For example, we might attribute a forest fire to a carelessly thrown cigarette butt, but what about the grassy tract leading to the forest, the dryness of the vegetation, the direction of the wind and so on? All of these factors had to be exactly right for the fire to start. Even though many tossed cigarette butts don’t start fires, we zero in on human actions as causes, ignoring other possibilities, such as sparks from branches rubbing together or lightning strikes, or acts of omission, such as failing to trim the grassy path short of the forest. And we tend to focus on things that can be manipulated: we overlook the direction of the wind because it is not something we can control. Our scientifically incomplete intuitive model of causality is nevertheless very useful in practice, and helps us execute remedial actions when causes are clearly defined. In fact, artificial intelligence pioneer Judea Pearl has published a new book about why it is necessary to teach cause and effect to intelligent machines.

However, clearly defined causes may not always exist. Complex, interdependent multifactorial causes arise often in nature and therefore in science. Most scientific disciplines focus on different aspects of causality in a simplified manner. Physicists may talk about causal influences being unable to propagate faster than the speed of light, while evolutionary biologists may discuss proximate and ultimate causes as mentioned in our previous puzzle on triangulation and motion sickness. But such simple situations are rare, especially in biology and the so-called “softer” sciences. In the world of genetics, the complex multifactorial nature of causality was highlighted in a recent Quanta article by Veronique Greenwood that described the intertwined effects of genes.

One well-known approach to understanding causality is to separate it into two types: necessary and sufficient….(More)”
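
To make the necessary/sufficient distinction concrete, here is a standard counterfactual formalization from the causal-inference literature (Pearl-style notation; an illustrative sketch, not the article's own definitions). Let X = 1 mean the candidate cause occurred (the butt was tossed), Y = 1 mean the effect occurred (the forest burned), and Y_{X=x} denote the value Y would have taken had X been set to x:

```latex
% Probability of necessity (PN) and probability of sufficiency (PS),
% standard Pearl-style counterfactual definitions (illustrative sketch).
\begin{align*}
  \mathrm{PN} &= P\!\left(Y_{X=0} = 0 \mid X = 1,\, Y = 1\right)
    && \text{necessity: no cause, no effect} \\
  \mathrm{PS} &= P\!\left(Y_{X=1} = 1 \mid X = 0,\, Y = 0\right)
    && \text{sufficiency: add the cause, get the effect}
\end{align*}
```

Informally, PN asks: given that the butt was tossed and the fire occurred, how probable is it that the fire would not have occurred without the toss? PS asks the reverse: given no toss and no fire, how probable is it that a toss would have started one?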

Democracy doomsday prophets are missing this critical shift


Bruno Kaufmann and Joe Mathews in the Washington Post: “The new conventional wisdom seems to be that electoral democracy is in decline. But this ignores another widespread trend: direct democracy at the local and regional level is booming, even as disillusion with representative government at the national level grows.

Today, 113 of the world’s 117 democratic countries offer their citizens legally or constitutionally established rights to bring forward a citizens’ initiative, referendum or both. And since 1980, roughly 80 percent of countries worldwide have had at least one nationwide referendum or popular vote on a legislative or constitutional issue.

Of all the nationwide popular votes in the history of the world, more than half have taken place in the past 30 years. As of May 2018, almost 2,000 nationwide popular votes on substantive issues have taken place, with 1,059 in Europe, 191 in Africa, 189 in Asia, 181 in the Americas and 115 in Oceania, based on our research.

That is just at the national level. Other major democracies — Germany, the United States and India — do not permit popular votes on substantive issues nationally but support robust direct democracy at the local and regional levels. The number of local votes on issues has so far defied all attempts to count them — they run into the tens of thousands.

This robust democratization, at least when it comes to direct legislation, provides a context that’s generally missing when doomsday prophets suggest that democracy is dying by pointing to authoritarian-leaning leaders like Turkish President Recep Tayyip Erdogan, Russian President Vladimir Putin, Hungarian Prime Minister Viktor Orbán, Philippine President Rodrigo Duterte and U.S. President Donald Trump.

Indeed, the two trends — the rise of populist authoritarianism in some nations and the rise of local and direct democracy in some areas — are related. Frustration is growing with democratic systems at national levels, and yes, some people become more attracted to populism. But some of that frustration is channeled into positive energy — into making local democracy more democratic and direct.

Cities from Seoul to San Francisco are hungry for new and innovative tools that bring citizens into processes of deliberation that allow the people themselves to make decisions and feel invested in government actions. We’ve seen local governments embrace participatory budgeting, participatory planning, citizens’ juries and a host of experimental digital tools in service of that desired mix of greater public deliberation and more direct public action….(More).”

Algorithm Observatory: Where anyone can study any social computing algorithm.


About: “We know that social computing algorithms are used to categorize us, but the way they do so is not always transparent. To take just one example, ProPublica recently uncovered that Facebook allows housing advertisers to exclude users by race.

Even so, there are no simple and accessible resources for us, the public, to study algorithms empirically, and to engage critically with the technologies that are shaping our daily lives in such profound ways.

That is why we created Algorithm Observatory.

Part media literacy project and part citizen experiment, Algorithm Observatory aims to provide a collaborative online lab for the study of social computing algorithms. The data collected through this site is analyzed to compare how a particular algorithm handles data depending on the characteristics of users.

Algorithm Observatory is a work in progress. This prototype only allows users to explore Facebook advertising algorithms, and the functionality is limited. We are currently looking for funding to realize the project’s full potential: to allow anyone to study any social computing algorithm….

Our future plans

This is a prototype, which only begins to showcase the things that Algorithm Observatory will be able to do in the future.

Eventually, the website will allow anyone to design an experiment involving a social computing algorithm. The platform will allow researchers to recruit volunteer participants, who will be able to contribute content to the site securely and anonymously. Researchers will then be able to conduct an analysis to compare how the algorithm handles users differently depending on individual characteristics. The results will be shared by publishing a report evaluating the social impact of the algorithm. All data and reports will become publicly available and open for comments and reviews. Researchers will be able to study any algorithm, because the site does not require direct access to the source code, but relies instead on empirical observation of the interaction between the algorithm and volunteer participants….(More)”.
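
As a rough illustration of the comparison the site describes, here is a minimal sketch in Python (the data, group labels, and analysis are hypothetical; the excerpt does not publish Algorithm Observatory's actual pipeline):

```python
# Hypothetical sketch of the group comparison described above: given
# volunteer-contributed observations of (participant characteristic,
# algorithm outcome), compare outcome rates across groups.
from collections import defaultdict
from math import sqrt

# Hypothetical observations: (group, did the algorithm show the ad?)
observations = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def exposure_rates(obs):
    """Fraction of participants in each group who were shown the ad."""
    shown, total = defaultdict(int), defaultdict(int)
    for group, was_shown in obs:
        total[group] += 1
        shown[group] += was_shown  # bools count as 0/1
    return {g: shown[g] / total[g] for g in total}, total

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two exposure rates."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se else 0.0

rates, totals = exposure_rates(observations)
z = two_proportion_z(rates["group_a"], totals["group_a"],
                     rates["group_b"], totals["group_b"])
print(rates, f"z = {z:.2f}")  # a large |z| suggests differential treatment
```

With real volunteer data, a comparison of this kind is what turns anecdotes about an algorithm into an evaluable claim about disparate treatment.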

How Citizens Can Hack EU Democracy


Stephen Boucher at Carnegie Europe: “…To connect citizens with the EU’s decisionmaking center, European politicians will need to provide ways to effectively hack this complex system. These democratic hacks need to be visible and accessible, easily and immediately implementable, viable without requiring changes to existing European treaties, and capable of having a traceable impact on policy. Many such devices could be imagined around these principles. Here are three ideas to spur debate.

Hack 1: A Citizens’ Committee for the Future in the European Parliament

The European Parliament has proposed that twenty-seven of the seventy-three seats left vacant by Brexit should be redistributed among the remaining member states. According to one concept, the other forty-six unassigned seats could be used to recruit a contingent of ordinary citizens from around the EU to examine legislation from the long-term perspective of future generations. Such a “Committee for the Future” could be given the power to draft a response to a yearly report on the future produced by the president of the European Parliament, initiate debates on important political themes of their own choosing, make submissions on future-related issues to other committees, and be consulted by members of the European Parliament (MEPs) on longer-term matters.

MEPs could decide to use these forty-six vacant seats to invite this Committee for the Future to sit, at least on a trial basis, with yearly evaluations. This arrangement would have real benefits for EU politics, acting as an antidote to the union’s existential angst and helping the EU think systemically and for the longer term on matters such as artificial intelligence, biodiversity, climate concerns, demography, mobility, and energy.

Hack 2: An EU Participatory Budget

In 1989, the city of Porto Alegre, Brazil, decided to cede control of a share of its annual budget for citizens to decide upon. This practice, known as participatory budgeting, has since spread globally. As of 2015, over 1,500 participatory budgets had been implemented across five continents. These processes generally have had a positive impact, with people proving that they take public spending matters seriously.

To replicate these experiences at the European level, the complex realities of EU budgeting would require specific features. First, participative spending probably would need to be both local and related to wider EU priorities in order to ensure that citizens see its relevance and its wider European implications. Second, significant resources would need to be allocated to help citizens come up with and promote projects. For instance, the city of Paris has ensured that each suggested project that meets the eligibility requirements has a desk officer within its administration to liaise with the idea’s promoters. It dedicates significant resources to reach out to citizens, in particular in the poorer neighborhoods of Paris, both online and face-to-face. Similar efforts would need to be deployed across Europe. And third, in order to overcome institutional complexities, the European Parliament would need to work with citizens as part of its role in negotiating the budget with the European Council.

Hack 3: An EU Collective Intelligence Forum

Many ideas have been put forward to address popular dissatisfaction with representative democracy by developing new forums such as policy labs, consensus conferences, and stakeholder facilitation groups. Yet many citizens still feel disenchanted with representative democracy, including at the EU level, where they also strongly distrust lobby groups. They need to be involved more purposefully in policy discussions.

A yearly Deliberative Poll could be run on a matter of significance, ahead of key EU summits and possibly around the president of the commission’s State of the Union address. On the model of the first EU-wide Deliberative Poll, Tomorrow’s Europe, this event would bring together in Brussels a random sample of citizens from all twenty-seven EU member states and enable them to discuss various social, economic, and foreign policy issues affecting the EU and its member states. This concept would have a number of advantages in terms of promoting democratic participation in EU affairs. By hosting a truly representative sample of citizens deliberating on complex EU matters over a weekend within its premises, the European Parliament would become the focus of a high-profile event that would draw media attention. This would be especially beneficial if—unlike Tomorrow’s Europe—the poll were not held at arm’s length by EU policymakers, but with high-level national officials attending to witness good-quality deliberation remolding citizens’ views….(More)”.

The Unlinkable Data Challenge: Advancing Methods in Differential Privacy


National Institute of Standards and Technology: “Databases across the country include information with potentially important research implications and uses, e.g. contingency planning in disaster scenarios, identifying safety risks in aviation, assisting in tracking contagious diseases, and identifying patterns of violence in local communities. However, these datasets include personally identifiable information (PII), and it is not enough to simply remove it. It is well known that auxiliary, possibly completely unrelated datasets can be combined with records in the dataset to pick out uniquely identifiable individuals (known as a linkage attack). Today’s efforts to remove PII do not provide adequate protection against linkage attacks. With the advent of “big data” and technological advances in linking data, there are far too many other possible data sources related to each of us that can lead to our identity being uncovered.
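
A toy illustration of the linkage attack described above, in Python (all data and field names are hypothetical; the classic real-world version joins a "de-identified" dataset to a public one on quasi-identifiers such as ZIP code, birth date, and sex):

```python
# Toy linkage attack (hypothetical data): names were removed from the health
# records, but joining on quasi-identifiers against a public auxiliary
# dataset re-identifies anyone whose combination of values is unique.

deidentified_health = [
    {"zip": "02139", "dob": "1961-07-28", "sex": "F", "diagnosis": "hypertension"},
]
public_voter_roll = [
    {"name": "J. Doe", "zip": "02139", "dob": "1961-07-28", "sex": "F"},
    {"name": "A. Roe", "zip": "02139", "dob": "1984-03-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def link(records, auxiliary):
    """Yield (name, sensitive attribute) for records with a unique match."""
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        matches = [a for a in auxiliary
                   if tuple(a[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # unique match => linkage succeeds
            yield matches[0]["name"], rec["diagnosis"]

print(list(link(deidentified_health, public_voter_roll)))
# [('J. Doe', 'hypertension')] -- removing the name alone was not enough
```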

Get Involved – How to Participate

The Unlinkable Data Challenge is a multi-stage Challenge.  This first stage of the Challenge is intended to source detailed concepts for new approaches, inform the final design in the two subsequent stages, and provide recommendations for matching stage 1 competitors into teams for subsequent stages.  Teams will predict and justify where their algorithm fails with respect to the utility-privacy frontier curve.

In this stage, competitors are asked to propose how to de-identify a dataset using less than the available privacy budget, while also maintaining the dataset’s utility for analysis.  For example, the de-identified data, when put through the same analysis pipeline as the original dataset, produces comparable results (i.e. similar coefficients in a linear regression model, or a classifier that produces similar predictions on sub-samples of the data).
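
For intuition about "randomized mechanisms" and "privacy budget," here is a minimal sketch of the Laplace mechanism, a standard differential-privacy building block (illustrative only: the dataset, query, and epsilon values are made up, and a real submission would also need careful composition accounting across queries):

```python
# Laplace mechanism sketch: answer a counting query with epsilon-differential
# privacy, charging the spent epsilon against a running budget. A count has
# L1 sensitivity 1 (one person changes it by at most 1), so Laplace noise
# with scale 1/epsilon suffices. All values here are illustrative.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float, budget: dict) -> float:
    """Noisy answer to 'how many records satisfy predicate?' under epsilon-DP."""
    if budget["remaining"] < epsilon:
        raise RuntimeError("privacy budget exhausted")
    budget["remaining"] -= epsilon
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(scale=1.0 / epsilon)  # sensitivity 1

# Usage: spend half of a total budget of epsilon = 1.0 on one query.
budget = {"remaining": 1.0}
ages = [23, 35, 45, 52, 61, 70]
noisy = private_count(ages, lambda a: a >= 50, epsilon=0.5, budget=budget)
print(f"noisy count: {noisy:.1f}, budget left: {budget['remaining']}")
```

The challenge's framing is this trade-off scaled up: smaller epsilon means more noise (stronger privacy, lower utility), and every released statistic draws down the same finite budget.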

This stage of the Challenge seeks Conceptual Solutions that describe how to use and/or combine methods in differential privacy to mitigate privacy loss when publicly releasing datasets in a variety of industries such as public safety, law enforcement, healthcare/biomedical research, education, and finance.  We are limiting the scope to addressing research questions and methodologies that require regression, classification, and clustering analysis on datasets that contain numerical, geo-spatial, and categorical data.

To compete in this stage, we are asking that you propose a new algorithm utilizing existing or new randomized mechanisms, with a justification of how it will optimize privacy and utility across different analysis types.  We are also asking you to propose a dataset that you believe would make a good use case for your proposed algorithm, and to provide a means of comparing your algorithm with other algorithms.

All submissions must be made using the submission form provided on the HeroX website….(More)”.