Constitutional Democracy and Technology in the age of Artificial Intelligence


Paul Nemitz at Royal Society Philosophical Transactions: “Given the foreseeable pervasiveness of Artificial Intelligence in modern societies, it is legitimate and necessary to ask how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy.

This paper first describes the four core elements of today’s digital power concentration, which need to be seen cumulatively and which, taken together, threaten both democracy and functioning markets. It then recalls the experience with the lawless internet, the relationship between technology and the law as it has developed in the internet economy, and the experience with the GDPR, before moving on to the key question for AI in democracy: which of the challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by rules that are enforceable and carry the legitimacy of the democratic process, namely laws.

The paper closes with a call for a new culture of incorporating the principles of Democracy, Rule of Law and Human Rights by design in AI, and for a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose…(More)”.

Don’t Believe the Algorithm


Hannah Fry at the Wall Street Journal: “The Notting Hill Carnival is Europe’s largest street party. A celebration of black British culture, it attracts up to two million revelers, and thousands of police. At last year’s event, the Metropolitan Police Service of London deployed a new type of detective: a facial-recognition algorithm that searched the crowd for more than 500 people wanted for arrest or barred from attending. Driving around in a van rigged with closed-circuit TVs, the police hoped to catch potentially dangerous criminals and prevent future crimes.

It didn’t go well. Of the 96 people flagged by the algorithm, only one was a correct match. Some errors were obvious, such as the young woman identified as a bald male suspect. In those cases, the police dismissed the match and the carnival-goers never knew they had been flagged. But many were stopped and questioned before being released. And the one “correct” match? At the time of the carnival, the person had already been arrested and questioned, and was no longer wanted.

Given the paltry success rate, you might expect the Metropolitan Police Service to be sheepish about its experiment. On the contrary, Cressida Dick, the highest-ranking police officer in Britain, said she was “completely comfortable” with deploying such technology, arguing that the public expects law enforcement to use cutting-edge systems. For Dick, the appeal of the algorithm overshadowed its lack of efficacy.

She’s not alone. A similar system tested in Wales was correct only 7% of the time: of 2,470 soccer fans flagged by the algorithm, only 173 were actual matches. The Welsh police defended the technology in a blog post, saying, “Of course no facial recognition system is 100% accurate under all conditions.” Britain’s police force is expanding the use of the technology in the coming months, and other police departments are following suit. The NYPD is said to be seeking access to the full database of drivers’ licenses to assist with its facial-recognition program…(More)”.
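The accuracy figures quoted in both trials are what statisticians call precision: the fraction of the algorithm’s flags that turn out to be correct matches. A minimal sketch (the helper name is ours; the numbers come from the article):

```python
def precision(true_matches: int, total_flagged: int) -> float:
    """Fraction of the algorithm's flags that were correct matches."""
    return true_matches / total_flagged

# Notting Hill Carnival: 1 correct match out of 96 people flagged
print(f"London: {precision(1, 96):.1%}")    # 1.0%

# Wales trial: 173 correct matches out of 2,470 fans flagged
print(f"Wales:  {precision(173, 2470):.1%}")  # 7.0%
```

The sketch makes the base-rate problem concrete: when genuine targets are rare in a crowd of millions, even a modestly error-prone matcher will generate flags that are overwhelmingly false positives.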

The UK’s Gender Pay Gap Open Data Law Has Flaws, But Is A Positive Step Forward


Article by Michael McLaughlin: “Last year, the United Kingdom enacted a new regulation requiring companies to report information about their gender pay gap—a measure of the difference in average pay between men and women. The new rules are a good example of how open data can drive social change. However, the regulations have produced some misleading statistics, highlighting the importance of carefully crafting reporting requirements to ensure that they produce useful data.

In the UK, nearly 11,000 companies have filed gender pay gap reports, which include both the difference between the mean and median hourly pay rates for men and women as well as the difference in bonuses. And the initial data reveals several interesting findings. Median pay for men is 11.8 percent higher than for women, on average, and nearly 87 percent of companies pay men more than women on average. In addition, over 1,000 firms had a median pay gap greater than 30 percent. The sectors with the highest pay gaps—construction, finance, and insurance—each pay men at least 20 percent more than women. A major reason for the gap is a lack of women in senior positions—UK women actually make more than men between the ages of 22 and 29. The total pay gap is also a result of more women holding part-time jobs.
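The reported figures are simple summary statistics: the gap is the difference between the men’s and women’s hourly rate, expressed as a percentage of the men’s rate, computed once on medians and once on means. A minimal sketch with made-up rates (the function and data are ours, for illustration only):

```python
from statistics import mean, median

def pay_gap(men_rates, women_rates, stat=median):
    """Pay gap as a percentage of the men's figure (positive = men paid more)."""
    m, w = stat(men_rates), stat(women_rates)
    return 100 * (m - w) / m

# Hypothetical hourly rates for a small firm with one highly paid man
men = [12.0, 15.0, 18.0, 40.0]
women = [12.0, 14.0, 16.0, 20.0]

print(f"median gap: {pay_gap(men, women):.1f}%")            # 9.1%
print(f"mean gap:   {pay_gap(men, women, stat=mean):.1f}%")  # 27.1%
```

The toy example shows why the regulations ask for both measures: a few very high earners at the top pull the mean gap far above the median gap, which is exactly the pattern the UK filings revealed.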

However, as detractors note, the UK’s data can be misleading. For example, the data overstates the pay gap on bonuses because it does not adjust these figures for hours worked. More women work part-time than men, so it makes sense that women would receive less in bonus pay when they work less. The data also understates the pay gap because it excludes the high compensation of partners in organizations such as law firms, a group that includes few women. And it is important to note that—by definition—the pay gap data does not compare the wages of men and women working the same jobs, so the data says nothing about whether women receive equal pay for equal work.

Still, publication of the data has sparked an important national conversation. Google searches in the UK for the phrase “gender pay gap” reached a 12-month high the week the regulations began enforcement, and major news outlets like the Financial Times have provided significant coverage of the issue by analyzing the reported data. While it is too soon to tell if the law will change employer behavior, such as businesses hiring more female executives, or employee behavior, such as women leaving companies or fields that pay less, countries with similar reporting requirements, such as Belgium, have seen the pay gap narrow following implementation of their rules.

Requiring companies to report this data to the government may be the only way to obtain gender pay gap data, because evidence suggests that the private sector will not produce this data on its own. Only 300 UK organizations joined a voluntary government program to report their gender pay gap in 2011, and as few as 11 actually published the data. Crowdsourced efforts, where women voluntarily report their pay, have also suffered from incomplete data. And even complete data does not illuminate variables such as why women may work in a field that pays less…(More)”.

AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment


Alessandro Mantelero in Computer Law & Security Review: “The use of algorithms in modern data processing techniques, as well as data-intensive technological trends, suggests the adoption of a broader view of the data protection impact assessment. This will force data controllers to go beyond the traditional focus on data quality and security, and consider the impact of data processing on fundamental rights and collective social and ethical values.

Building on studies of the collective dimension of data protection, this article sets out to embed this new perspective in an assessment model centred on human rights (the Human Rights, Ethical and Social Impact Assessment, or HRESIA). This self-assessment model intends to overcome the limitations of the existing assessment models, which are either too narrowly focused on data processing or so extensive and granular that they become too complicated for evaluating the consequences of a given use of data. In terms of architecture, the HRESIA has two main elements: a self-assessment questionnaire and an ad hoc expert committee. As a blueprint, this contribution focuses mainly on the nature of the proposed model, its architecture and its challenges; a more detailed description of the model and the content of the questionnaire will be discussed in a future publication drawing on the ongoing research…(More)”.

Long Term Info-structure


Long Now Foundation Seminar by Juan Benet: “We live in a spectacular time,”…”We’re a century into our computing phase transition. The latest stages have created astonishing powers for individuals, groups, and our species as a whole. We are also faced with accumulating dangers — the capabilities to end the whole humanity experiment are growing and are ever more accessible. In light of the promethean fire that is computing, we must prevent bad outcomes and lock in good ones to build robust foundations for our knowledge, and a safe future. There is much we can do in the short-term to secure the long-term.”

“I come from the front lines of computing platform design to share a number of new super-powers at our disposal, some old challenges that are now soluble, and some new open problems. In this next decade, we’ll need to leverage peer-to-peer networks, crypto-economics, blockchains, Open Source, Open Services, decentralization, incentive-structure engineering, and so much more to ensure short-term safety and the long-term flourishing of humanity.”

Juan Benet is the inventor of the InterPlanetary File System (IPFS)—a new protocol that uses content addressing to make the web faster, safer, and more open—and the creator of Filecoin, a cryptocurrency-incentivized storage market…(More + Video)”.
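The core idea behind content addressing is that a piece of content is located by a hash of the content itself rather than by a server location, so identical content always resolves to the same address and any tampering changes the address. A toy sketch of the principle (this is our illustration, not the actual IPFS CID format or API):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself."""
    return hashlib.sha256(data).hexdigest()

store = {}  # stand-in for a distributed storage network

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    # Retrieval is self-verifying: re-hash and compare to the address.
    assert content_address(data) == addr, "content does not match its address"
    return data

addr = put(b"hello, permanent web")
assert get(addr) == b"hello, permanent web"
```

Because verification requires only the address and the bytes, any peer can serve the content and the requester can check it locally, which is what makes location-independent, tamper-evident distribution possible.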

The Blockchain and the New Architecture of Trust


Book by Kevin Werbach: “The blockchain entered the world on January 3, 2009, introducing a new trust architecture: an environment in which users trust a system—for example, a shared ledger of information—without necessarily trusting any of its components. The cryptocurrency Bitcoin is the most famous implementation of the blockchain, but hundreds of other companies have been founded and billions of dollars invested in similar applications since Bitcoin’s launch. Some see the blockchain as offering more opportunities for criminal behavior than benefits to society. In this book, Kevin Werbach shows how a technology resting on foundations of mutual mistrust can become trustworthy.

The blockchain, built on open software and decentralized foundations that allow anyone to participate, seems like a threat to any form of regulation. In fact, Werbach argues, law and the blockchain need each other. Blockchain systems that ignore law and governance are likely to fail, or to become outlaw technologies irrelevant to the mainstream economy. That, Werbach cautions, would be a tragic waste of potential. If, however, we recognize the blockchain as a kind of legal technology, which shapes behavior in new ways, it can be harnessed to create tremendous business and social value….(More)”.

Remembering and Forgetting in the Digital Age


Book by Florent Thouvenin et al.: “… examines the fundamental question of how legislators and other rule-makers should handle remembering and forgetting information (especially personally identifiable information) in the digital age. It encompasses such topics as privacy, data protection, individual and collective memory, and the right to be forgotten when considering data storage, processing and deletion. The authors argue in support of maintaining the new digital default, that (personally identifiable) information should be remembered rather than forgotten.

The book offers guidelines for legislators as well as private and public organizations on how to make decisions on remembering and forgetting personally identifiable information in the digital age. It draws on three main perspectives: law, based on a comprehensive analysis of Swiss law that serves as an example; technology, specifically search engines, internet archives, social media and the mobile internet; and an interdisciplinary perspective with contributions from various disciplines such as philosophy, anthropology, sociology, psychology, and economics, among others. Thanks to this multifaceted approach, readers will benefit from a holistic view of the informational phenomenon of “remembering and forgetting”.

This book will appeal to lawyers, philosophers, sociologists, historians, economists, anthropologists, and psychologists among many others. Such wide appeal is due to its rich and interdisciplinary approach to the challenges for individuals and society at large with regard to remembering and forgetting in the digital age…(More)”

The Smart Transition: An Opportunity for a Sensor-Based Public-Health Risk Governance?


Anna Berti Suman in the International Review of Law, Computers & Technology: “This contribution analyses the promises and challenges of using bottom-up produced sensor data to manage public-health risks in the (smart) city. The article critiques traditional ways of governing public-health risks in order to examine the contribution that a sensor-based risk governance may bring to the fore. The failures of the top-down model serve to illustrate that the smart transformation of the city’s living environments may stimulate a better public-health risk governance and a new urban utopia.

The central question this contribution addresses is: How could the potential of a city’s network of sensors and of data infrastructures contribute to smartly realizing healthier cities, free from environmental risk? The central aim of the article is to reflect on the opportunity to combine top-down and bottom-up sensing approaches. In view of this aim, the complementary potential of top-down and bottom-up sensing is examined. Citizen sensing practices are discussed as a manifestation of the new public sphere, and a taxonomy for a sensor-based risk governance is developed. The challenges hidden behind this arguably inclusive transition are then critically examined…(More)”.

When Westlaw Fuels ICE Surveillance: Ethics in the Big Data Policing Era


Sarah Lamdan at New York University Review of Law & Social Change: “Legal research companies are selling surveillance data and services to U.S. Immigration and Customs Enforcement (ICE) and other law enforcement agencies.

This article discusses ethical issues that arise when lawyers buy and use legal research services sold by the vendors that build ICE’s surveillance systems. As the legal profession collectively pays millions of dollars for computer-assisted legal research services, lawyers should consider whether doing so in the era of big data policing compromises their confidentiality requirements and their obligation to supervise third-party vendors…(More)”.

What is mechanistic evidence, and why do we need it for evidence-based policy?


Paper by Caterina Marchionni and Samuli Reijula: “It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so. Although agreeing with the spirit of this proposal, we argue that the underlying conception of mechanistic evidence as evidence that is different in kind from correlational, difference-making or statistical evidence, does not correctly capture the role that information about mechanisms should play in evidence-based policy. We offer an alternative account of mechanistic evidence as information concerning the causal pathway connecting the policy intervention to its outcome. Not only can this be analyzed as evidence of difference-making, it is also to be found at any level and is obtainable by a broad range of methods, both experimental and observational. Using behavioral policy as an illustration, we draw the implications of this revised understanding of mechanistic evidence for debates concerning policy extrapolation, evidence hierarchies, and evidence integration…(More)”.