Open Justice: Public Entrepreneurs Learn to Use New Technology to Increase the Efficiency, Legitimacy, and Effectiveness of the Judiciary


The GovLab: “Open justice is a growing movement to leverage new technologies – including big data, digital platforms, blockchain and more – to improve legal systems by making the workings of courts easier to understand, scrutinize and improve. Through the use of new technology, open justice innovators are enabling greater efficiency, fairness, accountability and a reduction in corruption in the third branch of government. For example, the open data portal ‘Atviras Teismas’ (Lithuanian for ‘open court’) is a platform for monitoring Lithuania’s courts and judges through performance metrics. The portal makes the courts of Lithuania transparent and benefits both courts and citizens by presenting comparative data on the Lithuanian judiciary.

To promote more Open Justice projects, the GovLab, in partnership with the Electoral Tribunal of the Federal Judiciary (TEPJF) of Mexico, launched a historic, first-of-its-kind online course on Open Justice. Designed primarily for lawyers, judges, and public officials – but also intended to appeal to technologists and members of the public – the Spanish-language course consists of 10 modules.

Each of the ten modules comprises:

  1. A short video-based lecture
  2. An original Open Justice reader
  3. Associated additional readings
  4. A self-assessment quiz
  5. A demonstration of a platform or tool
  6. An interview with a global practitioner

Among those featured in the interviews are Felipe Moreno of Jusbrasil, Justin Erlich of OpenJustice California, Liam Hayes of Aurecon, UK, Steve Ghiassi of Legaler, Australia, and Sara Castillo of Poder Judicial, Chile….(More)”.
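
To make the idea of comparative court metrics concrete: the clearance rate – cases resolved in a period divided by cases filed in that period – is a standard judicial performance indicator of the kind such portals publish. Below is a minimal, illustrative Python sketch; the court names, figures and field names are hypothetical, not drawn from Atviras Teismas.

    # Illustrative sketch only: computes a standard court performance
    # indicator (clearance rate) from toy data. All names and figures
    # are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class CourtYear:
        court: str
        incoming: int      # new cases filed during the year
        resolved: int      # cases disposed of during the year
        avg_days: float    # mean days from filing to disposition

    def clearance_rate(c: CourtYear) -> float:
        # Above 100% means the court is working through its backlog.
        return 100.0 * c.resolved / c.incoming

    courts = [
        CourtYear("District Court A", incoming=4200, resolved=4410, avg_days=96.0),
        CourtYear("District Court B", incoming=3900, resolved=3510, avg_days=143.0),
    ]

    for c in sorted(courts, key=clearance_rate, reverse=True):
        print(f"{c.court}: clearance {clearance_rate(c):.1f}%, "
              f"avg disposition {c.avg_days:.0f} days")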

Building Trust in Human Centric Artificial Intelligence


Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: “Artificial intelligence (AI) has the potential to transform our world for the better: it can improve healthcare, reduce energy consumption, make cars safer, and enable farmers to use water and natural resources more efficiently. AI can be used to predict environmental and climate change, improve financial risk management, and provide the tools to manufacture, with less waste, products tailored to our needs. AI can also help to detect fraud and cybersecurity threats, and enable law enforcement agencies to fight crime more efficiently.

AI can benefit the whole of society and the economy. It is a strategic technology that is now being developed and used at a rapid pace across the world. Nevertheless, AI also brings with it new challenges for the future of work, and raises legal and ethical questions.

To address these challenges and make the most of the opportunities which AI offers, the Commission published a European strategy in April 2018. The strategy places people at the centre of the development of AI — human-centric AI. It is a three-pronged approach to boost the EU’s technological and industrial capacity and AI uptake across the economy, prepare for socio-economic changes, and ensure an appropriate ethical and legal framework.

To deliver on the AI strategy, the Commission, together with Member States, developed a coordinated plan on AI, presented in December 2018, to create synergies, pool data — the raw material for many AI applications — and increase joint investments. The aim is to foster cross-border cooperation and mobilise all players to increase public and private investments to at least EUR 20 billion annually over the next decade.

The Commission doubled its investments in AI under Horizon 2020 and plans to invest EUR 1 billion annually from Horizon Europe and the Digital Europe Programme, in support notably of common data spaces in health, transport and manufacturing, and of large experimentation facilities such as smart hospitals and infrastructures for automated vehicles, under a strategic research agenda.

To implement such a common strategic research, innovation and deployment agenda, the Commission has intensified its dialogue with all relevant stakeholders from industry, research institutes and public authorities. The new Digital Europe programme will also be crucial in helping to make AI available to small and medium-sized enterprises across all Member States through digital innovation hubs, strengthened testing and experimentation facilities, data spaces and training programmes.

Building on its reputation for safe and high-quality products, Europe’s ethical approach to AI strengthens citizens’ trust in digital development and aims at building a competitive advantage for European AI companies. The purpose of this Communication is to launch a comprehensive piloting phase involving stakeholders on the widest scale in order to test the practical implementation of ethical guidance for AI development and use…(More)”.

The Automated Administrative State


Paper by Danielle Citron and Ryan Calo: “The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid benefits for cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us “due process”— understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems like the “no-fly” list were designed and deployed in secret; others lacked record-keeping audit trails, making review of the law and facts supporting a system’s decisions impossible. Because programmers working at private contractors lacked training in the law, they distorted policy when translating it into code [2].

Some of us in the academy sounded the alarm as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions, professing to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called “bureaucratic justice” in the form of efficiency with a “human face” feel impossibly distant [4].

The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the actual practices of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusory promises of companies seeking lucrative contracts), trusting algorithms to tell us if criminals should receive probation, if public school teachers should be fired, or if severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in light of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].

Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts…(More)”.
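
One safeguard this literature keeps returning to is the record-keeping the paper finds missing: logging the facts, the rule version, and the outcome behind each automated decision so the law and facts supporting it can be reviewed later. The sketch below illustrates such an audit trail for a toy benefits rule. It is an assumption-laden illustration, not the authors’ proposal; every field name, rule name and threshold is hypothetical.

    # Minimal sketch of an audit trail for automated benefit decisions,
    # under hypothetical names and a toy income-test rule.
    import json
    from datetime import datetime, timezone

    AUDIT_LOG = []  # in practice: append-only, tamper-evident storage

    def decide_eligibility(applicant: dict, income_limit: float = 1500.0) -> bool:
        # Toy eligibility rule: a simple monthly-income test.
        eligible = applicant["monthly_income"] <= income_limit
        AUDIT_LOG.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "rule": "income_test_v2",   # which version of the policy-as-code ran
            "inputs": applicant,        # the facts the decision relied on
            "threshold": income_limit,
            "outcome": "eligible" if eligible else "denied",
        })
        return eligible

    decide_eligibility({"applicant_id": "A-1001", "monthly_income": 1720.0})
    print(json.dumps(AUDIT_LOG, indent=2))  # a reviewable record for notice and appeal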

Opening Internet Monopolies to Competition with Data Sharing Mandates


Policy Brief by Claudia Biancotti (PIIE) and Paolo Ciocca (Consob): “Over the past few years, it has become apparent that a small number of technology companies have assembled detailed datasets on the characteristics, preferences, and behavior of billions of individuals. This concentration of data is at the root of a worrying power imbalance between dominant internet firms and the rest of society, reflecting negatively on collective security, consumer rights, and competition. Introducing data sharing mandates, or requirements for market leaders to share user data with other firms and academia, would have a positive effect on competition. As data are a key input for artificial intelligence (AI), more widely available information would help spread the benefits of AI through the economy. On the other hand, data sharing could worsen existing risks to consumer privacy and collective security. Policymakers intending to implement a data sharing mandate should carefully evaluate this tradeoff…(More)”.

Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age


Report by Christina Nemr and William Gangware: “The proliferation of social media platforms has democratized the dissemination and consumption of information, thereby eroding traditional media hierarchies and undercutting claims of authority. In this environment, states and individuals can easily spread disinformation at lightning speed and with serious impact.

Today’s information ecosystem presents significant vulnerabilities that foreign states can exploit, and they revolve around three primary, interconnected elements:

  1. The medium – the platforms on which disinformation flourishes;
  2. The message – what is being conveyed through disinformation; and
  3. The audience – the consumers of such content.

The problem of disinformation is therefore not one that can be solved through any single solution, whether psychological or technological. An effective response to this challenge requires understanding the converging factors of technology, media, and human behavior.

This interdisciplinary review, commissioned by the United States Department of State’s Global Engagement Center, presents a holistic overview of the disinformation landscape by examining 1) psychological vulnerabilities to disinformation, 2) current foreign state-sponsored disinformation and propaganda efforts both abroad and in the United States, 3) social media companies’ efforts to counter disinformation, and 4) knowledge and technology gaps that remain….(More)”.

A compendium of innovation methods


Report by Geoff Mulgan and Kirsten Bound: “Featured in this compendium are just some of the innovation methods we have explored over the last decade. Some, like seed accelerator programmes, we have invested in and studied. Others, like challenge prizes, standards of evidence or public sector labs, we have developed and helped to spread around the world.

Each section gives a simple introduction to the method and describes Nesta’s work in relation to it. In each case, we have also provided links to further relevant resources and inspiration on our website and beyond.

The 13 methods featured are:

  1. Accelerator programmes
  2. Anticipatory regulation
  3. Challenge prizes
  4. Crowdfunding
  5. Experimentation
  6. Futures
  7. Impact investment
  8. Innovation mapping
  9. People Powered Results: the 100 day challenge
  10. Prototyping
  11. Public and social innovation labs
  12. Scaling grants for social innovations
  13. Standards of Evidence…(More)”.

Understanding algorithmic decision-making: Opportunities and challenges


Study by Claude Castelluccia and Daniel Le Métayer for the European Parliament: “While algorithms are hardly a recent invention, they are nevertheless increasingly involved in systems used to support decision-making. These systems, known as ‘ADS’ (algorithmic decision systems), often rely on the analysis of large amounts of personal data to infer correlations or, more generally, to derive information deemed useful to make decisions. The degree of human intervention in the decision-making may vary; humans may even be completely out of the loop in entirely automated systems. In many situations, the impact of the decision on people can be significant, such as access to credit, employment, medical treatment, or judicial sentences, among other things.

Entrusting ADS to make or to influence such decisions raises a variety of ethical, political, legal, and technical issues that must be analysed and addressed with great care. If they are neglected, the expected benefits of these systems may be negated by a variety of different risks for individuals (discrimination, unfair practices, loss of autonomy, etc.), the economy (unfair practices, limited access to markets, etc.), and society as a whole (manipulation, threat to democracy, etc.).

This study reviews the opportunities and risks related to the use of ADS. It presents policy options to reduce the risks and explains their limitations. We sketch some options to overcome these limitations and so benefit from the tremendous possibilities of ADS while limiting the risks related to their use. Beyond providing an up-to-date and systematic review of the situation, the study gives a precise definition of a number of key terms and an analysis of their differences to help clarify the debate. The main focus of the study is the technical aspects of ADS. However, to broaden the discussion, other legal, ethical and social dimensions are considered….(More)”.
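
To make the varying degree of human intervention concrete, here is a deliberately transparent toy ADS: a linear score over a few features, with borderline cases routed to a human reviewer rather than decided automatically. The features, weights and thresholds are invented for illustration; real ADS often rely on far less scrutable statistical or deep-learning models.

    # Hypothetical weights over hypothetical features, for illustration only.
    WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}

    def score(applicant: dict) -> float:
        return sum(w * applicant[k] for k, w in WEIGHTS.items())

    def decide(applicant: dict) -> str:
        s = score(applicant)
        if s >= 1.0:
            return "approve"            # fully automated decision
        if s <= 0.2:
            return "reject"             # fully automated decision
        return "refer to human review"  # human kept in the loop on borderline cases

    print(decide({"income": 2.1, "debt_ratio": 0.9, "years_employed": 3.0}))
    # -> "refer to human review" (score 0.99 falls in the borderline band)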

Seeing, Naming, Knowing


Essay by Nora N. Khan for Brooklyn Rail: “…. Throughout this essay, I use “machine eye” as a metaphor for the unmoored orb, a kind of truly omnidirectional camera (meaning, a camera that can look in every direction and vector that defines the dimensions of a sphere), and as a symbolic shorthand for the sum of four distinct realms in which automated vision is deployed as a service. (Vision as a Service, reads the selling tag for a new AI surveillance camera company.) [10] Those four general realms are:

1. Massive AI systems fueled by the public’s flexible datasets of their personal images, creating a visual culture entirely out of digitized images. 

2. Facial recognition technologies and neural networks improving atop their databases. 

3. The advancement of predictive policing to sort people by types. 

4. The combination of location-based tracking, license plate-reading, and heat sensors to render skein-like, live, evolving maps of people moving, marked as likely to do X.

Though we live the results of its seeing, and its interpretation of its seeing, for now I would hold off on blaming ourselves for this situation. We are, after all, the living instantiations of a few thousand years of such violent seeing globally, enacted through imperialism, colonialism, caste stratification, nationalist purges, internal class struggle, and all the evolving theory to support and galvanize the above. Technology simply recasts, concentrates, and amplifies these “tendencies.” They can be hard to see at first because the eye’s seeing seems innocuous, and is designed to seem so. It is a direct expression of the ideology of software, which reflects its makers’ desires. These makers are lauded as American pioneers, innovators, genius-heroes living in the Bay Area in the late 1970s, vibrating at a highly specific frequency, the generative nexus of failed communalism and an emerging Californian Ideology. That seductive ideology has been exported all over the world, and we are only now contending with its impact.

Because the workings of machine visual culture are so remote from our sense perception, and because it so acutely determines our material (economic, social) and affective futures, I invite you to see underneath the eye’s outer glass shell, its holder, beyond it, to the grid that organizes its “mind.” That mind simulates a strain of ideology about who exactly gets to gather data about those on that grid below, and how that data should be mobilized to predict the movements and desires of the grid dwellers. This mind, a vast computational regime we are embedded in, drives the machine eye. And this computational regime has specific values that determine what is seen, how it is seen, and what that seeing means….(More)”.

OECD survey reveals many people unhappy with public services and benefits


Report by OECD: “Many people in OECD countries believe public services and social benefits are inadequate and hard to reach. More than half say they do not receive their fair share of benefits given the taxes they pay, and two-thirds believe others get more than they deserve. Nearly three out of four people say they want their government to do more to protect their social and economic security.  

These are among the findings of a new OECD survey, “Risks that Matter”, which asked over 22,000 people aged 18 to 70 in 21 countries about their worries and concerns and how well they think their government helps them tackle social and economic risks.

This nationally representative survey finds that falling ill and not being able to make ends meet are often at the top of people’s lists of immediate concerns. Making ends meet is a particularly common worry for those on low incomes and in countries that were hit hard by the financial crisis. Older people are most often worried about their health, while younger people are frequently concerned with securing adequate housing. When asked about the longer-term, across all countries, getting by in old age is the most commonly cited worry.

The survey reveals a dissatisfaction with current social policy. Only a minority are satisfied with access to services like health care, housing, and long-term care. Many believe the government would not be able to provide a proper safety net if they lost their income due to job loss, illness or old age. More than half think they would not be able to easily access public benefits if they needed them.

“This is a wake-up call for policy makers,” said OECD Secretary-General Angel Gurría. “OECD countries have some of the most advanced and generous social protection systems in the world. They spend, on average, more than one-fifth of their GDP on social policies. Yet, too many people feel they cannot count fully on their government when they need help. A better understanding of the factors driving this perception and why people feel they are struggling is essential to making social protection more effective and efficient. We must restore trust and confidence in government, and promote equality of opportunity.”

In every country surveyed except Canada, Denmark, Norway and the Netherlands, most people say that their government does not incorporate the views of people like them when designing social policy. In a number of countries, including Greece, Israel, Lithuania, Portugal and Slovenia, this share rises to more than two-thirds of respondents. This sense of not being part of the policy debate increases at higher levels of education and income, while feelings of injustice are stronger among those from high-income households.

Public perceptions of fairness are worrying. More than half of respondents say they do not receive their fair share of benefits given the taxes they pay, a share that rises to three quarters or more in Chile, Greece, Israel and Mexico. At the same time, people are calling for more help from government. In almost all countries, more than half of respondents say they want the government to do more for their economic and social security. This is especially the case for older respondents and those on low incomes.

Across countries, people are worried about financial security in old age, and most are willing to pay more to support public pension systems… (More)”.

The Future of Government 2030+


Report by Lucia Vesnic Alujevic, Eckhard Stoermer, Jennifer-Ellen Rudkin, Fabiana Scapolo and Lucy Kimbell: “The Future of Government 2030+: A Citizen Centric Perspective on New Government Models project brings citizens to the centre of the scene. The objective of this project is to explore the emerging societal challenges, analyse trends in a rapidly changing digital world and launch an EU-wide debate on possible future government models. To address this, citizen engagement, foresight and design are combined, with recent literature from the field of digital politics and media as a framework. The main research question of the project is: How will citizens, together with other actors, shape governments, policies and democracy in 2030 and beyond? Throughout the highly participatory process, more than 150 citizens, together with CSO, think tank, business and public sector representatives, as well as 100 design students, participated in the creation of future scenarios and concepts. Four scenarios were created from the 20 stories that emerged in the citizen workshops. They served as inspiration for design students to develop 40 FuturGov concepts. Through the FuturGov Engagement Game, the project aims to launch a debate with citizens, businesses, civil society organizations, policy-makers and civil servants in Europe….(More)”.