Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies


The Administrative Conference of the United States: “Artificial intelligence (AI) promises to transform how government agencies do their work. Rapid developments in AI have the potential to reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data, thereby making government performance more efficient and effective. Agencies that use AI to realize these gains will also confront important questions about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, their own capacity to learn over time using AI, and whether the use of AI is even permitted.

These are important issues for public debate and academic inquiry. Yet little is known about how agencies are currently using AI systems beyond a few headline-grabbing examples or surface-level descriptions. Moreover, even amidst growing public and scholarly discussion about how society might regulate government use of AI, little attention has been devoted to how agencies acquire such tools in the first place or oversee their use. In an effort to fill these gaps, the Administrative Conference of the United States (ACUS) commissioned this report from researchers at Stanford University and New York University. The research team included a diverse set of lawyers, law students, computer scientists, and social scientists with the capacity to analyze these cutting-edge issues from technical, legal, and policy angles. The resulting report offers three cuts at federal agency use of AI:

  • a rigorous canvass of AI use at the 142 most significant federal departments, agencies, and sub-agencies (Part I);
  • a series of in-depth but accessible case studies of specific AI applications at seven leading agencies covering a range of governance tasks (Part II); and
  • a set of cross-cutting analyses of the institutional, legal, and policy challenges raised by agency use of AI (Part III)….(More)”

This emoji could mean your suicide risk is high, according to AI


Rebecca Ruiz at Mashable: “Since its founding in 2013, the free mental health support service Crisis Text Line has focused on using data and technology to better aid those who reach out for help. 

Unlike helplines that offer assistance based on the order in which users dialed, texted, or messaged, Crisis Text Line has an algorithm that determines who is in most urgent need of counseling. The nonprofit is particularly interested in learning which emoji and words texters use when their suicide risk is high, so as to quickly connect them with a counselor. Crisis Text Line just released new insights about those patterns. 

Based on its analysis of 129 million messages processed between 2013 and the end of 2019, the nonprofit found that the pill emoji, or 💊, was 4.4 times more likely to end in a life-threatening situation than the word suicide. 

Other words that indicate imminent danger include 800mg, acetaminophen, Excedrin, and antifreeze; those are two to three times more likely than the word suicide to involve an active rescue of the texter. The loudly crying face emoji, or 😭, is similarly high-risk. In general, the words that trigger the greatest alarm suggest the texter has a method or plan to attempt suicide or may be in the process of taking their own life. …(More)".
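
The excerpt describes a triage algorithm that ranks incoming texters by the relative risk of the words and emoji they use, rather than by arrival order. Below is a minimal, purely illustrative sketch of that idea in Python: the term list, the relative-risk weights (loosely anchored to the figures quoted above), and the scoring rule are all assumptions, not Crisis Text Line's actual model.

```python
# Illustrative sketch of a risk-weighted triage queue (NOT Crisis Text Line's
# actual algorithm). Weights are relative risks versus the word "suicide",
# loosely anchored to the figures quoted in the excerpt; they are assumptions.
import heapq
import itertools

RELATIVE_RISK = {
    "suicide": 1.0,        # baseline
    "💊": 4.4,             # pill emoji: 4.4x, per the excerpt
    "😭": 3.0,             # loudly crying face: "similarly high-risk" (assumed value)
    "800mg": 3.0,          # dosage and substance terms: "two to three times" (assumed)
    "acetaminophen": 3.0,
    "excedrin": 2.5,
    "antifreeze": 3.0,
}

def risk_score(message: str) -> float:
    """Score a message by the highest-risk term or emoji it contains."""
    text = message.lower()
    return max((w for term, w in RELATIVE_RISK.items() if term in text), default=0.0)

class TriageQueue:
    """Serve the highest-risk texter first instead of first-come, first-served."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker: equal scores keep arrival order

    def add(self, texter_id: str, message: str) -> None:
        # heapq is a min-heap, so negate the score to pop the highest risk first
        heapq.heappush(self._heap, (-risk_score(message), next(self._order), texter_id))

    def next_texter(self) -> str:
        return heapq.heappop(self._heap)[2]

if __name__ == "__main__":
    queue = TriageQueue()
    queue.add("texter-1", "having a rough week")
    queue.add("texter-2", "thinking about suicide")
    queue.add("texter-3", "just took 800mg of acetaminophen")
    print(queue.next_texter())  # texter-3: dosage terms outrank the baseline word
```

In this toy version, a message mentioning a specific dosage jumps ahead of one that only uses the word "suicide", mirroring the pattern the nonprofit reports.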

Our personal health history is too valuable to be harvested by the tech giants


Eerke Boiten at The Guardian: “…It is clear that the black box society does not only feed on internet surveillance information. Databases collected by public bodies are becoming more and more part of the dark data economy. Last month, it emerged that a data broker in receipt of the UK’s national pupil database had shared its access with gambling companies. This is likely to be the tip of the iceberg; even where initial recipients of shared data might be checked and vetted, it is much harder to oversee who the data is passed on to from there.

Health data, the rich population-wide information held within the NHS, is another such example. Pharmaceutical companies and internet giants have been eyeing the NHS’s extensive databases for commercial exploitation for many years. Google infamously claimed it could save 100,000 lives if only it had free rein with all our health data. If there really is such value hidden in NHS data, do we really want Google to extract it to sell it to us? Google still holds health data that its subsidiary DeepMind Health obtained illegally from the NHS in 2016.

Although many health data-sharing schemes, such as those in the NHS's register of approved data releases, are said to be “anonymised”, this offers only a limited guarantee against abuse.

There is just too much information included in health data that points to other aspects of patients’ lives and existence. If recipients of anonymised health data want to use it to re-identify individuals, they will often be able to do so by combining it, for example, with publicly available information. That this would be illegal under UK data protection law is a small consolation as it would be extremely hard to detect.
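
To make the re-identification risk Boiten describes concrete, the toy sketch below joins an "anonymised" health extract to a public register on a handful of quasi-identifiers. Every record, column name, and the pandas-based approach are invented for illustration; no real NHS release is involved.

```python
# Toy illustration of the linkage risk described above: an "anonymised" health
# extract (no names) joined to a public record on quasi-identifiers.
# All data and column names are invented; this is not a real NHS dataset.
import pandas as pd

anonymised_health = pd.DataFrame({
    "postcode_district": ["LE1", "LE1", "M14"],
    "birth_year":        [1956, 1987, 1990],
    "sex":               ["F", "M", "F"],
    "diagnosis":         ["type 2 diabetes", "depression", "asthma"],
})

public_register = pd.DataFrame({
    "name":              ["A. Smith", "B. Jones", "C. Patel"],
    "postcode_district": ["LE1", "LE1", "M14"],
    "birth_year":        [1956, 1987, 1990],
    "sex":               ["F", "M", "F"],
})

# Join on the quasi-identifiers: where the combination is unique in both
# tables, the "anonymous" diagnosis is now attached to a named individual.
reidentified = anonymised_health.merge(
    public_register, on=["postcode_district", "birth_year", "sex"]
)
print(reidentified[["name", "diagnosis"]])
```

Where a combination of postcode district, birth year, and sex is unique in both tables, the "anonymous" diagnosis ends up attached to a named individual, which is exactly the linkage risk described above.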

It is clear that providing access to public organisations’ data for research purposes can serve the greater good and it is unrealistic to expect bodies such as the NHS to keep this all in-house.

However, there are other methods by which to do this, beyond the sharing of anonymised databases. CeLSIUS, for example, holds UK census information spanning many years in a physical facility where researchers can interrogate the data under tightly controlled conditions for specific registered purposes.

These arrangements prevent abuse, such as through deanonymisation, do not have the problem of shared data being passed on to third parties, and ensure complete transparency in the use of the data. Online analogues of such set-ups do not yet exist, but that is where the future of safe and transparent access to sensitive data lies….(More)”.
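
One way to picture the online analogue Boiten calls for is a query gateway that keeps row-level data inside the enclave, releases only aggregates above a minimum cell size, and logs every query for transparency. The sketch below is a speculative illustration under those assumptions; the threshold, the logging scheme, and all names are invented and do not describe CeLSIUS or any NHS system.

```python
# Illustrative sketch of a controlled-access query gateway: researchers submit
# aggregate queries, row-level data never leaves the enclave, small cells are
# suppressed, and every query is logged for transparency. All details assumed.
from collections import Counter
from datetime import datetime, timezone

MIN_CELL_SIZE = 10  # suppress any group smaller than this (assumed threshold)

class QueryGateway:
    def __init__(self, records):
        self._records = records          # row-level data stays inside the enclave
        self.audit_log = []              # published so use of the data is transparent

    def count_by(self, researcher: str, purpose: str, column: str) -> dict:
        """Return counts per category, suppressing small groups."""
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": researcher,
            "purpose": purpose,
            "query": f"count_by({column})",
        })
        counts = Counter(r[column] for r in self._records)
        return {k: v for k, v in counts.items() if v >= MIN_CELL_SIZE}

if __name__ == "__main__":
    rows = [{"region": "North", "diagnosis": "asthma"}] * 12 \
         + [{"region": "South", "diagnosis": "asthma"}] * 3
    gateway = QueryGateway(rows)
    # Only the North count is released; the South cell (3) is suppressed.
    print(gateway.count_by("dr-a", "registered study 42", "region"))
```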

How to Put the Data Subject's Sovereignty into Practice. Ethical Considerations and Governance Perspectives



Paper by Peter Dabrock: “Ethical considerations and governance approaches of AI are at a crossroads. Either one tries to convey the impression that one can bring back a status quo ante of our given “onlife” era, or one accepts the need to get responsibly involved in a digital world in which informational self-determination can no longer be safeguarded and fostered through the old-fashioned data protection principles of informed consent, purpose limitation, and data economy. The main focus of the talk is on how, under the given conditions of AI and machine learning, data sovereignty (interpreted as controllability [not control (!)] of the data subject over the use of her data throughout the entire data processing cycle) can be strengthened without hindering either the innovation dynamics of the digital economy or the social cohesion of fully digitized societies. In order to put this approach into practice, the talk combines a presentation of the concept of data sovereignty put forward by the German Ethics Council with recent research trends in effectively applying the AI ethics principles of explainability and enforceability…(More)”.

Realizing the Potential of AI Localism


Stefaan G. Verhulst and Mona Sloane at Project Syndicate: “Every new technology rides a wave from hype to dismay. But even by the usual standards, artificial intelligence has had a turbulent run. Is AI a society-renewing hero or a jobs-destroying villain? As always, the truth is not so categorical.

As a general-purpose technology, AI will be what we make of it, with its ultimate impact determined by the governance frameworks we build. As calls for new AI policies grow louder, there is an opportunity to shape the legal and regulatory infrastructure in ways that maximize AI’s benefits and limit its potential harms.

Until recently, AI governance has been discussed primarily at the national level. But most national AI strategies – particularly China’s – are focused on gaining or maintaining a competitive advantage globally. They are essentially business plans designed to attract investment and boost corporate competitiveness, usually with an added emphasis on enhancing national security.

This singular focus on competition has meant that the framing of rules and regulations for AI has been ignored. But cities are increasingly stepping into the void, with New York, Toronto, Dubai, Yokohama, and others serving as “laboratories” for governance innovation. Cities are experimenting with a range of policies, from bans on facial-recognition technology and certain other AI applications to the creation of data collaboratives. They are also making major investments in responsible AI research, localized high-potential tech ecosystems, and citizen-led initiatives.

This “AI localism” is in keeping with the broader trend in “New Localism,” as described by public-policy scholars Bruce Katz and the late Jeremy Nowak. Municipal and other local jurisdictions are increasingly taking it upon themselves to address a broad range of environmental, economic, and social challenges, and the domain of technology is no exception.

For example, New York, Seattle, and other cities have embraced what Ira Rubinstein of New York University calls “privacy localism,” by filling significant gaps in federal and state legislation, particularly when it comes to surveillance. Similarly, in the absence of a national or global broadband strategy, many cities have pursued “broadband localism,” by taking steps to bridge the service gap left by private-sector operators.

As a general approach to problem solving, localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the tradeoffs involved. By calibrating algorithms and AI policies for local conditions, policymakers have a better chance of creating positive feedback loops that will result in greater effectiveness and accountability….(More)”.

Smarter government or data-driven disaster: the algorithms helping control local communities


Release by MuckRock: “What is the chance you, or your neighbor, will commit a crime? Should the government change a child’s bus route? Add more police to a neighborhood or take some away?

Everyday government decisions, from bus routes to policing, used to be based on limited information and human judgment. Governments now use the ability to collect and analyze hundreds of data points every day to automate many of their decisions.

Does handing government decisions over to algorithms save time and money? Can algorithms be fairer or less biased than human decision making? Do they make us safer? Automation and artificial intelligence could improve the notorious inefficiencies of government, but they could also exacerbate existing errors in the data being used to power them.

MuckRock and the Rutgers Institute for Information Policy & Law (RIIPL) have compiled a collection of algorithms used in communities across the country to automate government decision-making.

Go right to the database.

We have also compiled policies and other guiding documents local governments use to make room for the future use of algorithms. You can find those as a project on DocumentCloud.

View policies on smart cities and technologies

These collections are a living resource and attempt to communally collect records and known instances of automated decision making in government….(More)”.

An Algorithm That Grants Freedom, or Takes It Away


Cade Metz and Adam Satariano at The New York Times: “…In Philadelphia, an algorithm created by a professor at the University of Pennsylvania has helped dictate the experience of probationers for at least five years.

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

Nearly every state in America has turned to this new sort of governance algorithm, according to the Electronic Privacy Information Center, a nonprofit dedicated to digital rights. Algorithm Watch, a watchdog in Berlin, has identified similar programs in at least 16 European countries.

As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.

They are angered by a growing dependence on automated systems that are taking humans and transparency out of the process. It is often not clear how the systems are making their decisions. Is gender a factor? Age? ZIP code? It’s hard to say, since many states and countries have few rules requiring that algorithm-makers disclose their formulas.

They also worry that the biases — involving race, class and geography — of the people who create the algorithms are being baked into these systems, as ProPublica has reported. In San Jose, Calif., where an algorithm is used during arraignment hearings, an organization called Silicon Valley De-Bug interviews the family of each defendant, takes this personal information to each hearing and shares it with defenders as a kind of counterbalance to algorithms.

Two community organizers, the Media Mobilizing Project in Philadelphia and MediaJustice in Oakland, Calif., recently compiled a nationwide database of prediction algorithms. And Community Justice Exchange, a national organization that supports community organizers, is distributing a 50-page guide that advises organizers on how to confront the use of algorithms.

The algorithms are supposed to reduce the burden on understaffed agencies, cut government costs and — ideally — remove human bias. Opponents say governments haven’t shown much interest in learning what it means to take humans out of the decision making. A recent United Nations report warned that governments risked “stumbling zombie-like into a digital-welfare dystopia.”…(More)”.

Enchanted Determinism: Power without Responsibility in Artificial Intelligence


Paper by Alexander Campolo and Kate Crawford: “Deep learning techniques are growing in popularity within the field of artificial intelligence (AI). These approaches identify patterns in large-scale datasets and make classifications and predictions, which have been celebrated as more accurate than those of humans. But for a number of reasons, including the nonlinear path from inputs to outputs, there is a dearth of theory that can explain why deep learning techniques work so well at pattern detection and prediction. Claims about “superhuman” accuracy and insight, paired with the inability to fully explain how these results are produced, form a discourse about AI that we call enchanted determinism. To analyze enchanted determinism, we situate it within a broader epistemological diagnosis of modernity: Max Weber’s theory of disenchantment. Deep learning occupies an ambiguous position in this framework. On one hand, it represents a complex form of technological calculation and prediction, phenomena Weber associated with disenchantment.

On the other hand, both deep learning experts and observers deploy enchanted, magical discourses to describe these systems’ uninterpretable mechanisms and counter-intuitive behavior. The combination of predictive accuracy and mysterious or unexplainable properties results in myth-making about deep learning’s transcendent, superhuman capacities, especially when it is applied in social settings. We analyze how discourses of magical deep learning produce techno-optimism, drawing on case studies from game-playing, adversarial examples, and attempts to infer sexual orientation from facial images. Enchantment shields the creators of these systems from accountability while its deterministic, calculative power intensifies social processes of classification and control….(More)”.

Whose Side are Ethics Codes On?


Paper by Anne L. Washington and Rachel S. Kuo: “The moral authority of ethics codes stems from an assumption that they serve a unified society, yet this ignores the political aspects of any shared resource. The sociologist Howard S. Becker challenged researchers to clarify their power and responsibility in the classic essay “Whose Side Are We On?”. Building on Becker’s hierarchy of credibility, we report on a critical discourse analysis of data ethics codes and emerging conceptualizations of beneficence, or the “social good”, of data technology. The analysis revealed that ethics codes from corporations and professional associations conflated consumers with society and were largely silent on agency. Interviews with community organizers about social change in the digital era supplement the analysis, surfacing the limits of technical solutions to concerns of marginalized communities. Given evidence that highlights the gulf between the documents and lived experiences, we argue that ethics codes that elevate consumers may simultaneously subordinate the needs of vulnerable populations. Understanding contested digital resources is central to the emerging field of public interest technology. We introduce the concept of digital differential vulnerability to explain disproportionate exposures to harm within data technology and suggest recommendations for future ethics codes….(More)”.

Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI


Paper by Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar: “The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these “AI principles,” there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.

To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.

Underlying this “normative core,” our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus…(More)”.