Four Principles to Make Data Tools Work Better for Kids and Families


Blog by the Annie E. Casey Foundation: “Advanced data analytics are deeply embedded in the operations of public and private institutions and shape the opportunities available to youth and families. Whether these tools benefit or harm communities depends on their design, use and oversight, according to a report from the Annie E. Casey Foundation.

Four Principles to Make Advanced Data Analytics Work for Children and Families examines the growing field of advanced data analytics and offers guidance to steer the use of big data in social programs and policy….

The Foundation report identifies four principles — complete with examples and recommendations — to help steer the growing field of data science in the right direction.

Four Principles for Data Tools

  1. Expand opportunity for children and families. Most established uses of advanced analytics in education, social services and criminal justice focus on problems facing youth and families. Promising uses of advanced analytics go beyond mitigating harm and help to identify so-called odds beaters and new opportunities for youth.
    • Example: The Children’s Data Network at the University of Southern California is helping the state’s departments of education and social services explore why some students succeed despite negative experiences and what protective factors merit more investment.
    • Recommendation: Government and its philanthropic partners need to test whether novel data science applications can create new insights and when it is best to apply them.
       
  2. Provide transparency and evidence. Advanced analytical tools must earn and maintain a social license to operate. The public has a right to know what decisions these tools are informing or automating, how they have been independently validated, and who is accountable for answering and addressing concerns about how they work.
    • Recommendations: Local and state task forces can be excellent laboratories for testing how to engage youth and communities in discussions about advanced analytics applications and the policy frameworks needed to regulate their use. In addition, public and private funders should avoid supporting private algorithms whose design and performance are shielded by trade secrecy claims. Instead, they should fund and promote efforts to develop, evaluate and adapt transparent and effective models.
       
  3. Empower communities. The field of advanced data analytics often treats children and families as clients, patients and consumers. Put to better use, these same tools can help elucidate and reform the systems acting upon children and families. For this shift to occur, institutions must focus analyses and risk assessments on structural barriers to opportunity rather than individual profiles.
    • Recommendation: In debates about the use of data science, greater investment is needed to amplify the voices of youth and their communities.
       
  4. Promote equitable outcomes. Useful advanced analytics tools should promote more equitable outcomes for historically disadvantaged groups. New investments in advanced analytics are only worthwhile if they aim to correct the well-documented bias embedded in existing models.
    • Recommendations: Advanced analytical tools should only be introduced when they reduce the opportunity deficit for disadvantaged groups — a move that will take organizing and advocacy to establish and new policy development to institutionalize. Philanthropy and government also have roles to play in helping communities test and improve tools and examples that already exist….(More)”.

How Can Policy Makers Predict the Unpredictable?


Essay by Meg King and Aaron Shull: “Policy makers around the world are leaning on historical analogies to try to predict how artificial intelligence, or AI — which, ironically, is itself a prediction technology — will develop. They are searching for clues to inform and create appropriate policies to help foster innovation while addressing possible security risks. Much in the way that electrical power completely changed our world more than a century ago — transforming every industry from transportation to health care to manufacturing — AI’s power could effect similar, if not even greater, disruption.

Whether it is the “next electricity” or not, one fact all can agree on is that AI is not a thing in itself. Most authors contributing to this essay series focus on the concept that AI is a general-purpose technology — or GPT — that will enable many applications across a variety of sectors. While AI applications are expected to have a significantly positive impact on our lives, those same applications will also likely be abused or manipulated by bad actors. Setting rules at both the national and the international level — in careful consultation with industry — will be crucial for ensuring that AI offers new capabilities and efficiencies safely.

Situating this discussion, though, requires a look back in order to determine where we may be going. While AI is not new — Marvin Minsky developed what is widely believed to be the first neural network learning machine in the early 1950s — its scale, scope, speed of adoption and potential use cases today highlight a number of new challenges. There are now many ominous signs pointing to extreme danger should AI be deployed in an unchecked manner, particularly in military applications, as well as worrying trends in the commercial context related to potential discrimination, the undermining of privacy, and the upending of traditional employment structures and economic models….(More)”

The necessity of judgment


Essay by Jeff Malpas in AI and Society: “In 2016, the Australian Government launched an automated debt recovery system through Centrelink—its Department of Human Services. The system, which came to be known as ‘Robodebt’, matched the tax records of welfare recipients with their declared incomes as held by the Department and then sent out debt notices to recipients demanding payment. The entire system was computerized, and many of those receiving debt notices complained that the demands for repayment they received were false or inaccurate as well as unreasonable—all the more so given that those being targeted were, almost by definition, those in already vulnerable circumstances. The system provoked enormous public outrage and was subjected to successful legal challenge; after the system was declared unlawful, the Government paid back all of the payments that had been received and eventually, after much prompting, issued an apology.
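
To make the matching mechanism concrete, the sketch below is a minimal, hypothetical illustration (in Python) of the kind of automated income-matching rule the excerpt describes. The record fields, the tolerance parameter, and the way a “debt” is computed are illustrative assumptions only, not the actual Centrelink logic.

    from dataclasses import dataclass

    @dataclass
    class Recipient:
        name: str
        tax_office_income: float   # annual income as held in tax records (assumed field)
        declared_income: float     # income the recipient reported to the agency (assumed field)

    def flag_debt(recipient: Recipient, tolerance: float = 0.0) -> float:
        """Return a presumed 'debt' whenever the two income figures disagree.

        A rule this crude ignores irregular earnings and mismatched reporting
        periods, which is one way a fully automated system can issue false or
        inaccurate repayment demands with no human judgment in the loop.
        """
        discrepancy = recipient.tax_office_income - recipient.declared_income
        return discrepancy if discrepancy > tolerance else 0.0

    # Example: any discrepancy triggers an automatically issued notice.
    person = Recipient("A. Example", tax_office_income=30_000, declared_income=24_000)
    debt = flag_debt(person)
    if debt:
        print(f"Automated notice issued: repay ${debt:,.2f}")  # sent without human review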

The Robodebt affair is characteristic of a more general tendency to shift to systems of automated decision-making across both the public and the private sector and to do so even when those systems are flawed and known to be so. On the face of it, this shift is driven by the belief that automated systems have the capacity to deliver greater efficiencies and economies—in the Robodebt case, to reduce costs by recouping and reducing social welfare payments. In fact, the shift is characteristic of a particular alliance between digital technology and a certain form of contemporary bureaucratised capitalism. In the case of the automated systems we see in governmental and corporate contexts—and in many large organisations—automation is a result both of the desire on the part of software, IT, and consultancy firms to increase their customer base and expand the scope of their products and sales, and of the desire on the part of governments and organisations to increase control at the same time as they reduce their reliance on human judgment and capacity. The fact is, such systems seldom deliver the efficiencies or economies they are assumed to bring, and they also give rise to significant additional costs in terms of their broader impact and consequences, but the imperatives of sales and seemingly increased control (as well as an irrational belief in the benefits of technological solutions) override any other consideration. The turn towards automated systems like Robodebt is, as is now widely recognised, a common feature of contemporary society. To look to a completely different domain, new military technologies are being developed to provide drone weapon systems with the capacity to identify potential threats and defend themselves against them. The development is spawning a whole new field of military ethics based entirely around the putative ‘right to self-defence’ of automated weapon systems.

In both cases, the drone weapon system and Robodebt, we have instances of the development of automated systems that seem to allow for a form of ‘judgment’ that appears to operate independently of human judgment—hence the emphasis on these systems as autonomous. One might argue—and typically it is so argued—that any flaws that such systems currently present can be overcome either through the provision of more accurate information or through the development of more complex forms of artificial intelligence….(More)”.

AI’s Wide Open: A.I. Technology and Public Policy


Paper by Lauren Rhue and Anne L. Washington: “Artificial intelligence promises predictions and data analysis to support efficient solutions for emerging problems. Yet, quickly deploying AI comes with a set of risks. Premature artificial intelligence may pass internal tests but has little resilience under normal operating conditions. This Article will argue that regulation of early and emerging artificial intelligence systems must address the management choices that lead to releasing the system into production. First, we present examples of premature systems in the Boeing 737 Max, the 2020 coronavirus pandemic public health response, and autonomous vehicle technology. Second, the analysis highlights relevant management practices found in our examples of premature AI. Our analysis suggests that redundancy is critical to protecting the public interest. Third, we offer three points of context for premature AI to better assess the role of management practices.

AI in the public interest should: 1) include many sensors and signals; 2) emerge from a broad range of sources; and 3) be legible to the last person in the chain. Finally, this Article will close with a series of policy suggestions based on this analysis. As we develop regulation for artificial intelligence, we need to cast a wide net to identify how problems develop within the technologies and through organizational structures….(More)”.

The Reasonable Robot: Artificial Intelligence and the Law


Book by Ryan Abbott: “AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law….(More)”.

Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities


Paper by Lydia X. Z. Brown, Michelle Richardson, Ridhi Shetty, and Andrew Crawford: “Governments are increasingly turning to algorithms to determine whether and to what extent people should receive crucial benefits for programs like Medicaid, Medicare, unemployment, and Social Security Disability. Billed as a way to increase efficiency and root out fraud, these algorithm-driven decision-making tools are often implemented without much public debate and are incredibly difficult to understand once underway. Reports from people on the ground confirm that the tools are frequently reducing and denying benefits, often with unfair and inhumane results.

Benefits recipients are challenging these tools in court, arguing that flaws in the programs’ design or execution violate their due process rights, among other claims. These cases are some of the few active courtroom challenges to algorithm-driven decision-making, producing important precedent about people’s right to notice, explanation, and other procedural due process safeguards when algorithm-driven decisions are made about them. As the legal and policy world continues to recognize the outsized impact of algorithm-driven decision-making in various aspects of our lives, public benefits cases provide important insights into how such tools can operate; the risks of errors in design and execution; and the devastating human toll when tools are adopted without effective notice, input, oversight, and accountability. 

This report analyzes lawsuits that have been filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the various cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court. It draws on direct interviews with attorneys who have litigated these cases and plaintiffs who sought to vindicate their rights in court – in some instances suing not only for themselves, but on behalf of similarly situated people. The attorneys work in legal aid offices, civil rights litigation shops, law school clinics, and disability protection and advocacy offices. The cases cover a range of benefits issues and have netted mixed results.

People with disabilities experience disproportionate and particular harm because of unjust algorithm-driven decision-making, and we have attempted to center disabled people’s stories and cases in this paper. As disabled people fight for rights inside and outside the courtroom on a wide range of issues, we focus on litigation and highlight the major legal theories for challenging improper algorithm-driven benefit denials in the U.S. 

The good news is that in some cases, plaintiffs are successfully challenging improper adverse benefits decisions with Constitutional, statutory, and administrative claims. But like other forms of civil rights and impact litigation, the bad news is that relief can be temporary and is almost always delayed. Litigation must therefore work in tandem with the development of new processes driven by people who require access to public assistance and whose needs are centered in these processes. We hope this contribution informs not only the development of effective litigation, but a broader public conversation about the thoughtful design, use, and oversight of algorithm-driven decision-making systems….(More)”.

Artificial intelligence, transparency, and public decision-making


Paper by Karl de Fine Licht & Jenny de Fine Licht: “The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively made decisions to fears of the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency when it comes to how the general public come to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring….(More)”.

Automating Society Report 2020


Bertelsmann Stiftung: “When launching the first edition of this report, we decided to call it “Automating Society”, as automated decision-making (ADM) systems in Europe were mostly new, experimental, and unmapped – and, above all, the exception rather than the norm.

This situation has changed rapidly, as clearly shown by more than 100 use cases of automated decision-making systems in 16 European countries, compiled by a research network for the 2020 edition of the Automating Society report by Bertelsmann Stiftung and AlgorithmWatch. The report shows that even though algorithmic systems are increasingly being used by public administration and private companies, there is still a lack of transparency, oversight and competence.

The stubborn opacity surrounding the ever-increasing use of ADM systems has made it all the more urgent that we continue to increase our efforts. Therefore, we have added four countries (Estonia, Greece, Portugal, and Switzerland) to the 12 we already analyzed in the previous edition of this report, bringing the total to 16 countries. While far from exhaustive, this allows us to provide a broader picture of the ADM scenario in Europe. Considering the impact these systems may have on everyday life, and how profoundly they challenge our intuitions – if not our norms and rules – about the relationship between democratic governance and automation, we believe this is an essential endeavor….(More)”.

Algorithm Tips


About: “Algorithm Tips is here to help you start investigating algorithmic decision-making power in society.

This site offers a database of leads that you can search and filter. It’s a curated set of algorithms being used across the US government at the federal, state, and local levels. You can subscribe to alerts for when new algorithms matching your interests are found. For details on our curation methodology, see here.

We also provide resources such as example investigations, methodological tips, and guidelines for public records requests related to algorithms.

Finally, we blog about some of the more interesting examples of algorithms we’ve uncovered in our research….(More)”.

Understanding Bias in Facial Recognition Technologies


Paper by David Leslie: “Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems on impacted individuals and communities.

Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. They also caution that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation. Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring.

In this explainer, I focus on one central aspect of this debate: the role that dynamics of bias and discrimination play in the development and deployment of FDRTs. I examine how historical patterns of discrimination have made inroads into the design and implementation of FDRTs from their very earliest moments. And I explain the ways in which the use of biased FDRTs can lead to distributional and recognitional injustices. I also describe how certain complacent attitudes of innovators and users toward redressing these harms raise serious concerns about expanding future adoption. The explainer concludes with an exploration of broader ethical questions around the potential proliferation of pervasive face-based surveillance infrastructures and makes some recommendations for cultivating more responsible approaches to the development and governance of these technologies….(More)”.