AI Accountability Policy Report


Report by NTIA: “Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.

Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm….


The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs…(More)”.

[Graphic: the AI Accountability Chain model]

A.I.-Generated Garbage Is Polluting Our Culture


Article by Eric Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.
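The measurement behind the study Hoel cites is simple in spirit: compare how often telltale adjectives appear in one year’s reviews against the prior year’s. Below is a minimal Python sketch of that kind of frequency comparison; the file names, tokenization, and smoothing are illustrative assumptions, not the study’s actual pipeline.

```python
from collections import Counter
import re

def rates_per_million(text: str) -> Counter:
    """Frequency of each word per million words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return Counter({w: c * 1_000_000 / total for w, c in Counter(words).items()})

# Hypothetical inputs: plain-text dumps of one conference's peer reviews, by year.
with open("reviews_2022.txt") as f:
    before = rates_per_million(f.read())
with open("reviews_2023.txt") as f:
    after = rates_per_million(f.read())

# Year-over-year ratio of usage rates, smoothed so an absent word doesn't divide by zero.
for word in ["meticulous", "commendable", "intricate"]:
    print(f"{word}: {(after[word] + 1) / (before[word] + 1):.1f}x year-over-year")
```

A large jump for a handful of LLM-favored words is circumstantial evidence on its own — one reason the deadline correlation noted above matters.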

Facial Recognition Technology: Current Capabilities, Future Prospects, and Governance


Report by the National Academies of Sciences, Engineering, and Medicine: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

Bring on the Policy Entrepreneurs


Article by Erica Goldman: “Teaching early-career researchers the skills to engage in the policy arena could prepare them for a lifetime of high-impact engagement and invite new perspectives into the democratic process.

In the first six months of the COVID-19 pandemic, the scientific literature worldwide was flooded with research articles, letters, reviews, notes, and editorials related to the virus. One study estimates that a staggering 23,634 unique documents were published between January 1 and June 30, 2020, alone.

Making sense of that emerging science was an urgent challenge. As governments all over the world scrambled to get up-to-date guidelines to hospitals and information to an anxious public, Australia stood apart in its readiness to engage scientists and decisionmakers collaboratively. The country used what was called a “living evidence” approach to synthesizing new information, making it available—and helpful—in real time.

Each week during the pandemic, the Australian National COVID‑19 Clinical Evidence Taskforce came together to evaluate changes in the scientific literature base. They then spoke with a single voice to the Australian clinical community so clinicians had rapid, evidence-based, and nationally agreed-upon guidelines to provide the clarity they needed to care for people with COVID-19.

This new model for consensus-aligned, evidence-based decisionmaking helped Australia navigate the pandemic and build trust in the scientific enterprise, but it did not emerge overnight. It took years of iteration and effort to get the living evidence model ready to meet the moment; the crisis of the pandemic opened a policy window that living evidence was poised to surge through. Australia’s example led the World Health Organization and the United Kingdom’s National Institute for Health and Care Excellence to move toward making living evidence models a pillar of decisionmaking for all their health care guidelines. On its own, this is an incredible story, but it also reveals a tremendous amount about how policies get changed…(More)”.

Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity


Report by Erik Brynjolfsson, Adam Thierer and Daron Acemoglu: “Experts and the media tend to overestimate technology’s negative impact on employment. Case studies suggest that fears of technology-induced unemployment are often exaggerated, as evidenced by the McKinsey Global Institute reversing its AI forecasts and by job growth in occupations predicted to be at high risk of automation.

Flexible work arrangements, technical recertification, and creative apprenticeship models offer real-time learning and adaptable skills development to prepare workers for future labor market and technological changes.

AI can potentially generate new employment opportunities, but the complex transition for workers displaced by automation—marked by the need for retraining and credentialing—indicates that the productivity benefits may not adequately compensate for job losses, particularly among low-skilled workers.

Instead of resorting to conflictual relationships, labor unions in the US must work with employers to support firm automation while simultaneously advocating for worker skill development, creating a competitive business enterprise built on strong worker representation similar to that found in Germany…(More)”.

Meta to shut off data access to journalists


Article by Sara Fischer: “Meta plans to officially shutter CrowdTangle, the analytics tool widely used by journalists and researchers to see what’s going viral on Facebook and Instagram, the company’s president of global affairs Nick Clegg told Axios in an interview.

Why it matters: The company plans to instead offer select researchers access to a set of new data tools, but news publishers, journalists or anyone with commercial interests will not be granted access to that data.

The big picture: The effort comes amid a broader pivot by Meta away from news and politics and toward user-generated viral videos.

  • Meta acquired CrowdTangle in 2016 at a time when publishers were heavily reliant on the tech giant for traffic.
  • In recent years, Meta has stopped investing in the tool, making it less reliable.

The new research tools include Meta’s Content Library, which it launched last year, and an API, or backend interface used by developers.

  • Both tools offer researchers access to huge swaths of data from publicly accessible content across Facebook and Instagram.
  • The tools are available in 180 languages and offer global data.
  • Researchers must apply for access to those tools through the Inter-university Consortium for Political and Social Research at the University of Michigan, which will vet their requests…(More)”.
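For researchers weighing an application, access of this kind typically means paging through a cursor-based web API. The Python sketch below illustrates that general pattern only; the endpoint, parameters, and response shape are hypothetical placeholders, not Meta’s actual Content Library API, whose official documentation governs real access.

```python
import requests

BASE_URL = "https://api.example.com/public-content"  # placeholder, not a real Meta endpoint
ACCESS_TOKEN = "YOUR_RESEARCHER_TOKEN"  # hypothetically issued after ICPSR vetting

def fetch_public_posts(query: str, max_pages: int = 5) -> list:
    """Page through publicly accessible posts matching a search term."""
    url, params, results = BASE_URL, {"q": query, "access_token": ACCESS_TOKEN}, []
    for _ in range(max_pages):
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        results.extend(payload.get("data", []))
        # Cursor-based pagination: follow the 'next' link until it runs out.
        url = payload.get("paging", {}).get("next")
        if not url:
            break
        params = {}  # a 'next' URL typically embeds the query and cursor already
    return results

posts = fetch_public_posts("local news")
print(f"Retrieved {len(posts)} public posts")
```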

How to Run a Public Records Audit with a Team of Students


Article by Lam Thuy Vo: “…The Markup (like many other organizations) uses public record requests as an important investigative tool, and we’ve published tips for fellow journalists on how to best craft their requests for specific investigations. But public record laws vary based on where a government institution is located. Generally, government institutions are required to release documents to anyone who requests them, except when information falls under a specific exemption, such as information that would invade an individual’s privacy or reveal trade secrets. Federal institutions are governed by the Freedom of Information Act (FOIA), but state and local government agencies have their own freedom of information laws, and they aren’t all identical.

Public record audits take a step back. By sending the same freedom of information (FOI) request to agencies around the country, audits can help journalists, researchers, and everyday people track which agencies will release records and which may not, and whether they’re complying with state laws. According to the National Freedom of Information Coalition, “audits have led to legislative reforms and the establishment of ombudsman positions to represent the public’s interests.”

The basics of auditing are simple: Send the same FOI request to different government agencies, document how you followed up, and document the outcome. Here’s how we coordinated this process with student reporters…(More)”.
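That send-follow-up-record loop lends itself to a simple shared log that every participant updates the same way. Here is a minimal Python sketch of one; the column names and example values are assumptions for illustration, not The Markup’s actual tracking template.

```python
import csv
import os
from datetime import date

FIELDS = ["agency", "state", "date_sent", "date_followed_up",
          "date_responded", "outcome", "notes"]

def log_request(path: str, **row) -> None:
    """Append one agency's FOI request status to a shared CSV audit log."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({field: row.get(field, "") for field in FIELDS})

# Each auditor logs the identical request sent to a different agency.
log_request("foi_audit.csv",
            agency="Example County Sheriff's Office", state="IL",
            date_sent=date(2024, 3, 1).isoformat(),
            outcome="pending")
```

Keeping the fields identical across agencies is what turns scattered requests into an audit: the same request, the same log, and comparable outcomes.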

Advancing Equitable AI in the US Social Sector


Article by Kelly Fitzsimmons: “…when developed thoughtfully and with equity in mind, AI-powered applications have great potential to help drive stronger and more equitable outcomes for nonprofits, particularly in the following three areas.

1. Closing the data gap. A widening data divide between the private and social sectors threatens to reduce the effectiveness of nonprofits that provide critical social services in the United States and leave those they serve without the support they need. As Kriss Deiglmeir wrote in a recent Stanford Social Innovation Review essay, “Data is a form of power. And the sad reality is that power is being held increasingly by the commercial sector and not by organizations seeking to create a more just, sustainable, and prosperous world.” AI can help break this trend by democratizing the process of generating and mobilizing data and evidence, thus making continuous research and development, evaluation, and data analysis more accessible to a wider range of organizations—including those with limited budgets and in-house expertise.

Take Quill.org, a nonprofit that provides students with free tools that help them build reading comprehension, writing, and language skills. Quill.org uses an AI-powered chatbot that asks students to respond to open-ended questions based on a piece of text. It then reviews student responses and offers suggestions for improvement, such as writing with clarity and using evidence to support claims. This technology makes high-quality critical thinking and writing support available to students and schools that might not otherwise have access to them. As Peter Gault, Quill.org’s founder and executive director, recently shared, “There are 27 million low-income students in the United States who struggle with basic writing and find themselves disadvantaged in school and in the workforce. … By using AI to provide students with immediate feedback on their writing, we can help teachers support millions of students on the path to becoming stronger writers, critical thinkers, and active members of our democracy.”…(More)”.

Power and Governance in the Age of AI


Reflections by several experts: “The best way to think about ChatGPT is as the functional equivalent of expensive private education and tutoring. Yes, there is a free version, but there is also a paid subscription that gets you access to the latest breakthroughs and a more powerful version of the model. More money gets you more power and privileged access. As a result, in my courses at Middlebury College this spring, I was obliged to include the following statement in my syllabus:

“Policy on the use of ChatGPT: You may all use the free version however you like and are encouraged to do so. For purposes of equity, use of the subscription version is forbidden and will be considered a violation of the Honor Code. Your professor has both versions and knows the difference. To ensure you are learning as much as possible from the course readings, careful citation will be mandatory in both your informal and formal writing.”

The United States fails to live up to its founding values when it supports a luxury brand-driven approach to educating its future leaders that is accessible to the privileged and a few select lottery winners. One such “winning ticket” student in my class this spring argued that the quality-education-for-all issue was of such importance for the future of freedom that he would trade his individual good fortune at winning an education at Middlebury College for the elimination of ALL elite education in the United States so that quality education could be a right rather than a privilege.

A democracy cannot function if the entire game seems to be rigged and bought by elites. This is true for the United States and for democracies in the making or under challenge around the world. Consequently, in partnership with other liberal democracies, the U.S. government must do whatever it can to render both public and private governance more transparent and accountable. We should not expect authoritarian states to help us uphold liberal democratic values, nor should we expect corporations to do so voluntarily…(More)”.

Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging


Article by Peter Swire and Samm Sacks: “A new executive order issued today contains multiple provisions, most notably limiting bulk sales of personal data to “countries of concern.” The order has admirable national security goals but quite possibly would be ineffective and may be counterproductive. There are serious questions about both the substance and the messaging of the order. 

The new order combines two attractive targets for policy action. First, in this era of bipartisan concern about China, the new order would regulate transactions specifically with “countries of concern,” notably China, but also others such as Iran and North Korea. A key rationale for the order is to prevent China from amassing sensitive information about Americans, for use in tracking and potentially manipulating military personnel, government officials, or anyone else of interest to the Chinese regime. 

Second, the order targets bulk sales, to countries of concern, of sensitive personal information by data brokers, such as genomic, biometric, and precise geolocation data. The large and growing data broker industry has come under well-deserved bipartisan scrutiny for privacy risks. Congress has held hearings and considered bills to regulate such brokers. California has created a data broker registry and last fall passed the Delete Act to enable individuals to require deletion of their personal data. In January, the Federal Trade Commission issued an order prohibiting data broker Outlogic from sharing or selling sensitive geolocation data, finding that the company had acted without customer consent, in an unfair and deceptive manner. In light of these bipartisan concerns, a new order targeting both China and data brokers has a nearly irresistible political logic.

Accurate assessment of the new order, however, requires an understanding of this order as part of a much bigger departure from the traditional U.S. support for free and open flows of data across borders. Recently, in part for national security reasons, the U.S. has withdrawn its traditional support in the World Trade Organization (WTO) for free and open data flows, and the Department of Commerce has announced a proposed rule, in the name of national security, that would regulate U.S.-based cloud providers when selling to foreign countries, including for purposes of training artificial intelligence (AI) models. We are concerned that these initiatives may not sufficiently account for the national security advantages of the long-standing U.S. position and may have negative effects on the U.S. economy.

Despite the attractiveness of the regulatory targets—data brokers and countries of concern—U.S. policymakers should be cautious as they implement this order and the other current policy changes. As discussed below, there are some possible privacy advances as data brokers have to become more careful in their sales of data, but a better path would be to ensure broader privacy and cybersecurity safeguards to better protect data and critical infrastructure systems from sophisticated cyberattacks from China and elsewhere…(More)”.