Computing Power and the Governance of AI


Blog by Lennart Heim, Markus Anderljung, Emma Bluemke, and Robert Trager: “Computing power – compute for short – is a key driver of AI progress. Over the past thirteen years, the amount of compute used to train leading AI systems has increased by a factor of 350 million. This has enabled the major AI advances that have recently gained global attention.
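(As a back-of-envelope aside using only the figure quoted above, not numbers from the paper itself: a 350-million-fold increase over thirteen years implies roughly 4.5x growth per year, i.e. training compute doubling about every five to six months.)

```python
# Back-of-envelope: what a 350-million-fold increase in training compute
# over 13 years implies for annual growth and doubling time.
# (Uses only the figure quoted in the excerpt above.)
import math

total_factor = 350e6
years = 13

annual_factor = total_factor ** (1 / years)                   # ~4.5x per year
doubling_months = 12 * math.log(2) / math.log(annual_factor)  # ~5.5 months

print(f"implied annual growth: {annual_factor:.1f}x")
print(f"implied doubling time: {doubling_months:.1f} months")
```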

Governments have taken notice. They are increasingly engaged in compute governance: using compute as a lever to pursue AI policy goals, such as limiting misuse risks, supporting domestic industries, or engaging in geopolitical competition. 

There are at least three ways compute can be used to govern AI. Governments can: 

  • Track or monitor compute to gain visibility into AI development and use
  • Subsidize or limit access to compute to shape the allocation of resources across AI projects
  • Monitor activity, limit access, or build “guardrails” into hardware to enforce rules

Compute governance is a particularly important approach to AI governance because it is feasible. Compute is detectable: training advanced AI systems requires tens of thousands of highly advanced AI chips, which cannot be acquired or used inconspicuously. It is excludable: AI chips, being physical goods, can be given to or taken away from specific actors, and even for specific uses. And it is quantifiable: chips, their features, and their usage can be measured. Compute’s detectability and excludability are further enhanced by the highly concentrated structure of the AI supply chain: very few companies are capable of producing the tools needed to design advanced chips, the machines needed to make them, or the data centers that house them. 

However, just because compute can be used as a tool to govern AI doesn’t mean that it should be used in all cases. Compute governance is a double-edged sword, with both potential benefits and the risk of negative consequences: it can support widely shared goals like safety, but it can also be used to infringe on civil liberties, perpetuate existing power structures, and entrench authoritarian regimes. Indeed, some things are better ungoverned. 

In our paper we argue that compute is a particularly promising node for AI governance. We also highlight the risks of compute governance and offer suggestions for how to mitigate them. This post summarizes our findings and key takeaways, while also offering some of our own commentary…(More)”

AI is too important to be monopolised


Article by Marietje Schaake: “…From the promise of medical breakthroughs to the perils of election interference, the hopes of helpful climate research to the challenge of cracking fundamental physics, AI is too important to be monopolised.

Yet the market is moving in exactly that direction, as resources and talent to develop the most advanced AI sit firmly in the hands of a very small number of companies. That is particularly true for resource-intensive data and computing power (termed “compute”), which are required to train large language models for a variety of AI applications. Researchers and small and medium-sized enterprises risk fatal dependency on Big Tech once again, or else they will miss out on the latest wave of innovation. 

On both sides of the Atlantic, feverish public investments are being made in an attempt to level the computational playing field. To ensure scientists have access to capacities comparable to those of Silicon Valley giants, the US government established the National AI Research Resource last month. This pilot project is being led by the US National Science Foundation. By working with 10 other federal agencies and 25 civil society groups, it will facilitate access to government-funded data and compute to help the research and education community build and understand AI. 

The EU set up a decentralised network of supercomputers with a similar aim back in 2018, before the recent wave of generative AI created a new sense of urgency. The EuroHPC has lived in relative obscurity, and the initiative appears to have been under-exploited. As European Commission president Ursula von der Leyen said late last year: we need to put this power to use. The EU now imagines that democratised supercomputer access can also help with the creation of “AI factories,” where small businesses pool their resources to develop new cutting-edge models. 

There has long been talk of considering access to the internet a public utility, because of how important it is for education, employment and acquiring information. Yet rules to that end were never adopted. But with the unlocking of compute as a shared good, the US and the EU are showing real willingness to invest in public digital infrastructure.

Even if the latest measures are viewed as industrial policy in a new jacket, they are part of a long overdue step to shape the digital market and offset the outsized power of big tech companies in various corners of our societies…(More)”.

Applying AI to Rebuild Middle Class Jobs


Paper by David Autor: “While the utopian vision of the current Information Age was that computerization would flatten economic hierarchies by democratizing information, the opposite has occurred. Information, it turns out, is merely an input into a more consequential economic function, decision-making, which is the province of elite experts. The unique opportunity that AI offers to the labor market is to extend the relevance, reach, and value of human expertise. Because of AI’s capacity to weave information and rules with acquired experience to support decision-making, it can be applied to enable a larger set of workers possessing complementary knowledge to perform some of the higher-stakes decision-making tasks that are currently arrogated to elite experts, e.g., medical care to doctors, document production to lawyers, software coding to computer engineers, and undergraduate education to professors. My thesis is not a forecast but an argument about what is possible: AI, if used well, can assist with restoring the middle-skill, middle-class heart of the US labor market that has been hollowed out by automation and globalization…(More)”.

AI cannot be used to deny health care coverage, feds clarify to insurers


Article by Beth Mole: “Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege…(More)”

Training Data for the Price of a Sandwich


Article by Stefan Baack: “Common Crawl (henceforth also referred to as CC) is an organization that has been essential to the technological advancements of generative AI, but is largely unknown to the broader public. This California nonprofit with only a handful of employees has crawled billions of web pages since 2008, and it makes this data available without charge via Amazon Web Services (AWS). Because of the enormous size and diversity (in terms of sources and formats) of the data, it has been pivotal as a source of training data for many AI builders. Generative AI in its current form would probably not be possible without Common Crawl, given that the vast majority of data used to train the original model behind OpenAI’s ChatGPT, the generative AI product that set off the current hype, came from it (Brown et al. 2020). The same is true for many models published since then.
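For a concrete sense of how accessible this resource is, here is a minimal sketch of querying Common Crawl's public URL index with Python (the crawl ID below is one example snapshot; the current list is published at index.commoncrawl.org):

```python
# A minimal sketch of querying Common Crawl's public URL index (CDX API).
# Assumes the example snapshot "CC-MAIN-2023-50"; newer crawl IDs are
# listed at https://index.commoncrawl.org.
import json
import requests

INDEX = "https://index.commoncrawl.org/CC-MAIN-2023-50-index"

resp = requests.get(
    INDEX,
    params={"url": "example.com/*", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

# Each line is a JSON record pointing into a WARC archive hosted on AWS:
# the filename, byte offset, and length locate the raw capture.
for line in resp.text.strip().splitlines()[:5]:
    record = json.loads(line)
    print(record["url"], "->", record["filename"])
```

The index is only the entry point; the records it returns point into the much larger WARC archives on AWS that AI builders bulk-download for training corpora.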

Although pivotal, Common Crawl has so far received relatively little attention for its contribution to generative AI…(More)”.

Nobody knows how to audit AI


Axios: “Some legislators and experts are pushing independent auditing of AI systems to minimize risks and build trust, Ryan reports.

Why it matters: Consumers don’t trust big tech to self-regulate, and government standards may come slowly or never.

The big picture: Failure to manage risk and articulate values early in the development of an AI system can lead to problems ranging from biased outcomes from unrepresentative data to lawsuits alleging stolen intellectual property.

Driving the news: Sen. John Hickenlooper (D-Colo.) announced in a speech on Monday that he will push for the auditing of AI systems, because AI models are using our data “in ways we never imagined and certainly never consented to.”

  • “We need qualified third parties to effectively audit generative AI systems,” Hickenlooper said. “We cannot rely on self-reporting alone. We should trust but verify” claims of compliance with federal laws and regulations, he said.

Catch up quick: The National Institute of Standards and Technology (NIST) developed an AI Risk Management Framework to help organizations think about and measure AI risks, but it does not certify or validate AI products.

  • President Biden’s executive order on AI mandated that NIST expand its support for generative AI developers and “create guidance and benchmarks for evaluating and auditing AI capabilities,” especially in risky areas such as cybersecurity and bioweapons.

What’s happening: A growing range of companies provide services that evaluate whether AI models are complying with local regulations or promises made by their developers — but some AI companies remain committed to their own internal risk research and processes.

  • NIST is only the “tip of the spear” in AI safety, Hickenlooper said. He now wants to establish criteria and a path to certification for third-party auditors.

The “Big Four” accounting firms — Deloitte, EY, KPMG and PwC — sense business opportunities in applying audit methodologies to AI systems, Nicola Morini Bianzino, EY’s global chief technology officer, tells Axios.

  • Morini Bianzino cautions that AI audits might “look more like risk management for a financial institution, as opposed to audit as a certifying mark. Because, honestly, I don’t know technically how we would do that.”
  • Laura Newinski, KPMG’s COO, tells Axios the firm is developing AI auditing services and “attestation about whether data sets are accurate and follow certain standards.”

Established players such as IBM and startups such as Credo provide AI governance dashboards that tell clients in real time where AI models could be causing problems — around data privacy, for example.

  • Anthropic believes NIST should focus on “building a robust and standardized benchmark for generative AI systems” that all private AI companies can adhere to.

Market leader OpenAI announced in October that it’s creating a “risk-informed development policy” and has invited experts to apply to join its OpenAI Red Teaming Network.

Yes, but: An AI audit industry without clear standards could be a recipe for confusion, both for corporate customers and consumers using AI…(More)”.

AI for Good: Applications in Sustainability, Humanitarian Action, and Health


Book by Juan M. Lavista Ferres and William B. Weeks: “…delivers an insightful and fascinating discussion of how one of the world’s most recognizable software companies is tackling intractable social problems with the power of artificial intelligence (AI). In the book, you’ll see real in-the-field examples of researchers using AI with replicable methods and reusable AI code to inspire your own uses.

The authors also provide:

  • Easy-to-follow, non-technical explanations of what AI is and how it works
  • Examples of the use of AI for scientists working on mitigating climate change, showing how AI can better analyze data without human bias, remedy pattern recognition deficits, and make use of satellite and other data on a scale never seen before so policy makers can make informed decisions
  • Real applications of AI in humanitarian action, whether in speeding disaster relief with more accurate data for first responders or in helping populations that have experienced adversity, with examples of how analytics is being used to promote inclusivity
  • A deep focus on AI in healthcare where it is improving provider productivity and patient experience, reducing per-capita healthcare costs, and increasing care access, equity, and outcomes
  • Discussions of the future of AI in the realm of social benefit organizations and efforts…(More)”

The Cult of AI


Article by Robert Evans: “…Cult members are often depicted in the media as weak-willed and foolish. But the Church of Scientology — long accused of being a cult, an allegation they have endlessly denied — recruits heavily among the rich and powerful. The Finders, a D.C.-area cult that started in the 1970s, included a wealthy oil-company owner and multiple members with Ivy League degrees. All of them agreed to pool their money and hand over control of where they worked and how they raised their children to their cult leader. Haruki Murakami wrote that Aum Shinrikyo members, many of whom were doctors or engineers, “actively sought to be controlled.”

Perhaps this feels like a reach. But the deeper you dive into the people and subcultures pushing AI forward, the more cult dynamics you begin to notice.

I should offer a caveat here: There’s nothing wrong with the basic technology we call “AI.” That wide banner term includes tools as varied as text- or facial-recognition programs, chatbots, and of course sundry tools to clone voices and generate deepfakes or rights-free images with odd numbers of fingers. CES featured some real products that harnessed the promise of machine learning (I was particularly impressed by a telescope that used AI to clean up light pollution in images). But the good stuff lived alongside nonsense like “ChatGPT for dogs” (really just an app to read your dog’s body language) and an AI-assisted fleshlight for premature ejaculators. 

And, of course, bad ideas and irrational exuberance are par for the course at CES. Since 1967, the tech industry’s premier trade show has provided anyone paying attention with a preview of how Big Tech talks about itself, and our shared future. But what I saw this year and last year, from both excited futurist fanboys and titans of industry, is a kind of unhinged messianic fervor that compares better to Scientology than to the iPhone…(More)”.

Why Machines Learn: The Elegant Maths Behind Modern AI


Book by Anil Ananthaswamy: “Machine-learning systems are making life-altering decisions for us: approving mortgage loans, determining whether a tumour is cancerous, or deciding whether someone gets bail. They now influence discoveries in chemistry, biology and physics – the study of genomes, extra-solar planets, even the intricacies of quantum systems.

We are living through a revolution in artificial intelligence that is not slowing down. This major shift is based on simple mathematics, some of which goes back centuries: linear algebra and calculus, the stuff of eighteenth-century mathematics. Indeed, by the mid-1850s, much of the groundwork had been laid. It took the development of computer science, and the kindling of 1990s computer chips designed for video games, to ignite the explosion of AI that we see all around us today. In this enlightening book, Anil Ananthaswamy explains the fundamental maths behind AI, which suggests that the basics of natural and artificial intelligence might follow the same mathematical rules…(More)”.
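To make that concrete, here is a minimal illustration of our own (not an example from the book) of the calculus the blurb refers to: fitting a line to data with gradient descent, using nothing beyond partial derivatives of a squared-error loss.

```python
# Gradient descent on a least-squares line fit: eighteenth-century
# calculus (partial derivatives) doing modern machine learning.
# Toy data and parameters are illustrative only.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]   # roughly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01   # slope, intercept, learning rate
for _ in range(5000):
    # Partial derivatives of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # ~1.94 and ~1.15 for this data
```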

Governing Data and AI to Protect Inner Freedoms Includes a Role for IP


Article by Giuseppina (Pina) D’Agostino and Robert Fay: “Generative artificial intelligence (AI) has caught regulators everywhere by surprise. Its ungoverned and growing ubiquity is similar to that of the large digital platforms that play an important role in the work and personal lives of billions of individuals worldwide. These platforms rely on advertising revenue dependent on user data derived from numerous undisclosed sources, including through covert tracking of interactions on digital platforms, surveillance of conversations, monitoring of activity across platforms and acquisition of biometric data through immersive virtual reality games, just to name a few.

This complex milieu creates a suite of public policy challenges. One of the most important yet least explored is the intersection of intellectual property (IP), data governance, AI and the platforms’ underlying business model. The global scale, the quasi-monopolistic dominance enjoyed by the large platforms, and their control over data and data analytics have explicit implications for fundamental human rights, including freedom of thought…(More)”.