Blog by Lennart Heim, Markus Anderljung, Emma Bluemke, and Robert Trager: “Computing power – compute for short – is a key driver of AI progress. Over the past thirteen years, the amount of compute used to train leading AI systems has increased by a factor of 350 million. This has enabled the major AI advances that have recently gained global attention.
Governments have taken notice. They are increasingly engaged in compute governance: using compute as a lever to pursue AI policy goals, such as limiting misuse risks, supporting domestic industries, or engaging in geopolitical competition.
There are at least three ways compute can be used to govern AI. Governments can:
- Track or monitor compute to gain visibility into AI development and use
- Subsidize or limit access to compute to shape the allocation of resources across AI projects
- Monitor activity, limit access, or build “guardrails” into hardware to enforce rules
Compute governance is a particularly important approach to AI governance because it is feasible. Compute is detectable: training advanced AI systems requires tens of thousands of highly advanced AI chips, which cannot be acquired or used inconspicuously. It is excludable: AI chips, being physical goods, can be given to or withheld from specific actors, and in some cases restricted for specific uses. And it is quantifiable: chips, their features, and their usage can be measured. Compute’s detectability and excludability are further enhanced by the highly concentrated structure of the AI supply chain: very few companies are capable of producing the tools needed to design advanced chips, the machines needed to make them, or the data centers that house them.
However, just because compute can be used as a tool to govern AI doesn’t mean that it should be used in all cases. Compute governance is a double-edged sword, with both potential benefits and the risk of negative consequences: it can support widely shared goals like safety, but it can also be used to infringe on civil liberties, perpetuate existing power structures, and entrench authoritarian regimes. Indeed, some things are better ungoverned.
In our paper, we argue that compute is a particularly promising node for AI governance. We also highlight the risks of compute governance and offer suggestions for how to mitigate them. This post summarizes our findings and key takeaways, while also offering some of our own commentary…(More)”