Governing the Unknown
Article by Kaushik Basu: “Technology is changing the world faster than policymakers can devise new ways to cope with it. As a result, societies are becoming polarized, inequality is rising, and authoritarian regimes and corporations are doctoring reality and undermining democracy.

For ordinary people, there is ample reason to be “a little bit scared,” as OpenAI CEO Sam Altman recently put it. Major advances in artificial intelligence raise concerns about education, work, warfare, and other risks that could destabilize civilization long before climate change does. To his credit, Altman is urging lawmakers to regulate his industry.

In confronting this challenge, we must keep two concerns in mind. The first is the need for speed. If we take too long, we may find ourselves closing the barn door after the horse has bolted. That is what happened with the 1968 Nuclear Non-Proliferation Treaty: It came 23 years too late. If we had managed to establish some minimal rules after World War II, the NPT’s ultimate goal of nuclear disarmament might have been achievable.

The other concern involves deep uncertainty. This is such a new world that even those working on AI do not know where their inventions will ultimately take us. A law enacted with the best intentions can still backfire. When America’s founders drafted the Second Amendment conferring the “right to keep and bear arms,” they could not have known how firearms technology would change in the future, thereby changing the very meaning of the word “arms.” Nor did they foresee how their descendants would fail to realize this even after seeing the change.

But uncertainty does not justify fatalism. Policymakers can still effectively govern the unknown as long as they keep certain broad considerations in mind. For example, one idea that came up during a recent Senate hearing was to create a licensing system whereby only select corporations would be permitted to work on AI.

This approach comes with some obvious risks of its own. Licensing can often be a step toward cronyism, so we would also need new laws to deter politicians from abusing the system. Moreover, slowing your country’s AI development with additional checks does not mean that others will adopt similar measures. In the worst case, you may find yourself facing adversaries wielding precisely the kind of malevolent tools that you eschewed. That is why AI is best regulated multilaterally, even if that is a tall order in today’s world…(More)”.