Why picking citizens at random could be the best way to govern the A.I. revolution


Article by Hélène Landemore, Andrew Sorota, and Audrey Tang: “Testifying before Congress last month about the risks of artificial intelligence, Sam Altman, the OpenAI CEO behind the massively popular large language model (LLM) ChatGPT, and Gary Marcus, a psychology professor at NYU famous for his positions against A.I. utopianism, both agreed on one point: they called for the creation of a government agency, comparable to the FDA, to regulate A.I. Marcus also suggested that scientific experts be given early access to new A.I. prototypes so they can test them before they are released to the public.

Strikingly, however, neither of them mentioned the public, namely the billions of ordinary citizens around the world that the A.I. revolution, in all its uncertainty, is sure to affect. Don’t they also deserve to be included in decisions about the future of this technology?

We believe a global, democratic approach, not an exclusively technocratic one, is the only adequate answer to what is a global political and ethical challenge. Sam Altman himself stated in an earlier interview that in his “dream scenario,” a global deliberation involving all humans would be used to figure out how to govern A.I.

There are already proofs of concept for the various elements that a global, large-scale deliberative process would require in practice. By drawing on these diverse and complementary examples, we can turn this dream into a reality.

Deliberations based on random selection have grown in popularity on the local and national levels, with close to 600 cases documented by the OECD in the last 20 years. Their appeal lies in capturing a unique array of voices and lived experiences, thereby generating policy recommendations that better track the preferences of the larger population and are more likely to be accepted. Famous examples include the 2012 and 2016 Irish citizens’ assemblies on marriage equality and abortion, which led to successful referendums and constitutional change, as well as the 2019 and 2022 French citizens’ conventions on climate justice and end-of-life issues.
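
The mechanics of such random selection are simple enough to sketch in code. Below is a minimal, hypothetical civic-lottery draw using stratified sampling; the pool, strata, and quotas are invented for illustration and describe no actual assembly.

```python
import random

# A minimal sketch of stratified random selection ("sortition"), the sampling
# method behind citizens' assemblies. Every detail below (pool, strata,
# quotas) is hypothetical.

def draw_assembly(pool, quotas, seed=42):
    """Randomly fill each demographic stratum's quota of seats."""
    rng = random.Random(seed)
    assembly = []
    for stratum, seats in quotas.items():
        candidates = [p for p in pool if p["stratum"] == stratum]
        assembly.extend(rng.sample(candidates, min(seats, len(candidates))))
    return assembly

if __name__ == "__main__":
    rng = random.Random(0)
    strata = ["urban<40", "urban40+", "rural<40", "rural40+"]
    # 10,000 volunteers, each tagged with a (made-up) demographic stratum.
    pool = [{"id": i, "stratum": rng.choice(strata)} for i in range(10_000)]
    # Seats per stratum, set to mirror the population's assumed shares.
    quotas = {"urban<40": 25, "urban40+": 30, "rural<40": 20, "rural40+": 25}
    print(len(draw_assembly(pool, quotas)), "members drawn")
```

Stratification is what lets a hundred-person body mirror the demographic mix of millions, which is the property the OECD-documented cases rely on.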

Taiwan has successfully experimented with mass consultations through digital platforms like Pol.is, which employs machine learning to identify consensus among vast numbers of participants. Digitally engaged participation has helped aggregate public opinion on hundreds of polarizing issues in Taiwan, such as regulating Uber, involving half of its 23.5 million people. Digital participation can also augment other smaller-scale forms of citizen deliberations, such as those taking place in person or based on random selection…(More)”.
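
At its core, the consensus-finding step of a platform like Pol.is can be pictured as clustering a matrix of votes and surfacing the statements that all opinion groups agree with. The sketch below is a loose illustration of that idea with synthetic votes; the cluster count and agreement threshold are arbitrary assumptions, not Pol.is’s actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Rows are participants, columns are statements; entries are agree (+1),
# disagree (-1), or pass (0). All votes here are synthetic.
rng = np.random.default_rng(0)
votes = rng.choice([-1.0, 0.0, 1.0], size=(500, 40))
votes[:, :3] = rng.choice([0.0, 1.0], size=(500, 3))  # seed broad agreement

# Project participants into a low-dimensional "opinion space"...
coords = PCA(n_components=2).fit_transform(votes)

# ...group them into opinion clusters...
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# ...and surface statements that every cluster, on average, agrees with.
for s in range(votes.shape[1]):
    by_cluster = [votes[labels == k, s].mean() for k in range(3)]
    if min(by_cluster) > 0.2:  # arbitrary consensus threshold
        print(f"statement {s}: mean agreement {[round(v, 2) for v in by_cluster]}")
```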

How existential risk became the biggest meme in AI


Article by Will Douglas Heaven: “Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The wording is deliberate. “If we were going for a Rorschach-test type of statement, we would have said ‘existential risk’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.

We’ve been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social implications of technology.

What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?   

It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterously ridiculous.” Aidan Gomez, CEO of the AI firm Cohere, said it was “an absurd use of our time.”

Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is cofounder and former director of the AI Now Institute, a research lab that studies the social and policy implications of artificial intelligence. “Ghost stories are contagious—it’s really exciting and stimulating to be afraid.”

“It is also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”…(More)”.

An algorithm intended to reduce poverty in Jordan disqualifies people in need


Article by Tate Ryan-Mosley: “An algorithm funded by the World Bank to determine which families should get financial assistance in Jordan likely excludes people who should qualify, according to an investigation published this morning by Human Rights Watch. 

The algorithmic system, called Takaful, ranks families applying for aid from least poor to poorest using a secret calculus that assigns weights to 57 socioeconomic indicators. Applicants say, however, that the calculus does not reflect reality and oversimplifies people’s economic situations, sometimes ranking them inaccurately or unfairly. Takaful has cost over $1 billion, and the World Bank is funding similar projects in eight other countries in the Middle East and Africa. 

Human Rights Watch identified several fundamental problems with the algorithmic system that resulted in bias and inaccuracies. Applicants are asked how much water and electricity they consume, for example, as two of the indicators that feed into the ranking system. The report’s authors conclude that these are not necessarily reliable indicators of poverty. Some families interviewed believed the fact that they owned a car affected their ranking, even if the car was old and necessary for transportation to work. 
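
Takaful’s actual indicators and weights are secret, but the failure mode Human Rights Watch describes is easy to reproduce with a toy weighted-sum ranking. Every number in the sketch below is invented for illustration.

```python
import numpy as np

# Hypothetical stand-ins for a proxy-means test: a handful of indicators,
# made-up weights, and three synthetic households (values normalized to [0, 1]).
indicators = ["electricity_use", "water_use", "owns_car", "household_size"]
weights = np.array([0.3, 0.2, 0.4, 0.1])

households = np.array([
    [0.2, 0.3, 1.0, 0.6],  # low consumption, but owns an old car to reach work
    [0.1, 0.2, 0.0, 0.8],
    [0.7, 0.6, 1.0, 0.3],
])

# Higher score = "less poor" under the model; aid goes to the lowest scores.
scores = households @ weights
print("scores:", scores.round(2))
print("aid priority (poorest first):", np.argsort(scores))
# The car pushes household 0 down the aid queue despite its low consumption,
# which is the kind of misranking applicants reported.
```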

The report reads, “This veneer of statistical objectivity masks a more complicated reality: the economic pressures that people endure and the ways they struggle to get by are frequently invisible to the algorithm.”…(More)”.

Systems Thinking, Big Data and Public Policy


Article by Mauricio Covarrubias: “Systems thinking and big data analysis are two fundamental tools in the formulation of public policies due to their potential to provide a more comprehensive and evidence-based understanding of the problems and challenges that a society faces.

Systems thinking is important in the formulation of public policies because it allows for a holistic and integrated approach to addressing the complex challenges and issues that a society faces. According to Ilona Kickbusch and David Gleicher, “Addressing wicked problems requires a high level of systems thinking. If there is a single lesson to be drawn from the first decade of the 21st century, it is that surprise, instability and extraordinary change will continue to be regular features of our lives.”

Public policies often involve multiple stakeholders, interrelated factors and unintended consequences, which require a deep understanding of how the system as a whole operates. Systems thinking enables policymakers to identify the key factors that influence a problem and how they relate to one another, allowing them to develop solutions that address the issues more effectively. Instead of trying to address a problem in isolation, systems thinking considers the problem as part of a whole and seeks solutions that address the root causes.

Additionally, systems thinking helps policymakers anticipate the unintended consequences of their decisions and actions. By understanding how different components of the system interact, they can predict the possible side effects of a policy in other areas. This can help avoid decisions that have unintended consequences…(More)”.

Augmented Reality Is Coming for Cities


Article by Greg Lindsay: “It’s still early in the metaverse, however — no killer app has yet emerged, and the financial returns on disruption are falling as interest rates rise.

Already, a handful of companies have come forward to partner with cities instead of fighting them. For example, InCitu uses AR to visualize the building envelopes of planned projects in New York City, Buffalo, and beyond in hopes of winning over skeptical communities through seeing-is-believing. The startup recently partnered with Washington, DC’s Department of Buildings to aid its civic engagement efforts. Another of its partners is Snap, the Gen Z social media giant currently currying favor with cities and civic institutions as it pivots to AR for its next act…

For cities to gain the metaverse they want tomorrow, they will need to invest scarce staff time and resources today. That means building a coalition of the willing among Apple, Google, Niantic, Snap and others; throwing their weight behind open standards through participation in umbrella groups such as the Metaverse Standards Forum; and becoming early, active participants in each of the major platforms in order to steer traffic toward designated testbeds and away from highly trafficked areas.

It’s a tall order for cities grappling with a pandemic crisis, drug-and-mental-health crisis, and climate crisis all at once, but a necessary one to prevent the metaverse (of all things!) from becoming the next one…(More)”.

The A.I. Revolution Will Change Work. Nobody Agrees How.


Sarah Kessler in The New York Times: “In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were “at risk” of automation “over some unspecified number of years, perhaps a decade or two.”

But a decade later, unemployment in the country is at record low levels. The tsunami of grim headlines back then — like “The Rich and Their Robots Are About to Make Half the World’s Jobs Disappear” — looks wildly off the mark.

But the study’s authors say they didn’t actually mean to suggest doomsday was near. Instead, they were trying to describe what technology was capable of.

It was the first stab at what has become a long-running thought experiment, with think tanks, corporate research groups and economists publishing paper after paper to pinpoint how much work is “affected by” or “exposed to” technology.

In other words: If the cost of the tools weren’t a factor, and the only goal was to automate as much human labor as possible, how much work could technology take over?

When the Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, were conducting their study, IBM Watson, a question-answering system powered by artificial intelligence, had just shocked the world by winning “Jeopardy!” Test versions of autonomous vehicles were circling roads for the first time. Now, a new wave of studies follows the rise of tools that use generative A.I.

In March, Goldman Sachs estimated that the technology behind popular A.I. tools such as DALL-E and ChatGPT could automate the equivalent of 300 million full-time jobs. Researchers at OpenAI, the maker of those tools, and the University of Pennsylvania found that 80 percent of the U.S. work force could see an effect on at least 10 percent of their tasks.
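
For a sense of what these “exposure” figures actually measure, the sketch below computes the same kind of statistic over an entirely synthetic workforce: the share of workers for whom at least 10 percent of tasks are judged exposed. Only the threshold mirrors the study’s framing; the data and judgments are invented.

```python
import numpy as np

# Synthetic task-level exposure judgments: 1 = an LLM could plausibly do
# the task, 0 = it could not. Workers differ in their baseline exposure.
rng = np.random.default_rng(1)
n_workers, n_tasks = 100_000, 20
exposed = rng.random((n_workers, n_tasks)) < rng.random((n_workers, 1))

share_per_worker = exposed.mean(axis=1)          # fraction of tasks exposed
affected = (share_per_worker >= 0.10).mean()     # workers over the threshold
print(f"{affected:.0%} of synthetic workers have >=10% of tasks exposed")
```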

“There’s tremendous uncertainty,” said David Autor, a professor of economics at the Massachusetts Institute of Technology, who has been studying technological change and the labor market for more than 20 years. “And people want to provide those answers.”

But what exactly does it mean to say that, for instance, the equivalent of 300 million full-time jobs could be affected by A.I.?

It depends, Mr. Autor said. “Affected could mean made better, made worse, disappeared, doubled.”…(More)”.

Politicians love to appeal to common sense – but does it trump expertise?


Essay by Magda Osman: “Politicians love to talk about the benefits of “common sense” – often by pitting it against the words of “experts and elites”. But what is common sense? Why do politicians love it so much? And is there any evidence that it ever trumps expertise? Psychology provides a clue.

We often view common sense as an authority of collective knowledge that is universal and constant, unlike expertise. By appealing to the common sense of your listeners, you therefore end up on their side, and squarely against the side of the “experts”. But this argument, like an old sock, is full of holes.

Experts have gained knowledge and experience in a given speciality, in which case politicians are experts as well. This means a false dichotomy is created between “them” (let’s say scientific experts) and “us” (non-expert mouthpieces of the people).

Common sense is broadly defined in research as a shared set of beliefs and approaches to thinking about the world. For example, common sense is often used to justify that what we believe is right or wrong, without coming up with evidence.

But common sense isn’t independent of scientific and technological discoveries. Common sense versus scientific beliefs is therefore also a false dichotomy. Our “common” beliefs are informed by, and inform, scientific and technological discoveries…

The idea that common sense is universal and self-evident because it reflects the collective wisdom of experience – and so can be contrasted with scientific discoveries that are constantly changing and updated – is also false. And the same goes for the argument that non-experts tend to view the world the same way through shared beliefs, while scientists never seem to agree on anything.

Just as scientific discoveries change, common sense beliefs change over time and across cultures. They can also be contradictory: we are told “quit while you are ahead” but also “winners never quit”, and “better safe than sorry” but “nothing ventured, nothing gained”…(More)”.

Will Democracies Stand Up to Big Brother?


Article by Simon Johnson, Daron Acemoglu and Sylvia Barmack: “Rapid advances in AI and AI-enhanced surveillance tools have created an urgent need for international norms and coordination to set sensible standards. But with oppressive authoritarian regimes unlikely to cooperate, the world’s democracies should start preparing to play economic hardball…Fiction writers have long imagined scenarios in which every human action is monitored by some malign centralized authority. But now, despite their warnings, we find ourselves careening toward a dystopian future worthy of George Orwell’s 1984. The task of assessing how to protect our rights – as consumers, workers, and citizens – has never been more urgent.

One sensible proposal is to limit patents on surveillance technologies to discourage their development and overuse. All else being equal, this could tilt the development of AI-related technologies away from surveillance applications – at least in the United States and other advanced economies, where patent protections matter, and where venture capitalists will be reluctant to back companies lacking strong intellectual-property rights. But even if such sensible measures are adopted, the world will remain divided between countries with effective safeguards on surveillance and those without them. We therefore also need to consider the legitimate basis for trade between these emergent blocs.

AI capabilities have leapt forward over the past 18 months, and the pace of further development is unlikely to slow. The public release of ChatGPT in November 2022 was the generative-AI shot heard round the world. But just as important has been the equally rapid increase in governments’ and corporations’ surveillance capabilities. Since generative AI excels at pattern matching, it has made facial recognition remarkably accurate (though not without some major flaws). And the same general approach can be used to distinguish between “good” and problematic behavior, based simply on how people move or comport themselves.
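
Stripped of the deep-learning machinery, the pattern-matching core of face recognition is a comparison between fixed-length embedding vectors. A bare-bones sketch, assuming random stand-in embeddings in place of a trained network and an arbitrary match threshold:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrolled identities: in a real system these 128-d vectors would come from
# a trained face-embedding network; here they are random placeholders.
rng = np.random.default_rng(0)
enrolled = {name: rng.standard_normal(128) for name in ["alice", "bob"]}

# A new capture of "alice", perturbed to mimic a different photo.
probe = enrolled["alice"] + 0.1 * rng.standard_normal(128)

scores = {name: cosine(probe, emb) for name, emb in enrolled.items()}
best = max(scores, key=scores.get)
if scores[best] > 0.6:  # arbitrary decision threshold
    print("match:", best, round(scores[best], 3))
```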

Such surveillance technically leads to “higher productivity,” in the sense that it augments an authority’s ability to compel people to do what they are supposed to be doing. For a company, this means performing jobs at what management considers to be the highest productivity level. For a government, it means enforcing the law or otherwise ensuring compliance with those in power.

Unfortunately, a millennium of experience has established that increased productivity does not necessarily lead to improvements in shared prosperity. Today’s AI-powered surveillance allows overbearing managers and authoritarian political leaders to enforce their rules more effectively. But while productivity may increase, most people will not benefit…(More)”.

There’s a model for governing AI. Here it is.


Article by Jacinda Ardern: “…On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.

Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.

We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?

New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.

We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.

Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.

The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.

While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.

We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.

From its start, the Christchurch Call anticipated the challenges of AI and carved out space to address emerging technologies that threaten to foment violent extremism online, and it is actively tackling these issues.

Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI…(More)”.

Revisiting the Behavioral Revolution in Economics 


Article by Antara Haldar: “But the impact of the behavioral revolution outside of microeconomics remains modest. Many scholars are still skeptical about incorporating psychological insights into economics, a field that often models itself after the natural sciences, particularly physics. This skepticism has been further compounded by the widely publicized crisis of replication in psychology.

Macroeconomists, who study the aggregate functioning of economies and explore the impact of factors such as output, inflation, exchange rates, and monetary and fiscal policy, have, in particular, largely ignored the behavioral trend. Their indifference seems to reflect the belief that individual idiosyncrasies balance out, and that the quirky departures from rationality identified by behavioral economists must offset each other. A direct implication of this approach is that quantitative analyses predicated on value-maximizing behavior, such as the dynamic stochastic general equilibrium models that dominate policymaking, need not be improved.
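
For readers outside the field, the “value-maximizing behavior” baked into DSGE models is captured by first-order conditions such as the textbook consumption Euler equation, which assumes the household trades off utility today against discounted expected utility tomorrow:

```latex
% Standard consumption Euler equation of an optimizing household:
% marginal utility given up today equals the discounted, expected
% marginal utility of the gross return earned by saving.
u'(c_t) \;=\; \beta \, \mathbb{E}_t\!\left[\, u'(c_{t+1}) \,(1 + r_{t+1}) \right]
```

It is precisely this condition, applied economy-wide, that behavioral critics argue breaks down when herd behavior and “animal spirits” take over.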

The validity of these assumptions, however, remains uncertain. During banking crises such as the Great Recession of 2008 or the ongoing crisis triggered by the recent collapse of Silicon Valley Bank, the reactions of economic actors – particularly financial institutions and investors – appear to be driven by herd mentality and what John Maynard Keynes referred to as “animal spirits.”…

The roots of economics’ resistance to the behavioral sciences run deep. Over the past few decades, the field has acknowledged exceptions to the prevailing neoclassical paradigm, such as Elinor Ostrom’s solutions to the tragedy of the commons and George Akerlof, Michael Spence, and Joseph E. Stiglitz’s work on asymmetric information (all four won the Nobel Prize). At the same time, economists have refused to update the discipline’s core assumptions.

This state of affairs can be likened to an imperial government that claims to uphold the rule of law in its colonies. By allowing for a limited release of pressure at the periphery of the paradigm, economists have managed to prevent significant changes that might undermine the entire system. Meanwhile, the core principles of the prevailing economic model remain largely unchanged.

For economics to reflect human behavior, much less influence it, the discipline must actively engage with human psychology. But as the list of acknowledged exceptions to the neoclassical framework grows, each subsequent breakthrough becomes a potentially existential challenge to the field’s established paradigm, undermining the seductive parsimony that has been the source of its power.

By limiting their interventions to nudges, behavioral economists hoped to align themselves with the discipline. But in doing so, they delivered a ratings-conscious “made for TV” version of a revolution. As Gil Scott-Heron famously reminded us, the real thing will not be televised…(More)”.