Meta Ran a Giant Experiment in Governance. Now It’s Turning to AI


Article by Aviv Ovadya: “Late last month, Meta quietly announced the results of an ambitious, near-global deliberative “democratic” process to inform decisions around the company’s responsibility for the metaverse it is creating. This was not an ordinary corporate exercise. It involved over 6,000 people who were chosen to be demographically representative across 32 countries and 19 languages. The participants spent many hours in conversation in small online group sessions and got to hear from non-Meta experts about the issues under discussion. Eighty-two percent of the participants said that they would recommend this format as a way for the company to make decisions in the future.

Meta has now publicly committed to running a similar process for generative AI, a move that aligns with the huge burst of interest in democratic innovation for governing or guiding AI systems. In doing so, Meta joins Google, DeepMind, OpenAI, Anthropic, and other organizations that are starting to explore approaches based on the kind of deliberative democracy that I and others have been advocating for. (Disclosure: I am on the application advisory committee for the OpenAI Democratic Inputs to AI grant.) Having seen the inside of Meta’s process, I am excited about this as a valuable proof of concept for transnational democratic governance. But for such a process to truly be democratic, participants would need greater power and agency, and the process itself would need to be more public and transparent.

I first got to know several of the employees responsible for setting up Meta’s Community Forums (as these processes came to be called) in the spring of 2019 during a more traditional external consultation with the company to determine its policy on “manipulated media.” I had been writing and speaking about the potential risks of what is now called generative AI and was asked (alongside other experts) to provide input on the kind of policies Meta should develop to address issues such as misinformation that could be exacerbated by the technology.

At around the same time, I first learned about representative deliberations—an approach to democratic decision-making that has spread like wildfire, with increasingly high-profile citizen assemblies and deliberative polls all over the world. The basic idea is that governments bring difficult policy questions back to the public to decide. Instead of a referendum or elections, a representative microcosm of the public is selected via lottery. That group is brought together for days or even weeks (with compensation) to learn from experts, stakeholders, and each other before coming to a final set of recommendations…(More)”.
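The lottery selection described here is, in practice, a form of stratified random sampling: the pool of volunteers is split into demographic strata, seats are allocated in proportion to each stratum's share of the population, and names are then drawn at random. A minimal sketch of that mechanism follows; the pool, the single "region" stratum, and all numbers are invented for illustration and are not drawn from Meta's actual process:

```python
import random
from collections import Counter

def sortition(pool, strata_key, n_seats, seed=None):
    """Select a demographically representative mini-public by lottery.

    Seats are allocated to each stratum in proportion to its share of
    the pool (largest-remainder rounding), then filled at random.
    """
    rng = random.Random(seed)
    strata = {}
    for person in pool:
        strata.setdefault(strata_key(person), []).append(person)

    total = len(pool)
    # Proportional allocation with largest-remainder rounding.
    quotas = {s: n_seats * len(members) / total for s, members in strata.items()}
    seats = {s: int(q) for s, q in quotas.items()}
    leftover = n_seats - sum(seats.values())
    for s in sorted(quotas, key=lambda k: quotas[k] - seats[k], reverse=True)[:leftover]:
        seats[s] += 1

    chosen = []
    for s, k in seats.items():
        chosen.extend(rng.sample(strata[s], k))
    return chosen

# Hypothetical pool: 70% of volunteers from region A, 30% from region B.
pool = [{"id": i, "region": "A" if i < 700 else "B"} for i in range(1000)]
assembly = sortition(pool, lambda p: p["region"], n_seats=100, seed=42)
print(Counter(p["region"] for p in assembly))  # proportional: 70 A, 30 B
```

Real sortition processes stratify on several dimensions at once (age, gender, region, education), which turns the allocation into a harder constrained-sampling problem, but the proportionality idea is the same.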

AI tools are designing entirely new proteins that could transform medicine


Article by Ewen Callaway: “OK. Here we go.” David Juergens, a computational chemist at the University of Washington (UW) in Seattle, is about to design a protein that, in 3-billion-plus years of tinkering, evolution has never produced.

On a video call, Juergens opens a cloud-based version of an artificial intelligence (AI) tool he helped to develop, called RFdiffusion. This neural network, and others like it, are helping to bring the creation of custom proteins — until recently a highly technical and often unsuccessful pursuit — to mainstream science.

These proteins could form the basis for vaccines, therapeutics and biomaterials. “It’s been a completely transformative moment,” says Gevorg Grigoryan, the co-founder and chief technical officer of Generate Biomedicines in Somerville, Massachusetts, a biotechnology company applying protein design to drug development.

The tools are inspired by AI software that synthesizes realistic images, such as the Midjourney software that, this year, was famously used to produce a viral image of Pope Francis wearing a designer white puffer jacket. A similar conceptual approach, researchers have found, can churn out realistic protein shapes to criteria that designers specify — meaning, for instance, that it’s possible to speedily draw up new proteins that should bind tightly to another biomolecule. And early experiments show that when researchers manufacture these proteins, a useful fraction do perform as the software suggests.
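The conceptual parallel to image generators can be illustrated with a toy one-dimensional diffusion-style sampler. Real tools such as RFdiffusion learn a denoising network over 3-D protein backbones; in the sketch below, that learned denoiser is replaced by the exact score of a simple Gaussian "target", so you can watch pure noise get pulled, step by step, toward samples that satisfy the designer's criterion (here just a target value). Everything here is a pedagogical stand-in, not the actual RFdiffusion algorithm:

```python
import math
import random

def toy_diffusion_sample(target_mean, target_std, steps=2000, rng=None):
    """Draw one sample via Langevin-style iterative denoising.

    The drift term is the score (gradient of log-density) of a
    Gaussian target; protein tools replace this analytic score with
    a neural network trained on known structures.
    """
    rng = rng or random.Random()
    x = rng.gauss(0.0, 5.0)  # start from pure noise
    eps = 0.01               # step size
    for _ in range(steps):
        score = (target_mean - x) / target_std ** 2  # grad log p(x)
        x += eps * score + math.sqrt(2 * eps) * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(0)
samples = [toy_diffusion_sample(8.0, 1.0, rng=rng) for _ in range(500)]
mean = sum(samples) / len(samples)
print(round(mean, 1))  # close to the target mean of 8.0
```

The point of the analogy: the sampler never "looks up" a stored answer; it repeatedly nudges noise toward regions the model considers plausible, which is why the same machinery that paints a puffer jacket on a pope can propose protein backbones matching a binding criterion.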

The tools have revolutionized the process of designing proteins in the past year, researchers say. “It is an explosion in capabilities,” says Mohammed AlQuraishi, a computational biologist at Columbia University in New York City, whose team has developed one such tool for protein design. “You can now create designs that have sought-after qualities.”

“You’re building a protein structure customized for a problem,” says David Baker, a computational biophysicist at UW whose group, which includes Juergens, developed RFdiffusion. The team released the software in March 2023, and a paper describing the neural network appears this week in Nature. (A preprint version was released in late 2022, at around the same time that several other teams, including AlQuraishi’s and Grigoryan’s, reported similar neural networks)…(More)”.

Weather Warning Inequity: Lack of Data Collection Stations Imperils Vulnerable People


Article by Chelsea Harvey: “Devastating floods and landslides triggered by extreme downpours killed hundreds of people in Rwanda and the Democratic Republic of Congo in May, when some areas saw more than 7 inches of rain in a day.

Climate change is intensifying rainstorms throughout much of the world, yet scientists haven’t been able to show that the event was influenced by warming.

That’s because they don’t have enough data to investigate it.

Weather stations are sparse across Africa, making it hard for researchers to collect daily information on rainfall and other weather variables. The data that does exist often isn’t publicly available.

“The main issue in some countries in Africa is funding,” said Izidine Pinto, a senior researcher on weather and climate at the Royal Netherlands Meteorological Institute. “The meteorological offices don’t have enough funding.”

There’s often too little money to build or maintain weather stations, and strapped-for-cash governments often choose to sell the data they do collect rather than make it free to researchers.

That’s a growing problem as the planet warms and extreme weather worsens. Reliable forecasts are needed for early warning systems that direct people to take shelter or evacuate before disasters strike. And long-term climate data is necessary for scientists to build computer models that help make predictions about the future.

The science consortium World Weather Attribution is the latest research group to run into problems. It investigates the links between climate change and individual extreme weather events all over the globe. In the last few months alone, the organization has demonstrated the influence of global warming on extreme heat in South Asia and the Mediterranean, floods in Italy, and drought in eastern Africa.

Most of its research finds that climate change is making weather events more likely to occur or more intense.

The group recently attempted to investigate the influence of climate change on the floods in Rwanda and Congo. But the study was quickly mired in challenges.

The team was able to acquire some weather station data, mainly in Rwanda, Joyce Kimutai, a research associate at Imperial College London and a co-author of the study, said at a press briefing announcing the findings Thursday. But only a few stations provided sufficient data, making it impossible to define the event or to be certain that climate model simulations were accurate…(More)”.

The Benefits of Statistical Noise


Article by Ruth Schmidt: “The year was 1999. Chicago’s public housing was in distress, with neglect and gang activity hastening the decline of already depressed neighborhoods. In response, the city launched the Plan for Transformation to offer relief to residents and rejuvenate the city’s public housing system: residents would be temporarily relocated during demolition, after which the real estate would be repurposed for a mixed-income community. Once the building phase was completed, former residents were to receive vouchers to move back into their safer and less stigmatized old neighborhood.

But a billion dollars and over 20 years later, the jury is still out about the plan’s effectiveness and side effects. While many residents do now live in safer, more established communities, many had to move multiple times before settling, or remain in high-poverty, highly segregated neighborhoods. And the idealized notion of former residents as “moving on up” in a free market system rewarded those who knew how to play the game—like private real estate developers—over those with little practice. Some voices were drowned out.

Chicago’s Plan for Transformation shared the same challenges—cost, time, a diverse set of stakeholders—as many similar large-scale civic initiatives. But it also highlights another equally important issue that’s often hidden in plain sight: informational “noise.”

Noise, defined as extraneous data that intrudes on fair and consistent decision-making, is nearly uniformly considered a negative influence on judgment that can lead experts to reach variable findings in contexts as wide-ranging as medicine, public policy, court decisions, and insurance claims. In fact, Daniel Kahneman himself has suggested that for all the attention to bias, noise in decision-making may actually be an equal-opportunity contributor to irrational judgment.

Kahneman and his colleagues have used the metaphor of a target to explain how both noise and bias result in inaccurate judgments, failing to predictably hit the bull’s-eye in different ways. Where bias looks like a tight cluster of shots that all consistently miss the mark, the erratic judgments caused by noise look like a scattershot combination of precise hits and wild misses…(More)”.
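The target metaphor maps onto two simple statistics: bias is how far the average judgment sits from the true value, and noise is how much the judgments scatter around their own average. A small sketch with invented numbers for two hypothetical teams of judges:

```python
import statistics

def bias_and_noise(judgments, true_value):
    """Decompose judgment error in Kahneman's sense:
    bias  = distance of the average judgment from the truth,
    noise = spread of the judgments around their own average."""
    mean = statistics.fmean(judgments)
    bias = mean - true_value
    noise = statistics.pstdev(judgments)
    return bias, noise

true_value = 100.0
biased_team = [88, 90, 89, 91, 90]   # tight cluster, consistently off the mark
noisy_team = [70, 130, 95, 120, 85]  # centered on the truth, but scattered

print(bias_and_noise(biased_team, true_value))  # large bias, small noise
print(bias_and_noise(noisy_team, true_value))   # zero bias, large noise
```

Both teams are unreliable, but for different reasons: averaging the noisy team's judgments would land near the truth, while no amount of averaging rescues the biased team.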

Asymmetries: participatory democracy after AI


Article by Gianluca Sgueo in Grand Continent (FR): “When it comes to AI, the scientific community expresses divergent opinions. Some argue that it could enable democratic governments to develop more effective and possibly more inclusive policies. Policymakers who use AI to analyse and process large volumes of digital data would be well placed to make decisions that are closer to the needs and expectations of communities of citizens. For those who look favourably on ‘government by algorithms’, AI creates the conditions for more effective and regular democratic interaction between public actors and civil society. Others, by contrast, emphasise the many critical issues raised by the ‘implantation’ of such a complex technology in political and social systems that are already highly complex and problematic. Some authors believe that AI could even undermine democratic values, by perpetuating and amplifying social inequalities and distrust in democratic institutions – thus weakening the foundations of the social contract. But if everyone is right, is no one right? Not necessarily. These two opposing conceptions give us food for thought about the relationship between algorithms and democracies…(More)”.

ChatGPT took people by surprise – here are four technologies that could make a difference next


Article by Fabian Stephany and Johann Laux: “…There are some AI technologies waiting on the sidelines right now that hold promise. The four we think are waiting in the wings are next-level GPT, humanoid robots, AI lawyers, and AI-driven science. Our choices appear ready from a technological point of view, but whether they satisfy all three of the criteria we’ve mentioned is another matter. We chose these four because they were the ones that kept coming up in our investigations into progress in AI technologies.

1. AI legal help

The startup company DoNotPay claims to have built a legal chatbot – built on LLM technology – that can advise defendants in court.

The company recently said it would let its AI system help two defendants fight speeding tickets in real-time. Connected via an earpiece, the AI can listen to proceedings and whisper legal arguments into the ear of the defendant, who then repeats them out loud to the judge.

After criticism and a lawsuit for practising law without a license, the startup postponed the AI’s courtroom debut. The potential for the technology will thus not be decided by technological or economic constraints, but by the authority of the legal system.

Lawyers are well-paid professionals and the costs of litigation are high, so the economic potential for automation is huge. However, the US legal system currently seems to oppose robots representing humans in court.

2. AI scientific support

Scientists are increasingly turning to AI for insights. Machine learning, where an AI system improves at what it does over time, is being employed to identify patterns in data. This enables the systems to propose novel scientific hypotheses – proposed explanations for phenomena in nature. These may even be capable of surpassing human assumptions and biases.

For example, researchers at the University of Liverpool used a machine learning system called a neural network to rank chemical combinations for battery materials, guiding their experiments and saving time.

The complexity of neural networks means that there are gaps in our understanding of how they actually make decisions – the so-called black box problem. Nevertheless, there are techniques that can shed light on the logic behind their answers and this can lead to unexpected discoveries.

While AI cannot currently formulate hypotheses independently, it can inspire scientists to approach problems from new perspectives…(More)”.

COVID-19 digital contact tracing worked — heed the lessons for future pandemics


Article by Marcel Salathé: “During the first year of the COVID-19 pandemic, around 50 countries deployed digital contact tracing. When someone tested positive for SARS-CoV-2, anyone who had been in close proximity to that person (usually for 15 minutes or more) would be notified as long as both individuals had installed the contact-tracing app on their devices.
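The exposure rule described (close proximity for roughly 15 minutes or more, with both people running the app) can be sketched as a simple function over proximity logs. This is a toy centralized model for illustration only: the deployed systems, such as the Google/Apple Exposure Notification framework, matched rotating Bluetooth tokens on-device precisely to avoid any central log like the one below, and the duration threshold varied by country:

```python
from dataclasses import dataclass

EXPOSURE_MINUTES = 15  # typical threshold; actual values varied by country

@dataclass
class Encounter:
    other_user: str
    minutes: int  # duration of close proximity

def users_to_notify(encounters_by_user, positive_user, app_users):
    """Return app users whose accumulated close contact with the person
    who tested positive meets the exposure threshold."""
    notify = set()
    for user, encounters in encounters_by_user.items():
        if user == positive_user or user not in app_users:
            continue
        total = sum(e.minutes for e in encounters if e.other_user == positive_user)
        if total >= EXPOSURE_MINUTES:
            notify.add(user)
    return notify

logs = {
    "bob":   [Encounter("alice", 10), Encounter("alice", 8)],  # 18 min total
    "carol": [Encounter("alice", 5)],                          # too brief
    "dave":  [Encounter("alice", 40)],                         # no app installed
}
print(users_to_notify(logs, "alice", app_users={"alice", "bob", "carol"}))
# → {'bob'}
```

The "dave" case shows why uptake mattered so much in practice: a long exposure is invisible to the system if either party lacks the app.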

Digital contact tracing received much media attention, and much criticism, in that first year. Many worried that the technology provided a way for governments and technology companies to have even more control over people’s lives than they already do. Others dismissed the apps as a failure, after public-health authorities hit problems in deploying them.

Three years on, the data tell a different story.

The United Kingdom successfully integrated a digital contact-tracing app with other public-health programmes and interventions, and collected data to assess the app’s effectiveness. Several analyses now show that, even with the challenges of introducing a new technology during an emergency, and despite relatively low uptake, the app saved thousands of lives. It has also become clearer that many of the problems encountered elsewhere were not to do with the technology itself, but with integrating a twenty-first-century technology into what are largely twentieth-century public-health infrastructures…(More)”.

How should a robot explore the Moon? A simple question shows the limits of current AI systems


Article by Sally Cripps, Edward Santow, Nicholas Davis, Alex Fischer and Hadi Mohasel Afshar: “…Ultimately, AI systems should help humans make better, more accurate decisions. Yet even the most impressive and flexible of today’s AI tools – such as the large language models behind the likes of ChatGPT – can have the opposite effect.

Why? They have two crucial weaknesses. They do not help decision-makers understand causation or uncertainty. And they create incentives to collect huge amounts of data and may encourage a lax attitude to privacy, legal and ethical questions and risks…

ChatGPT and other “foundation models” use an approach called deep learning to trawl through enormous datasets and identify associations between factors contained in that data, such as the patterns of language or links between images and descriptions. Consequently, they are great at interpolating – that is, predicting or filling in the gaps between known values.

Interpolation is not the same as creation. It does not generate knowledge, nor the insights necessary for decision-makers operating in complex environments.
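The distinction is visible even with the simplest possible interpolator: within the range of known data points, a piecewise-linear fit tracks the underlying function closely, but queried outside that range it can only extend the last segment it saw, with no grounding in data. The curve and numbers below are invented for illustration:

```python
import bisect

def linear_interp(xs, ys, x):
    """Piecewise-linear interpolation over known points (xs ascending).
    Outside the known range we naively extend the nearest segment,
    which is exactly where predictions stop being grounded in data."""
    if x <= xs[0]:
        i = 0
    elif x >= xs[-1]:
        i = len(xs) - 2
    else:
        i = bisect.bisect_right(xs, x) - 1
    x0, x1, y0, y1 = xs[i], xs[i + 1], ys[i], ys[i + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Known values of an underlying curve y = x**2, sampled at x = 0..4.
xs = [0, 1, 2, 3, 4]
ys = [x * x for x in xs]

print(linear_interp(xs, ys, 2.5))  # 6.5  (true value 6.25: close)
print(linear_interp(xs, ys, 10))   # 58.0 (true value 100: far off)
```

Large models interpolate in vastly higher-dimensional spaces, but the failure mode is analogous: between well-sampled regions they are impressive, beyond them they are guessing.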

However, these approaches require huge amounts of data. As a result, they encourage organisations to assemble enormous repositories of data – or trawl through existing datasets collected for other purposes. Dealing with “big data” brings considerable risks around security, privacy, legality and ethics.

In low-stakes situations, predictions based on “what the data suggest will happen” can be incredibly useful. But when the stakes are higher, there are two more questions we need to answer.

The first is about how the world works: “what is driving this outcome?” The second is about our knowledge of the world: “how confident are we about this?”…(More)”.

Assembly required


Article by Claudia Chwalisz: “What is the role of political leadership in a new democratic paradigm defined by citizen participation, representation by lot and deliberation? What is or should be the role and relationship of politicians and political parties with citizens? What does a new approach to activating citizenship (in its broad sense) through practice and education entail? These are some questions that I am grappling with, having worked on democratic innovation and citizens’ assemblies for over a decade, with my views evolving greatly over time.

First, a definition. A citizens’ assembly is a bit like jury duty for policy. It is a broadly representative group of people selected by lottery (sortition) who meet for at least four to six days over a few months to learn about an issue, weigh trade-offs, listen to one another and find common ground on shared recommendations.

To take a recent example, the French Citizens’ Assembly on End of Life comprised 184 members, selected by lot, who deliberated for 27 days over the course of four months. Their mandate was to recommend whether, and if so how, existing legislation about assisted dying, euthanasia and related end-of-life matters should be amended. The assembly heard from more than 60 experts, deliberated with one another, and found 92% consensus on 67 recommendations, which they formulated and delivered to President Emmanuel Macron on 3 April 2023. As of November 2021, the Organisation for Economic Co-operation and Development (OECD) has counted almost 600 citizens’ assemblies for public decision-making around the world, addressing complex issues from drug policy reform to biodiversity loss, urban planning decisions, climate change, infrastructure investment, constitutional issues such as abortion and more.

I believe citizens’ assemblies are a key part of the way forward. I believe the lack of agency people feel to be shaping their lives and their communities is at the root of the democratic crisis – leading to ever-growing numbers of people exiting the formal political system entirely, or else turning to extremes (which often have a legitimate analysis of the problems we face but offer no genuine solutions, and are often dangerous in their perpetuation of divisiveness and sometimes even violence). This is also related to a feeling of a lack of dignity and belonging, perpetuated in a culture where people look down on others with moral superiority, and humiliation abounds, as Amanda Ripley explains in her work on ‘high conflict’. She distinguishes ‘high conflict’ from ‘good conflict’, which is respectful, necessary, and generative, and occurs in settings where there is openness and curiosity. In this context, our current democratic institutions are fuelling divisions, their legitimacy is weakened, and trust is faltering in all directions (of people in government, of government in people and of people in one another)…(More)”.

How to Regulate AI? Start With the Data


Article by Susan Ariel Aaronson: “We live in an era of data dichotomy. On one hand, AI developers rely on large data sets to “train” their systems about the world and respond to user questions. These data troves have become increasingly valuable and visible. On the other hand, despite the import of data, U.S. policy makers don’t view data governance as a vehicle to regulate AI.  

U.S. policy makers should reconsider that perspective. As an example, the European Union, and more than 30 other countries, provide their citizens with a right not to be subject to automated decision-making without explicit consent. Data governance is clearly an effective way to regulate AI.

Many AI developers treat data as an afterthought, but how AI firms collect and use data can tell you a lot about the quality of the AI services they produce. Firms and researchers struggle to collect, classify, and label data sets that are large enough to reflect the real world, but then don’t adequately clean (remove anomalies or problematic data) and check their data. Also, few AI developers and deployers divulge information about the data they use to train AI systems. As a result, we don’t know if the data that underlies many prominent AI systems is complete, consistent, or accurate. We also don’t know where that data comes from (its provenance). Without such information, users don’t know if they should trust the results they obtain from AI. 

The Washington Post set out to document this problem. It collaborated with the Allen Institute for AI to examine Google’s C4 data set, a large corpus widely used to train language models, built from data scraped by bots from 15 million websites. Google then filters the data, but it understandably can’t filter the entire data set.

Hence, this data set provides sufficient training data, but it also presents major risks for those firms or researchers who rely on it. Web scraping is generally legal in most countries as long as the scraped data isn’t used to cause harm to society, a firm, or an individual. But the Post found that the data set contained swaths of data from sites that sell pirated or counterfeit data, which the Federal Trade Commission views as harmful. Moreover, to be legal, the scraped data should not include personal data obtained without user consent or proprietary data obtained without firm permission. Yet the Post found large amounts of personal data in the data sets as well as some 200 million instances of copyrighted data denoted with the copyright symbol.
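A first-pass audit of the kind the Post performed can be sketched as a scan of scraped documents for copyright markers and email-like strings (a crude stand-in for personal data). The patterns and sample documents below are invented for illustration; real provenance and PII audits are far more involved than this:

```python
import re

# Crude patterns for a first-pass audit of scraped text.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
COPYRIGHT_RE = re.compile(r"©|\(c\)\s*\d{4}", re.IGNORECASE)

def audit_scraped_docs(docs):
    """Count documents in a scraped corpus that contain copyright
    markers or email-like strings (a proxy for personal data)."""
    flagged = {"copyright": 0, "personal": 0}
    for doc in docs:
        if COPYRIGHT_RE.search(doc):
            flagged["copyright"] += 1
        if EMAIL_RE.search(doc):
            flagged["personal"] += 1
    return flagged

docs = [
    "© 2021 Example Media. All rights reserved.",
    "Contact me at jane.doe@example.com for the dataset.",
    "Open text with no obvious markers.",
]
print(audit_scraped_docs(docs))  # → {'copyright': 1, 'personal': 1}
```

Even this trivial scan shows why scale is the problem: flagging is cheap, but adjudicating millions of flagged documents (is this fair use? was consent given?) is not something a regex can do.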

Reliance on scraped data sets presents other risks. Without careful examination of the data sets, the firms relying on that data and their clients cannot know if it contains incomplete or inaccurate data, which in turn could lead to problems of bias, propaganda, and misinformation. But researchers cannot check data accuracy without information about data provenance. Consequently, the firms that rely on such unverified data are creating some of the AI risks regulators hope to avoid. 

It makes sense for Congress to start with data as it seeks to govern AI. There are several steps Congress could take…(More)”.