De Gruyter Handbook of Citizens’ Assemblies


Book edited by Min Reuchamps, Julien Vrydagh and Yanina Welp: “Citizens’ Assemblies (CAs) are flourishing around the world. Often composed of randomly selected citizens, CAs are seen by many as a possible answer to contemporary democratic challenges. Democracies worldwide are confronted with a series of disruptive phenomena, including widespread distrust, growing polarization, and poor performance. Many actors seek to reinvigorate democracy with citizen participation and deliberation, and CAs are expected to have the potential to meet this twofold objective. But despite the deliberative and inclusive qualities of CAs, many questions remain open. The increasing popularity of CAs calls for a holistic reflection on, and evaluation of, their origins, current uses and future directions.

The De Gruyter Handbook of Citizens’ Assemblies showcases the state of the art around the study of CAs and opens novel perspectives informed by multidisciplinary research and renewed thinking about deliberative participatory processes. It discusses the latest theoretical, empirical, and methodological scientific developments on CAs and offers a unique resource for scholars, decision-makers, practitioners, and curious citizens to better understand the qualities, purposes, promises but also pitfalls of CAs…(More)”.

Connecting After Chaos: Social Media and the Extended Aftermath of Disaster


Book by Stephen F. Ostertag: “Natural disasters and other such catastrophes typically attract large-scale media attention and public concern in their immediate aftermath. However, rebuilding efforts can take years or even decades, and communities are often left to repair physical and psychological damage on their own once public sympathy fades away. Connecting After Chaos tells the story of how people restored their lives and society in the months and years after disaster, focusing on how New Orleanians used social media to cope with trauma following Hurricane Katrina.

Stephen F. Ostertag draws on almost a decade of research to create a vivid portrait of life in “settling times,” a term he defines as a distinct social condition of prolonged insecurity and uncertainty after disasters. He portrays this precarious state through the story of how a group of strangers began blogging in the wake of Katrina, and how they used those blogs to put their lives and their city back together. In the face of institutional failure, weak authority figures, and an abundance of chaos, the people of New Orleans used social media to gain information, foster camaraderie, build support networks, advocate for and against proposed policies, and cope with trauma. In the efforts of these bloggers, Ostertag finds evidence of the capacity of this and other forms of cultural work to motivate, guide, and energize collective action aimed at weathering the constant instability of extended recovery periods. Connecting After Chaos is both a compelling story of a community in crisis and a broader argument for the power of social media and cultural cooperation to create order when chaos abounds…(More)”.

Meta Ran a Giant Experiment in Governance. Now It’s Turning to AI


Article by Aviv Ovadya: “Late last month, Meta quietly announced the results of an ambitious, near-global deliberative “democratic” process to inform decisions around the company’s responsibility for the metaverse it is creating. This was not an ordinary corporate exercise. It involved over 6,000 people who were chosen to be demographically representative across 32 countries and 19 languages. The participants spent many hours in conversation in small online group sessions and got to hear from non-Meta experts about the issues under discussion. Eighty-two percent of the participants said that they would recommend this format as a way for the company to make decisions in the future.

Meta has now publicly committed to running a similar process for generative AI, a move that aligns with the huge burst of interest in democratic innovation for governing or guiding AI systems. In doing so, Meta joins Google, DeepMind, OpenAI, Anthropic, and other organizations that are starting to explore approaches based on the kind of deliberative democracy that I and others have been advocating for. (Disclosure: I am on the application advisory committee for the OpenAI Democratic inputs to AI grant.) Having seen the inside of Meta’s process, I am excited about this as a valuable proof of concept for transnational democratic governance. But for such a process to truly be democratic, participants would need greater power and agency, and the process itself would need to be more public and transparent.

I first got to know several of the employees responsible for setting up Meta’s Community Forums (as these processes came to be called) in the spring of 2019 during a more traditional external consultation with the company to determine its policy on “manipulated media.” I had been writing and speaking about the potential risks of what is now called generative AI and was asked (alongside other experts) to provide input on the kind of policies Meta should develop to address issues such as misinformation that could be exacerbated by the technology.

At around the same time, I first learned about representative deliberations—an approach to democratic decision-making that has spread like wildfire, with increasingly high-profile citizen assemblies and deliberative polls all over the world. The basic idea is that governments bring difficult policy questions back to the public to decide. Instead of a referendum or elections, a representative microcosm of the public is selected via lottery. That group is brought together for days or even weeks (with compensation) to learn from experts, stakeholders, and each other before coming to a final set of recommendations…(More)”.

AI tools are designing entirely new proteins that could transform medicine


Article by Ewen Callaway: “OK. Here we go.” David Juergens, a computational chemist at the University of Washington (UW) in Seattle, is about to design a protein that, in 3-billion-plus years of tinkering, evolution has never produced.

On a video call, Juergens opens a cloud-based version of an artificial intelligence (AI) tool he helped to develop, called RFdiffusion. This neural network, and others like it, are helping to bring the creation of custom proteins — until recently a highly technical and often unsuccessful pursuit — to mainstream science.

These proteins could form the basis for vaccines, therapeutics and biomaterials. “It’s been a completely transformative moment,” says Gevorg Grigoryan, the co-founder and chief technical officer of Generate Biomedicines in Somerville, Massachusetts, a biotechnology company applying protein design to drug development.

The tools are inspired by AI software that synthesizes realistic images, such as the Midjourney software that, this year, was famously used to produce a viral image of Pope Francis wearing a designer white puffer jacket. A similar conceptual approach, researchers have found, can churn out realistic protein shapes to criteria that designers specify — meaning, for instance, that it’s possible to speedily draw up new proteins that should bind tightly to another biomolecule. And early experiments show that when researchers manufacture these proteins, a useful fraction do perform as the software suggests.

The tools have revolutionized the process of designing proteins in the past year, researchers say. “It is an explosion in capabilities,” says Mohammed AlQuraishi, a computational biologist at Columbia University in New York City, whose team has developed one such tool for protein design. “You can now create designs that have sought-after qualities.”

“You’re building a protein structure customized for a problem,” says David Baker, a computational biophysicist at UW whose group, which includes Juergens, developed RFdiffusion. The team released the software in March 2023, and a paper describing the neural network appears this week in Nature [1]. (A preprint version was released in late 2022, at around the same time that several other teams, including AlQuraishi’s [2] and Grigoryan’s [3], reported similar neural networks)…(More)”.

Just Citation


Paper by Amanda Levendowski: “Contemporary citation practices are often unjust. Data cartels, like Google, Westlaw, and Lexis, prioritize profits and efficiency in ways that threaten people’s autonomy, particularly that of pregnant people and immigrants. Women and people of color have been legal scholars for more than a century, yet colleagues consistently under-cite and under-acknowledge their work. Other citations frequently lead to materials that cannot be accessed by disabled people, poor people or the public due to design, paywalls or link rot. Yet scholars and students often understand citation practices as “just” citation and perpetuate these practices unknowingly. This Article is an intervention. Using an intersectional feminist framework for understanding how cyberlaws oppress and liberate oppressed people, an emerging movement known as feminist cyberlaw, this Article investigates problems posed by prevailing citation practices and introduces practical methods that bring citation into closer alignment with the feminist values of safety, equity, and accessibility. Escaping data cartels, engaging marginalized scholars, embracing free and public resources, and ensuring that those resources remain easily available represent small, radical shifts that promote just citation. This Article provides powerful, practical tools for pursuing all of them…(More)”.

ChatGPT took people by surprise – here are four technologies that could make a difference next


Article by Fabian Stephany and Johann Laux: “…There are some AI technologies waiting on the sidelines right now that hold promise. The four we think are waiting in the wings are next-level GPT, humanoid robots, AI lawyers, and AI-driven science. Our choices appear ready from a technological point of view, but whether they satisfy all three of the criteria we’ve mentioned is another matter. We chose these four because they were the ones that kept coming up in our investigations into progress in AI technologies.

1. AI legal help

The startup company DoNotPay claims to have built a legal chatbot – built on LLM technology – that can advise defendants in court.

The company recently said it would let its AI system help two defendants fight speeding tickets in real-time. Connected via an earpiece, the AI can listen to proceedings and whisper legal arguments into the ear of the defendant, who then repeats them out loud to the judge.

After criticism and a lawsuit for practising law without a license, the startup postponed the AI’s courtroom debut. The potential for the technology will thus not be decided by technological or economic constraints, but by the authority of the legal system.

Lawyers are well-paid professionals and the costs of litigation are high, so the economic potential for automation is huge. However, the US legal system currently seems to oppose robots representing humans in court.

2. AI scientific support

Scientists are increasingly turning to AI for insights. Machine learning, where an AI system improves at what it does over time, is being employed to identify patterns in data. This enables the systems to propose novel scientific hypotheses – proposed explanations for phenomena in nature. These may even be capable of surpassing human assumptions and biases.

For example, researchers at the University of Liverpool used a machine learning system called a neural network to rank chemical combinations for battery materials, guiding their experiments and saving time.

The complexity of neural networks means that there are gaps in our understanding of how they actually make decisions – the so-called black box problem. Nevertheless, there are techniques that can shed light on the logic behind their answers and this can lead to unexpected discoveries.

While AI cannot currently formulate hypotheses independently, it can inspire scientists to approach problems from new perspectives…(More)”.

AI and the automation of work


Essay by Benedict Evans: “…We should start by remembering that we’ve been automating work for 200 years. Every time we go through a wave of automation, whole classes of jobs go away, but new classes of jobs get created. There is frictional pain and dislocation in that process, and sometimes the new jobs go to different people in different places, but over time the total number of jobs doesn’t go down, and we have all become more prosperous.

When this is happening to your own generation, it seems natural and intuitive to worry that this time, there aren’t going to be those new jobs. We can see the jobs that are going away, but we can’t predict what the new jobs will be, and often they don’t exist yet. We know (or should know), empirically, that there always have been those new jobs in the past, and that they weren’t predictable either: no-one in 1800 would have predicted that in 1900 a million Americans would work on ‘railways’ and no-one in 1900 would have predicted ‘video post-production’ or ‘software engineer’ as employment categories. But it seems insufficient to take it on faith that this will happen now just because it always has in the past. How do you know it will happen this time? Is this different?

At this point, any first-year economics student will tell us that this is answered by, amongst other things, the ‘Lump of Labour’ fallacy.

The Lump of Labour fallacy is the misconception that there is a fixed amount of work to be done, and that if some work is taken by a machine then there will be less work for people. But if it becomes cheaper to use a machine to make, say, a pair of shoes, then the shoes are cheaper, more people can buy shoes, and they have more money to spend on other things besides, and we discover new things we need or want, and new jobs. The efficiency gain isn’t confined to the shoe: generally, it ripples outward through the economy and creates new prosperity and new jobs. So, we don’t know what the new jobs will be, but we have a model that says, not just that there always have been new jobs, but why that is inherent in the process. Don’t worry about AI!

The most fundamental challenge to this model today, I think, is to say that no, what’s really been happening for the last 200 years of automation is that we’ve been moving up the scale of human capability…(More)”.

Open data for AI: what now?


UNESCO Report: “…A vast amount of data about the world, covering the environment, industry, agriculture and health, is now being collected through automatic processes, including sensors. Such data may be readily available, but they are potentially too big for humans to handle or analyse effectively; nonetheless, they could serve as input to AI systems. AI and data science techniques have demonstrated great capacity to analyse large amounts of data, as currently illustrated by generative AI systems, and to help uncover formerly unknown hidden patterns to deliver actionable information in real time. However, many contemporary AI systems run on proprietary datasets; data that fulfil the criteria of open data would benefit AI systems further and mitigate potential hazards of those systems, such as a lack of fairness, accountability, and transparency.

The aim of these guidelines is to apprise Member States of the value of open data and to outline how data are curated and opened. Member States are encouraged not only to support the openness of high-quality data, but also to embrace the use of AI technologies and to facilitate capacity building, training and education in this regard, covering inclusive open data as well as AI literacy…(More)”.

COVID-19 digital contact tracing worked — heed the lessons for future pandemics


Article by Marcel Salathé: “During the first year of the COVID-19 pandemic, around 50 countries deployed digital contact tracing. When someone tested positive for SARS-CoV-2, anyone who had been in close proximity to that person (usually for 15 minutes or more) would be notified as long as both individuals had installed the contact-tracing app on their devices.

Digital contact tracing received much media attention, and much criticism, in that first year. Many worried that the technology provided a way for governments and technology companies to have even more control over people’s lives than they already do. Others dismissed the apps as a failure, after public-health authorities hit problems in deploying them.

Three years on, the data tell a different story.

The United Kingdom successfully integrated a digital contact-tracing app with other public-health programmes and interventions, and collected data to assess the app’s effectiveness. Several analyses now show that, even with the challenges of introducing a new technology during an emergency, and despite relatively low uptake, the app saved thousands of lives. It has also become clearer that many of the problems encountered elsewhere were not to do with the technology itself, but with integrating a twenty-first-century technology into what are largely twentieth-century public-health infrastructures…(More)”.

Why Citizen-Driven Policy Making Is No Longer A Fringe Idea


Article by Tatjana Buklijas: “Deliberative democracy is a term that would have been met with blank stares in academic and political circles just a few decades ago.

Yet this approach, which examines ways to directly connect citizens with decision-making processes, has now become central to many calls for government reform across the world. 

This surge in interest was driven, first, by the 2008 financial crisis. After the banking crash, there was a crisis of trust in democratic institutions. In Europe and the United States, populist political movements helped push public sentiment in an increasingly anti-establishment direction.

The second driver was the perceived inability of representative democracy to respond effectively to long-term, intergenerational challenges, such as climate change and environmental decline.

Within the past few years, hundreds of citizens’ assemblies, juries and other forms of ‘minipublics’ have met to learn, deliberate and produce recommendations on topics ranging from housing shortages and COVID-19 policies to climate action.

One of the most recent assemblies in the United Kingdom was the People’s Plan for Nature, which produced a vision for the future of nature and the actions society must take to protect and renew it.

When it comes to climate action, experts argue that we need to move beyond showpiece national and international goal-setting, and bring decision-making closer to home. 

Scholars say that local and regional minipublics should be used much more frequently to produce climate policies, as this is where citizens experience the impact of the changing climate and where they can make everyday changes.

While some policymakers are critical of deliberative democracy and see these processes as redundant alongside existing deliberative bodies such as national parliaments, others are more supportive. They view them as a way to gain a better understanding both of what the public thinks and of how it might choose to implement change, after being given the chance to learn and deliberate on key questions.

Research has shown that the cognitive diversity of minipublics ensures better-quality decision-making than more experienced but more homogeneous traditional decision-making bodies…(More)”.