Can We Trust Social Science Yet?


Essay by Ryan Briggs: “Everyone likes the idea of evidence-based policy, but it’s hard to realize it when our most reputable social science journals are still publishing poor quality research.

Ideally, policy and program design is a straightforward process: a decision-maker faces a problem, turns to peer-reviewed literature, and selects interventions shown to work. In reality, that’s rarely how things unfold. The popularity of “evidence-based medicine” and other “evidence-based” topics highlights our desire for empirical approaches — but would the world actually improve if those in power consistently took social science evidence seriously? It brings me no joy to tell you that, at present, I think the answer is usually “no.”

Given the current state of evidence production in the social sciences, I believe that many — perhaps most — attempts to use social scientific evidence to inform policy will not lead to better outcomes. This is not because of politics or the challenges of scaling small programs. The problem is more immediate: much social science research is of poor quality, and separating trustworthy work from bad work is difficult, costly, and time-consuming.

But it is necessary. If you were to randomly select an empirical paper published in the past decade — even from the top journals in political science or economics — there is a good chance that its findings are inaccurate. And not just off by a little: possibly twice as large as the true effect, or even incorrectly signed. As an academic, this bothers me. I think it should bother you, too. So let me explain why this happens…(More)”.

Simulating Human Behavior with AI Agents


Brief by The Stanford Institute for Human-Centered AI (HAI): “…we introduce an AI agent architecture that simulates more than 1,000 real people. The agent architecture—built by combining the transcripts of two-hour, qualitative interviews with a large language model (LLM) and scored against social science benchmarks—successfully replicated real individuals’ responses to survey questions 85% as accurately as participants replicate their own answers across surveys staggered two weeks apart. The generative agents performed comparably in predicting people’s personality traits and experiment outcomes and were less biased than previously used simulation tools.

This architecture underscores the benefits of using generative agents as a research tool to glean new insights into real-world individual behavior. However, researchers and policymakers must also mitigate the risks of using generative agents in such contexts, including harms related to over-reliance on agents, privacy, and reputation…(More)”.

We still don’t know how much energy AI consumes


Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.

By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. The models are then assigned scores of one to five stars based on their relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the energy required.

Since the project was launched in February, a new tool has been added that compares the energy use of chatbot queries with everyday activities such as phone charging or driving, to help users understand the environmental impacts of the tech they use daily.

The tech sector is aware that AI emissions put its climate commitments in danger. Neither Microsoft nor Google currently appears to be on track to meet its net zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.

It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.

Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector accounted for 92 per cent of new clean energy purchases in the US.

But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.

As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.

Gen Z’s new side hustle: selling data


Article by Erica Pandey: “Many young people are more willing than their parents to share personal data, giving companies deeper insight into their lives.

Why it matters: Selling data is becoming the new selling plasma.

Case in point: Generation Lab, a youth polling company, is launching a new product, Verb.AI, today — betting that buying this data is the future of polling.

  • “We think corporations have extracted user data without fairly compensating people for their own data,” says Cyrus Beschloss, CEO of Generation Lab. “We think users should know exactly what data they’re giving us and should feel good about what they’re receiving in return.”

How it works: Generation Lab offers people cash — $50 or more per month, depending on use and other factors — to download a tracker onto their phones.

  • The product takes about 90 seconds to download, and once it’s on your phone, it tracks things like what you browse, what you buy, which streaming apps you use — all anonymously. There are also things it doesn’t track, like activity on your bank account.
  • Verb then uses that data to create a digital twin of you that lives in a central database and knows your preferences…(More)”.

Designing Shared Data Futures: Engaging young people on how to re-use data responsibly for health and well-being


Report by Hannah Chafetz, Sampriti Saxena, Tracy Jo Ingram, Andrew J. Zahuranec, Jennifer Requejo and Stefaan Verhulst: “When young people are engaged in data decisions for or about them, they not only become more informed about this data, but can also contribute to new policies and programs that improve their health and well-being. Too often, however, young people are left out of these discussions and are unaware of the data that organizations collect.

In October 2023, The Second Lancet Commission on Adolescent Health and Well-being, the United Nations Children’s Fund (UNICEF), and The GovLab at New York University hosted six Youth Solutions Labs (or co-design workshops) with over 120 young people from 36 countries around the world. In addition to co-designing solutions to five key issues impacting their health and well-being, we sought to understand current sentiments around the re-use of data on those issues. The Labs provided several insights about young people’s preferences regarding: 1) the purposes for which data should be re-used to improve health and well-being, 2) the types and sources of data that should and should not be re-used, 3) who should have access to previously collected data, and 4) under what circumstances data re-use should take place. Additionally, participants provided suggestions of what ethical and responsible data re-use looks like to them and how young people can participate in decision-making processes. In this paper, we elaborate on these findings and provide a series of recommendations to accelerate responsible data re-use for the health and well-being of young people…(More)”.

Public AI White Paper – A Public Alternative to Private AI Dominance


White paper by the Bertelsmann Stiftung and Open Future: “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges.

The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it…(More)”.

What Happens When AI-Generated Lies Are More Compelling than the Truth?


Essay by Nicholas Carr: “…In George Orwell’s 1984, the functionaries in Big Brother’s Ministry of Truth spend their days rewriting historical records, discarding inconvenient old facts and making up new ones. When the truth gets hazy, tyrants get to define what’s true. The irony here is sharp. Artificial intelligence, perhaps humanity’s greatest monument to logical thinking, may trigger a revolution in perception that overthrows the shared values of reason and rationality we inherited from the Enlightenment.

In 1957, a Russian scientist-turned-folklorist named Yuri Mirolyubov published a translation of an ancient manuscript—a thousand years old, he estimated—in a Russian-language newspaper in San Francisco. Mirolyubov’s Book of Veles told stirring stories of the god Veles, a prominent deity in pre-Christian Slavic mythology. A shapeshifter, magician, and trickster, Veles would visit the mortal world in the form of a bear, sowing mischief wherever he went.

Mirolyubov claimed that the manuscript, written on thin wooden boards bound with leather straps, had been discovered by a Russian soldier in a bombed-out Ukrainian castle in 1919. The soldier had photographed the boards and given the pictures to Mirolyubov, who translated the work into modern Russian. Mirolyubov illustrated his published translation with one of the photographs, though the original boards, he said, had disappeared mysteriously during the Second World War. Though historians and linguists soon dismissed the folklorist’s Book of Veles as a hoax, its renown spread. Today, it’s revered as a holy text by certain neo-pagan and Slavic nationalist cults.

Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square.

Myths are works of art. They provide a way of understanding the world that appeals not to reason but to emotion, not to the conscious mind but to the subconscious one. What is most pleasing to our sensibilities—what is most beautiful to us—is what feels most genuine, most worthy of belief. History and psychology both suggest that, in politics as in art, generative AI will succeed in fulfilling the highest aspiration of its creators: to make the virtual feel more authentic than the real…(More)”

How Media Ownership Matters


Book by Rodney Benson, Mattias Hessérus, Timothy Neff, and Julie Sedel: “Does it matter who owns and funds the media? As journalists and management consultants set off in search of new business models, there’s a pressing need to understand anew the economic underpinnings of journalism and its role in democratic societies.

How Media Ownership Matters provides a fresh approach to understanding news media power, moving beyond the typical emphasis on market concentration or media moguls. Through a comparative analysis of the US, Sweden, and France, as well as interviews with news executives and editors and an original collection of industry data, this book maps and analyzes four ownership models: market, private, civil society, and public. Highlighting the effects of organizational logics, funding, and target audiences on the content of news, the authors identify both the strengths and weaknesses various forms of ownership have in facilitating journalism that meets the democratic ideals of reasoned, critical, and inclusive public debate. Ultimately, How Media Ownership Matters provides a roadmap to understanding how variable forms of ownership are shaping the future of journalism and democracy…(More)”.

Government ‘With’ The People 


Article by Nathan Gardels: “The rigid polarization that has gripped our societies and eroded trust in each other and in governing institutions feeds the appeal of authoritarian strongmen. Posing as tribunes of the people, they promise to lay down the law (rather than be constrained by it) and put the house in order not by bridging divides, but by targeting scapegoats and persecuting political adversaries who don’t conform to their ideological and cultural worldview.

The alternative to this course of illiberal democracy is the exact opposite: engaging citizens directly in governance through non-partisan platforms that encourage and enable deliberation, negotiation and compromise, to reach consensus across divides. Even as politics is tilting the other way at the national level, this approach of participation without populism is gaining traction from the bottom up.

The embryonic forms of this next step in democratic innovation, such as citizens’ assemblies or virtual platforms for bringing the public together and listening at scale, have so far been mostly advisory to the powers-that-be, with no guarantee that citizen input will have a binding impact on legislation or policy formation. That is beginning to change….

Claudia Chwalisz, who heads DemocracyNext, has spelled out the key elements of this innovative process that make it a model for others elsewhere:

  • Implementation should be considered from the start, not as an afterthought. The format of the final recommendations, the process for final approval, and the time needed to ensure this part of the process does not get neglected need to be considered in the early design stages of the assembly.
  • Dedicated time and resources for transforming recommendations into legislation are also crucial for successful implementation. Bringing citizens, politicians, and civil servants together in the final stages can help bridge the gap between recommendations and action. While it has been more typical for citizens’ assemblies to draft recommendations that they then hand onward to elected officials and civil servants, who review them and then respond to the citizens’ assembly, the Parisian model demonstrates another way.
  • Collaborative workshops, where consensus amongst the triad of actors is needed, add more time to the process, but ensure a high level of consensus for the final output and reduce the time officials would otherwise need to review and respond to the citizens’ assembly’s recommendations.
  • Formal institutional integration of citizens’ assemblies through legal measures can help ensure their recommendations are taken seriously and ensures the assembly’s continuity regardless of shifts in government. The citizens’ assembly has become a part of Paris’s democratic architecture, as have other permanent citizens’ assemblies elsewhere. While one-off assemblies typically depend on political will at a moment in time and risk becoming politicized — i.e. in being associated with the party that initially launched the first one — an institutionalized citizens’ assembly anchored in policy and political decision-making helps to set the foundation for a new institution that can endure.
  • It is also important that there is regular engagement with all political parties and stakeholders throughout the process. This helps build cross-partisan support for final recommendations, as well as more sustainable support for the enduring nature of the permanent citizens’ assembly.”…(More)”.

Addressing Digital Harms in Conflict


Report by Henriette Litta and Peter Bihr: “…takes stock and looks to the future: What does openness mean in the digital age? Is the concept still up to date? The study traces the development of Openness and analyses current challenges. It is based on interviews with experts and extensive literature research. The key insights at a glance are:

  • Give Openness a purpose.
  • Protect Openness by adding guard rails.
  • Open innovation and infrastructure need investments.
  • Openness is not neutral.
  • Market domination needs to be curtailed…(More)”.