Can We Trust Social Science Yet?


Essay by Ryan Briggs: “Everyone likes the idea of evidence-based policy, but it’s hard to realize it when our most reputable social science journals are still publishing poor-quality research.

Ideally, policy and program design is a straightforward process: a decision-maker faces a problem, turns to peer-reviewed literature, and selects interventions shown to work. In reality, that’s rarely how things unfold. The popularity of “evidence-based medicine” and other “evidence-based” topics highlights our desire for empirical approaches — but would the world actually improve if those in power consistently took social science evidence seriously? It brings me no joy to tell you that, at present, I think the answer is usually “no.”

Given the current state of evidence production in the social sciences, I believe that many — perhaps most — attempts to use social scientific evidence to inform policy will not lead to better outcomes. This is not because of politics or the challenges of scaling small programs. The problem is more immediate. Much social science research is of poor quality, and separating trustworthy work from bad work is difficult, costly, and time-consuming.

But it is necessary. If you were to randomly select an empirical paper published in the past decade — even from the top journals in political science or economics — there is a high chance that its findings are inaccurate. And not just off by a little: possibly off by a factor of two, or even incorrectly signed. As an academic, this bothers me. I think it should bother you, too. So let me explain why this happens…(More)”.

Simulating Human Behavior with AI Agents


Brief by The Stanford Institute for Human-Centered AI (HAI): “…we introduce an AI agent architecture that simulates more than 1,000 real people. The agent architecture—built by combining the transcripts of two-hour, qualitative interviews with a large language model (LLM) and scored against social science benchmarks—successfully replicated real individuals’ responses to survey questions 85% as accurately as participants replicate their own answers across surveys staggered two weeks apart. The generative agents performed comparably in predicting people’s personality traits and experiment outcomes and were less biased than previously used simulation tools.
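
To make the mechanism concrete, here is a minimal sketch of how an interview-grounded agent of this kind might be prompted to answer a survey question. It assumes an OpenAI-style chat client; the prompt wording, model name, and function are illustrative stand-ins, not the brief’s actual architecture.

```python
# Minimal sketch, not the HAI brief's actual implementation: condition an
# LLM on a participant's interview transcript, then ask it to answer a
# survey question as that person would. Assumes the OpenAI Python client
# (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def simulate_survey_answer(transcript: str, question: str,
                           options: list[str]) -> str:
    """Predict one participant's survey response from their interview."""
    system_msg = (
        "You will role-play a specific real person based on the interview "
        "transcript below. Answer survey questions exactly as this person "
        "would, replying with a single option."
    )
    user_msg = (
        f"Interview transcript:\n{transcript}\n\n"
        f"Survey question: {question}\n"
        f"Options: {', '.join(options)}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of model
        messages=[{"role": "system", "content": system_msg},
                  {"role": "user", "content": user_msg}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()
```

Evaluating such an agent would then amount to comparing its predicted answers with the same participant’s own answers from surveys taken two weeks apart, which is the normalization the brief’s 85% figure refers to.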

This architecture underscores the benefits of using generative agents as a research tool to glean new insights into real-world individual behavior. However, researchers and policymakers must also mitigate the risks of using generative agents in such contexts, including harms related to over-reliance on agents, privacy, and reputation…(More)”.

We still don’t know how much energy AI consumes


Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.

By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. Each model is assigned a score of one to five stars based on its relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the power required.
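
As a toy illustration of how relative efficiency might be turned into star ratings (one plausible binning scheme, not the project’s exact methodology), consider placing each model’s measured energy per task on a log scale:

```python
# Toy illustration only: map per-task energy measurements (e.g.
# watt-hours per 1,000 queries) onto 1-5 star ratings by binning on a
# log scale. The AI Energy Score project's real methodology may differ.
import math

def star_ratings(energy_wh: dict[str, float]) -> dict[str, int]:
    """Assign 1-5 stars (5 = most efficient) from relative energy use."""
    lo = math.log10(min(energy_wh.values()))
    hi = math.log10(max(energy_wh.values()))
    span = (hi - lo) or 1.0  # avoid dividing by zero for a single model
    ratings = {}
    for model, wh in energy_wh.items():
        frac = (math.log10(wh) - lo) / span  # 0 = most, 1 = least efficient
        ratings[model] = 5 - min(4, int(frac * 5))
    return ratings

# Hypothetical numbers chosen to mirror the 62,000-fold spread above.
print(star_ratings({"small": 0.5, "mid": 40.0, "frontier": 31_000.0}))
# {'small': 5, 'mid': 4, 'frontier': 1}
```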

Since the project was launched in February, a new tool has been added that compares the energy use of chatbot queries with everyday activities such as phone charging or driving, as a way to help users understand the environmental impact of the tech they use daily.

The tech sector is aware that AI emissions put its climate commitments in danger. Neither Microsoft nor Google appears to be on track to meet its net zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.

It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.

Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector accounted for 92 per cent of new clean energy purchases in the US.

But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.

As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.

Gen Z’s new side hustle: selling data


Article by Erica Pandey: “Many young people are more willing than their parents to share personal data, giving companies deeper insight into their lives.

Why it matters: Selling data is becoming the new selling plasma.

Case in point: Generation Lab, a youth polling company, is launching a new product, Verb.AI, today — betting that buying this data is the future of polling.

  • “We think corporations have extracted user data without fairly compensating people for their own data,” says Cyrus Beschloss, CEO of Generation Lab. “We think users should know exactly what data they’re giving us and should feel good about what they’re receiving in return.”

How it works: Generation Lab offers people cash — $50 or more per month, depending on use and other factors — to download a tracker onto their phones.

  • The product takes about 90 seconds to download, and once it’s on your phone, it tracks things like what you browse, what you buy, which streaming apps you use — all anonymously. There are also things it doesn’t track, like activity on your bank account.
  • Verb then uses that data to create a digital twin of you that lives in a central database and knows your preferences…(More)”.
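
For readers curious what “anonymous tracking plus a digital twin” could mean in practice, here is a purely hypothetical sketch; the field names and exclusion logic are invented for illustration and are not Verb.AI’s actual schema.

```python
# Hypothetical sketch of an anonymized tracking event and a simple
# "preference profile" built from such events. Field names are invented
# for illustration; this is not Verb.AI's actual schema.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class TrackedEvent:
    user_hash: str  # pseudonymous ID, not a name or phone number
    category: str   # e.g. "browsing", "purchase", "streaming"
    item: str       # e.g. a domain, product category, or app name

EXCLUDED_CATEGORIES = {"banking"}  # per the description, never collected

@dataclass
class PreferenceProfile:
    interests: Counter = field(default_factory=Counter)

    def ingest(self, event: TrackedEvent) -> None:
        if event.category in EXCLUDED_CATEGORIES:
            return  # sensitive activity is dropped, not stored
        self.interests[(event.category, event.item)] += 1

profile = PreferenceProfile()
profile.ingest(TrackedEvent("u_3f9a", "streaming", "sports"))
profile.ingest(TrackedEvent("u_3f9a", "banking", "checking"))  # ignored
print(profile.interests)  # Counter({('streaming', 'sports'): 1})
```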

Public AI White Paper – A Public Alternative to Private AI Dominance


White paper by the Bertelsmann Stiftung and Open Future: “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges.

The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it…(More)”.

What Happens When AI-Generated Lies Are More Compelling than the Truth?


Essay by Nicholas Carr: “…In George Orwell’s 1984, the functionaries in Big Brother’s Ministry of Truth spend their days rewriting historical records, discarding inconvenient old facts and making up new ones. When the truth gets hazy, tyrants get to define what’s true. The irony here is sharp. Artificial intelligence, perhaps humanity’s greatest monument to logical thinking, may trigger a revolution in perception that overthrows the shared values of reason and rationality we inherited from the Enlightenment.

In 1957, a Russian scientist-turned-folklorist named Yuri Mirolyubov published a translation of an ancient manuscript—a thousand years old, he estimated—in a Russian-language newspaper in San Francisco. Mirolyubov’s Book of Veles told stirring stories of the god Veles, a prominent deity in pre-Christian Slavic mythology. A shapeshifter, magician, and trickster, Veles would visit the mortal world in the form of a bear, sowing mischief wherever he went.

Mirolyubov claimed that the manuscript, written on thin wooden boards bound with leather straps, had been discovered by a Russian soldier in a bombed-out Ukrainian castle in 1919. The soldier had photographed the boards and given the pictures to Mirolyubov, who translated the work into modern Russian. Mirolyubov illustrated his published translation with one of the photographs, though the original boards, he said, had disappeared mysteriously during the Second World War. Though historians and linguists soon dismissed the folklorist’s Book of Veles as a hoax, its renown spread. Today, it’s revered as a holy text by certain neo-pagan and Slavic nationalist cults.

Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square.

Myths are works of art. They provide a way of understanding the world that appeals not to reason but to emotion, not to the conscious mind but to the subconscious one. What is most pleasing to our sensibilities—what is most beautiful to us—is what feels most genuine, most worthy of belief. History and psychology both suggest that, in politics as in art, generative AI will succeed in fulfilling the highest aspiration of its creators: to make the virtual feel more authentic than the real…(More)”

How Media Ownership Matters


Book by Rodney Benson, Mattias Hessérus, Timothy Neff, and Julie Sedel: “Does it matter who owns and funds the media? As journalists and management consultants set off in search of new business models, there’s a pressing need to understand anew the economic underpinnings of journalism and its role in democratic societies.

How Media Ownership Matters provides a fresh approach to understanding news media power, moving beyond the typical emphasis on market concentration or media moguls. Through a comparative analysis of the US, Sweden, and France, as well as interviews with news executives and editors and an original collection of industry data, this book maps and analyzes four ownership models: market, private, civil society, and public. Highlighting the effects of organizational logics, funding, and target audiences on the content of news, the authors identify the strengths and weaknesses that various forms of ownership have in facilitating journalism that meets the democratic ideals of reasoned, critical, and inclusive public debate. Ultimately, How Media Ownership Matters provides a roadmap to understanding how variable forms of ownership are shaping the future of journalism and democracy…(More)”.

From Software to Society — Openness in a changing world


Report by Henriette Litta and Peter Bihr: “…takes stock and looks to the future: What does openness mean in the digital age? Is the concept still up to date? The study traces the development of openness and analyses current challenges. It is based on interviews with experts and extensive literature research. The key insights at a glance are:

Give Openness a purpose. Especially in times of increasing injustice, surveillance and power monopolies, a clear framework for meaningful openness is needed, and such a framework is often lacking. Companies market ‘open’ products without enabling co-creation. Political actors invoke openness without strengthening democratic control. This is particularly evident when dealing with AI. AI systems are complex and often dominated by a few tech companies, which makes opening them up a fundamental challenge. Some of these companies also exploit their dominance heavily, which can lead to the suppression of dissenting opinions.

Protect Openness by adding guard rails. Those who demand openness must also be prepared to get involved in political disputes – against a market monopoly, for example. According to Litta and Bihr, this requires new licence models that include give-back and share-alike obligations, as well as stricter enforcement of antitrust law and data protection. Openness therefore needs rules…(More)”.

Federated learning for children’s data


Article by Roy Saurabh: “Across the world, governments are prioritizing the protection of citizens’ data – especially that of children. New laws, dedicated data protection authorities, and digital infrastructure initiatives reflect a growing recognition that data is not just an asset, but a foundation for public trust. 

Yet a major challenge remains: how can governments use sensitive data to improve outcomes – such as in education – without undermining the very privacy protections they are committed to upholding?

One promising answer lies in federated, governance-aware approaches to data use. But realizing this potential requires more than new technology; it demands robust data governance frameworks designed from the outset.
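
For readers unfamiliar with the mechanics, a minimal sketch of federated averaging (FedAvg), the core pattern behind federated learning, appears below. It is illustrative only: each data holder trains on its own records and shares only model weights, and a production system would layer on secure aggregation, differential privacy, and the governance rules discussed next.

```python
# Minimal sketch of federated averaging (FedAvg): each data holder (e.g.
# a ministry) computes a model update on its own records, and only the
# updates -- never the raw, child-level data -- leave the premises.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's gradient-descent update on its private data (linear model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(weights, clients):
    """Average clients' locally trained weights, weighted by dataset size."""
    sizes = np.array([len(y) for _, y in clients])
    updates = np.stack([local_update(weights, X, y) for X, y in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three "ministries", each holding a private slice of the data.
clients = []
for n in (40, 60, 80):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0] without pooling any raw records
```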

Data governance: The missing link

In many countries, ministries of education, health, and social protection each hold pieces of the puzzle that together could provide a more complete picture of children’s learning and well-being. For example, a child’s school attendance, nutritional status, and family circumstances all shape their ability to thrive, yet these records are kept in separate systems.

Efforts to combine such data often run into legal and technical barriers. Centralized data lakes raise concerns about consent, security, and compliance with privacy laws. In fact, many international standards stress the principle of data minimization – the idea that personal information should not be gathered or combined unnecessarily. 

This is where the right data governance frameworks become essential. Effective governance defines clear rules about how data can be accessed, shared, and used – specifying who has the authority, what purposes are permitted, and how rights are protected. These frameworks make it possible to collaborate with data responsibly, especially when it comes to children…(More)”

How to Break Down Silos and Collaborate Across Government


Blog by Jessica MacLeod: “…To help public sector leaders navigate these cultural barriers, I use a simple but powerful framework: Clarity, Care, and Challenge. It’s built from research, experience, and what I’ve seen actually shift how teams work. You can read more about the framework in my previous article on high-performing teams. Here’s how this framework relates to breaking down silos:

  • Clarity → How We Work:
    Clear priorities, aligned expectations, and a shared understanding of how individual work connects to the bigger picture.
  • Care → How We Relate:
    Trust, psychological safety, and strong collaboration.
  • Challenge → How We Achieve:
    Stretch goals, high standards, and a culture that encourages innovation and growth.

Silos thrive in ambiguity. If no one can see the work, understand the language, or map who owns what, collaboration dies on arrival.

When I work with public sector teams, one of the first things I look for is how visible the work is. Can people across departments explain where things stand on a project today? Or what the context is behind a project? Do they know who’s accountable? Can they locate the latest draft of the work without digging through three email chains?

Often, the answer is no, and it’s not because people aren’t trying. It’s because our systems are optimized for siloed visibility, not shared clarity.

Here’s what that looks like in practice:

  • A particular acronym means one thing to IT, another to leadership, and something entirely different to community stakeholders.
  • “Launch” for one team means public announcement. For another, it means testing a feature with a pilot group.
  • Documents live in private folders, on individual desktops, or in tools that don’t talk to each other…(More)”.