How Being Watched Changes How You Think


Article by Simon Makin: “In 1785 English philosopher Jeremy Bentham designed the perfect prison: Cells circle a tower from which an unseen guard can observe any inmate at will. As far as a prisoner knows, at any given time, the guard may be watching—or may not be. Inmates have to assume they’re constantly observed and behave accordingly. Welcome to the Panopticon.

Many of us will recognize this feeling of relentless surveillance. Information about who we are, what we do and buy and where we go is increasingly available to completely anonymous third parties. We’re expected to present much of our lives to online audiences and, in some social circles, to share our location with friends. Millions of effectively invisible closed-circuit television (CCTV) cameras and smart doorbells watch us in public, and we know facial recognition with artificial intelligence can put names to faces.

So how does being watched affect us? “It’s one of the first topics to have been studied in psychology,” says Clément Belletier, a psychologist at the University of Clermont Auvergne in France. In 1898 psychologist Norman Triplett showed that cyclists raced harder in the presence of others. From the 1970s onward, studies showed how we change our overt behavior when we are watched in order to manage our reputation and social consequences.

But being watched doesn’t just change our behavior; decades of research show it also infiltrates our mind to impact how we think. And now a new study reveals how being watched affects unconscious processing in our brain. In this era of surveillance, researchers say, the findings raise concerns about our collective mental health…(More)”.

Citizen Centricity in Public Policy Making


Book by Naci Karkin and Volkan Göçoğlu: “The book explores and positions citizen centricity within conventional public administration and public policy analysis theories and approaches. It seeks to define an appropriate perspective while utilizing popular, independent, and standalone concepts from the literature that support citizen centricity. Additionally, it illustrates implementation with practical cases. It ultimately presents a novel, descriptive approach to provide insights into how citizen centricity can be applied in practice. This approach has three essential components: a base and two pillars. The base comprises new-age public policy making approaches and complexity theory. The first pillar reflects the conceptual dimension, which comprises supporting concepts from the literature on citizen centricity. The second pillar represents the practical dimension, a structure supported by academic research that provides practical cases and inspiration for future applications. The approach treats citizen centricity as fundamental to public policy making and aims to create a new awareness of the subject in the academic community. Additionally, the book provides refreshed conceptual and theoretical backgrounds, along with tangible participatory models and frameworks, benefiting academics, professionals, and graduate students…(More)”.

Where Cloud Meets Cement


Report by Hanna Barakat, Chris Cameron, Alix Dunn, Prathm Juneja, and Emma Prest: “This report examines the global expansion of data centers driven by AI and cloud computing, highlighting both their economic promises and the often-overlooked social and environmental costs. Through case studies across five countries, it investigates how governments and tech companies influence development, how communities resist harmful effects, and what support is needed for effective advocacy…(More)”.

Designing Shared Data Futures: Engaging young people on how to re-use data responsibly for health and well-being


Report by Hannah Chafetz, Sampriti Saxena, Tracy Jo Ingram, Andrew J. Zahuranec, Jennifer Requejo and Stefaan Verhulst: “When young people are engaged in data decisions for or about them, they not only become more informed about this data, but can also contribute to new policies and programs that improve their health and well-being. However, oftentimes young people are left out of these discussions and are unaware of the data that organizations collect.

In October 2023, The Second Lancet Commission on Adolescent Health and Well-being, the United Nations Children’s Fund (UNICEF), and The GovLab at New York University hosted six Youth Solutions Labs (or co-design workshops) with over 120 young people from 36 countries around the world. In addition to co-designing solutions to five key issues impacting their health and well-being, we sought to understand current sentiments around the re-use of data on those issues. The Labs provided several insights about young people’s preferences regarding: 1) the purposes for which data should be re-used to improve health and well-being, 2) the types and sources of data that should and should not be re-used, 3) who should have access to previously collected data, and 4) under what circumstances data re-use should take place. Additionally, participants provided suggestions of what ethical and responsible data re-use looks like to them and how young people can participate in decision making processes. In this paper, we elaborate on these findings and provide a series of recommendations to accelerate responsible data re-use for the health and well-being of young people…(More)”.

Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It


Article by Tiago C. Peixoto: “A few weeks ago, I reached out to a handful of seasoned digital services practitioners, NGOs, and philanthropies with a simple question: Where are the compelling generative AI (GenAI) use cases in public-sector workflows? I wasn’t looking for better search or smarter chatbots. I wanted examples of automation of real public workflows – something genuinely interesting and working. The responses, though numerous, were underwhelming.

That question has gained importance amid a growing number of reports forecasting AI’s transformative impact on government. The Alan Turing Institute, for instance, published a rigorous study estimating the potential of AI to help automate over 140 million government transactions in the UK. The Tony Blair Institute also weighed in, suggesting that a substantial portion of public-sector work could be automated. While the report helped bring welcome attention to the issue, its use of GPT-4 to assess task automatability has sparked a healthy discussion about how best to evaluate feasibility. Like other studies in this area, both reports highlight potential – but stop short of demonstrating real service automation.

Without testing technologies in real service environments – where workflows, incentives, and institutional constraints shape outcomes – and grounding each pilot in clear efficiency or well-being metrics, estimates risk becoming abstractions that underestimate feasibility.

This pattern aligns with what Arvind Narayanan and Sayash Kapoor argue in “AI as Normal Technology”: the impact of AI is realized only when methods translate into applications and diffuse through real-world systems. My own review, admittedly non-representative, confirms their call for more empirical work on the innovation-diffusion lag.

In the public sector, the gap between capability and impact is not only wide but also structural…(More)”

We still don’t know how much energy AI consumes


Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.

By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. The models are assigned scores ranging from one to five stars based on their relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the power required.

Since the project was launched in February, a new tool has been added that compares the energy use of chatbot queries with everyday activities such as phone charging or driving, as a way to help users understand the environmental impact of the tech they use daily.

The tech sector is aware that AI emissions put its climate commitments in danger. Both Microsoft and Google no longer seem to be meeting their net zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.

It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.

Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector accounted for 92 per cent of new clean energy purchases in the US.

But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.

As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.

What Happens When AI-Generated Lies Are More Compelling than the Truth?


Essay by Nicholas Carr: “…In George Orwell’s 1984, the functionaries in Big Brother’s Ministry of Truth spend their days rewriting historical records, discarding inconvenient old facts and making up new ones. When the truth gets hazy, tyrants get to define what’s true. The irony here is sharp. Artificial intelligence, perhaps humanity’s greatest monument to logical thinking, may trigger a revolution in perception that overthrows the shared values of reason and rationality we inherited from the Enlightenment.

In 1957, a Russian scientist-turned-folklorist named Yuri Mirolyubov published a translation of an ancient manuscript—a thousand years old, he estimated—in a Russian-language newspaper in San Francisco. Mirolyubov’s Book of Veles told stirring stories of the god Veles, a prominent deity in pre-Christian Slavic mythology. A shapeshifter, magician, and trickster, Veles would visit the mortal world in the form of a bear, sowing mischief wherever he went.

Mirolyubov claimed that the manuscript, written on thin wooden boards bound with leather straps, had been discovered by a Russian soldier in a bombed-out Ukrainian castle in 1919. The soldier had photographed the boards and given the pictures to Mirolyubov, who translated the work into modern Russian. Mirolyubov illustrated his published translation with one of the photographs, though the original boards, he said, had disappeared mysteriously during the Second World War. Though historians and linguists soon dismissed the folklorist’s Book of Veles as a hoax, its renown spread. Today, it’s revered as a holy text by certain neo-pagan and Slavic nationalist cults.

Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square.

Myths are works of art. They provide a way of understanding the world that appeals not to reason but to emotion, not to the conscious mind but to the subconscious one. What is most pleasing to our sensibilities—what is most beautiful to us—is what feels most genuine, most worthy of belief. History and psychology both suggest that, in politics as in art, generative AI will succeed in fulfilling the highest aspiration of its creators: to make the virtual feel more authentic than the real…(More)”

How Media Ownership Matters


Book by Rodney Benson, Mattias Hessérus, Timothy Neff, and Julie Sedel: “Does it matter who owns and funds the media? As journalists and management consultants set off in search of new business models, there’s a pressing need to understand anew the economic underpinnings of journalism and its role in democratic societies.

How Media Ownership Matters provides a fresh approach to understanding news media power, moving beyond the typical emphasis on market concentration or media moguls. Through a comparative analysis of the US, Sweden, and France, as well as interviews with news executives and editors and an original collection of industry data, this book maps and analyzes four ownership models: market, private, civil society, and public. Highlighting the effects of organizational logics, funding, and target audiences on the content of news, the authors identify both the strengths and weaknesses various forms of ownership have in facilitating journalism that meets the democratic ideals of reasoned, critical, and inclusive public debate. Ultimately, How Media Ownership Matters provides a roadmap to understanding how variable forms of ownership are shaping the future of journalism and democracy…(More)”.

From Software to Society — Openness in a changing world


Report by Henriette Litta and Peter Bihr: “…takes stock and looks to the future: What does openness mean in the digital age? Is the concept still up to date? The study traces the development of openness and analyses current challenges. It is based on interviews with experts and extensive literature research. The key insights at a glance are:

Give Openness a purpose. Especially in times of increasing injustice, surveillance and power monopolies, a clear framework for meaningful openness is needed, yet such a framework is often lacking. Companies market ‘open’ products without enabling co-creation. Political actors invoke openness without strengthening democratic control. This is particularly evident in AI: AI systems are complex and often dominated by a few tech companies – which makes opening them up a fundamental challenge. Some of these companies also exploit their dominance massively, which can lead to the censorship of dissenting opinions.

Protect Openness by adding guard rails. Those who demand openness must also be prepared to get involved in political disputes – against a market monopoly, for example. According to Litta and Bihr, this requires new licence models that include obligations to return and share, as well as stricter enforcement of antitrust law and data protection. Openness therefore needs rules…(More)”.

Government ‘With’ The People 


Article by Nathan Gardels: “The rigid polarization that has gripped our societies and eroded trust in each other and in governing institutions feeds the appeal of authoritarian strongmen. Posing as tribunes of the people, they promise to lay down the law (rather than be constrained by it) and put the house in order not by bridging divides, but by targeting scapegoats and persecuting political adversaries who don’t conform to their ideological and cultural worldview.

The alternative to this course of illiberal democracy is the exact opposite: engaging citizens directly in governance through non-partisan platforms that encourage and enable deliberation, negotiation and compromise, to reach consensus across divides. Even as politics is tilting the other way at the national level, this approach of participation without populism is gaining traction from the bottom up.

The embryonic forms of this next step in democratic innovation, such as citizens’ assemblies or virtual platforms for bringing the public together and listening at scale, have so far been mostly advisory to the powers-that-be, with no guarantee that citizen input will have a binding impact on legislation or policy formation. That is beginning to change….

Claudia Chwalisz, who heads DemocracyNext, has spelled out the key elements of this innovative process that make it a model for others elsewhere:

  • Implementation should be considered from the start, not as an afterthought. The format of the final recommendations, the process for final approval, and the time needed to ensure this part of the process does not get neglected need to be considered in the early design stages of the assembly.
  • Dedicated time and resources for transforming recommendations into legislation are also crucial for successful implementation. Bringing citizens, politicians, and civil servants together in the final stages can help bridge the gap between recommendations and action. While it has been more typical for citizens’ assemblies to draft recommendations that they then hand onward to elected officials and civil servants, who review them and then respond to the citizens’ assembly, the Parisian model demonstrates another way.
  • Collaborative workshops, where consensus amongst the triad of actors is needed, add more time to the process but ensure a high level of consensus for the final output and reduce the time that would otherwise have been needed for officials to review and respond to the citizens’ assembly’s recommendations.
  • Formal institutional integration of citizens’ assemblies through legal measures can help ensure their recommendations are taken seriously and ensures the assembly’s continuity regardless of shifts in government. The citizens’ assembly has become a part of Paris’s democratic architecture, as have other permanent citizens’ assemblies elsewhere. While one-off assemblies typically depend on political will at a moment in time and risk becoming politicized — i.e., being associated with the party that initially launched the first one — an institutionalized citizens’ assembly anchored in policy and political decision-making helps to set the foundation for a new institution that can endure.
  • It is also important that there is regular engagement with all political parties and stakeholders throughout the process. This helps build cross-partisan support for final recommendations, as well as more sustainable support for the enduring nature of the permanent citizens’ assembly.”…(More)”.