The Behavioral Scientists Working Toward a More Peaceful World


Interview by Heather Graci: “…Nation-level data doesn’t help us understand community-level conflict. Without understanding community-level conflict, it becomes much harder to design policies to prevent it.

Cikara: “So much of the data that we have is at the level of the nation, when our effects are all happening at very local levels. You see these reports that say, ‘In Germany, 14 percent of the population is immigrants.’ It doesn’t matter at the national level, because they’re not distributed evenly across the geography. That means that some communities are going to be at greater risk for conflict than others. But that sort of local variation and sensitivity to it, at least heretofore, has really been missing from the conversation on the research side. Even when you’re in the same place, in the same country, within the same state, the same canton, there can still be a ton of variation from neighborhood to neighborhood.

“The other thing that we know matters a lot is not just the diversity of these neighborhoods but the segregation of them. It turns out that these kinds of prejudices and violence are less likely to break out in those places where it’s both diverse and people are interdigitated with how they live. So it’s not just the numbers, it’s also the spatial organization. 

“For example, in Singapore, because so much of the real estate is state-owned, they make it so that people who are coming from different countries can’t cluster together because they assign them to live separate from one another in order to prevent these sorts of enclaves. All these structural and meta-level organizational features have really, really important inputs for intergroup dynamics and psychology.”…(More)”.

Why policy failure is a prerequisite for innovation in the public sector


Blog by Philipp Trein and Thenia Vagionaki: “In our article entitled “Why policy failure is a prerequisite for innovation in the public sector,” we explore the relationship between policy failure and innovation within public governance. Drawing inspiration from the “Innovator’s Dilemma,” a theory from the management literature, we argue that the very nature of policymaking, characterized by the myopia of voters, blame avoidance by decision makers, and the complexity (ill-structuredness) of societal challenges, has an inherent tendency to produce innovation only after existing policies have failed.

Our analysis implies that we need to be more critical of what the policy process can achieve in terms of public sector innovation. According to the “Innovator’s Dilemma,” cognitive limitations lead decision makers to misperceive problems and assess risks inaccurately. As a result, true innovations (non-trivial policy changes) are unlikely to happen before an existing policy has failed visibly. Our perspective is not meant to paint a gloomy picture of public policymaking, however, but rather to offer a more realistic interpretation of what public sector innovation can achieve. Consequently, learning from experts in the policy process should be expected to correct failures in public sector problem-solving during the political process, rather than to raise expectations beyond what is possible.

The potential impact of our findings is profound. For practitioners and policymakers, this insight offers a new lens through which to evaluate the failure and success of public policies. Our work advocates a paradigm shift in how we perceive, manage, and learn from policy failures in the public sector, and in the expectations we hold about learning and the use of evidence in policymaking. By embracing the limitations of innovation in public policy, we can better manage expectations and structure the narrative regarding the capacity of public policy to address collective problems…(More)”.


The Character of Consent


Book by Meg Leta Jones about The History of Cookies and the Future of Technology Policy: “Consent pop-ups continually ask us to download cookies to our computers, but is this all-too-familiar form of privacy protection effective? No, Meg Leta Jones explains in The Character of Consent, rather than promote functionality, privacy, and decentralization, cookie technology has instead made the internet invasive, limited, and clunky. Good thing, then, that the cookie is set for retirement in 2024. In this eye-opening book, Jones tells the little-known story of this broken consent arrangement, tracing it back to the major transnational conflicts around digital consent over the last twenty-five years. What she finds is that the policy controversy is not, in fact, an information crisis—it’s an identity crisis.

Instead of asking how people consent, Jones asks who exactly is consenting and to what. Packed into those cookie pop-ups, she explains, are three distinct areas of law with three different characters who can consent. Within (mainly European) data protection law, the data subject consents. Within communication privacy law, the user consents. And within consumer protection law, the privacy consumer consents. These areas of law have very different histories, motivations, institutional structures, expertise, and strategies, so consent—and the characters who can consent—plays a unique role in those areas of law….(More)”.

Now we are all measuring impact — but is anything changing?


Article by Griffith Centre for Systems Innovation: “…Increasingly, the landscape of Impact Measurement is crowded and dynamic, containing a diversity of frameworks and approaches — which can leave us feeling like we’re looking at alphabet soup.

As we’ve traversed this landscape we’ve tried to make sense of it in various ways, and have begun to explore a matrix to represent the constellation of frameworks, approaches and models we’ve encountered in the process. As shown below, the matrix has two axes:

The horizontal axis provides us with a “time” delineation, dividing the left and right sides between retrospective (ex post) and prospective (ex ante) approaches to measuring impact.

More specifically, the retrospective quadrants include approaches/frameworks/models that ask about events in the past: What impact did we have? The prospective quadrants include approaches that ask about the possible future: What impact will we have?

The vertical axis provides us with a “purpose” delineation, dividing the upper and lower parts between Impact Measurement + Management and Evaluation.

The top-level quadrants, Impact Measurement + Management, focus on methods that count quantifiable data (e.g. time, dollars, widgets). These frameworks tend to measure outputs from activities/interventions. They tend to ask what happened or what could happen, and rely significantly on quantitative data.

The bottom-level Evaluation quadrants include a range of approaches that look at a broader set of questions beyond counting. They include questions like: What changed, and why? What were, or might be, the interrelationships between changes? They tend to draw on a mixture of quantitative and qualitative data to create a more cohesive understanding of changes that occurred, are occurring, or could occur.

A word of warning: As with all frameworks, this matrix is a “construct” — a way for us to engage in sense-making and to critically discuss how impact measurement is being undertaken in our current context. We are sharing this as a starting point for a broader discussion. We welcome feedback, reflections, and challenges around how we have represented different approaches — we are not seeking a ‘true representation’, but rather, a starting point for dialogue about how all the methods that now abound are connected, entangled and constructed…(More)”

Can Artificial Intelligence Bring Deliberation to the Masses?


Chapter by Hélène Landemore: “A core problem in deliberative democracy is the tension between two seemingly equally important conditions of democratic legitimacy: deliberation, on the one hand, and mass participation, on the other. Might artificial intelligence help bring quality deliberation to the masses? The answer is a qualified yes. The chapter first examines the conundrum in deliberative democracy around the trade-off between deliberation and mass participation by returning to the seminal debate between Joshua Cohen and Jürgen Habermas. It then turns to an analysis of the 2019 French Great National Debate, a low-tech attempt to involve millions of French citizens in a two-month-long structured exercise of collective deliberation. Building on the shortcomings of this process, the chapter then considers two different visions for an algorithm-powered form of mass deliberation—Mass Online Deliberation (MOD), on the one hand, and Many Rotating Mini-publics (MRMs), on the other—theorizing various ways artificial intelligence could play a role in them. To the extent that artificial intelligence makes the possibility of either vision more likely to come to fruition, it carries with it the promise of deliberation at the very large scale….(More)”

A Generation of AI Guinea Pigs


Article by Caroline Mimbs Nyce: “This spring, the Los Angeles Unified School District—the second-largest public school district in the United States—introduced students and parents to a new “educational friend” named Ed. A learning platform that includes a chatbot represented by a small illustration of a smiling sun, Ed is being tested in 100 schools within the district and is accessible at all hours through a website. It can answer questions about a child’s courses, grades, and attendance, and point users to optional activities.

As Superintendent Alberto M. Carvalho put it to me, “AI is here to stay. If you don’t master it, it will master you.” Carvalho says he wants to empower teachers and students to learn to use AI safely. Rather than “keep these assets permanently locked away,” the district has opted to “sensitize our students and the adults around them to the benefits, but also the challenges, the risks.” Ed is just one manifestation of that philosophy; the school district also has a mandatory Digital Citizenship in the Age of AI course for students ages 13 and up.

Ed is, according to three first graders I spoke with this week at Alta Loma Elementary School, very good. They especially like it when Ed awards them gold stars for completing exercises. But even as they use the program, they don’t quite understand it. When I asked them if they know what AI is, they demurred. One asked me if it was a supersmart robot…(More)”.

Cryptographers Discover a New Foundation for Quantum Secrecy


Article by Ben Brubaker: “…Say you want to send a private message, cast a secret vote or sign a document securely. If you do any of these tasks on a computer, you’re relying on encryption to keep your data safe. That encryption needs to withstand attacks from codebreakers with their own computers, so modern encryption methods rely on assumptions about what mathematical problems are hard for computers to solve.

But as cryptographers laid the mathematical foundations for this approach to information security in the 1980s, a few researchers discovered that computational hardness wasn’t the only way to safeguard secrets. Quantum theory, originally developed to understand the physics of atoms, turned out to have deep connections to information and cryptography. Researchers found ways to base the security of a few specific cryptographic tasks directly on the laws of physics. But these tasks were strange outliers — for all others, there seemed to be no alternative to the classical computational approach.

By the end of the millennium, quantum cryptography researchers thought that was the end of the story. But in just the past few years, the field has undergone another seismic shift.

“There’s been this rearrangement of what we believe is possible with quantum cryptography,” said Henry Yuen, a quantum information theorist at Columbia University.

In a string of recent papers, researchers have shown that most cryptographic tasks could still be accomplished securely even in hypothetical worlds where practically all computation is easy. All that matters is the difficulty of a special computational problem about quantum theory itself.

“The assumptions you need can be way, way, way weaker,” said Fermi Ma, a quantum cryptographer at the Simons Institute for the Theory of Computing in Berkeley, California. “This is giving us new insights into computational hardness itself.”…(More)”.

Governing with Artificial Intelligence


OECD Report: “OECD countries are increasingly investing in better understanding the potential value of using Artificial Intelligence (AI) to improve public governance. The use of AI by the public sector can increase productivity, improve the responsiveness of public services, and strengthen the accountability of governments. However, governments must also mitigate potential risks, building an enabling environment for trustworthy AI. This policy paper outlines the key trends and policy challenges in the development, use, and deployment of AI in and by the public sector. First, it discusses the potential benefits and specific risks associated with AI use in the public sector. Second, it looks at how AI in the public sector can be used to improve productivity, responsiveness, and accountability. Third, it provides an overview of the key policy issues and presents examples of how countries are addressing them across the OECD…(More)”.

Handbook of Public Participation in Impact Assessment


Book edited by Tanya Burdett and A. John Sinclair: “… provides a clear overview of how to achieve meaningful public participation in impact assessment (IA). It explores conceptual elements, including the democratic core of public participation in IA, as well as practical challenges, such as data sharing, with diverse perspectives from 39 leading academics and practitioners.

Critically examining how different engagement frameworks have evolved over time, this Handbook underlines the ways in which tokenistic approaches and wider planning and approvals structures challenge the implementation of meaningful public participation. Contributing authors discuss the impact of international agreements, legislation and regulatory regimes, and review commonly used professional association frameworks such as the International Association for Public Participation core values for practice. They demonstrate through case studies what meaningful public participation looks like in diverse regional contexts, addressing the intentions of being purposeful, inclusive, transformative and proactive. By emphasising the strength of community engagement, the Handbook argues that public participation in IA can contribute to enhanced democracy and sustainability for all…(More)”.

Mapping Behavioral Public Policy


Book by Paolo Belardinelli: “This book provides a new perspective on behavioral public policy. The field of behavioral public policy has been dominated by the concept of ‘nudging’ over the last decade. As this book demonstrates, however, ‘nudging’ is one of many behavioral techniques that practitioners and policymakers can utilize in order to achieve their goals. The book discusses the advantages and disadvantages of these alternative techniques, and demonstrates empirically how the impact of ‘nudging’ and ‘non-nudging’ interventions often depends on varying political contexts and the degree of trust that citizens have toward policymakers. In doing so, it addresses the important question of how citizens understand and approve of the use of behavioral techniques by governments. The book will appeal to all those interested in public management, public policy, behavioral psychology, and ‘nudging’…(More)”.