The Devil’s Advocate: What Happens When Dissent Becomes Digital


Article by Anthea Roberts: “But what if the devil’s advocate wasn’t human at all? What if it was an AI agent—faceless, rank-agnostic, politically neutral? A devil without a career to lose. Here’s where the inversion occurs: artificial intelligence enabling more genuine human conversation.

At Dragonfly Thinking, we’ve been experimenting with this concept. We call this Devil’s Advocate your Critical Friend. It’s an AI agent designed to do what humans find personally difficult and professionally dangerous: provide systematic criticism without career consequences.

The magic isn’t in the AI’s intelligence. It’s in how removing the human face transforms the social dynamics of dissent.

When critical feedback comes from an AI, no one’s promotion is at risk. The criticism can be thorough without being insubordinate. Teams can engage with substance rather than navigating office politics.

The AI might note: “Previous digital transformations show 73% failure rate when legacy system dependencies exceed 40%. This proposal shows significant dependencies.” It’s the AI saying what the tech lead knows but can’t safely voice, at least not alone.

Does criticism from code carry less weight because there’s no skin in the game? Counterintuitively, we’ve found the opposite. Without perceived motives or political agendas, the criticism becomes clearer, more digestible.

Ritualizing Productive Dissent

Imagine every major initiative automatically triggering AI analysis. Not optional. Built in like a financial review.
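
The article stays at the level of social dynamics, but the mechanics are easy to picture. Below is a minimal sketch in Python, assuming an OpenAI-style chat API; the model, prompt, and trigger are assumptions for illustration, not Dragonfly Thinking's actual system.

```python
# Minimal sketch only: wiring a devil's-advocate review into an approval
# workflow. Assumes an OpenAI-style chat-completions API; the model name,
# prompt, and trigger mechanism are all hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITIC_PROMPT = (
    "You are a devil's advocate reviewing a major initiative. Produce a "
    "systematic critique: hidden assumptions, failure modes, integration "
    "risks, and relevant historical base rates. Be thorough and impersonal."
)

def devils_advocate_review(proposal_text: str) -> str:
    """Return a systematic critique of a proposal document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": proposal_text},
        ],
    )
    return response.choices[0].message.content

# In the ritual described below, this would run automatically when a
# strategy is filed (e.g., from a document-management webhook) and the
# critique would be emailed to the team the next morning.
```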

The ritual unfolds:

Monday, 2 PM: The transformation strategy is pitched. Energy builds. Heads nod. The vision is compelling.

Tuesday, 9 AM: An email arrives: “Devil’s Advocate Analysis – Digital Transformation Initiative.” Sender: DA-System. Twelve pages of systematic critique. People read alone, over coffee. Some sections sting. Others confirm private doubts.

Wednesday, 10 AM: The team reconvenes. Printouts are marked up. The tech lead says, “Section 3.2 about integration dependencies—we need to address this.” The ops head adds, “The adoption curve analysis on page 8 matches what we saw in Phoenix.”

Thursday: A revised strategy goes forward. Not perfect, but honest about assumptions and clear about risks.

When criticism is ritualized and automated, it stops being personal. It becomes data…(More)”.

A New Paradigm for Fueling AI for the Public Good


Article by Kevin T. Frazier: “Imagine receiving this email in the near future: “Thank you for sharing data with the American Data Collective on May 22, 2025. After first sharing your workout data with SprintAI, a local startup focused on designing shoes for differently abled athletes, your data donation was also sent to an artificial intelligence research cluster hosted by a regional university. Your donation is on its way to accelerate artificial intelligence innovation and support researchers and innovators addressing pressing public needs!”

That is exactly the sort of message you could expect to receive if we made donations of personal data akin to blood donations—a pro-social behavior that may not immediately serve a donor’s individual needs but may nevertheless benefit the whole of the community. This vision of a future where data flow toward the public good is not science fiction—it is a tangible possibility if we address a critical bottleneck faced by innovators today.

Creating the data equivalent of blood banks may not seem like a pressing need or something that people should voluntarily contribute to, given widespread concerns about a few large artificial intelligence (AI) companies using data for profit-driven and, arguably, socially harmful ends. This narrow conception of the AI ecosystem fails to consider the hundreds of AI research initiatives and startups that have a desperate need for high-quality data. I was fortunate enough to meet leaders of those nascent AI efforts at Meta’s Open Source AI Summit in Austin, Texas. For example, I met with Matt Schwartz, who leads a startup that leans on AI to glean more diagnostic information from colonoscopies. I also connected with Edward Chang, a professor of neurological surgery at the University of California, San Francisco Weill Institute for Neurosciences, who relies on AI tools to discover new information on how and why our brains work. I also got to know Corin Wagen, whose startup is helping companies “find better molecules faster.” This is a small sample of the people leveraging AI for objectively good outcomes. They need your help. More specifically, they need your data.

A tragic irony shapes our current data infrastructure. Most of us share mountains of data with massive and profitable private parties—smartwatch companies, diet apps, game developers, and social media companies. Yet, AI labs, academic researchers, and public interest organizations best positioned to leverage our data for the common good are often those facing the most formidable barriers to acquiring the necessary quantity, quality, and diversity of data. Unlike OpenAI, they are not going to use bots to scrape the internet for data. Unlike Google and Meta, they cannot rely on their own social media platforms and search engines to act as perpetual data generators. And, unlike Anthropic, they lack the funds to license data from media outlets. So, while commercial entities amass vast datasets, frequently as a byproduct of consumer services and proprietary data acquisition strategies, mission-driven AI initiatives dedicated to public problems find themselves in a state of chronic data scarcity. This is not merely a hurdle—it is a systemic bottleneck choking off innovation where society needs it most, delaying or even preventing the development of AI tools that could significantly improve lives.

Individuals are, quite rightly, increasingly hesitant to share their personal information, with concerns about privacy, security, and potential misuse being both rampant and frequently justified by past breaches and opaque practices. Yet, in a striking contradiction, troves of deeply personal data are continuously siphoned by app developers, by tech platforms, and, often opaquely, by an extensive network of data brokers. This practice often occurs with minimal transparency and without informed consent concerning the full lifecycle and downstream uses of that data. This lack of transparency extends to how algorithms trained on this data make decisions that can impact individuals’ lives—from loan applications to job prospects—often without clear avenues for recourse or understanding, potentially perpetuating existing societal biases embedded in historical data…(More)”.

Protecting young digital citizens


Blog by Pascale Raulin-Serrier: “…As digital tools become more deeply embedded in children’s lives, many young users are unaware of the long-term consequences of sharing personal information online through apps, games, social media platforms and even educational tools. The large-scale collection of data related to their preferences, identity or lifestyle may be used for targeted advertising or profiling. This affects not only their immediate online experiences but can also have lasting consequences, including greater risks of discrimination and exclusion. These concerns underscore the urgent need for stronger safeguards, greater transparency and a child-centered approach to data governance.

CNIL’s initiatives to promote children’s privacy

In response to these challenges, the CNIL introduced eight recommendations in 2021 to provide practical guidance for children, parents and other stakeholders in the digital economy. These are built around several key pillars to promote and protect children’s privacy:

1. Providing specific safeguards

Children have distinct digital rights and must be able to exercise them fully. Under the European General Data Protection Regulation (GDPR), they benefit from special protections, including the right to be forgotten and, in some cases, the ability to consent to the processing of their data. In France, children can only register for social networks or online gaming platforms if they are over 15, or with parental consent if they are younger. CNIL helps hold platforms accountable by offering clear recommendations on how to present terms of service and collect consent in ways that are accessible and understandable to children.

2. Balancing autonomy and protection

The needs and capacities of a 6-year-old child differ greatly from those of a 16-year-old adolescent. It is essential to consider this diversity in online behaviour, maturity and the evolving ability to make informed decisions. The CNIL emphasizes the importance of offering children a digital environment that strikes a balance between protection and autonomy. It also advocates for digital citizenship education to empower young people with the tools they need to manage their privacy responsibly…(More)”. See also Responsible Data for Children.

Scientific Publishing: Enough is Enough


Blog by Seemay Chou: “In Abundance, Ezra Klein and Derek Thompson make the case that the biggest barriers to progress today are institutional. They don’t stem from physical limitations or intellectual scarcity. They’re the product of legacy systems — systems that were built with one logic in mind, but now operate under another. And until we go back and address them at the root, we won’t get the future we say we want.

I’m a scientist. Over the past five years, I’ve experimented with science outside traditional institutes. From this vantage point, one truth has become inescapable. The journal publishing system — the core of how science is currently shared, evaluated, and rewarded — is fundamentally broken. And I believe it’s one of the legacy systems that prevents science from meeting its true potential for society.

It’s an unpopular moment to critique the scientific enterprise given all the volatility around its funding. But we do have a public trust problem. The best way to increase trust and protect science’s future is for scientists to have the hard conversations about what needs improvement. And to do this transparently. In all my discussions with scientists across every sector, exactly zero think the journal system works well. Yet we all feel trapped in a system that is, by definition, us.

I no longer believe that incremental fixes are enough. Science publishing must be built anew. I help oversee billions of dollars in funding across several science and technology organizations. We are expanding our requirement that the scientific work we fund not go toward traditional journal publications. Instead, research we support should be released and reviewed more openly, comprehensively, and frequently than the status quo allows.

This policy is already in effect at Arcadia Science and Astera Institute, and we’re actively funding efforts to build journal alternatives through both Astera and The Navigation Fund. We hope others cross this line with us, and below I explain why every scientist and science funder should strongly consider it…(More)”.

Reliable data facilitates better policy implementation


Article by Ganesh Rao and Parul Agarwal: “Across India, state government departments are at the forefront of improving human capabilities through education, health, and nutrition programmes. Their ability to do so effectively depends on administrative (or admin) data collected and maintained by their staff. This data is collected as part of regular operations and informs both day-to-day decision-making and long-term policy. While policymaking can draw on (reasonably reliable) sample surveys alone, effective implementation of schemes and services requires accurate individual-level admin data. However, unreliable admin data can be a severe constraint, forcing bureaucrats to rely on intuition, experience, and informed guesses. Improving the reliability of admin data can greatly enhance state capacity, thereby improving governance and citizen outcomes.

There has been some progress on this front in recent years. For instance, the Jan Dhan-Aadhaar-Mobile (JAM) trinity has significantly improved direct benefit transfer (DBT) mechanisms by ensuring that certain recipient data is reliable. However, challenges remain in accurately capturing the well-being of targeted citizens. Despite significant investments in the digitisation of data collection and management systems, persistent reliability issues undermine the government’s efforts to build a data-driven decision-making culture…

There is growing evidence of serious quality issues in admin data. At CEGIS, we have conducted extensive analyses of admin data across multiple states, uncovering systemic issues in key indicators across sectors and platforms. These quality issues compound over time, undermining both micro-level service delivery and macro-level policy planning. This results in distorted budget allocations, gaps in service provision, and weakened frontline accountability…(More)”.
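
The article doesn't enumerate the diagnostics, but the kinds of mechanical checks that surface such issues are easy to illustrate. In this sketch the dataset and column names are hypothetical, not CEGIS's actual methodology:

```python
# Illustrative sketch: mechanical reliability checks of the sort that
# expose systemic admin-data issues. Dataset and columns are hypothetical.
import pandas as pd

records = pd.read_csv("beneficiary_records.csv")  # hypothetical extract

issues = {
    # Duplicate beneficiary IDs inflate coverage statistics.
    "duplicate_ids": int(records["beneficiary_id"].duplicated().sum()),
    # Implausible values (e.g., a child's weight) signal entry errors.
    "implausible_weights": int(
        ((records["weight_kg"] < 1) | (records["weight_kg"] > 60)).sum()
    ),
    # Missing identifiers undermine individual-level service delivery.
    "missing_ids": int(records["beneficiary_id"].isna().sum()),
    # Digit "heaping" (weights ending in 0 or 5) suggests guessed entries.
    "heaped_weight_share": float((records["weight_kg"] % 5 == 0).mean()),
}
print(issues)
```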

Project Push creates an archive of news alerts from around the world


Article by Neel Dhanesha: “A little over a year ago, Matt Taylor began to feel like he was getting a few too many push notifications from the BBC News app.

It’s a feeling many of us can probably relate to. Many people, myself included, have turned off news notifications entirely in the past few months. Taylor, however, went in the opposite direction.

Instead of turning off notifications, he decided to see how the BBC — the most popular news app in the U.K., where Taylor lives — compared to other news organizations around the world. So he dug out an old Google Pixel phone, downloaded 61 news apps onto it, and signed up for push notifications on all of them.

As notifications roll in, a custom-built script (made with the help of ChatGPT) uploads their text to a server and a Bluesky page, providing a near real-time view of push notifications from services around the world. Taylor calls it Project Push.
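
The script itself isn't public, but the relay step is simple to sketch with the atproto Python SDK for Bluesky. The account, credentials, and the way captured notification text reaches the function are assumptions:

```python
# Hedged sketch of the relay step: mirror a captured push notification to
# a Bluesky feed using the atproto SDK. The handle, password, and capture
# mechanism are hypothetical; only the posting call is shown.
from atproto import Client

client = Client()
client.login("push-archive.bsky.social", "app-password")  # hypothetical account

def relay_notification(app_name: str, title: str, body: str) -> None:
    """Post one push notification's text to the Bluesky archive."""
    post_text = f"[{app_name}] {title}: {body}"
    client.send_post(text=post_text[:300])  # Bluesky caps posts at 300 characters

relay_notification("BBC News", "Breaking", "Example alert text")
```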

People who work in news “take the front page very seriously,” said Taylor, a product manager at the Financial Times who built Project Push in his spare time. “There are lots of editors who care a lot about that, but actually one of the most important people in the newsroom is the person who decides that they’re going to press a button that sends an immediate notification to millions of people’s phones.”

The Project Push feed is a fascinating portrait of the news today. There are the expected alerts — breaking news, updates to ongoing stories like the wars in Gaza and Ukraine, the latest shenanigans in Washington — but also:

— Updates on infrastructure plans that, without the context, become absolutely baffling (a train will instead be a bus?).

— Naked attempts to increase engagement.

— Culture updates that some may argue aren’t deserving of a push alert from the Associated Press.

— Whatever this is.

Taylor tells me he’s noticed some geographic differences in how news outlets approach push notifications. Publishers based in Asia and the Middle East, for example, send far more notifications than European or American ones; CNN Indonesia alone pushed about 17,000 of the 160,000 or so notifications Project Push has logged over the past year…(More)”.

Engagement Integrity: Ensuring Legitimacy at a Time of AI-Augmented Participation


Article by Stefaan G. Verhulst: “As participatory practices are increasingly tech-enabled, ensuring engagement integrity is becoming more urgent. While considerable scholarly and policy attention has been paid to information integrity (OECD, 2024; Gillwald et al., 2024; Wardle & Derakhshan, 2017; Ghosh & Scott, 2018), including concerns about disinformation, misinformation, and computational propaganda, the integrity of engagement itself — how to ensure collective decision-making is not manipulated through technology — remains comparatively under-theorized and under-protected. I define engagement integrity as the procedural fairness and resistance to manipulation of tech-enabled deliberative and participatory processes.

My definition is different from prior discussions of engagement integrity, which mainly emphasized ethical standards when scientists engage with the public (e.g., in advisory roles, communication, or co-research). The concept is particularly salient in light of recent innovations that aim to lower the transaction costs of engagement using artificial intelligence (AI) (Verhulst, 2018). From AI-facilitated citizen assemblies (Simon et al., 2023) to natural language processing (NLP)-enhanced policy proposal platforms (Grobbink & Peach, 2020) to automated analysis of unstructured direct democracy proposals (Grobbink & Peach, 2020) to large-scale deliberative polls augmented with agentic AI (Mulgan, 2022), these developments promise to enhance inclusion, scalability, and sense-making. However, they also create new attack surfaces and vectors of influence that could undermine legitimacy.

This concern is not speculative…(More)”.

Unlock Your City’s Hidden Solutions


Article by Andreas Pawelke, Basma Albanna and Damiano Cerrone: “Cities around the world face urgent challenges — from climate change impacts to rapid urbanization and infrastructure strain. Municipal leaders struggle with limited budgets, competing priorities, and pressure to show quick results, making traditional approaches to urban transformation increasingly difficult to implement.

Every city, however, has hidden success stories — neighborhoods, initiatives, or communities that are achieving remarkable results despite facing similar challenges as their peers.

These “positive deviants” often remain unrecognized and underutilized, yet they contain the seeds of solutions that are already adapted to local contexts and constraints.

Data-Powered Positive Deviance (DPPD) combines urban data, advanced analytics, and community engagement to systematically uncover these bright spots and amplify their impact. This new approach offers a pathway to urban transformation that is not only evidence-based but also cost-effective and deeply rooted in local realities.
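
The article keeps the method high-level; one common way to operationalize the analytics step is to model an outcome on contextual factors and flag the units that most outperform the model's prediction. A sketch under hypothetical data and column names:

```python
# Illustrative sketch of positive-deviant detection: regress an outcome on
# context, then flag units with the largest positive residuals. Dataset,
# columns, and outcome measure are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("neighbourhoods.csv")  # hypothetical city dataset

# Context: factors largely outside a neighbourhood's control.
X = df[["median_income", "population_density", "budget_per_capita"]]
# Outcome: the result the city wants to improve.
y = df["road_safety_score"]

model = LinearRegression().fit(X, y)
df["residual"] = y - model.predict(X)

# Positive deviants: neighbourhoods doing far better than peers facing
# similar constraints, and candidates for closer, community-led study.
print(df.nlargest(5, "residual")[["neighbourhood", "residual"]])
```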

DPPD is particularly valuable in resource-constrained environments, where expensive external solutions often fail to take hold. By starting with what’s already working, cities can make strategic investments that build on existing strengths rather than starting from scratch. When cities leverage AI tools that improve community engagement, the approach becomes even more powerful — enabling them to envision potential futures and engage citizens in meaningful co-creation…(More)”

The Next Wave of Innovation Districts


Article by Bruce Katz and Julie Wagner: “A next wave of innovation districts is gaining momentum given the structural changes underway in the global economy. The examples cited above telegraph where existing innovation districts are headed and explain why new districts are forming. The districts highlighted and many others are responding to fast-changing and highly volatile macro forces and the need to de-risk, decarbonize, and diversify talent.

The next wave of innovation districts is distinctive for multiple reasons.

  • The sectors leveraging this innovation geography expand well beyond the traditional focus on life sciences to include advanced manufacturing for military and civilian purposes.
  • The deeper emphasis on decarbonization is driving the use of basic and applied R&D to invent new clean technology products and solutions as well as organizing energy generation and distribution within the districts themselves to meet crucial carbon targets.
  • The stronger emphasis on the diversification of talent includes the upskilling of workers for new production activities and a broader set of systems to drive inclusive innovation to address long-standing inequities.
  • The districts are attracting a broader group of stakeholders, including manufacturing companies, utilities, university industrial design and engineering departments, and hard tech startups.
  • The districts ultimately are looking to engage a wider base of investors given the disparate resources and traditions of capitalization that support defense tech, clean tech, med tech and other favored forms of innovation.

Some regions or states are also seeking ways to connect a constellation of districts and other economic hubs to harness the imperative to innovate accentuated by these and other macro forces. The state of South Australia is one such example. It has prioritized several innovation hubs across the region to foster South Australia’s knowledge and innovation ecosystem, as well as to identify emerging economic clusters in globally competitive industry sectors to advance the broader economy…(More)”.

The EU’s AI Power Play: Between Deregulation and Innovation


Article by Raluca Csernatoni: “From the outset, the European Union (EU) has positioned itself as a trailblazer in AI governance with the world’s first comprehensive legal framework for AI systems in use, the AI Act. The EU’s approach to governing artificial intelligence (AI) has been characterized by a strong precautionary and ethics-driven philosophy. This ambitious regulation reflects the EU’s long-standing approach of prioritizing high ethical standards and fundamental rights in tech and digital policies—a strategy of fostering both excellence and trust in human-centric AI models. Yet, framed as essential to keep pace with U.S. and Chinese AI giants, the EU has recently taken a deregulatory turn that risks trading away democratic safeguards, without addressing systemic challenges to AI innovation.

The EU now stands at a crossroads: it can forge ahead with bold, home-grown AI innovation underpinned by robust regulation, or it can loosen its ethical guardrails, only to find itself stripped of both technological autonomy and regulatory sway. While Brussels’s recent deregulatory turn is framed as a much-needed competitiveness boost, the real obstacles to Europe’s digital renaissance lie elsewhere: persistent underfunding, siloed markets, and reliance on non-EU infrastructures…(More)”