How to build a Collective Mind that speaks for humanity in real-time


Blog by Louis Rosenberg: “This begs the question — could large human groups deliberate in real-time with the efficiency of fish schools and quickly reach optimized decisions?

For years this goal seemed impossible. That’s because conversational deliberations have been shown to be most productive in small groups of 4 to 7 people and quickly degrade as groups grow larger. This is because the “airtime per person” gets progressively squeezed and the wait-time to respond to others steadily increases. By 12 to 15 people, the conversational dynamics change from thoughtful debate to a series of monologues that become increasingly disjointed. By 20 people, the dialog ceases to be a conversation at all. This problem seemed impenetrable until recent advances in Generative AI opened up new solutions.

The resulting technology is called Conversational Swarm Intelligence and it promises to allow groups of almost any size (200, 2000, or even 2 million people) to discuss complex problems in real-time and quickly converge on solutions with significantly amplified intelligence. The first step is to divide the population into small subgroups, each sized for thoughtful dialog. For example, a 1000-person group could be divided into 200 subgroups of 5, each routed into their own chat room or video conferencing session. Of course, this does not create a single unified conversation — it creates 200 parallel conversations…(More)”.
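
As a rough illustration of that first partitioning step, here is a minimal Python sketch (not drawn from the original post; the function name, room labels, and participant identifiers are hypothetical) showing how a 1,000-person group could be randomly split into 200 five-person rooms:

```python
import random

def split_into_subgroups(participants, group_size=5, seed=42):
    """Randomly partition a list of participants into subgroups of
    roughly `group_size` members, each mapped to its own room id."""
    rng = random.Random(seed)        # fixed seed keeps the split reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    rooms = {}
    for i in range(0, len(shuffled), group_size):
        room_id = f"room-{i // group_size + 1}"
        rooms[room_id] = shuffled[i:i + group_size]
    return rooms

# Example: a 1,000-person group becomes 200 parallel five-person conversations
participants = [f"participant-{n}" for n in range(1, 1001)]
rooms = split_into_subgroups(participants)
print(len(rooms))        # 200
print(rooms["room-1"])   # five randomly assigned participants
```

The sketch stops exactly where the excerpt does: 200 parallel conversations, not yet a single unified one.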

Doing science backwards


Article by Stuart Ritchie: “…Usually, the process of publishing such a study would look like this: you run the study; you write it up as a paper; you submit it to a journal; the journal gets some other scientists to peer-review it; it gets published – or if it doesn’t, you either discard it, or send it off to a different journal and the whole process starts again.

That’s standard operating procedure. But it shouldn’t be. Think about the job of the peer-reviewer: when they start their work, they’re handed a full-fledged paper, reporting on a study and a statistical analysis that happened at some point in the past. It’s all now done and, if not fully dusted, then in a pretty final-looking form.

What can the reviewer do? They can check the analysis makes sense, sure; they can recommend new analyses are done; they can even, in extreme cases, make the original authors go off and collect some entirely new data in a further study – maybe the data the authors originally presented just aren’t convincing or don’t represent a proper test of the hypothesis.

Ronald Fisher described the study-first, review-later process in 1938:

To consult the statistician [or, in our case, peer-reviewer] after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.

Clearly this isn’t the optimal, most efficient way to do science. Why don’t we review the statistics and design of a study right at the beginning of the process, rather than at the end?

This is where Registered Reports come in. They’re a new (well, new-ish) way of publishing papers where, before you go to the lab, or wherever you’re collecting data, you write down your plan for your study and send it off for peer-review. The reviewers can then give you genuinely constructive criticism – you can literally construct your experiment differently depending on their suggestions. You build consensus—between you, the reviewers, and the journal editor—on the method of the study. And then, once everyone agrees on what a good study of this question would look like, you go off and do it. The key part is that, at this point, the journal agrees to publish your study, regardless of what the results might eventually look like…(More)”.

AI-Ready FAIR Data: Accelerating Science through Responsible AI and Data Stewardship


Article by Sean Hill: “Imagine a future where scientific discovery is unbound by the limitations of data accessibility and interoperability. In this future, researchers across all disciplines — from biology and chemistry to astronomy and social sciences — can seamlessly access, integrate, and analyze vast datasets with the assistance of advanced artificial intelligence (AI). This world is one where AI-ready data empowers scientists to unravel complex problems at unprecedented speeds, leading to breakthroughs in medicine, environmental conservation, technology, and more. The vision of a truly FAIR (Findable, Accessible, Interoperable, Reusable) and AI-ready data ecosystem, underpinned by Responsible AI (RAI) practices and the pivotal role of data stewards, promises to revolutionize the way science is conducted, fostering an era of rapid innovation and global collaboration…(More)”.

Real Chaos, Today! Are Randomized Controlled Trials a good way to do economics?


Article by Maia Mindel: “A few weeks back, there was much social media drama about a paper titled “Social Media and Job Market Success: A Field Experiment on Twitter” (2024) by Jingyi Qiu, Yan Chen, Alain Cohn, and Alvin Roth (recipient of the 2012 Nobel Prize in Economics). The study posted job market papers by economics PhDs, and then assigned prominent economists (who had volunteered) to randomly promote half of them on their profiles (more detail on this paper in a bit).

The “drama” in question was generally: “it is immoral to throw dice around on the most important aspect of a young economist’s career”, versus “no it’s not”. This, of course, awakened interest in a broader subject: Randomized Controlled Trials, or RCTs.

R.C.T. T.O. G.O.

Let’s go back to the 1600s – bloodletting was a common way to cure diseases. Did it work? Well, doctor Joan Baptista van Helmont had an idea: randomly divvy up a few hundred invalids into two groups, one of which got bloodletting applied, and another one that didn’t.

While it’s not clear this experiment ever happened, it sets up the basic principle of the randomized control trial. The idea is that, to study the effects of a treatment (in a medical context, a medicine; in an economics context, a policy), a sample is divided into two groups: the control group, which does not receive the treatment, and the treatment group, which does. The modern randomized controlled (or control) trial has three “legs”: it’s randomized because who’s in each group gets chosen at random, it’s controlled because there’s a group that doesn’t get the treatment to serve as a counterfactual, and it’s a trial because you’re not developing “at scale” just yet.
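
As a rough illustration of the random-assignment logic described above, here is a minimal Python sketch (not from the article; the sample size, baseline distribution, and effect size are invented for illustration) that assigns units to treatment or control by coin flip and compares group means:

```python
import random
import statistics

def run_rct(sample_size=500, treatment_effect=2.0, seed=7):
    """Simulate a simple two-arm trial: randomize units to treatment or
    control, apply a made-up treatment effect, and compare group means."""
    rng = random.Random(seed)
    outcomes = {"treatment": [], "control": []}
    for _ in range(sample_size):
        baseline = rng.gauss(10.0, 3.0)       # hypothetical baseline outcome
        if rng.random() < 0.5:                # coin-flip assignment
            outcomes["treatment"].append(baseline + treatment_effect)
        else:
            outcomes["control"].append(baseline)
    # The difference in means estimates the average treatment effect
    return (statistics.mean(outcomes["treatment"])
            - statistics.mean(outcomes["control"]))

print(round(run_rct(), 2))   # should land near the assumed effect of 2.0
```

Because assignment is random, the two groups should be similar on average in everything except the treatment, which is what lets the difference in means be read as an estimate of the treatment’s effect.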

Why could it be important to randomly select people for economic studies? Well, you want the only difference, on average, between the two groups to be whether or not they get the treatment. Consider military service: it’s regularly trotted out that drafting kids would reduce crime rates. Is this true? Well, the average person who is exempted from the draft could be systematically different from the average person who isn’t – for example, people who volunteer could be from wealthier families who are more patriotic, or from poorer families who need certain benefits; or the exempted could have physical disabilities that impede their labor market participation, or be wealthier university students who get a deferral. But because many countries use lotteries to allocate draftees versus non-draftees, you can get a group of people who are randomly assigned to the draft, and who on average should be similar enough to each other. One study in particular, about Argentina’s mandatory military service in pretty much all of the 20th century, finds that being conscripted raises the crime rate relative to people who didn’t get drafted through the lottery. This doesn’t mean that soldiers have higher crime rates than non-soldiers, because of selection issues – but it does provide pretty good evidence that getting drafted is not good for your non-criminal prospects…(More)”.

Connecting the dots: AI is eating the web that enabled it


Article by Tom Wheeler: “The large language models (LLMs) of generative AI that scraped their training data from websites are now using that data to eliminate the need to go to many of those same websites. Respected digital commentator Casey Newton concluded, “the web is entering a state of managed decline.” The Washington Post headline was more dire: “Web publishers brace for carnage as Google adds AI answers.”…

Created by Sir Tim Berners-Lee in 1989, the World Wide Web redefined the nature of the internet into a user-friendly linkage of diverse information repositories. “The first decade of the web…was decentralized with a long-tail of content and options,” Berners-Lee wrote this year on the occasion of its 35th anniversary.  Over the intervening decades, that vision of distributed sources of information has faced multiple challenges. The dilution of decentralization began with powerful centralized hubs such as Facebook and Google that directed user traffic. Now comes the ultimate disintegration of Berners-Lee’s vision as generative AI reduces traffic to websites by recasting their information.

The web’s open access to the world’s information trained the large language models (LLMs) of generative AI. Now, those generative AI models are coming for their progenitor.

The web allowed users to discover diverse sources of information from which to draw conclusions. AI cuts out the intellectual middleman to go directly to conclusions from a centralized source.

The AI paradigm of cutting out the middleman appears to have been further advanced in Apple’s recent announcement that it will incorporate OpenAI to enable its Siri app to provide ChatGPT-like answers. With this new deal, Apple becomes an AI-based disintermediator, not only eliminating the need to go to websites, but also potentially disintermediating the need for the Google search engine for which Apple has been paying $20 billion annually.

The Atlantic, University of Toronto, and Gartner studies suggest the Pew research on website mortality could be just the beginning. Generative AI’s ability to deliver conclusions cannibalizes traffic to individual websites, threatening the raison d’être of all websites, especially those that are commercially supported…(More)”

Why policy failure is a prerequisite for innovation in the public sector


Blog by Philipp Trein and Thenia Vagionaki: “In our article entitled “Why policy failure is a prerequisite for innovation in the public sector,” we explore the relationship between policy failure and innovation within public governance. Drawing inspiration from the “Innovator’s Dilemma”—a theory from the management literature—we argue that the very nature of policymaking, characterized by the myopia of voters, blame avoidance by decision makers, and the complexity (ill-structuredness) of societal challenges, has an inherent tendency to react with innovation only after existing policies have failed.

Our analysis implies that we need to be more critical of what the policy process can achieve in terms of public sector innovation. According to the “Innovator’s Dilemma”, cognitive limitations tend to lead to a misperception of problems and an inaccurate assessment of risks by decision makers. This problem implies that true innovation (non-trivial policy change) is unlikely to happen before an existing policy has failed visibly. However, our perspective is not meant to paint a gloomy picture of public policymaking but rather to offer a more realistic interpretation of what public sector innovation can achieve. As a consequence, learning from experts in the policy process should be expected to correct failures in public sector problem-solving during the political process, rather than to raise expectations beyond what is possible.

The potential impact of our findings is profound. For practitioners and policymakers, this insight offers a new lens through which to evaluate the failure and success of public policies. Our work advocates a paradigm shift in how we perceive, manage, and learn from policy failures in the public sector, and in the expectations we hold for learning and the use of evidence in policymaking. By embracing the limitations of innovation in public policy, we can better manage expectations and structure the narrative regarding the capacity of public policy to address collective problems…(More)”.


Now we are all measuring impact — but is anything changing?


Article by Griffith Centre for Systems Innovation: “…Increasingly, the landscape of Impact Measurement is crowded and dynamic, containing a diversity of frameworks and approaches — which can mean we end up feeling like we’re looking at alphabet soup.

As we’ve traversed this landscape we’ve tried to make sense of it in various ways, and have begun to explore a matrix to represent the constellation of frameworks, approaches and models we’ve encountered in the process. As shown below, the matrix has two axes:

The horizontal axis provides us with a “time” delineation, dividing the left and right sides between retrospective (ex post) and prospective (ex ante) approaches to measuring impact.

More specifically, the retrospective quadrants include approaches/frameworks/models that ask about events in the past: What impact did we have? The prospective quadrants include approaches that ask about the possible future: What impact will we have?

The vertical axis provides us with a “purpose” delineation, dividing the upper and lower parts between Impact Measurement + Management and Evaluation.

The top-level quadrants, Impact Measurement + Management, focus on methods that count quantifiable data (e.g. time, dollars, widgets). These frameworks tend to measure outputs from activities/interventions. They tend to ask what happened or what could happen and rely significantly on quantitative data.

The bottom-level Evaluation quadrants include a range of approaches that look at a broader range of questions beyond counting. They include questions like: What changed, and why? What were, or might be, the interrelationships between those changes? They tend to draw on a mixture of quantitative and qualitative data to create a more cohesive understanding of changes that occurred, are occurring or could occur.

A word of warning: As with all frameworks, this matrix is a “construct” — a way for us to engage in sense-making and to critically discuss how impact measurement is being undertaken in our current context. We are sharing this as a starting point for a broader discussion. We welcome feedback, reflections, and challenges around how we have represented different approaches — we are not seeking a ‘true representation’, but rather, a starting point for dialogue about how all the methods that now abound are connected, entangled and constructed…(More)”

Misuse versus Missed use — the Urgent Need for Chief Data Stewards in the Age of AI


Article by Stefaan Verhulst and Richard Benjamins: “In the rapidly evolving landscape of artificial intelligence (AI), the need for and importance of Chief AI Officers (CAIOs) are receiving increasing attention. One prominent example came in a recent memo on AI policy, issued by Shalanda Young, Director of the United States Office of Management and Budget. Among the most important — and prominently featured — recommendations was a call, “as required by Executive Order 14110,” for all government agencies to appoint a CAIO within 60 days of the release of the memo.

In many ways, this call is an important development; not even the EU AI Act requires this of public agencies. CAIOs have an important role to play in ensuring a responsible use of AI for public services, one that includes guardrails and helps protect the public good. Yet while acknowledging the need for CAIOs to safeguard a responsible use of AI, we argue that the duty of Administrations is not only to avoid negative impact but also to create positive impact. In this sense, much work remains to be done in defining the CAIO role and considering its specific functions. In pursuit of these tasks, we further argue, policymakers and other stakeholders might benefit from looking at the role of another emerging profession in the digital ecology–that of Chief Data Stewards (CDS), which is focused on creating such positive impact, for instance by helping to achieve the UN’s SDGs. Although the CDS position is itself somewhat in flux, we suggest that CDS can nonetheless provide a useful template for the functions and roles of CAIOs.


We start by explaining why CDS are relevant to the conversation over CAIOs; this is because data and data governance are foundational to AI governance. We then discuss some particular functions and competencies of CDS, showing how these can be equally applied to the governance of AI. Among the most important (if high-level) of these competencies is an ability to proactively identify opportunities in data sharing, and to balance the risks and opportunities of our data age. We conclude by exploring why this competency–an ethos of positive data responsibility that avoids overly cautious risk aversion–is so important in the AI and data era…(More)”

Inclusive by default: strategies for more inclusive participation


Article by Luiza Jardim and Maria Lucien: “…The systemic challenges that marginalised groups face are pressing and require action. The global average age of parliamentarians is 53, highlighting a gap in youth representation. Young people already face challenges like poverty, lack of education, unemployment and multiple forms of discrimination. Additionally, some participatory formats are often unappealing to young people and pose a challenge for engaging them. Gender equity research highlights the underrepresentation of women at all levels of decision-making and governance. Despite recent improvements, gender parity in governance worldwide is still decades or even centuries away. Meanwhile, ongoing global conflicts in Ukraine, Sudan, Gaza and elsewhere, as well as the impacts of a changing climate, have driven the recent increase in the number of forcibly displaced people to more than 100 million. The engagement of these individuals in decision-making can vary greatly depending on their specific circumstances and the nature of their displacement.

Participatory and deliberative democracy can have transformative impacts on historically marginalised communities but only if they are intentionally included in program design and implementation. To start with, it’s possible to reduce the barriers to participation, such as the cost and time of transport to the participation venue, or burdens imposed by social and cultural roles in society, like childcare. During the process, mindful and attentive facilitation can help balance power dynamics and encourage participation from traditionally excluded people. This is further strengthened if the facilitation team includes and trains members of priority communities in facilitation and session planning…(More)”.

Are We Ready for the Next Pandemic? Navigating the First and Last Mile Challenges in Data Utilization


Blog by Stefaan Verhulst, Daniela Paolotti, Ciro Cattuto and Alessandro Vespignani:

“Public health officials from around the world are gathering this week in Geneva for a weeklong meeting of the 77th World Health Assembly. A key question they are examining is: Are we ready for the next pandemic? As we have written elsewhere, regarding access to and re-use of data, particularly non-traditional data, for pandemic preparedness and response: we are not. Below, we list ten recommendations to advance access to and reuse of non-traditional data for pandemics, drawing on input from a high-level workshop, held in Brussels, within the context of the ESCAPE program…(More)”
