Cutting through complexity using collective intelligence


Blog by the UK Policy Lab: “In November 2021 we established a Collective Intelligence Lab (CILab), with the aim of improving policy outcomes by tapping into collective intelligence (CI). We define CI as the diversity of thought and experience that is distributed across groups of people, from public servants and domain experts to members of the public. We have been experimenting with a digital tool, Pol.is, to capture diverse perspectives and new ideas on key government priority areas. To date we have run eight debates on issues as diverse as Civil Service modernisation, fisheries management and national security. Across these debates over 2400 civil servants, subject matter experts and members of the public have participated…

From our experience using CILab on live policy issues, we have identified a series of policy use cases that echo findings from the government of Taiwan and organisations such as Nesta. These use cases include: 1) stress-testing existing policies and current thinking, 2) drawing out consensus and divergence on complex, contentious issues, and 3) identifying novel policy ideas.

1) Stress-testing existing policy and current thinking

CI could be used to gauge expert and public sentiment towards existing policy ideas by asking participants to discuss existing policies and current thinking on Pol.is. This is well suited to testing public and expert opinions on current policy proposals, especially where their success depends on securing buy-in and action from stakeholders. It can also help collate views and identify barriers to effective implementation of existing policy.

From the initial set of eight CILab policy debates, we have learnt that it is sometimes useful to design a ‘crossover point’ into the process. This is where, part way through a debate, statements submitted by policymakers, subject matter experts and members of the public can be shown to each other, in a bid to break down groupthink across those groups. We used this approach in a Pol.is debate on a topic relating to UK foreign policy, and think it could help test how existing policies on complex areas such as climate change or social care are perceived within and outside government…(More)”

Is digital feedback useful in impact evaluations? It depends.


Article by Lois Aryee and Sara Flanagan: “Rigorous impact evaluations are essential to determining program effectiveness. Yet, they are often time-intensive and costly, and may fail to provide the rapid feedback necessary for informing real-time decision-making and course corrections along the way that maximize programmatic impact. Capturing feedback that’s both quick and valuable can be a delicate balance.

In an ongoing impact evaluation we are conducting in Ghana, a country where smoking rates among adolescent girls are increasing with alarming health implications, we have been evaluating a social marketing campaign’s effectiveness at changing girls’ behavior and reducing smoking prevalence with support from the Bill & Melinda Gates Foundation. Although we’ve been taking a traditional approach to this impact evaluation using a year-long, in-person panel survey, we were interested in using digital feedback as a means to collect more timely data on the program’s reach and impact. To do this, we explored several rapid digital feedback approaches including social media, text message, and Interactive Voice Response (IVR) surveys to determine their ability to provide quicker, more actionable insights into the girls’ awareness of, engagement with, and feelings about the campaign. 

Digital channels seemed promising given our young, urban population of interest; however, collecting feedback this way comes with considerable trade-offs. Digital feedback poses risks to both equity and quality, potentially reducing the population we’re able to reach and the value of the information we’re able to gather. The truth is that context matters, and tailored approaches are critical when collecting feedback, just as they are when designing programs. Below are three lessons to consider when adopting digital feedback mechanisms into your impact evaluation design. 

Lesson 1: A high number of mobile connections does not mean the target population has access to mobile phones…

Lesson 2: High literacy rates and “official” languages do not mean most people are able to read and write easily in a particular language…

Lesson 3: Gathering data on taboo topics may benefit from a personal touch…(More)”.

The EU wants to put companies on the hook for harmful AI


Article by Melissa Heikkilä: “The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough. 

Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.  

The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care. 

The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.

For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue. 

The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation…(More)”.

Minben 民本 as an alternative to liberal democracy


Essay by Rongxin Li: “Although theorists have proposed non-Western types of democracy, such as Asian Democracy, they have nevertheless actively marginalised these non-Western types. This is partly due to Asian Democracy’s inextricable link with Confucian traditions – many of which people commonly assume to be anti-democratic. This worry over Confucian values does not, however, detract from the fact that scholars are deliberately ignoring non-Western types of democracy because they do not follow Western narratives…

Minben is a paternalistic model of democracy. It does not involve multi-party elections and, unlike in liberal democracy, disorderly public participation is not one of its priorities. Minben relies on a theory of governance that believes carefully selected elites, usually a qualified minority, can use their knowledge and the constant pursuit of virtuous conduct to deliver the common good.

Liberal democracy maintains its legitimacy through periodic and competitive elections. Minben retains its legitimacy through its ‘output’. It is results, or policy implementation, oriented. Some argue that this performance-driven democracy cannot endure because it depends on people buying into it and consistently supporting it. But we could say the same of any democratic regime. Liberal democracy’s legitimacy is not unassailable – nor is it guaranteed.

Indeed, liberal democracy and Minben have more in common than many Western theorists concede. As Yu Keping underlined, stability is paramount in Chinese Communist Party ideology. John Keane, for example, once likened government and its legitimacy to a slippery egg. The greater the social instability, which may be driven by displeasure over the performance of ruling elites, the slipperier the egg becomes for the elites in question. Both liberal democratic regimes and Minben regimes face the same problem of dealing with social turmoil. Both look to serving the people as a means to staying atop the egg…

Minben – and this may surprise certain Western theorists – does not exclude public participation and deliberation. These instruments convey public voices and concerns to the selected technocrats tasked with deciding for the people. There is representation based on consultation here. Technocrats seek to make good decisions based on full consultation and analysis of public preferences…(More)”.

What does AI Localism look like in action? A new series examining use cases on how cities govern AI


Series by Uma Kalkar, Sara Marcucci, Salwa Mansuri, and Stefaan Verhulst: “…We call local instances of AI governance ‘AI Localism.’ AI Localism refers to the governance actions—which include, but are not limited to, regulations, legislations, task forces, public committees, and locally-developed tools—taken by local decision-makers to address the use of AI within a city or regional state.

It is necessary to note, however, that the presence of AI Localism does not mean that robust national- and state-level AI policy are not needed. Whereas local governance seems fundamental in addressing local, micro-level issues, for instance by tailoring policies to specific AI use circumstances, national AI governance should act as a key tool to complement local efforts and provide cities with a cohesive, guiding direction.

Finally, it is important to mention how AI Localism is not necessarily good governance of AI at the local level. Indeed, there have been several instances where local efforts to regulate and employ AI have encroached on public freedoms and hurt the public good….

Examining the current state of play in AI localism

To this end, The Governance Lab (The GovLab) has created the AI Localism project to collect a knowledge base and inform a taxonomy on the dimensions of local AI governance (see below). This initiative began in 2020 with the AI Localism canvas, which captures the frames under which local governance methods are developing. This series presents current examples of AI localism across the seven canvas frames of: 

  • Principles and Rights: foundational requirements and constraints of AI and algorithmic use in the public sector;
  • Laws and Policies: regulation to codify the above for public and private sectors;
  • Procurement: mandates around the use of AI in employment and hiring practices; 
  • Engagement: public involvement in AI use and limitations;
  • Accountability and Oversight: requirements for periodic reporting and auditing of AI use;
  • Transparency: consumer awareness about AI and algorithm use; and
  • Literacy: avenues to educate policymakers and the public about AI and data.

In this eight-part series, released weekly, we will present current examples of each frame of the AI localism canvas to identify themes among city- and state-led legislative actions. We end with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow increasingly ‘smarter.’…(More)”.

Income Inequality Is Rising. Are We Even Measuring It Correctly?


Article by Jon Jachimowicz et al: “Income inequality is on the rise in many countries around the world, according to the United Nations. What’s more, disparities in global income were exacerbated by the COVID-19 pandemic, with some countries facing greater economic losses than others.

Policymakers are increasingly focusing on finding ways to reduce inequality to create a more just and equal society for all. In making decisions on how to best intervene, policymakers commonly rely on the Gini coefficient, a statistical measure of resource distribution, including wealth and income levels, within a population. The Gini coefficient measures perfect equality as zero and maximum inequality as one, with higher numbers indicating a greater concentration of resources in the hands of a few.

This measure has long dominated our understanding (pdf) of what inequality means, largely because this metric is used by governments around the world, is released by statistics bureaus in multiple countries, and is commonly discussed in news media and policy discussions alike.

In our paper, recently published in Nature Human Behaviour, we argue that researchers and policymakers rely too heavily on the Gini coefficient—and that by broadening our understanding of how we measure inequality, we can both uncover its impact and intervene to more effectively correct it…(More)”.
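The Gini coefficient described above can be computed directly from a list of incomes as the mean absolute difference between all pairs, normalized by twice the total. A minimal sketch (illustrative only, not the implementation used by any statistics bureau):

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality; values approaching 1 indicate
    a greater concentration of resources in the hands of a few.
    Computed as the sum of absolute differences over all ordered pairs,
    divided by 2 * n * total."""
    n = len(incomes)
    total = sum(incomes)
    if total == 0:
        return 0.0  # nothing to distribute, so no inequality to measure
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * total)

print(gini([10, 10, 10, 10]))  # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))    # one person holds everything -> 0.75
```

Note that for a finite sample the maximum attainable value is (n − 1)/n rather than exactly 1, which is why the four-person example above tops out at 0.75.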

Macroscopes


Exhibit by Places and Spaces: “The term “macroscope” may strike many as being strange or even daunting. But actually, the term becomes friendlier when placed within the context of more familiar “scopes.” For instance, most of us have stared through a microscope. By doing so, we were able to see tiny plant or animal cells floating around before our very eyes. Similarly, many of us have peered out through a telescope into the night sky. There, we were able to see lunar craters, cloud belts on Jupiter, or the phases of Mercury. What both of these scopes have in common is that they allow the viewer to see objects that could otherwise not be perceived by the naked eye, either because they are too small or too distant.

But what if we want to better understand the complex systems or networks within which we operate and which have a profound, if often unperceived, impact on our lives? This is where macroscopes become such useful tools. They allow us to go beyond our focus on the single organism, the single social or natural phenomenon, or the single development in technology. Instead, macroscopes allow us to gather vast amounts of data about many kinds of organisms, environments, and technologies. And from that data, we can analyze and comprehend the way these elements co-exist, compete, or cooperate.

With the macroscope, we are allowed to see the “big picture,” a goal imagined in 1979 by Joël de Rosnay in his groundbreaking book, The Macroscope: A New World Scientific System. For the author, the macroscope would be the “symbol of a new way of seeing and understanding.” It was to be a tool “not used to make things larger or smaller but to observe what is at once too great, too slow, and too complex for our eyes.”

With these needs and insights in mind, the second decade of the Places & Spaces exhibit will invite and showcase interactive visualizations—our own exemplars of de Rosnay’s macroscope—that demonstrate the impact of different data cleaning, analysis, and visualization algorithms. It is the exhibit’s hope that this view of the “behind the scenes” process of data visualization will increase the ability of viewers to gain meaningful insights from such visualizations and empower people from all backgrounds to use data more effectively and endeavor to create maps that address their own needs and interests…(More)”.

Participatory Data Governance: How Small Changes Can Lead to Greater Inclusion


Essay by Kate Richards and Martina Barbero: “What the majority of participatory data governance approaches have in common is strong collaboration between public authorities and civil society organizations and representatives of communities that have been historically marginalized and excluded or who are at risk of being marginalized. This leads to better data and evidence for policy-making. For instance, a partnership between the Canadian government and First Nations communities led Statistics Canada to better understand the factors that exacerbate exclusion and capture the lived experiences of these communities. 

These practices are pivotal for increasing inclusion and accountability in data beyond the data collection stage. In fact, while inclusion at the data collection phase remains extremely important, participatory data governance approaches can be adopted at any stage of the data lifecycle.

  • Before data collection starts: Building relationships with communities at risk of being marginalized helps clarify “what to count” and how to embed the needs and aspirations of vulnerable populations in new data collection approaches. The multi-year work of Colombia’s National Department of Statistics (DANE) with Indigenous communities enabled the statistical office to change their population survey approach, leading to more inclusive data policies. 
  • After data is collected: Collaborating with civil society organizations enables public authorities to assess how and through which channels data should be shared with target communities. When the government of Buenos Aires wanted to provide information to increase access to sexual and reproductive health services, it worked with civil society to gather feedback and develop a platform that would be useful and accessible to the target population.
  • At the stage of data use: Participatory approaches for data inclusion also support greater data use, both by public authorities and by external stakeholders. In Medellin, Colombia, the availability of more granular and more inclusive data on teen pregnancy enabled the government to develop better prevention policies and establish personalized services for girls at risk, resulting in a reduction of teen pregnancies by 30%. In Rosario, Argentina, the government’s partnership with associations representing persons with disabilities led to the development of much more accessible and inclusive public portals, which in turn resulted in better access to services for all citizens…(More)”.

One Data Point Can Beat Big Data


Essay by Gerd Gigerenzer: “…In my research group at the Max Planck Institute for Human Development, we’ve studied simple algorithms (heuristics) that perform well under volatile conditions. One way to derive these rules is to rely on psychological AI: to investigate how the human brain deals with situations of disruption and change. Back in 1838, for instance, Thomas Brown formulated the Law of Recency, which states that recent experiences come to mind faster than those in the distant past and are often the sole information that guides human decisions. Contemporary research indicates that people do not automatically rely on what they recently experienced, but only do so in unstable situations where the distant past is not a reliable guide for the future. In this spirit, my colleagues and I developed and tested the following “brain algorithm”:

Recency heuristic for predicting the flu: Predict that this week’s proportion of flu-related doctor visits will equal those of the most recent data, from one week ago.

Unlike Google’s secret Flu Trends algorithm, this rule is transparent and can be easily applied by everyone. Its logic can be understood. It relies on a single data point only, which can be looked up on the website of the Centers for Disease Control and Prevention. And it dispenses with combing through 50 million search terms and trial-and-error testing of millions of algorithms. But how well does it actually predict the flu?

Three fellow researchers and I tested the recency rule using the same eight years of data on which the Google Flu Trends algorithm was tested, that is, weekly observations between March 2007 and August 2015. During that time, the proportion of flu-related visits among all doctor visits ranged between one percent and eight percent, with an average of 1.8 percent per week (Figure 1). This means that if every week you were to make the simple but false prediction that there are zero flu-related doctor visits, you would have a mean absolute error of 1.8 percentage points over that period. Google Flu Trends predicted much better than that, with a mean error of 0.38 percentage points (Figure 2). The recency heuristic had a mean error of only 0.20 percentage points, which is even better. If we exclude the period when the swine flu happened, that is, before the first update of Google Flu Trends, the result remains essentially the same (0.38 and 0.19, respectively)….(More)”.
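The recency heuristic and the error measure used in this comparison are simple enough to sketch in a few lines. The weekly figures below are illustrative stand-ins, not the actual CDC series:

```python
def recency_forecast(series):
    """Recency heuristic: predict each week's value as the previous week's
    observation -- a single data point, no model fitting."""
    return series[:-1]  # forecasts for weeks 2..n

def mean_abs_error(predicted, actual):
    """Mean absolute error, in the same units as the data (percentage points)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Illustrative weekly flu-related doctor-visit percentages (not CDC data)
visits = [1.2, 1.5, 2.1, 3.0, 2.4, 1.8, 1.3]
forecasts = recency_forecast(visits)
mae = mean_abs_error(forecasts, visits[1:])
print(round(mae, 3))  # prints 0.583
```

The same `mean_abs_error` logic underlies the 1.8, 0.38, and 0.20 percentage-point figures quoted above; only the input series differs.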

Academic freedom and democracy in African countries: the first study to track the connection


Article by Liisa Laakso: “There is growing interest in the state of academic freedom worldwide. A 1997 Unesco document defines it as the right of scholars to teach, discuss, research, publish, express opinions about systems and participate in academic bodies. Academic freedom is a cornerstone of education and knowledge.

Yet there is surprisingly little empirical research on the actual impact of academic freedom. Comparable measurements have also been scarce. It was only in 2020 that a worldwide index of academic freedom was launched by the Varieties of Democracy database, V-Dem, in collaboration with the Scholars at Risk Network….

My research has been on the political science discipline in African universities and its role in political developments on the continent. As part of this project, I have investigated the impact of academic freedom in the post-Cold War democratic transitions in Africa.

A study I published with the Tunisian economist Hajer Kratou showed that academic freedom has a significant positive effect on democracy, when democracy is measured by indicators such as the quality of elections and executive accountability.

However, the time factor is significant. Countries with high levels of academic freedom before and at the time of their democratic transition showed high levels of democracy even 5, 10 and 15 years later. In contrast, the political situation was more likely to deteriorate in countries where academic freedom was restricted at the time of transition. The impact of academic freedom was greatest in low-income countries….(More)”