What does AI Localism look like in action? A new series examining use cases on how cities govern AI

Series by Uma Kalkar, Sara Marcucci, Salwa Mansuri, and Stefaan Verhulst: “…We call local instances of AI governance ‘AI Localism.’ AI Localism refers to the governance actions—which include, but are not limited to, regulations, legislation, task forces, public committees, and locally developed tools—taken by local decision-makers to address the use of AI within a city or regional state.

It is necessary to note, however, that the presence of AI Localism does not mean that robust national- and state-level AI policy is not needed. While local governance is fundamental for addressing local, micro-level issues, for instance by tailoring policies to specific AI use circumstances, national AI governance should complement local efforts and give cities a cohesive, guiding direction.

Finally, it is important to note that AI Localism does not necessarily mean good governance of AI at the local level. Indeed, there have been several instances where local efforts to regulate and employ AI have encroached on public freedoms and hurt the public good….

Examining the current state of play in AI localism

To this end, The Governance Lab (The GovLab) has created the AI Localism project to collect a knowledge base and inform a taxonomy on the dimensions of local AI governance (see below). This initiative began in 2020 with the AI Localism canvas, which captures the frames under which local governance methods are developing. This series presents current examples of AI localism across the canvas’s seven frames: 

  • Principles and Rights: foundational requirements and constraints of AI and algorithmic use in the public sector;
  • Laws and Policies: regulation to codify the above for public and private sectors;
  • Procurement: rules governing how public agencies acquire AI systems and services; 
  • Engagement: public involvement in AI use and limitations;
  • Accountability and Oversight: requirements for periodic reporting and auditing of AI use;
  • Transparency: consumer awareness about AI and algorithm use; and
  • Literacy: avenues to educate policymakers and the public about AI and data.

In this eight-part series, released weekly, we will present current examples of each frame of the AI localism canvas to identify themes among city- and state-led legislative actions. We end with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow increasingly ‘smarter.’…(More)”.

Income Inequality Is Rising. Are We Even Measuring It Correctly?

Article by Jon Jachimowicz et al: “Income inequality is on the rise in many countries around the world, according to the United Nations. What’s more, disparities in global income were exacerbated by the COVID-19 pandemic, with some countries facing greater economic losses than others.

Policymakers are increasingly focusing on finding ways to reduce inequality to create a more just and equal society for all. In making decisions on how to best intervene, policymakers commonly rely on the Gini coefficient, a statistical measure of resource distribution, including wealth and income levels, within a population. The Gini coefficient measures perfect equality as zero and maximum inequality as one, with higher numbers indicating a greater concentration of resources in the hands of a few.
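
As an illustration of the measure described above, the Gini coefficient can be computed from the mean absolute difference between all pairs of incomes. The snippet below is a minimal sketch using made-up income values, not an official statistical implementation:

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, values near 1 = extreme
    concentration. Computed as the mean absolute difference between all
    (ordered) pairs of incomes, normalized by twice the mean income."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

# Hypothetical populations of four earners each:
print(gini([10, 10, 10, 10]))  # 0.0  -> everyone earns the same
print(gini([0, 0, 0, 100]))    # 0.75 -> one person holds all income
```

Note that for a finite population of n people the maximum attainable value is (n - 1)/n, which is why a single earner holding all income yields 0.75 rather than 1 in this four-person example.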

This measure has long dominated our understanding (pdf) of what inequality means, largely because this metric is used by governments around the world, is released by statistics bureaus in multiple countries, and is commonly discussed in news media and policy discussions alike.

In our paper, recently published in Nature Human Behaviour, we argue that researchers and policymakers rely too heavily on the Gini coefficient—and that by broadening our understanding of how we measure inequality, we can both uncover its impact and intervene to correct it more effectively…(More)”.

Exhibit by Places and Spaces: “The term “macroscope” may strike many as being strange or even daunting. But actually, the term becomes friendlier when placed within the context of more familiar “scopes.” For instance, most of us have stared through a microscope. By doing so, we were able to see tiny plant or animal cells floating around before our very eyes. Similarly, many of us have peered out through a telescope into the night sky. There, we were able to see lunar craters, cloud belts on Jupiter, or the phases of Mercury. What both of these scopes have in common is that they allow the viewer to see objects that could otherwise not be perceived by the naked eye, either because they are too small or too distant.

But what if we want to better understand the complex systems or networks within which we operate and which have a profound, if often unperceived, impact on our lives? This is where macroscopes become such useful tools. They allow us to go beyond our focus on the single organism, the single social or natural phenomenon, or the single development in technology. Instead, macroscopes allow us to gather vast amounts of data about many kinds of organisms, environments, and technologies. And from that data, we can analyze and comprehend the way these elements co-exist, compete, or cooperate.

With the macroscope, we are allowed to see the “big picture,” a goal imagined in 1979 by Joël de Rosnay in his groundbreaking book, The Macroscope: A New World Scientific System. For the author, the macroscope would be the “symbol of a new way of seeing and understanding.” It was to be a tool “not used to make things larger or smaller but to observe what is at once too great, too slow, and too complex for our eyes.”

With these needs and insights in mind, the second decade of the Places & Spaces exhibit will invite and showcase interactive visualizations—our own exemplars of de Rosnay’s macroscope—that demonstrate the impact of different data cleaning, analysis, and visualization algorithms. It is the exhibit’s hope that this view of the “behind the scenes” process of data visualization will increase the ability of viewers to gain meaningful insights from such visualizations and empower people from all backgrounds to use data more effectively and endeavor to create maps that address their own needs and interests…(More)”.

Participatory Data Governance: How Small Changes Can Lead to Greater Inclusion

Essay by Kate Richards and Martina Barbero: “What most participatory data governance approaches have in common is strong collaboration between public authorities, civil society organizations, and representatives of communities that have been historically marginalized and excluded or are at risk of marginalization. This leads to better data and evidence for policy-making. For instance, a partnership between the Canadian government and First Nations communities led Statistics Canada to better understand the factors that exacerbate exclusion and to capture the lived experiences of these communities. 

These practices are pivotal for increasing inclusion and accountability in data beyond the data collection stage. In fact, while inclusion at the data collection phase remains extremely important, participatory data governance approaches can be adopted at any stage of the data lifecycle.

  • Before data collection starts: Building relationships with communities at risk of being marginalized helps clarify “what to count” and how to embed the needs and aspirations of vulnerable populations in new data collection approaches. The multi-year work of Colombia’s National Administrative Department of Statistics (DANE) with Indigenous communities enabled the statistical office to change its population survey approach, leading to more inclusive data policies. 
  • After data is collected: Collaborating with civil society organizations enables public authorities to assess how and through which channels data should be shared with target communities. When the government of Buenos Aires wanted to provide information to increase access to sexual and reproductive health services, it worked with civil society to gather feedback and develop a platform that would be useful and accessible to the target population.
  • At the stage of data use: Participatory approaches for data inclusion also support greater data use, both by public authorities and by external stakeholders. In Medellin, Colombia, the availability of more granular and more inclusive data on teen pregnancy enabled the government to develop better prevention policies and establish personalized services for girls at risk, resulting in a reduction of teen pregnancies by 30%. In Rosario, Argentina, the government’s partnership with associations representing persons with disabilities led to the development of much more accessible and inclusive public portals, which in turn resulted in better access to services for all citizens…(More)”.

One Data Point Can Beat Big Data

Essay by Gerd Gigerenzer: “…In my research group at the Max Planck Institute for Human Development, we’ve studied simple algorithms (heuristics) that perform well under volatile conditions. One way to derive these rules is to rely on psychological AI: to investigate how the human brain deals with situations of disruption and change. Back in 1838, for instance, Thomas Brown formulated the Law of Recency, which states that recent experiences come to mind faster than those in the distant past and are often the sole information guiding human decisions. Contemporary research indicates that people do not automatically rely on what they have recently experienced, but do so only in unstable situations where the distant past is not a reliable guide to the future. In this spirit, my colleagues and I developed and tested the following “brain algorithm”:

Recency heuristic for predicting the flu: Predict that this week’s proportion of flu-related doctor visits will equal that of the most recent data, from one week ago.
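
The heuristic above fits in a few lines of code. The sketch below uses hypothetical weekly percentages rather than real CDC data, with the mean absolute error metric used in the comparison later in the excerpt:

```python
def recency_forecast(weekly_values):
    """Recency heuristic: the forecast for each week is simply the
    previous week's observed value."""
    return weekly_values[:-1]  # forecasts for weeks 2..n

def mean_absolute_error(forecasts, actuals):
    """Average absolute gap between predicted and observed values."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

# Hypothetical flu-related share of doctor visits (percent), week by week:
observed = [1.2, 1.5, 2.1, 1.9, 1.4]
forecasts = recency_forecast(observed)  # predictions for weeks 2..5
actuals = observed[1:]
print(round(mean_absolute_error(forecasts, actuals), 2))  # 0.4
```

A single stored number per week is the heuristic's entire "model," which is what makes it transparent and trivially auditable compared with an opaque search-term algorithm.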

Unlike Google’s secret Flu Trends algorithm, this rule is transparent and can be easily applied by everyone. Its logic can be understood. It relies on a single data point only, which can be looked up on the website of the Centers for Disease Control and Prevention. And it dispenses with combing through 50 million search terms and trial-and-error testing of millions of algorithms. But how well does it actually predict the flu?

Three fellow researchers and I tested the recency rule using the same eight years of data on which the Google Flu Trends algorithm was tested, that is, weekly observations between March 2007 and August 2015. During that time, the proportion of flu-related visits among all doctor visits ranged between one percent and eight percent, with an average of 1.8 percent per week (Figure 1). This means that if every week you were to make the simple but false prediction that there are zero flu-related doctor visits, you would have a mean absolute error of 1.8 percentage points over the entire period. Google Flu Trends predicted much better than that, with a mean error of 0.38 percentage points (Figure 2). The recency heuristic had a mean error of only 0.20 percentage points, which is even better. If we exclude the period when the swine flu happened, that is, before the first update of Google Flu Trends, the results remain essentially the same (0.38 and 0.19, respectively)….(More)”.

Academic freedom and democracy in African countries: the first study to track the connection

Article by Liisa Laakso: “There is growing interest in the state of academic freedom worldwide. A 1997 Unesco document defines it as the right of scholars to teach, discuss, research, publish, express opinions about systems and participate in academic bodies. Academic freedom is a cornerstone of education and knowledge.

Yet there is surprisingly little empirical research on the actual impact of academic freedom. Comparable measurements have also been scarce. It was only in 2020 that a worldwide index of academic freedom was launched by the Varieties of Democracy database, V-Dem, in collaboration with the Scholars at Risk Network….

My research has been on the political science discipline in African universities and its role in political developments on the continent. As part of this project, I have investigated the impact of academic freedom in the post-Cold War democratic transitions in Africa.

A study I published with the Tunisian economist Hajer Kratou showed that academic freedom has a significant positive effect on democracy, when democracy is measured by indicators such as the quality of elections and executive accountability.

However, the time factor is significant. Countries with high levels of academic freedom before and at the time of their democratic transition showed high levels of democracy even 5, 10 and 15 years later. In contrast, the political situation was more likely to deteriorate in countries where academic freedom was restricted at the time of transition. The impact of academic freedom was greatest in low-income countries….(More)”

Data for Peace and Humanitarian Response? The Case of the Ukraine-Russia War

Article by Behruz Davletov, Uma Kalkar, Salwa Mansuri, Marine Ragnet, and Stefaan Verhulst at Data & Policy: “Since the outbreak of hostilities between Russia and Ukraine on 24 February 2022, more than 4,889 (28,081 according to the Ukrainian government) civilians have been killed and over 7 million people have been displaced. The conflict has had a significant impact on civilians, particularly women and children. In response to the crisis, local and international organizations have sought to provide immediate humanitarian assistance and have launched numerous initiatives to monitor violations and work toward peacebuilding and conflict resolution.

As in other areas of society, data and data science have become important to tailor, conduct, and monitor emergency responses in conflict zones. Data has also become crucial to support humanitarian action and peacebuilding. For example, data collected from satellite, GPS, and drone technologies can be used to map a conflict’s evolution, understand the needs of civilians, evaluate migration patterns, analyze discourses coming from both sides, and track the delivery of assistance.

This article focuses on the role that data has played in crisis response and peacebuilding related to the Russian-Ukrainian war so as to demonstrate how data can be used for peace. We consider a variety of publicly available evidence to examine various aspects of how data is playing a role in the ongoing conflict, mainly from a humanitarian response perspective. In particular, we consider the following aspects and taxonomy of data usage:

  • Prediction: Data is used to monitor and plan for likely events and risks both prior to and during the conflict;
  • Narratives: Data plays a critical role in both constructing and countering misinformation and disinformation;
  • Infrastructure Damage: Data can be used to track and respond to infrastructure damage, as well as to associated human rights violations and migration flows;
  • Human Rights Violations and Abuses: Data is used to identify and report human rights abuses, and to help construct a legal basis for justice;
  • Migration Flows: Large-scale population flows, both within Ukraine and toward neighboring countries, are one of the defining features of the conflict. Data is being used to monitor these flows, and to target humanitarian assistance;
  • Humanitarian Response: In addition to the above, data is also being used for a wide variety of humanitarian purposes, including ensuring basic and medical supplies, and addressing the resulting mental health crisis….(More)”.

Transforming public policy with engaged scholarship: better together

Blog by Alana Cattapan & Tobin LeBlanc Haley: “The expertise of people with lived experience is receiving increased attention within policy making arenas. Yet consultation processes have, for the most part, been led by public servants, with limited resources provided for supporting the community engagement vital to the inclusion of lived experience experts in policy making. What would policy decisions look like if the voices of the communities who live with the consequences of these decisions were prioritised not only in consultation processes, but in determining priorities and policy processes from the outset? This is one of the questions we explore in our recent article published in the special issue on Transformational Change in Public Policy.

As community-engaged policy researchers, along with Leah Levac, Laura Pin, Ethel Tungohan and Sarah Marie Wiebe, our attention has been focused on how to engage meaningfully and work together with the communities impacted by our research, the very communities often systematically excluded from policy processes. Across our different research programmes, we work together with people experiencing precarious housing and homelessness, migrant workers, northern and Indigenous women, First Nations, and trans and gender diverse people. The lessons we have learned in our research with these communities are useful for our work and for these communities, as well as for policy makers and other actors wanting to engage meaningfully with community stakeholders.

Our new article, “Transforming Public Policy with Engaged Scholarship: Better Together,” describes these lessons, showing how engaged scholarship can inform the meaningful inclusion of people with lived expertise in public policy making. We draw on Marianne Beaulieu, Mylaine Breton and Astrid Brouselle’s work to focus on four principles of engaged scholarship. The principles we focus on include prioritising community needs, practicing reciprocity, recognising multiple ways of knowing, and crossing disciplinary and sectoral boundaries. Using five vignettes from our own research, we link these principles to our practice, highlighting how policy makers can do the same. In one vignette, co-author Sarah Marie Wiebe describes how her research with people in Aamjiwnaang in Canada was made possible through the sustained time and effort of relationship building and learning about the lived experiences of community members. As she explains in the article, this work included sensing the pollution in the surrounding atmosphere firsthand through participation in a “toxic tour” of the community’s location next to Canada’s Chemical Valley. In another vignette, co-author Ethel Tungohan details how migrant community leaders led a study looking at migrant workers’ housing precarity, enabling more responsive forms of engagement with municipal policy makers who tend to ignore migrant workers’ housing issues….(More)”.

The Decentralized Web: Hope or Hype?

Article by Inga Trauthig: “The heavy financial losses of cryptocurrency holders in recent months have catapulted a relatively niche tech topic into public view. However, many investors originally did not emphasize economic gains as their primary motivation for supporting cryptocurrencies. A different motive was driving them: decentralization.

Cryptocurrencies, together with blockchain, belong to a broader field related to the decentralized Web (DWeb) or Web3, a field that nonetheless remains obscure to many. In August 2022, many informed readers are likely able to explain bitcoin, but fewer will be able to explain the differences between various DWeb services, or how content moderation on a new version of the internet works — or could work in future.

The DWeb currently is a movement of which some parts are heavily tied to blockchain as a revolutionary technology purported to resolve the current ills of the internet. But some in the movement disagree on the dogma of blockchain (together with incentive stimulus and game theory) as the Web’s saviour — while concurring on the basic tenet that the current internet space, Web 2.0, has been corrupted by centralization. In other words, the DWeb is a movement whose members share many ideals but differ in their approaches to achieving them. And, some parts of this movement have much broader reach than others. While bitcoin has swept the globe and managed to draw adherents in the Global North and South, social media DWeb services are still mostly used by the technological cognoscenti.

In effect, at the current stage, successes of a decentralized Web are few and far between. They relate to two main aspirations: first, the empirical (re-)decentralization of the internet, and second, an appeal to make the internet a good place (again). The latter is certainly tempting given that Web 2.0 is regularly accused of enabling authoritarian movements and actors, or online radicalization…(More)”.

Measuring human rights: facing a necessary challenge

Essay by Eduardo Burkle: “Given the abundance of data available today, many assume the world already has enough accurate metrics on human rights performance. However, the political sensitivity of human rights has proven a significant barrier to access. Governments often avoid producing and sharing this type of information.

States’ compliance with their human rights obligations often receives a lot of attention. But there is still much discussion about how to measure it. At the same time, statistics and data increasingly drive political and bureaucratic decisions. This, in turn, brings some urgency to the task of ensuring the best possible data are available.

Establishing cross-national human rights measures is vital for research, advocacy, and policymaking. It can also have a direct effect on people’s enjoyment of human rights. Good data allow states and actors to evaluate how well their country is performing. It also lets them make comparisons that highlight which policies and institutions are truly effective in promoting human rights.

Such context makes it crucial to arm researchers, journalists, advocates, practitioners, investors, and companies with reliable information when raising human rights issues in their countries, and around the world…(More)”.