Article by Zoya Teirstein: “Earth just experienced one of its hottest, and most damaging, periods on record. Heat waves in the United States, Europe, and China; catastrophic flooding in India, Brazil, Hong Kong, and Libya; and outbreaks of malaria, dengue, and other mosquito-borne illnesses across southern Asia claimed tens of thousands of lives. The vast majority of these deaths could have been averted with the right safeguards in place.
The World Meteorological Organization, or WMO, published a report last week that shows just 11 percent of countries have the full arsenal of tools required to save lives as the impacts of climate change — including deadly weather events, infectious diseases, and respiratory illnesses like asthma — become more extreme. The United Nations climate agency predicts that significant natural disasters will hit the planet 560 times per year by the end of this decade. What’s more, countries that lack early warning systems, such as extreme heat alerts, will see eight times more climate-related deaths than countries that are better prepared. By midcentury, some 50 percent of these deaths will take place in Africa, a continent that is responsible for around 4 percent of the world’s greenhouse gas emissions each year…(More)”.
Article by Stefaan G. Verhulst: “With the UK Summit in full swing, 2023 will likely be seen as a pivotal year for AI governance, with governments promoting a global governance model: AI Globalism. For it to be relevant, flexible, and effective, any global approach will need to be informed by and complemented with local experimentation and leadership, ensuring local responsiveness: AI Localism.
Even as consumers and businesses extend their use of AI (generative AI in particular), governments are also taking notice. Determined not to be caught on the back foot, as they were with social media, regulators and policymakers around the world are exploring frameworks and institutional structures that could help maximize the benefits while minimizing the potential harms of AI. This week, the UK is hosting a high-profile AI Safety Summit, attended by political and business leaders from around the world, including Kamala Harris and Elon Musk. Similarly, US President Biden recently signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which he hailed as a “landmark executive order” to ensure “safety, security, trust, openness, and American leadership.”
Amid the various policy and regulatory proposals swirling around, there has been a notable emphasis on what we might call AI globalism. The UK summit has explicitly endorsed a global approach to AI safety, with coordination between the US, EU, and China core to its vision of more responsible and safe AI. This global perspective follows similar recent calls for “an AI equivalent of the IPCC” or the International Atomic Energy Agency (IAEA). Notably, such calls are emerging both from the private sector and from civil society leaders.
In many ways, a global approach makes sense. Like most technology, AI is transnational in scope, and its governance will require cross-jurisdictional coordination and harmonization. At the same time, we believe that AI globalism should be accompanied by a recognition that some of the most innovative AI initiatives are taking place in cities and municipalities and being regulated at those levels too.
We call it AI localism. In what follows, we outline a vision of a more decentralized approach to AI governance, one that would allow cities and local jurisdictions — including states — to develop and iterate governance frameworks tailored to their specific needs and challenges. This decentralized, local approach would need to take place alongside global efforts. The two would not be mutually exclusive but instead necessarily complementary…(More)”.
Blog by UK Policy Lab: “Systems change is hard work, and it takes time. The reality is that no single system map or tool is enough to get you from point A to point B, from system now to system next. Over the last year, we have explored the latest in systems change theory and applied it to policymaking. In this four-part blog series, we share our reflections on the wealth of knowledge we’ve gained working on intractable issues surrounding how support is delivered for people experiencing multiple disadvantage. Along the way, we realised that we need to make new tools to support policy teams to do this deep work in the future, and to see afresh the limitations of existing mental models for change and transformation.
Policy Lab has previously written about systems mapping as a useful process for understanding the interconnected nature of factors and actors that make up policy ecosystems. Here, we share our latest experimentation on how we can generate practical ideas for long-lasting and systemic change.
This blog includes:
An overview of what we did on our latest project – including the policy context, systems change frameworks we experimented with, and the bespoke project framework we created;
Our reflections on how we carried out the project;
A matrix which provides a practical guide for you to use this approach in your own work…(More)”.
Blog by Sara Marcucci and Stefaan Verhulst: “…In this week’s blog post, we delineate a taxonomy of anticipatory methods, categorizing them into three distinct sub-categories: Experience-based, Exploration-based, and Expertise-based methods. Our focus will be on the practical applications of these methods and on how both traditional and non-traditional data sources play a pivotal role within each of these categories. …Experience-based methods in the realm of migration policy focus on gaining insights from the lived experiences of individuals and communities involved in migration processes. These methods allow policymakers to tap into the experiences, challenges, and aspirations of individuals and communities, fostering a more empathetic and holistic approach to policy development.
Through the lens of people’s experiences and viewpoints, it is possible to create and explore a multitude of scenarios. This in-depth exploration provides policymakers with a comprehensive understanding of these potential pathways, which, in turn, informs their decision-making process…(More)”.
Article by Regina Ta and Nicol Turner Lee: “Prompt-based generative artificial intelligence (AI) tools are quickly being deployed for a range of use cases, from writing emails and compiling legal cases to personalizing research essays in a wide range of educational, professional, and vocational disciplines. But language is not monolithic, and opportunities may be missed in developing generative AI tools for non-standard languages and dialects. Current applications are often not optimized for certain populations or communities and, in some instances, may exacerbate social and economic divisions. As noted by the Austrian philosopher Ludwig Wittgenstein, “The limits of my language mean the limits of my world.” This is especially true today, when the language we speak can change how we engage with technology, and the limits of our online vernacular can constrain the full and fair use of existing and emerging technologies.
As it stands now, the majority of the world’s speakers are being left behind because they do not speak one of the world’s dominant languages, such as English, French, German, Spanish, Chinese, or Russian. There are over 7,000 languages spoken worldwide, yet a plurality of content on the internet is written in English, with the largest remaining online shares claimed by Asian and European languages like Mandarin or Spanish. Moreover, in the English language alone, there are over 150 dialects beyond “standard” U.S. English. Consequently, the large language models (LLMs) that underpin AI tools, like generative AI, are trained on internet data skewed toward dominant languages and “standard” dialects, which increases the gap between standard and non-standard speakers and widens the digital language divide.
Among sociologists, anthropologists, and linguists, language is understood as a source of power, one that significantly influences the development and dissemination of new tools dependent upon learned linguistic capabilities. Depending on where one sits within socio-ethnic contexts, native language can internally strengthen communities while also amplifying and replicating inequalities when co-opted by incumbent power structures to restrict immigrant and historically marginalized communities. For example, during the transatlantic slave trade, literacy was a weapon used by white supremacists to reinforce the dependence of Blacks on slave masters, which resulted in many anti-literacy laws being passed in the 1800s in most Confederate states…(More)”.
Article by Aaron Sankin and Surya Mattu: “A software company sold a New Jersey police department an algorithm that was right less than 1% of the time
Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software.
Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.
We examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent: fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.
Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.
“Why did we get PredPol? I guess we wanted to be more effective when it came to reducing crime. And having a prediction where we should be would help us to do that. I don’t know that it did that,” said Captain David Guarino of the Plainfield PD. “I don’t believe we really used it that often, if at all. That’s why we ended up getting rid of it.”…(More)”.
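For readers curious about what such an evaluation involves, below is a minimal, hypothetical sketch of the kind of hit-rate calculation The Markup describes: matching each daily prediction to reported crimes by date, prediction box, and crime type, then computing the share that line up. The column names, matching rule, and toy data are assumptions for illustration, not The Markup’s actual code or data.

```python
import pandas as pd

def prediction_hit_rate(predictions: pd.DataFrame, crimes: pd.DataFrame) -> float:
    """Share of predictions with at least one matching reported crime.

    Assumed columns (hypothetical, for illustration):
      predictions: date, box_id, crime_type  -- one row per daily prediction box
      crimes:      date, box_id, crime_type  -- one row per reported incident
    """
    # Keep one row per (date, box, crime type) combination that was actually reported.
    reported = crimes[["date", "box_id", "crime_type"]].drop_duplicates()
    # Left-join predictions onto reported incidents; "_merge" marks which predictions matched.
    merged = predictions.merge(
        reported, on=["date", "box_id", "crime_type"], how="left", indicator=True
    )
    return (merged["_merge"] == "both").mean()

# Toy example: 1 of 3 predictions matches a reported crime -> ~33% hit rate.
# (The analysis described above found real-world hit rates below 1 percent.)
preds = pd.DataFrame({
    "date": ["2018-03-01", "2018-03-01", "2018-03-02"],
    "box_id": ["A1", "B2", "A1"],
    "crime_type": ["burglary", "robbery", "assault"],
})
incidents = pd.DataFrame({
    "date": ["2018-03-01"],
    "box_id": ["A1"],
    "crime_type": ["burglary"],
})
print(f"Hit rate: {prediction_hit_rate(preds, incidents):.1%}")  # -> Hit rate: 33.3%
```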
Blog by Sara Marcucci and Stefaan Verhulst: “…Migration is a dynamic phenomenon influenced by a variety of factors. As migration policies strive to keep pace with an ever-changing landscape, anticipating trends becomes increasingly pertinent. Traditionally, in the realm of anticipatory methods, a clear demarcation existed between foresight and forecast.
Forecasting predominantly relies on quantitative techniques to predict future trends. It utilizes historical data, mathematical models, and statistical analyses to provide numerical predictions applicable to the short-to-medium term, seeking to facilitate expedited policymaking, resource allocation, and logistical planning.
Foresight methodologies conventionally leaned on qualitative insights to explore future possibilities, employing expert judgment, scenario planning, and holistic exploration to envision potential future scenarios. This qualitative approach is characterized by a longer-term perspective, seeking to explore a spectrum of potential futures.
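To make the quantitative side of this distinction concrete, here is a deliberately simple sketch of what a short-term forecast can look like in practice: fitting a linear trend to historical arrival counts and projecting it forward. Real forecasting exercises use far richer models and uncertainty estimates; the figures below are entirely invented for illustration.

```python
import numpy as np

# Hypothetical historical yearly arrival counts (invented for illustration).
years = np.array([2016, 2017, 2018, 2019, 2020, 2021, 2022])
arrivals = np.array([118_000, 125_000, 131_000, 140_000, 97_000, 152_000, 161_000])

# Fit a least-squares linear trend: arrivals ~ slope * year + intercept.
slope, intercept = np.polyfit(years, arrivals, deg=1)

# Project the trend a few years ahead (the short-to-medium term described above).
for year in (2023, 2024, 2025):
    print(f"{year}: projected arrivals ~ {slope * year + intercept:,.0f}")
```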
More recently, this once-clear distinction between quantitative forecasting and qualitative foresight has begun to blur. New methodologies that embrace a mixed-method approach are emerging, challenging traditional paradigms and offering new pathways for understanding complex phenomena. Despite the evolution of and growing interest in these novel approaches, there currently exists no comprehensive taxonomy to guide practitioners in selecting the most appropriate method for a given objective. Moreover, given how quickly the state of the art is evolving, there is a need for primers delving into these modern methodologies, filling a gap in the knowledge and resources that practitioners can leverage to enhance their forecasting and foresight endeavors…(More)”.
Article by Claudette Salinas Leyva et al: “Many of our institutions are focused on the short term. Whether corporations, government bodies, or even nonprofits, they tend to prioritize immediate returns and discount long-term value and sustainability. This myopia is behind planetary crises such as climate change and biodiversity loss and contributes to decision-making that harms the wellbeing of communities.
Policymakers worldwide are beginning to recognize the importance of governing for the long term. The United Nations is currently developing a Declaration on Future Generations to codify this approach. This collection of case studies profiles community-level institutions rooted in Indigenous traditions that focus on governing for the long term and preserving the interests of future generations…(More)”.
Article by Stefaan G. Verhulst: “We live at a moment of perhaps unprecedented global upheaval. From climate change to pandemics, from war to political disharmony, misinformation, and growing social inequality, policy and social change-makers today face not only new challenges but new types of challenges. In our increasingly complex and interconnected world, existing systems and institutions of governance, marked by hierarchical decision-making, are increasingly being replaced by overlapping nodes of multi-sector decision-making.
Data is proving critical to these new forms of decision-making, along with associated (and emerging) phenomena such as advanced analytics, machine learning, and artificial intelligence. Yet while the importance of data intelligence for policymakers is now widely recognized, there remain multiple challenges to operationalizing that insight, i.e., to moving from data intelligence to decision intelligence.
In what follows, we explain what we mean by decision intelligence, and discuss why it matters. We then present six obstacles to better decision intelligence – challenges that prevent policymakers and others from translating insights into action. Finally, we end by offering one possible solution to these challenges: the concept of decision accelerator labs, operating on a hub-and-spoke model, and offering an innovative, interdisciplinary platform to facilitate the development of evidence-based, targeted solutions to public problems and dilemmas…(More)”.
Article by Sara Marcucci, Stefaan Verhulst, María Esther Cervantes, and Elena Wüllhorst: “This blog is the first in a series that will be published weekly, dedicated to exploring innovative anticipatory methods for migration policy. Over the coming weeks, we will delve into various aspects of these methods, including their value, challenges, taxonomy, and practical applications.
This first blog serves as an exploration of the value proposition and challenges inherent in innovative anticipatory methods for migration policy. We delve into the various reasons why these methods hold promise for informing more resilient and proactive migration policies. These reasons include evidence-based policy development, enabling policymakers to ground their decisions in empirical evidence and future projections. Decision-takers, users, and practitioners can benefit from anticipatory methods for policy evaluation and adaptation, resource allocation, the identification of root causes, and the facilitation of humanitarian aid through early warning systems. However, it’s vital to acknowledge the challenges associated with the adoption and implementation of these methods, ranging from conceptual concerns such as fossilization, unfalsifiability, and the legitimacy of preemptive intervention, to practical issues like interdisciplinary collaboration, data availability and quality, capacity building, and stakeholder engagement. As we navigate through these complexities, we aim to shed light on the potential and limitations of anticipatory methods in the context of migration policy, setting the stage for deeper explorations in the coming blogs of this series…(More)”.