The AI regulations that aren’t being talked about


Article by Deloitte: “…But our research shows that this focus may be overlooking some of the most important tools already on the books. Of the 1,600+ policies we analyzed, only 11% were focused on regulating AI-adjacent issues like data privacy, cybersecurity, intellectual property, and so on (Figure 5). Even when limiting the search to only regulations, 60% were focused directly on AI and only 40% on AI-adjacent issues (Figure 5). For example, several countries have data protection agencies with regulatory powers to help protect citizens’ data privacy. But while these agencies may not have AI or machine learning named specifically in their charters, the importance of data in training and using AI models makes them an important AI-adjacent tool.

This can be problematic because directly regulating a fast-moving technology like AI can be difficult. Take the hypothetical example of removing bias from home loan decisions. Regulators could accomplish this goal by mandating that AI models be trained on certain types of data to ensure that they are representative and will not produce biased results, but such an approach can become outdated when new methods of training AI models emerge. Given the diversity of AI models already in use, from recurrent neural networks to generative pretrained transformers to generative adversarial networks and more, finding a single set of rules that can deliver what the public desires both now and in the future may be a challenge…(More)”.

Cities are ramping up to make the most of generative AI


Blog by Citylab: “Generative artificial intelligence promises to transform the way we work, and city leaders are taking note. According to a recent survey by Bloomberg Philanthropies in partnership with the Centre for Public Impact, the vast majority of mayors (96 percent) are interested in how they can use generative AI tools like ChatGPT—which rely on machine learning to identify patterns in data and create, or generate, new content after being fed prompts—to improve local government. Of those cities surveyed, 69 percent report that they are already exploring or testing the technology. Specifically, they’re interested in how it can help them more quickly and successfully address emerging challenges with traffic and transportation, infrastructure, public safety, climate, education, and more.  

Yet even as a majority of city leaders surveyed are exploring generative AI’s potential, only a small fraction of them (2 percent) are actively deploying the technology. They indicated there are a number of issues getting in the way of broader implementation, including a lack of technical expertise, budgetary constraints, and ethical considerations like security, privacy, and transparency…(More)”.

New Tools to Guide Data Sharing Agreements


Article by Andrew J. Zahuranec, Stefaan Verhulst, and Hannah Chafetz: “The process of forming a data-sharing agreement is not easy. The process involves figuring out incentives, evaluating the degree to which others are willing and able to collaborate, and defining the specific conduct that is and is not allowed. Even under the best of circumstances, these steps can be costly and time-consuming.

Today, the Open Data Policy Lab took a step to help data practitioners control these costs. “Moving from Idea to Practice: Three Resources to Streamline the Creation of Data Sharing Agreements” provides data practitioners with three resources meant to support them throughout the process of developing an agreement. These include:

  • A Guide to Principled Data Sharing Agreement Negotiation by Design: A document outlining the different principles that a data practitioner might seek to uphold while negotiating an agreement;
  • The Contractual Wheel of Data Collaboration 2.0: A listing of the different kinds of data sharing agreement provisions that a data practitioner might include in an agreement;
  • A Readiness Matrix for Data Sharing Agreements: A form to evaluate the degree to which a partner can participate in a data-sharing agreement.

The resources are a result of a series of Open Data Action Labs, an initiative from the Open Data Policy Lab to define new strategies and tools that can help organizations resolve policy challenges they face. The Action Labs are built around a series of workshops (called “studios”) which give experts and stakeholders an opportunity to define the problems facing them and then ideate possible solutions in a collaborative setting. In February and March 2023, the Open Data Policy Lab and Trust Relay co-hosted conversations with experts in law, data, and smart cities on the challenge of forming a data sharing agreement. Find all the resources here.”

Climate data can save lives. Most countries can’t access it.


Article by Zoya Teirstein: “Earth just experienced one of its hottest, and most damaging, periods on record. Heat waves in the United States, Europe, and China; catastrophic flooding in India, Brazil, Hong Kong, and Libya; and outbreaks of malaria, dengue, and other mosquito-borne illnesses across southern Asia claimed tens of thousands of lives. The vast majority of these deaths could have been averted with the right safeguards in place.

The World Meteorological Organization, or WMO, published a report last week that shows just 11 percent of countries have the full arsenal of tools required to save lives as the impacts of climate change — including deadly weather events, infectious diseases, and respiratory illnesses like asthma — become more extreme. The United Nations climate agency predicts that significant natural disasters will hit the planet 560 times per year by the end of this decade. What’s more, countries that lack early warning systems, such as extreme heat alerts, will see eight times more climate-related deaths than countries that are better prepared. By midcentury, some 50 percent of these deaths will take place in Africa, a continent that is responsible for around 4 percent of the world’s greenhouse gas emissions each year…(More)”.

AI Globalism and AI Localism: Governing AI at the Local Level for Global Benefit


Article by Stefaan G. Verhulst: “With the UK Summit in full swing, 2023 will likely be seen as a pivotal year for AI governance, with governments promoting a global governance model: AI Globalism. For it to be relevant, flexible, and effective, any global approach will need to be informed by and complemented with local experimentation and leadership, ensuring local responsiveness: AI Localism.

Even as consumers and businesses extend their use of AI (generative AI in particular), governments are also taking notice. Determined not to be caught on the back foot, as they were with social media, regulators and policymakers around the world are exploring frameworks and institutional structures that could help maximize the benefits while minimizing the potential harms of AI. This week, the UK is hosting a high-profile AI Safety Summit, attended by political and business leaders from around the world, including Kamala Harris and Elon Musk. Similarly, US President Biden recently signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which he hailed as a “landmark executive order” to ensure “safety, security, trust, openness, and American leadership.”

Amid the various policy and regulatory proposals swirling around, there has been a notable emphasis on what we might call AI globalism. The UK summit has explicitly endorsed a global approach to AI safety, with coordination between the US, EU, and China core to its vision of more responsible and safe AI. This global perspective follows similar recent calls for “an AI equivalent of the IPCC” or the International Atomic Energy Agency (IAEA). Notably, such calls are emerging both from the private sector and from civil society leaders.

In many ways, a global approach makes sense. Like most technology, AI is transnational in scope, and its governance will require cross-jurisdictional coordination and harmonization. At the same time, we believe that AI globalism should be accompanied by a recognition that some of the most innovative AI initiatives are taking place in cities and municipalities and being regulated at those levels too.

We call it AI localism. In what follows, I outline a vision of a more decentralized approach to AI governance, one that would allow cities and local jurisdictions — including states — to develop and iterate governance frameworks tailored to their specific needs and challenges. This decentralized, local approach would need to take place alongside global efforts. The two would not be mutually exclusive but instead necessarily complementary…(More)”.

Shifting policy systems – a framework for what to do and how to do it


Blog by UK Policy Lab: “Systems change is hard work, and it takes time. The reality is that no single system map or tool is enough to get you from point A to point B, from system now to system next. Over the last year, we have explored the latest in systems change theory and applied it to policymaking. In this four-part blog series, we share our reflections on the wealth of knowledge we’ve gained working on intractable issues surrounding how support is delivered for people experiencing multiple disadvantage. Along the way, we realised that we need to make new tools to support policy teams to do this deep work in the future, and to see afresh the limitations of existing mental models for change and transformation.

Policy Lab has previously written about systems mapping as a useful process for understanding the interconnected nature of factors and actors that make up policy ecosystems. Here, we share our latest experimentation on how we can generate practical ideas for long-lasting and systemic change.

This blog includes:

  • An overview of what we did on our latest project – including the policy context, systems change frameworks we experimented with, and the bespoke project framework we created;
  • Our reflections on how we carried out the project;
  • A matrix which provides a practical guide for you to use this approach in your own work…(More)”.

Towards a Taxonomy of Anticipatory Methods: Integrating Traditional and Innovative Methods for Migration Policy


Blog by Sara Marcucci and Stefaan Verhulst: “…In this week’s blog post, we delineate a taxonomy of anticipatory methods, categorizing them into three distinct sub-categories: Experience-based, Exploration-based, and Expertise-based methods. Our focus will be on the practical applications of these methods and on how both traditional and non-traditional data sources play a pivotal role within each category. …Experience-based methods in the realm of migration policy focus on gaining insights from the lived experiences of individuals and communities involved in migration processes. These methods allow policymakers to tap into the experiences, challenges, and aspirations of those individuals and communities, fostering a more empathetic and holistic approach to policy development.

Through the lens of people’s experiences and viewpoints, it is possible to create and explore a multitude of scenarios. This in-depth exploration provides policy makers with a comprehensive understanding of these potential pathways, which, in turn, inform their decision-making process…(More)”.

How language gaps constrain generative AI development


Article by Regina Ta and Nicol Turner Lee: “Prompt-based generative artificial intelligence (AI) tools are quickly being deployed for a range of use cases, from writing emails and compiling legal cases to personalizing research essays in a wide range of educational, professional, and vocational disciplines. But language is not monolithic, and opportunities may be missed in developing generative AI tools for non-standard languages and dialects. Current applications often are not optimized for certain populations or communities and, in some instances, may exacerbate social and economic divisions. As noted by the Austrian linguist and philosopher Ludwig Wittgenstein, “The limits of my language mean the limits of my world.” This is especially true today, when the language we speak can change how we engage with technology, and the limits of our online vernacular can constrain the full and fair use of existing and emerging technologies.

As it stands now, the majority of the world’s speakers are being left behind if they are not part of one of the world’s dominant languages, such as English, French, German, Spanish, Chinese, or Russian. There are over 7,000 languages spoken worldwide, yet a plurality of content on the internet is written in English, with the largest remaining online shares claimed by Asian and European languages like Mandarin or Spanish. Moreover, in the English language alone, there are over 150 dialects beyond “standard” U.S. English. Consequently, large language models (LLMs) that train AI tools, like generative AI, rely on binary internet data that serve to increase the gap between standard and non-standard speakers, widening the digital language divide.

Among sociologists, anthropologists, and linguists, language is a source of power and one that significantly influences the development and dissemination of new tools that are dependent upon learned, linguistic capabilities. Depending on where one sits within socio-ethnic contexts, native language can internally strengthen communities while also amplifying and replicating inequalities when coopted by incumbent power structures to restrict immigrant and historically marginalized communities. For example, during the transatlantic slave trade, literacy was a weapon used by white supremacists to reinforce the dependence of Blacks on slave masters, which resulted in many anti-literacy laws being passed in the 1800s in most Confederate states…(More)”.

Predictive Policing Software Terrible At Predicting Crimes


Article by Aaron Sankin and Surya Mattu: “A software company sold a New Jersey police department an algorithm that was right less than 1% of the time

Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software.

Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.

We examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent. Fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.

Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.

“Why did we get PredPol? I guess we wanted to be more effective when it came to reducing crime. And having a prediction where we should be would help us to do that. I don’t know that it did that,” said Captain David Guarino of the Plainfield PD. “I don’t believe we really used it that often, if at all. That’s why we ended up getting rid of it.”…(More)”.
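
The hit rates quoted above come down to simple arithmetic: predictions matched by a reported crime, divided by total predictions. The sketch below is a minimal, hypothetical illustration of that calculation. The field names, the matching rule (same day, same prediction box, same crime type), and the sample data are assumptions made for illustration; they do not reproduce The Markup’s actual methodology or data.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:          # one box/day/crime-type prediction (hypothetical schema)
    day: date
    box_id: str            # identifier of the predicted geographic box
    crime_type: str

@dataclass
class ReportedCrime:       # one incident report (hypothetical schema)
    day: date
    box_id: str
    crime_type: str

def hit_rate(predictions, crimes):
    """Share of predictions matched by a reported crime of the same type,
    in the same box, on the same day (illustrative matching rule)."""
    reported = {(c.day, c.box_id, c.crime_type) for c in crimes}
    hits = sum((p.day, p.box_id, p.crime_type) in reported for p in predictions)
    return hits / len(predictions) if predictions else 0.0

# Toy example: 3 predictions, 1 matching report -> ~33% hit rate.
preds = [
    Prediction(date(2018, 3, 1), "box-17", "burglary"),
    Prediction(date(2018, 3, 1), "box-42", "robbery"),
    Prediction(date(2018, 3, 2), "box-17", "burglary"),
]
reports = [ReportedCrime(date(2018, 3, 1), "box-42", "robbery")]
print(f"hit rate: {hit_rate(preds, reports):.1%}")   # -> 33.3%
```

At Plainfield’s scale, fewer than 100 hits out of 23,631 predictions works out to under 0.5 percent, which is the figure reported above.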

Anticipating the Future: Shifting Paradigms


Blog by Sara Marcucci and Stefaan Verhulst: “…Migration is a dynamic phenomenon influenced by a variety of factors. As migration policies strive to keep pace with an ever-changing landscape, anticipating trends becomes increasingly pertinent. Traditionally, in the realm of anticipatory methods, a clear demarcation existed between foresight and forecast. 

  • Forecast predominantly relies on quantitative techniques to predict future trends, utilizing historical data, mathematical models, and statistical analyses to provide numerical predictions applicable to the short-to-medium term, seeking to facilitate expedited policy making, resource allocation, and logistical planning.
  • Foresight methodologies conventionally leaned on qualitative insights to explore future possibilities, employing expert judgment, scenario planning, and holistic exploration to envision potential future scenarios. This qualitative approach has been characterized by a more long-term perspective, which seeks to explore a spectrum of potential futures in the long run.

More recently, this once-clear distinction between quantitative forecasting and qualitative foresight has begun to blur. New methodologies that embrace a mixed-method approach are emerging, challenging traditional paradigms and offering new pathways for understanding complex phenomena. Despite the evolution and the growing interest in these novel approaches, there currently exists no comprehensive taxonomy to guide practitioners in selecting the most appropriate method for their given objective. Moreover, given the current state of the art, there is a need for primers delving into these modern methodologies, filling a gap in the knowledge and resources that practitioners can leverage to enhance their forecasting and foresight endeavors…(More)”.
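
To make the “forecast” category above concrete, here is a minimal sketch of the kind of quantitative technique it describes: fitting a simple statistical model to historical counts and extrapolating over a short horizon. The figures and the linear-trend model are assumptions chosen purely for illustration; the blog does not prescribe any particular model or dataset.

```python
import numpy as np

# Hypothetical historical counts (e.g., annual arrivals, in thousands) -- illustrative only.
years = np.array([2016, 2017, 2018, 2019, 2020, 2021, 2022])
arrivals = np.array([210.0, 228.0, 241.0, 260.0, 235.0, 252.0, 270.0])

# Fit a simple linear trend (one of many possible statistical models).
slope, intercept = np.polyfit(years, arrivals, deg=1)

# Extrapolate a short-to-medium-term horizon, as forecasting typically does.
for year in (2023, 2024, 2025):
    point = slope * year + intercept
    print(f"{year}: ~{point:.0f}k (point forecast from linear trend)")
```

Foresight methods, by contrast, would not stop at a point estimate: they would treat numbers like these (or none at all) as one input into scenario building and expert judgment about a range of possible futures.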
