Mapping AI Narratives at the Local Level


Article for Urban AI: “In May 2024, Nantes Métropole (France) launched a pioneering initiative titled “Nantes Débat de l’IA” (meaning “Nantes is Debating AI”). This year-long project is designed to curate the organization of events dedicated to artificial intelligence (AI) across the territory. The primary aim of this initiative is to foster dialogue among local stakeholders, enabling them to engage in meaningful discussions, exchange ideas, and develop a shared understanding of AI’s impact on the region.

Over the course of one year, the Nantes metropolitan area will host around sixty events focused on AI, bringing together a wide range of participants, including policymakers, businesses, researchers, and civil society. These events provide a platform for these diverse actors to share their perspectives, debate critical issues, and explore the potential opportunities and challenges AI presents. Through this collaborative process, the goal is to cultivate a common culture around AI, ensuring that all relevant voices are heard as the city navigates the integration of this transformative technology…(More)”.

AI Localism Repository: A Tool for Local AI Governance


About: “In a world where AI continues to be ever more entangled with our communities, cities, and decision-making processes, local governments are stepping up to address the challenges of AI governance. Today, we’re excited to announce the launch of the newly updated AI Localism Repository—a curated resource designed to help local governments, researchers, and citizens understand how AI is being governed at the state, city, or community level.

What is AI Localism?

AI Localism refers to the actions taken by local decision-makers to address AI governance in their communities. Unlike national or global policies, AI Localism offers immediate solutions tailored to specific local conditions, creating opportunities for greater effectiveness and accountability in the governance of AI.

What’s the AI Localism Repository?

The AI Localism Repository is a collection of examples of AI governance measures from around the world, focusing on how local governments are navigating the evolving landscape of AI. This resource is more than just a list of laws—it highlights innovative methods of AI governance, from the creation of expert advisory groups to the implementation of AI pilot programs.

Why AI Localism Matters

Local governments often face unique challenges in regulating AI, from ethical considerations to the social impact of AI in areas like law enforcement, housing, and employment. Yet, local initiatives are frequently overlooked by national and global AI policy observatories. The AI Localism Repository fills this gap, offering a platform for local policymakers to share their experiences and learn from one another…(More)”

Governing AI for Humanity


The United Nations Secretary-General’s High-level Advisory Body on AI’s Final Report: “This report outlines a blueprint for addressing AI-related risks and sharing its transformative potential globally, including by:

  • Urging the UN to lay the foundations of the first globally inclusive and distributed architecture for AI governance based on international cooperation;
  • Proposing seven recommendations to address gaps in current AI governance arrangements;
  • Calling on all governments and stakeholders to work together in governing AI to foster development and protection of all human rights.

This includes light institutional mechanisms to complement existing efforts and foster inclusive global AI governance arrangements that are agile, adaptive and effective to keep pace with AI’s evolution…(More)”.

Augmenting the availability of historical GDP per capita estimates through machine learning


Paper by Philipp Koch, Viktor Stojkoski, and César A. Hidalgo: “Can we use data on the biographies of historical figures to estimate the GDP per capita of countries and regions? Here, we introduce a machine learning method to estimate the GDP per capita of dozens of countries and hundreds of regions in Europe and North America for the past seven centuries starting from data on the places of birth, death, and occupations of hundreds of thousands of historical figures. We build an elastic net regression model to perform feature selection and generate out-of-sample estimates that explain 90% of the variance in known historical income levels. We use this model to generate GDP per capita estimates for countries, regions, and time periods for which these data are not available and externally validate our estimates by comparing them with four proxies of economic output: urbanization rates in the past 500 years, body height in the 18th century, well-being in 1850, and church building activity in the 14th and 15th centuries. Additionally, we show our estimates reproduce the well-known reversal of fortune between southwestern and northwestern Europe between 1300 and 1800 and find this is largely driven by countries and regions engaged in Atlantic trade. These findings validate the use of fine-grained biographical data as a method to augment historical GDP per capita estimates. We publish our estimates with confidence intervals together with all collected source data in a comprehensive dataset…(More)”.
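The paper’s key technique is elastic net regression, whose L1 penalty zeroes out uninformative features (feature selection) while the L2 penalty stabilizes the rest. A minimal sketch of that mechanism, not the authors’ code: a pure-Python coordinate-descent fit on invented toy data, where one “biographical” feature drives the outcome and a second is noise that the L1 term prunes away.

```python
# Minimal elastic net via coordinate descent (illustrative sketch, not the
# paper's pipeline). Objective: 1/(2n)*||y - Xw||^2
#   + alpha * (l1_ratio*|w|_1 + (1 - l1_ratio)/2 * |w|_2^2)

def elastic_net_fit(X, y, alpha=0.1, l1_ratio=0.5, iters=200):
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            l1 = alpha * l1_ratio
            l2 = alpha * (1.0 - l1_ratio)
            # soft-thresholding: the L1 term zeroes weak features entirely,
            # which is the "feature selection" behavior the paper relies on
            if rho > l1:
                w[j] = (rho - l1) / (z + l2)
            elif rho < -l1:
                w[j] = (rho + l1) / (z + l2)
            else:
                w[j] = 0.0
    return w

# Invented toy data: the outcome tracks feature 0 exactly; feature 1 is
# noise and is driven to exactly zero by the L1 penalty.
X = [[1, 0], [2, 1], [3, 0], [4, 1]]
y = [2.0, 4.0, 6.0, 8.0]
w = elastic_net_fit(X, y)
```

The paper’s actual model works the same way at scale: many candidate features (births, deaths, and occupations of notable figures per region), with the penalty deciding which ones carry predictive signal for log GDP per capita.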

Taming Silicon Valley


Book by Gary Marcus: “On balance, will AI help humanity or harm it? AI could revolutionize science, medicine, and technology, and deliver us a world of abundance and better health. Or it could be a disaster, leading to the downfall of democracy, or even our extinction. In Taming Silicon Valley, Gary Marcus, one of the most trusted voices in AI, explains that we still have a choice. And that the decisions we make now about AI will shape our next century. In this short but powerful manifesto, Marcus explains how Big Tech is taking advantage of us, how AI could make things much worse, and, most importantly, what we can do to safeguard our democracy, our society, and our future.

Marcus explains the potential—and potential risks—of AI in the clearest possible terms and how Big Tech has effectively captured policymakers. He begins by laying out what is lacking in current AI, what the greatest risks of AI are, and how Big Tech has been playing both the public and the government, before digging into why the US government has thus far been ineffective at reining in Big Tech. He then offers real tools for readers, including eight suggestions for what a coherent AI policy should look like—from data rights to layered AI oversight to meaningful tax reform—and closes with how ordinary citizens can push for what is so desperately needed.

Taming Silicon Valley is both a primer on how AI has gotten to its problematic present state and a book of activism in the tradition of Abbie Hoffman’s Steal This Book and Thomas Paine’s Common Sense. It is a deeply important book for our perilous historical moment that every concerned citizen must read…(More)”.

Place identity: a generative AI’s perspective


Paper by Kee Moon Jang et al: “Do cities have a collective identity? The latest advancements in generative artificial intelligence (AI) models have enabled the creation of realistic representations learned from vast amounts of data. In this study, we test the potential of generative AI as the source of textual and visual information in capturing the place identity of cities assessed by filtered descriptions and images. We asked questions on the place identity of 64 global cities to two generative AI models, ChatGPT and DALL·E2. Furthermore, given the ethical concerns surrounding the trustworthiness of generative AI, we examined whether the results were consistent with real urban settings. In particular, we measured similarity between text and image outputs with Wikipedia data and images searched from Google, respectively, and compared across cases to identify how unique the generated outputs were for each city. Our results indicate that generative models have the potential to capture the salient characteristics of cities that make them distinguishable. This study is among the first attempts to explore the capabilities of generative AI in simulating the built environment in regard to place-specific meanings. It contributes to urban design and geography literature by fostering research opportunities with generative AI and discussing potential limitations for future studies…(More)”.
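The paper’s validation step, measuring how similar generated city descriptions are to reference texts, can be sketched with a standard bag-of-words cosine similarity. This is an illustrative stand-in, not the authors’ actual pipeline, and the sample sentences are invented:

```python
# Hedged sketch: cosine similarity between an AI-generated city description
# and a reference text (e.g. a Wikipedia summary), using simple word-count
# vectors. Example texts below are invented for demonstration.
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

generated = "a coastal city famous for its harbour bridge and opera house"
reference = "the city is known for its harbour bridge and opera house"
unrelated = "a landlocked desert town with copper mines"
```

A generated description that captures a city’s salient features scores closer to that city’s reference text than to an unrelated one, which is the intuition behind the paper’s cross-city comparison of how unique each output is.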

We finally have a definition for open-source AI


Article by Rhiannon Williams and James O’Donnell: “Open-source AI is everywhere right now. The problem is, no one agrees on what it actually is. Now we may finally have an answer. The Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source, has released a new definition, which it hopes will help lawmakers develop regulations to protect consumers from AI risks. 

Though OSI has published much about what constitutes open-source technology in other fields, this marks its first attempt to define the term for AI models. It asked a 70-person group of researchers, lawyers, policymakers, and activists, as well as representatives from big tech companies like Meta, Google, and Amazon, to come up with the working definition. 

According to the group, an open-source AI system can be used for any purpose without the need to secure permission, and researchers should be able to inspect its components and study how the system works.

It should also be possible to modify the system for any purpose—including to change its output—and to share it with others to use, with or without modifications, for any purpose. In addition, the standard attempts to define a level of transparency for a given model’s training data, source code, and weights. 
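The criteria summarized above can be read as a simple checklist. The sketch below is my own shorthand, not OSI’s official schema, and it omits the graded transparency requirements for training data, source code, and weights:

```python
# Hedged sketch: the OSI open-source AI criteria as a checklist.
# Field names are illustrative shorthand, not OSI's terminology.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    use_any_purpose: bool         # usable without securing permission
    components_inspectable: bool  # researchers can study how it works
    modifiable: bool              # can be changed, including its output
    shareable: bool               # redistributable, with or without changes

def meets_osi_definition(system: AISystem) -> bool:
    """All four freedoms must hold; failing any one disqualifies."""
    return all([system.use_any_purpose, system.components_inspectable,
                system.modifiable, system.shareable])
```

Under this reading, a “weights-only” release that forbids redistribution or blocks inspection of its components would not qualify, however permissive its usage terms.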

The previous lack of an open-source standard presented a problem…(More)”.

Definitions, digital, and distance: on AI and policymaking


Article by Gavin Freeguard: “Our first question is less, ‘to what extent can AI improve public policymaking?’, but ‘what is currently wrong with policymaking?’, and then, ‘is AI able to help?’.

Ask those in and around policymaking about the problems and you’ll get a list likely to include:

  • the practice not having changed in decades (or centuries)
  • it being an opaque ‘dark art’ with little transparency
  • defaulting to easily accessible stakeholders and evidence
  • a separation between policy and delivery (and digital and other disciplines), and failure to recognise the need for agility and feedback as opposed to distinct stages
  • the challenges in measuring or evaluating the impact of policy interventions and understanding what works, with a lack of awareness, let alone sharing, of case studies elsewhere
  • difficulties in sharing data
  • the siloed nature of government complicating cross-departmental working
  • policy asks often being dictated by politics, with electoral cycles leading to short-termism, ministerial churn changing priorities and personal style, events prompting rushed reactions, or political priorities dictating ‘policy-based evidence making’
  • a rush to answers before understanding the problem
  • definitional issues about what policy actually is, making it hard to get a hold of or develop professional expertise.

If we’re defining ‘policy’ and the problem, we also need to define ‘AI’, or at least acknowledge that we are not only talking about new, shiny generative AI, but a world of other techniques for automating processes and analysing data that have been used in government for years.

So is ‘AI’ able to help? It could support us to make better use of a wider range of data more quickly; but it could privilege that which is easier to measure, strip data of vital context, and embed biases and historical assumptions. It could ‘make decisions more transparent (perhaps through capturing digital records of the process behind them, or by visualising the data that underpins a decision)’; or make them more opaque with ‘black-box’ algorithms, and distract from overcoming the very human cultural problems around greater openness. It could help synthesise submissions or generate ideas to brainstorm; or fail to compensate for deficiencies in underlying government knowledge infrastructure, and generate gibberish. It could be a tempting silver bullet for better policy; or it could paper over the cracks, while underlying technical, organisational and cultural plumbing goes unfixed. It could have real value in some areas, or cause harms in others…(More)”.

New AI standards group wants to make data scraping opt-in


Article by Kate Knibbs: “The first wave of major generative AI tools largely were trained on “publicly available” data—basically, anything and everything that could be scraped from the Internet. Now, sources of training data are increasingly restricting access and pushing for licensing agreements. With the hunt for additional data sources intensifying, new licensing startups have emerged to keep the source material flowing.

The Dataset Providers Alliance, a trade group formed this summer, wants to make the AI industry more standardized and fair. To that end, it has just released a position paper outlining its stances on major AI-related issues. The alliance is made up of seven AI licensing companies, including music copyright-management firm Rightsify, Japanese stock-photo marketplace Pixta, and generative-AI copyright-licensing startup Calliope Networks. (At least five new members will be announced in the fall.)

The DPA advocates for an opt-in system, meaning that data can be used only after consent is explicitly given by creators and rights holders. This represents a significant departure from the way most major AI companies operate. Some have developed their own opt-out systems, which put the burden on data owners to pull their work on a case-by-case basis. Others offer no opt-outs whatsoever…(More)”.
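The difference between the two consent models comes down to the default for silence. A hedged sketch, with record fields and defaults of my own invention rather than any DPA specification:

```python
# Illustrative sketch of opt-in vs. opt-out training-data consent.
# Under opt-in (the DPA's position), a work is usable only after its
# rights holder explicitly says yes; silence excludes it. Under opt-out,
# silence includes it, and the burden falls on the rights holder.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    work_id: str
    consent: Optional[bool] = None  # None: rights holder never responded

def usable_for_training(records, opt_in=True):
    if opt_in:
        # opt-in: only an explicit "yes" passes
        return [r for r in records if r.consent is True]
    # opt-out: everything passes unless the holder explicitly said "no"
    return [r for r in records if r.consent is not False]

# Example catalog: one explicit yes, one non-response, one explicit no
records = [Record("song-a", True),
           Record("photo-b", None),
           Record("clip-c", False)]
```

With this catalog, opt-in yields only `song-a`, while opt-out also sweeps in the non-responding `photo-b`, which is exactly the burden-shifting the DPA objects to.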

Building LLMs for the social sector: Emerging pain points


Blog by Edmund Korley: “…One of the sprint’s main tracks focused on using LLMs to enhance the impact and scale of chat services in the social sector.

Six organizations participated, with operations spanning Africa and India. Bandhu empowers India’s blue-collar workers and migrants by connecting them to jobs and affordable housing, helping them take control of their livelihoods and future stability. Digital Green enhances rural farmers’ agency with AI-driven insights to improve agricultural productivity and livelihoods. Jacaranda Health provides mothers in sub-Saharan Africa with essential information and support to improve maternal and newborn health outcomes. Kabakoo equips youth in Francophone Africa with digital skills, fostering self-reliance and economic independence. Noora Health teaches Indian patients and caregivers critical health skills, enhancing their ability to manage care. Udhyam provides micro-entrepreneurs with education, mentorship, and financial support to build sustainable businesses.

These organizations demonstrate diverse ways one can boost human agency: they help people in underserved communities take control of their lives, make more informed choices, and build better futures – and they are piloting AI interventions to scale these efforts…(More)”.