Rethinking ‘Checks and Balances’ for the A.I. Age


Article by Steve Lohr: “A new project, orchestrated by Stanford University and published on Tuesday, is inspired by the Federalist Papers and contends that today is a broadly similar historical moment of economic and political upheaval that calls for a rethinking of society’s institutional arrangements.

In an introduction to its collection of 12 essays, called the Digitalist Papers, the editors overseeing the project, including Erik Brynjolfsson, director of the Stanford Digital Economy Lab, and Condoleezza Rice, secretary of state in the George W. Bush administration and director of the Hoover Institution, identify their overarching concern.

“A powerful new technology, artificial intelligence,” they write, “explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions.”

The most common theme in the diverse collection of essays: Citizens need to be more involved in determining how to regulate and incorporate A.I. into their lives. “To build A.I. for the people, with the people,” as one essay summed it up.

The project is being published as the technology is racing ahead. A.I. enthusiasts see a future of higher economic growth, increased prosperity and a faster pace of scientific discovery. But the technology is also raising fears of a dystopian alternative — A.I. chatbots and automated software not only replacing millions of workers, but also generating limitless misinformation and worsening political polarization. How to govern and guide A.I. in the public interest remains an open question…(More)”.

Improving Governance Outcomes Through AI Documentation: Bridging Theory and Practice 


Report by Amy Winecoff and Miranda Bogen: “AI documentation is a foundational tool for governing AI systems, serving stakeholders both within and outside AI organizations. It offers a range of stakeholders insight into how AI systems are developed, how they function, and what risks they may pose. For example, it might help internal model development, governance, compliance, and quality assurance teams communicate about and manage risk throughout the development and deployment lifecycle. Documentation can also help external technology developers determine what testing they should perform on models they incorporate into their products, or it could guide users on whether to adopt a technology. While documentation is essential for effective AI governance, its success depends on how well organizations tailor their documentation approaches to meet the diverse needs of stakeholders, including technical teams, policymakers, users, and other downstream consumers of the documentation.

This report synthesizes findings from an in-depth analysis of academic and gray literature on documentation, encompassing 37 proposed methods for documenting AI data, models, systems, and processes, along with 21 empirical studies evaluating the impact and challenges of implementing documentation. Through this synthesis, we identify key theoretical mechanisms through which AI documentation can enhance governance outcomes. These mechanisms include informing stakeholders about the intended use, limitations, and risks of AI systems; facilitating cross-functional collaboration by bridging different teams; prompting ethical reflection among developers; and reinforcing best practices in development and governance. However, empirical evidence offers mixed support for these mechanisms, indicating that documentation practices can be more effectively designed to achieve these goals…(More)”.

AI in Global Development Playbook


USAID Playbook: “…When used effectively and responsibly, AI holds the potential to accelerate progress on sustainable development and close digital divides, but it also poses risks that could further impede progress toward these goals. With the right enabling environment and ecosystem of actors, AI can enhance efficiency and accelerate development outcomes in sectors such as health, education, agriculture, energy, manufacturing, and delivering public services. The United States aims to ensure that the benefits of AI are shared equitably across the globe.

Distilled from consultations with hundreds of government officials, non-governmental organizations, technology firms and startups, and individuals from around the world, the AI in Global Development Playbook is a roadmap to develop the capacity, ecosystems, frameworks, partnerships, applications, and institutions to leverage safe, secure, and trustworthy AI for sustainable development.

The United States’ current efforts are grounded in the belief that AI, when developed and deployed responsibly, can be a powerful force for achieving the Sustainable Development Goals and addressing some of the world’s most urgent challenges. Looking ahead, the United States will continue to support low- and middle-income countries through funding, advocacy, and convening efforts, collectively navigating the complexities of the digital age and working toward a future in which the benefits of technological development are widely shared.

This Playbook seeks to underscore AI as a uniquely global opportunity with far-reaching impacts and potential risks. It highlights that the safe, secure, and trustworthy design, deployment, and use of AI are not only possible but essential. Recognizing that international cooperation and multi-stakeholder partnerships are key to achieving progress, we invite others to contribute their expertise, resources, and perspectives to enrich and expand this framework.

The true measure of progress in responsible AI is not in the sophistication of our machines but in the quality of life the technology enhances. Together we can work toward ensuring the promise of AI is realized in service of this goal…(More)”

Artificial intelligence (AI) in action: A preliminary review of AI use for democracy support


Policy paper by Grahm Tuohy-Gaydos: “…provides a working definition of AI for Westminster Foundation for Democracy (WFD) and the broader democracy support sector. It then provides a preliminary review of how AI is being used to enhance democratic practices worldwide, focusing on several themes, including accountability and transparency, elections, environmental democracy, inclusion, openness and participation, and women’s political leadership. The paper also highlights potential risks and areas of development in the future. Finally, the paper shares five recommendations for WFD and democracy support organisations to consider as they advance their ‘digital democracy’ agenda. This policy paper also offers additional information regarding AI classification and other resources for identifying good practice and innovative solutions. Its findings may be relevant to WFD staff members, international development practitioners, civil society organisations, and persons interested in using emerging technologies within governmental settings…(More)”.

China’s biggest AI model is challenging American dominance


Article by Sam Eifling: “So far, the AI boom has been dominated by U.S. companies like OpenAI, Google, and Meta. In recent months, though, a new name has been popping up on benchmarking lists: Alibaba’s Qwen, whose variants have been topping the leaderboards of sites that measure an AI model’s performance.

“Qwen 72B is the king, and Chinese models are dominating,” Hugging Face CEO Clem Delangue wrote in June, after a Qwen-based model first rose to the top of his company’s Open LLM leaderboard.

It’s a surprising turnaround for the Chinese AI industry, which many thought was doomed by semiconductor restrictions and limitations on computing power. Qwen’s success is showing that China can compete with the world’s best AI models — raising serious questions about how long U.S. companies will continue to dominate the field. And by focusing on capabilities like language support, Qwen is breaking new ground on what an AI model can do — and who it can be built for.

Those capabilities have come as a surprise to many developers, even those working on Qwen itself. AI developer David Ng used Qwen to build the model that topped the Open LLM leaderboard. He has also built models using Meta’s and Google’s technology but says Alibaba’s gave him the best results. “For some reason, it works best on the Chinese models,” he told Rest of World. “I don’t know why.”…(More)”

Synthetic Data and Social Science Research


Paper by Jordan C. Stanley & Evan S. Totty: “Synthetic microdata – data retaining the structure of original microdata while replacing original values with modeled values for the sake of privacy – presents an opportunity to increase access to useful microdata for data users while meeting the privacy and confidentiality requirements for data providers. Synthetic data could be sufficient for many purposes, but lingering accuracy concerns could be addressed with a validation system through which the data providers run the external researcher’s code on the internal data and share cleared output with the researcher. The U.S. Census Bureau has experience running such systems. In this chapter, we first describe the role of synthetic data within a tiered data access system and the importance of synthetic data accuracy in achieving a viable synthetic data product. Next, we review results from a recent set of empirical analyses we conducted to assess accuracy in the Survey of Income & Program Participation (SIPP) Synthetic Beta (SSB), a Census Bureau product that made linked survey-administrative data publicly available. Given this analysis and our experience working on the SSB project, we conclude with thoughts and questions regarding future implementations of synthetic data with validation…(More)”
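The core idea of synthetic microdata can be illustrated with a deliberately simple sketch: fit a model to a sensitive column and release draws from the model instead of the real values. This toy example (hypothetical income figures, a plain normal model) is far cruder than the methods used for products like the SSB, but it shows the basic trade: aggregate structure is retained while no original value is disclosed.

```python
import random
import statistics

def synthesize_column(values, seed=0):
    """Replace real values with draws from a normal model fitted to them.

    A minimal stand-in for model-based synthesis: real synthetic-data
    systems use far richer models and formal privacy checks.
    """
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in values]

# Toy "microdata": monthly incomes for a small hypothetical sample
incomes = [3100, 2900, 4200, 3800, 3500, 3300, 4100, 2700]
synthetic = synthesize_column(incomes)
# The synthetic column tracks the original's mean and spread,
# but none of the released numbers is an actual respondent's value.
```

A validation system of the kind the chapter describes would then let a researcher check whether estimates computed on `synthetic` hold up when the same code is run on the confidential originals.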

Mapping AI Narratives at the Local Level


Article for Urban AI: “In May 2024, Nantes Métropole (France) launched a pioneering initiative titled “Nantes Débat de l’IA” (meaning “Nantes is Debating AI”). This year-long project coordinates the organization of events dedicated to artificial intelligence (AI) across the metropolitan area. The primary aim of this initiative is to foster dialogue among local stakeholders, enabling them to engage in meaningful discussions, exchange ideas, and develop a shared understanding of AI’s impact on the region.

Over the course of one year, the Nantes metropolitan area will host around sixty events focused on AI, bringing together a wide range of participants, including policymakers, businesses, researchers, and civil society. These events provide a platform for these diverse actors to share their perspectives, debate critical issues, and explore the potential opportunities and challenges AI presents. Through this collaborative process, the goal is to cultivate a common culture around AI, ensuring that all relevant voices are heard as the city works to integrate this transformative technology…(More)”.

AI Localism Repository: A Tool for Local AI Governance


About: “In a world where AI grows ever more entangled with our communities, cities, and decision-making processes, local governments are stepping up to address the challenges of AI governance. Today, we’re excited to announce the launch of the newly updated AI Localism Repository—a curated resource designed to help local governments, researchers, and citizens understand how AI is being governed at the state, city, or community level.

What is AI Localism?

AI Localism refers to the actions taken by local decision-makers to address AI governance in their communities. Unlike national or global policies, AI Localism offers immediate solutions tailored to specific local conditions, creating opportunities for greater effectiveness and accountability in the governance of AI.

What’s the AI Localism Repository?

The AI Localism Repository is a collection of examples of AI governance measures from around the world, focusing on how local governments are navigating the evolving landscape of AI. This resource is more than just a list of laws—it highlights innovative methods of AI governance, from the creation of expert advisory groups to the implementation of AI pilot programs.

Why AI Localism Matters

Local governments often face unique challenges in regulating AI, from ethical considerations to the social impact of AI in areas like law enforcement, housing, and employment. Yet, local initiatives are frequently overlooked by national and global AI policy observatories. The AI Localism Repository fills this gap, offering a platform for local policymakers to share their experiences and learn from one another…(More)”

Governing AI for Humanity


The United Nations Secretary-General’s High-level Advisory Body on AI’s Final Report: “This report outlines a blueprint for addressing AI-related risks and sharing its transformative potential globally, including by:​

  • ​Urging the UN to lay the foundations of the first globally inclusive and distributed architecture for AI governance based on international cooperation;​
  • Proposing seven recommendations to address gaps in current AI governance arrangements;​
  • Calling on all governments and stakeholders to work together in governing AI to foster development and protection of all human rights.​

This includes light institutional mechanisms to complement existing efforts and foster inclusive global AI governance arrangements that are agile, adaptive, and effective enough to keep pace with AI’s evolution…(More)”.

Augmenting the availability of historical GDP per capita estimates through machine learning


Paper by Philipp Koch, Viktor Stojkoski, and César A. Hidalgo: “Can we use data on the biographies of historical figures to estimate the GDP per capita of countries and regions? Here, we introduce a machine learning method to estimate the GDP per capita of dozens of countries and hundreds of regions in Europe and North America for the past seven centuries starting from data on the places of birth, death, and occupations of hundreds of thousands of historical figures. We build an elastic net regression model to perform feature selection and generate out-of-sample estimates that explain 90% of the variance in known historical income levels. We use this model to generate GDP per capita estimates for countries, regions, and time periods for which these data are not available and externally validate our estimates by comparing them with four proxies of economic output: urbanization rates in the past 500 years, body height in the 18th century, well-being in 1850, and church building activity in the 14th and 15th centuries. Additionally, we show our estimates reproduce the well-known reversal of fortune between southwestern and northwestern Europe between 1300 and 1800 and find this is largely driven by countries and regions engaged in Atlantic trade. These findings validate the use of fine-grained biographical data as a method to augment historical GDP per capita estimates. We publish our estimates with confidence intervals together with all collected source data in a comprehensive dataset…(More)”.
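Why elastic net for this task? It combines an L1 penalty (which zeroes out uninformative biographical features) with an L2 penalty (which stabilizes correlated ones). The mechanism can be sketched with a minimal pure-Python coordinate-descent implementation on hypothetical data; this is an illustration of the technique, not the authors' actual pipeline, which operates on hundreds of thousands of biographies:

```python
def elastic_net(X, y, lam=0.01, alpha=0.5, iters=200):
    """Tiny coordinate-descent elastic net.

    L1 soft-thresholding drops uninformative features entirely;
    L2 shrinkage tempers the coefficients that remain.
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Correlation of feature j with the residual that
            # excludes feature j's own contribution
            rho, z = 0.0, 0.0
            for i in range(n):
                partial = sum(beta[k] * X[i][k] for k in range(p) if k != j)
                rho += X[i][j] * (y[i] - partial)
                z += X[i][j] ** 2
            l1, l2 = lam * alpha * n, lam * (1 - alpha) * n
            if rho > l1:
                beta[j] = (rho - l1) / (z + l2)
            elif rho < -l1:
                beta[j] = (rho + l1) / (z + l2)
            else:
                beta[j] = 0.0  # feature selected away
    return beta

# Hypothetical data: only the first feature actually drives the outcome
X = [[1, 1], [2, -1], [3, 1], [4, -1]]
y = [2, 4, 6, 8]  # y = 2 * x1; x2 carries no signal
beta = elastic_net(X, y)
# beta[0] lands near 2, while the L1 penalty zeroes out beta[1]
```

In the paper's setting, the surviving coefficients identify which biographical features (places, occupations, eras) are informative about regional income, and the fitted model then extrapolates to country-periods with no recorded GDP.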