Policy brief: Generative AI


Policy Brief by Ann Kristin Glenster and Sam Gilbert: “The rapid rollout of generative AI models, and public attention to OpenAI’s ChatGPT, have raised concerns about AI’s impact on the economy and society. In the UK, policy-makers are looking to large language models and other so-called foundation models as ways to potentially improve economic productivity.

This policy brief outlines which policy levers could support those goals. The authors argue that the UK should pursue becoming a global leader in applying generative AI to the economy. Rather than use public support for building new foundation models, the UK could support the growing ecosystem of startups that develop new applications for these models, creating new products and services.

This policy brief answers three key questions:

  1. What policy infrastructure and social capacity does the UK need to lead and manage deployment of responsible generative AI (over the long term)?
  2. What national capability does the UK need for large-scale AI systems in the short- and medium-term?
  3. What governance capacity does the UK need to deal with fast-moving technologies, in which large uncertainties are a feature, not a bug?…(More)”.

Towards an Inclusive Data Governance Policy for the Use of Artificial Intelligence in Africa


Paper by Jake Okechukwu Effoduh, Ugochukwu Ejike Akpudo and Jude Dzevela Kong: “This paper proposes five ideas that the design of data governance policies for the inclusive use of artificial intelligence (AI) in Africa should consider. The first is for African states to carry out an assessment of their domestic strategic priorities, strengths, and weaknesses. The second is a human-centric approach to data governance, involving data processing practices that protect the security of personal data and the privacy of data subjects; ensure that personal data is processed in a fair, lawful, and accountable manner; minimize the harmful effects of personal data misuse or abuse on data subjects and other victims; and promote the beneficial, trusted use of personal data. The third is for the data policy to align with supranational rights-respecting AI standards such as the African Charter on Human and Peoples’ Rights and the AU Convention on Cybersecurity and Personal Data Protection. The fourth is for states to be critical about the extent to which AI systems can be relied on in certain public sectors or departments. The fifth and final proposition is for states to prioritize the use of representative and interoperable data and to ensure a transparent procurement process for AI systems from abroad where no local options exist…(More)”

Setting Democratic Ground Rules for AI: Civil Society Strategies


Report by Beth Kerley: “…analyzes priorities, challenges, and promising civil society strategies for advancing democratic approaches to governing artificial intelligence (AI). The report is based on conversations at a private Forum workshop in Buenos Aires, Argentina, that brought together Latin American and global researchers and civil society practitioners.

With recent leaps in the development of AI, we are experiencing a seismic shift in the balance of power between people and governments, posing new challenges to democratic principles such as privacy, transparency, and non-discrimination. We know that AI will shape the political world we inhabit, but how can we ensure that democratic norms and institutions shape the trajectory of AI?

Drawing on global civil society perspectives, this report surveys what stakeholders need to know about AI systems and the human relationships behind them. It delves into the obstacles, from misleading narratives to government opacity to gaps in technical expertise, that hinder democratic engagement on AI governance, and explores how new thinking, new institutions, and new collaborations can better equip societies to set democratic ground rules for AI technologies…(More)”.

Our Planet Powered by AI: How We Use Artificial Intelligence to Create a Sustainable Future for Humanity


Book by Mark Minevich: “…You’ll learn to create sustainable, effective competitive advantage by introducing previously unheard-of levels of adaptability, resilience, and innovation into your company.

Using real-world case studies from a variety of well-known industry leaders, the author explains the strategic archetypes, technological infrastructures, and cultures of sustainability you’ll need to ensure your firm’s next-level digital transformation takes root. You’ll also discover:

  • How AI can enable new business strategies, models, and ecosystems of innovation and growth
  • How to develop societal impact and powerful organizational benefits with ethical AI implementations that incorporate transparency, fairness, privacy, and reliability
  • What it means to enable all-inclusive artificial intelligence

An engaging and hands-on exploration of how to take your firm to new levels of dynamism and growth, Our Planet Powered by AI will earn a place in the libraries of managers, executives, directors, and other business and technology leaders seeking to distinguish their companies in a new age of astonishing technological advancement and fierce competition….(More)”.

Generative AI is set to transform crisis management


Article by Ben Ellencweig, Mihir Mysore, and Jon Spaner: “…Generative AI presents transformative potential, especially in disaster preparedness, response, and recovery. As billion-dollar disasters become more frequent, with “billion-dollar disasters” typically costing the U.S. roughly $120 billion each, and “polycrises” (multiple simultaneous crises, such as hurricanes combined with cyber disruptions) proliferate, the significant impact that Generative AI can have, especially with proper leadership focus, is a focal point of interest.

Generative AI’s speed is crucial in emergencies, as it enhances information access, decision-making capabilities, and early warning systems. Beyond organizational benefits for those who adopt it, Generative AI’s applications include real-time data analysis, scenario simulations, sentiment analysis, and simplifying access to complex information. Its versatility offers a wide variety of promising applications in disaster relief and opens up real-time analyses with tangible applications in the real world.

Early warning systems and sentiment analysis: Generative AI excels in early warning systems and sentiment analysis by scanning accurate real-time data and response clusters. By enabling connections between disparate systems, Generative AI holds the potential to provide more accurate early warnings. Integrated with traditional and social media, Generative AI can also offer precise sentiment analysis, empowering leaders to understand public sentiment, detect bad actors, identify misinformation, and tailor communications for accurate information dissemination.

Scenario simulations: Generative AI holds the potential to enhance catastrophe modeling for better crisis assessment and resource allocation. It creates simulations for emergency planners, improving modeling for various disasters (e.g., hurricanes, floods, wildfires) using historical data such as location, community impact, and financial consequences. Often, simulators perform work “so large that it exceeds human capacity (for example, finding flooded or unusable roads across a large area after a hurricane).” …(More)”

When is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis


Paper by Francesca Palmiotto: “This paper addresses the pressing issues surrounding the use of automated systems in public decision-making, with a specific focus on the field of migration, asylum, and mobility. Drawing on empirical research conducted for the AFAR project, the paper examines the potential and limitations of the General Data Protection Regulation and the proposed Artificial Intelligence Act in effectively addressing the challenges posed by automated decision-making (ADM). The paper argues that the current legal definitions and categorizations of ADM fail to capture the complexity and diversity of real-life applications, where automated systems assist human decision-makers rather than replace them entirely. This discrepancy between the legal framework and practical implementation highlights the need for a fundamental rights approach to legal protection in the automation age. To bridge the gap between ADM in law and practice, the paper proposes a taxonomy that provides theoretical clarity and enables a comprehensive understanding of ADM in public decision-making. This taxonomy not only enhances our understanding of ADM but also identifies the fundamental rights at stake for individuals and the sector-specific legislation applicable to ADM. The paper finally calls for empirical observations and input from experts in other areas of public law to enrich and refine the proposed taxonomy, thus ensuring clearer conceptual frameworks to safeguard individuals in our increasingly algorithmic society…(More)”.

NYC Releases Plan to Embrace AI, and Regulate It


Article by Sarah Holder: “New York City Mayor Eric Adams unveiled a plan for adopting and regulating artificial intelligence on Monday, highlighting the technology’s potential to “improve services and processes across our government” while acknowledging the risks.

The city also announced it is piloting an AI chatbot to answer questions about opening or operating a business through its website MyCity Business.

NYC agencies have reported using more than 30 tools that fit the city’s definition of algorithmic technology, including to match students with public schools, to track foodborne illness outbreaks and to analyze crime patterns. As the technology gets more advanced, and the implications of algorithmic bias, misinformation and privacy concerns become more apparent, the city plans to set policy around new and existing applications…

New York’s strategy, developed by the Office of Technology and Innovation with the input of city agency representatives and outside technology policy experts, doesn’t itself establish any rules and regulations around AI, but lays out a timeline and blueprint for creating them. It emphasizes the need for education and buy-in both from New York constituents and city employees. Within the next year, the city plans to start to hold listening sessions with the public, and brief city agencies on how and why to use AI in their daily operations. The city has also given itself a year to start work on piloting new AI tools, and two to create standards for AI contracts….

Stefaan Verhulst, a research professor at New York University and the co-founder of The GovLab, says that especially during a budget crunch, leaning on AI offers cities opportunities to make evidence-based decisions quickly and with fewer resources. Among the potential use cases he cited are identifying areas most in need of affordable housing, and responding to public health emergencies with data…(More) (Full plan)”.

How a billionaire-backed network of AI advisers took over Washington


Article by Brendan Bordelon: “An organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.

The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.

Acting through the little-known Horizon Institute for Public Service, a nonprofit that Open Philanthropy effectively created in 2022, the group is funding the salaries of tech fellows in key Senate offices, according to documents and interviews…Current and former Horizon AI fellows with salaries funded by Open Philanthropy are now working at the Department of Defense, the Department of Homeland Security and the State Department, as well as in the House Science Committee and Senate Commerce Committee, two crucial bodies in the development of AI rules. They also populate key think tanks shaping AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology, according to the Horizon website…

In the high-stakes Washington debate over AI rules, Open Philanthropy has long been focused on one slice of the problem — the long-term threats that future AI systems might pose to human survival. Many AI thinkers see those as science-fiction concerns far removed from the current AI harms that Washington should address. And they worry that Open Philanthropy, in concert with its web of affiliated organizations and experts, is shifting the policy conversation away from more pressing issues — including topics some leading AI firms might prefer to keep off the policy agenda…(More)”.

Gender Reboot: Reprogramming Gender Rights in the Age of AI


Book by Eleonore Fournier-Tombs: “This book explores gender norms and women’s rights in the age of AI. The author examines how gender dynamics have evolved in the spheres of work, self-image and safety, and education, and how these might be reflected in current challenges in AI development. The book also explores opportunities in AI to address issues facing women, and how we might harness current technological developments for gender equality. Taking a narrative tone, the book is interwoven with stories and a reflection on raising young children during the COVID-19 pandemic. It includes both expert and personal interviews to create a nuanced and multidimensional perspective on the state of women’s rights and what might be done to move forward…(More)”.

Towards a Considered Use of AI Technologies in Government 


Report by the Institute on Governance and Think Digital: “… undertook a case study-based research project, where 24 examples of AI technology projects and governance frameworks across a dozen jurisdictions were scanned. The purpose of this report is to provide policymakers and practitioners in government with an overview of controversial deployments of Artificial Intelligence (AI) technologies in the public sector, and to highlight some of the approaches being taken to govern the responsible use of these technologies in government. 

Two environmental scans make up the majority of the report. The first scan presents relevant use cases of public sector applications of AI technologies and automation, with special attention given to controversial projects and program/policy failures. The second scan surveys existing governance frameworks employed by international organizations and governments around the world. Each scan is then analyzed to determine common themes across use cases and governance frameworks respectively. The final section of the report provides risk considerations related to the use of AI by public sector institutions across use cases…(More)”.