On the Shoulders of Others: The Importance of Regulatory Learning in the Age of AI


Paper by Urs Gasser and Viktor Mayer-Schönberger: “…International harmonization of regulation is the right strategy when the appropriate regulatory ends and means are sufficiently clear to reap efficiencies of scale and scope. When this is not the case, a push for efficiency through uniformity is premature and may lead to a suboptimal regulatory lock-in: the establishment of a rule framework that is either inefficient in the use of its means to reach the intended goal, or furthers the wrong goal, or both.


A century ago, economist Joseph Schumpeter suggested that companies have two distinct strategies to achieve success. The first is to employ economies of scale and scope to lower their cost. It’s essentially a push for improved efficiency. The other strategy is to invent a new product (or production process) that may not, at least initially, be hugely efficient, but is nevertheless advantageous because demand for the new product is price inelastic. For Schumpeter this was the essence of innovation. But, as Schumpeter also argued, innovation is not a simple, linear, and predictable process. Often, it happens in fits and starts, and can’t be easily commandeered or engineered.


As innovation is hard to foresee and plan, the best way to facilitate it is to enable a wide variety of different approaches and solutions. Public policies in many countries to foster startups and entrepreneurship stem from this view. Take, for instance, the policy of regulatory sandboxing, i.e. the idea that for a limited time certain sectors should not be regulated, or regulated only lightly…(More)”.

A.I. Is Prompting an Evolution, Not an Extinction, for Coders


Article by Steve Lohr: “John Giorgi uses artificial intelligence to make artificial intelligence.

The 29-year-old computer scientist creates software for a health care start-up that records and summarizes patient visits for doctors, freeing them from hours spent typing up clinical notes.

To do so, Mr. Giorgi has his own timesaving helper: an A.I. coding assistant. He taps a few keys and the software tool suggests the rest of the line of code. It can also recommend changes, fetch data, identify bugs and run basic tests. Even though the A.I. makes some mistakes, it saves him up to an hour many days.

“I can’t imagine working without it now,” Mr. Giorgi said.

That sentiment is increasingly common among software developers, who are at the forefront of adopting A.I. agents, assistant programs tailored to help employees do their jobs in fields including customer service and manufacturing. The rapid improvement of the technology has been accompanied by dire warnings that A.I. could soon automate away millions of jobs — and software developers have been singled out as prime targets.

But the outlook for software developers is more likely evolution than extinction, according to experienced software engineers, industry analysts and academics. For decades, better tools have automated some coding tasks, but the demand for software and the people who make it has only increased.

A.I., they say, will accelerate that trend and level up the art and craft of software design.

“The skills software developers need will change significantly, but A.I. will not eliminate the need for them,” said Arnal Dayaratna, an analyst at IDC, a technology research firm. “Not anytime soon anyway.”

The outlook for software engineers offers a window into the impact that generative A.I. — the kind behind chatbots like OpenAI’s ChatGPT — is likely to have on knowledge workers across the economy, from doctors and lawyers to marketing managers and financial analysts. Predictions about the technology’s consequences vary widely, from wiping out whole swaths of the work force to hyper-charging productivity as an elixir for economic growth…(More)”.

Generative AI for data stewards: enhancing accuracy and efficiency in data governance


Paper by Ankush Reddy Sugureddy: “Data quality has become an essential component of organisational success in a world largely driven by data, where data analytics increasingly informs strategic decisions. Failure to improve data quality can lead to undesirable outcomes such as poor decisions, ineffective strategies, dysfunctional operations, lost commercial prospects, and the erosion of customer trust. As organisations shift their focus towards transformative methods such as generative artificial intelligence, several use cases emerge that have the potential to aid the improvement of data quality. Incorporating generative artificial intelligence into data governance frameworks can streamline procedures such as data classification, metadata management, and policy enforcement, which in turn reduces the workload of human data stewards and minimises the possibility of human error. To ensure compliance with legal standards such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), generative artificial intelligence may analyse enormous datasets, utilising machine learning algorithms to discover patterns, inconsistencies, and compliance issues…(More)”.
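To make the kind of automation the paper describes concrete, here is a minimal sketch of one small piece of such a pipeline: a rule-based scan that flags likely personal data (PII) in records for a data steward to review before a GDPR/CCPA audit. The column names, example rows, and regex patterns are illustrative assumptions, not taken from the paper; a production system would combine many more detectors (and, as the paper suggests, generative models) with human review.

```python
import re

# Hypothetical patterns a steward-assist tool might use to flag likely
# personal data before a GDPR/CCPA compliance review.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def flag_pii(records):
    """Return (row_index, column, kind) triples for string values
    that match one of the PII patterns above."""
    findings = []
    for i, row in enumerate(records):
        for col, value in row.items():
            for kind, pattern in PII_PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    findings.append((i, col, kind))
    return findings

# Illustrative records (invented for this sketch).
rows = [
    {"note": "call +1 (415) 555-0100 tomorrow", "status": "open"},
    {"note": "sent summary to jane.doe@example.org", "status": "closed"},
]
print(flag_pii(rows))  # → [(0, 'note', 'phone'), (1, 'note', 'email')]
```

The point of even a toy version like this is the division of labour the paper argues for: the machine does the exhaustive scanning, while the human steward adjudicates the flagged items.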

California Governor Launches New Digital Democracy Tool


Article by Phil Willon: “California Gov. Gavin Newsom on Sunday announced a new digital democracy initiative that will attempt to connect residents directly with government officials in times of disaster and allow them to express their concerns about matters affecting their day-to-day lives.

The web-based initiative, called Engaged California, will go live with a focus on aiding victims of the deadly wildfires in Pacific Palisades and Altadena who are struggling to recover. For example, comments shared via the online forum could potentially prompt government action regarding insurance coverage, building standards or efforts to require utilities to bury power lines underground.

In a written statement, Newsom described the pilot program as “a town hall for the modern era — where Californians share their perspectives, concerns, and ideas geared toward finding real solutions.”


“We’re starting this effort by more directly involving Californians in the LA firestorm response and recovery,” he added. “As we recover, reimagine, and rebuild Los Angeles, we will do it together.”

The Democrat’s administration has ambitious plans for the effort that go far beyond the wildfires. Engaged California is modeled after a program in Taiwan that became an essential bridge between the public and the government at the height of the COVID-19 pandemic. The Taiwanese government has relied on it to combat online political disinformation as well…(More)”.

The Missing Pieces in India’s AI Puzzle: Talent, Data, and R&D


Article by Anirudh Suri: “This paper explores the question of whether India specifically will be able to compete and lead in AI or whether it will remain relegated to a minor role in this global competition. The paper argues that if India is to meet its larger stated ambition of becoming a global leader in AI, it will need to fill significant gaps in at least three areas urgently: talent, data, and research. Putting these three missing pieces in place can help position India extremely well to compete in the global AI race.

India’s national AI mission (NAIM), also known as the IndiaAI Mission, was launched in 2024 and rightly notes that success in the AI race requires multiple pieces of the AI puzzle to be in place. Accordingly, it has laid out a plan across seven elements of the “AI stack”: computing/AI infrastructure, data, talent, research and development (R&D), capital, algorithms, and applications.

However, the focus thus far has practically been on only two elements: ensuring the availability of AI-focused hardware/compute and, to some extent, building Indic language models. India has not paid enough attention to, acted on, or put significant resources behind three other key enabling elements of AI competitiveness, namely data, talent, and R&D…(More)”.

Introduction to the Foundations and Regulation of Generative AI


Chapter by Philipp Hacker, Andreas Engel, Sarah Hammer and Brent Mittelstadt: “… introduces The Oxford Handbook of the Foundations and Regulation of Generative AI, outlining the key themes and questions surrounding the technical development, regulatory governance, and societal implications of generative AI. It highlights the historical context of generative AI, distinguishes it from traditional AI, and explores its diverse applications across multiple domains, including text, images, music, and scientific discovery. The discussion critically assesses whether generative AI represents a paradigm shift or a temporary hype. Furthermore, the chapter extensively surveys both emerging and established regulatory frameworks, including the EU AI Act, the GDPR, privacy and personality rights, and copyright, as well as global legal responses. We conclude that, for now, the “Old Guard” of legal frameworks regulates generative AI more tightly and effectively than the “Newcomers,” but that may change as the new laws fully kick in. The chapter concludes by mapping the structure of the Handbook…(More)”.

Advanced Flood Hub features for aid organizations and governments


Announcement by Alex Diaz: “Floods continue to devastate communities worldwide, and many are pursuing advancements in AI-driven flood forecasting, enabling faster, more efficient detection and response. Over the past few years, Google Research has focused on harnessing AI modeling and satellite imagery to dramatically accelerate the reliability of flood forecasting — while working with partners to expand coverage for people in vulnerable communities around the world.

Today, we’re rolling out new advanced features in Flood Hub designed to help experts understand flood risk in a given region via inundation history maps, and to understand how a given flood forecast on Flood Hub might propagate throughout a river basin. With the inundation history maps, Flood Hub expert users can view flood-risk areas in high resolution on the map even when no flood event is under way. This is useful in cases where our flood forecasting does not include real-time inundation maps, or for pre-planning humanitarian work. You can find more explanations about the inundation history maps and more in the Flood Hub Help Center…(More)”.

Patients’ Trust in Health Systems to Use Artificial Intelligence


Paper by Paige Nong and Jodyn Platt: “The growth and development of artificial intelligence (AI) in health care introduces a new set of questions about patient engagement and whether patients trust systems to use AI responsibly and safely. The answer to this question is embedded in patients’ experiences seeking care and trust in health systems. Meanwhile, the adoption of AI technology outpaces efforts to analyze patient perspectives, which are critical to designing trustworthy AI systems and ensuring patient-centered care.

We conducted a national survey of US adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. We also examined variables that may be associated with these attitudes, including knowledge of AI, trust, and experiences of discrimination in health care… Most respondents reported low trust in their health care system to use AI responsibly (65.8%) and low trust that their health care system would make sure an AI tool would not harm them (57.7%)…(More)”.

Regulatory Markets: The Future of AI Governance


Paper by Gillian K. Hadfield and Jack Clark: “Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. Regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator, are proposed. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers’ stated objectives…(More)”.

The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence


Handbook edited by Nathalie A. Smuha: “…provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI’s impact on society and how it should be regulated…(More)”.