California Governor Launches New Digital Democracy Tool


Article by Phil Willon: “California Gov. Gavin Newsom on Sunday announced a new digital democracy initiative that will attempt to connect residents directly with government officials in times of disaster and allow them to express their concerns about matters affecting their day-to-day lives.

The web-based initiative, called Engaged California, will go live with a focus on aiding victims of the deadly wildfires in Pacific Palisades and Altadena who are struggling to recover. For example, comments shared via the online forum could potentially prompt government action regarding insurance coverage, building standards or efforts to require utilities to bury power lines underground.

In a written statement, Newsom described the pilot program as “a town hall for the modern era — where Californians share their perspectives, concerns, and ideas geared toward finding real solutions.”


“We’re starting this effort by more directly involving Californians in the LA firestorm response and recovery,” he added. “As we recover, reimagine, and rebuild Los Angeles, we will do it together.”

The Democrat’s administration has ambitious plans for the effort that go far beyond the wildfires. Engaged California is modeled after a program in Taiwan that became an essential bridge between the public and the government at the height of the COVID-19 pandemic. The Taiwanese government has relied on it to combat online political disinformation as well…(More)”.

The Missing Pieces in India’s AI Puzzle: Talent, Data, and R&D


Article by Anirudh Suri: “This paper explores the question of whether India specifically will be able to compete and lead in AI or whether it will remain relegated to a minor role in this global competition. The paper argues that if India is to meet its larger stated ambition of becoming a global leader in AI, it will need to fill significant gaps in at least three areas urgently: talent, data, and research. Putting these three missing pieces in place can help position India extremely well to compete in the global AI race.

India’s national AI mission (NAIM), also known as the IndiaAI Mission, was launched in 2024 and rightly notes that success in the AI race requires multiple pieces of the AI puzzle to be in place. Accordingly, it has laid out a plan across seven elements of the “AI stack”: computing/AI infrastructure, data, talent, research and development (R&D), capital, algorithms, and applications.

However, the focus thus far has in practice been on only two elements: ensuring the availability of AI-focused hardware/compute and, to some extent, building Indic language models. India has not paid enough attention to, acted on, or put significant resources behind three other key enabling elements of AI competitiveness, namely data, talent, and R&D…(More)”.

Introduction to the Foundations and Regulation of Generative AI


Chapter by Philipp Hacker, Andreas Engel, Sarah Hammer and Brent Mittelstadt: “… introduces The Oxford Handbook of the Foundations and Regulation of Generative AI, outlining the key themes and questions surrounding the technical development, regulatory governance, and societal implications of generative AI. It highlights the historical context of generative AI, distinguishes it from traditional AI, and explores its diverse applications across multiple domains, including text, images, music, and scientific discovery. The discussion critically assesses whether generative AI represents a paradigm shift or a temporary hype. Furthermore, the chapter extensively surveys both emerging and established regulatory frameworks, including the EU AI Act, the GDPR, privacy and personality rights, and copyright, as well as global legal responses. We conclude that, for now, the “Old Guard” of legal frameworks regulates generative AI more tightly and effectively than the “Newcomers,” but that may change as the new laws fully kick in. The chapter concludes by mapping the structure of the Handbook…(More)”

Advanced Flood Hub features for aid organizations and governments


Announcement by Alex Diaz: “Floods continue to devastate communities worldwide, and many are pursuing advancements in AI-driven flood forecasting, enabling faster, more efficient detection and response. Over the past few years, Google Research has focused on harnessing AI modeling and satellite imagery to dramatically improve the reliability of flood forecasting — while working with partners to expand coverage for people in vulnerable communities around the world.

Today, we’re rolling out new advanced features in Flood Hub designed to let experts understand flood risk in a given region via inundation history maps, and to see how a given flood forecast on Flood Hub might propagate through a river basin. With the inundation history maps, Flood Hub expert users can view flood-risk areas in high resolution on the map even when no flood event is under way. This is useful where our flood forecasting does not include real-time inundation maps, or for pre-planning of humanitarian work. You can find more about the inundation history maps and other features in the Flood Hub Help Center…(More)”.

Patients’ Trust in Health Systems to Use Artificial Intelligence


Paper by Paige Nong and Jodyn Platt: “The growth and development of artificial intelligence (AI) in health care introduces a new set of questions about patient engagement and whether patients trust systems to use AI responsibly and safely. The answer to this question is embedded in patients’ experiences seeking care and trust in health systems. Meanwhile, the adoption of AI technology outpaces efforts to analyze patient perspectives, which are critical to designing trustworthy AI systems and ensuring patient-centered care.

We conducted a national survey of US adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. We also examined variables that may be associated with these attitudes, including knowledge of AI, trust, and experiences of discrimination in health care….Most respondents reported low trust in their health care system to use AI responsibly (65.8%) and low trust that their health care system would make sure an AI tool would not harm them (57.7%)…(More)”.

Regulatory Markets: The Future of AI Governance


Paper by Gillian K. Hadfield and Jack Clark: “Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. We propose regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers’ stated objectives…(More)”.

The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence


Handbook edited by Nathalie A. Smuha: “…provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI’s impact on society and how it should be regulated…(More)”.

AI Upgrades the Internet of Things


Article by R. Colin Johnson: “Artificial Intelligence (AI) is renovating the fast-growing Internet of Things (IoT) by migrating AI innovations, including deep neural networks, Generative AI, and large language models (LLMs), from power-hungry datacenters to the low-power Artificial Intelligence of Things (AIoT). At the network’s edge there are already billions of connected devices today, with a predicted trillion more by 2035 (according to Arm, which licenses many of the processors in these devices).

The emerging details of this AIoT development period got a boost from ACM Transactions on Sensor Networks, which recently accepted for publication “Artificial Intelligence of Things: A Survey,” a paper authored by Mi Zhang of Ohio State University and collaborators at Michigan State University, the University of Southern California, and the University of California, Los Angeles. The survey is an in-depth reference to the latest AIoT research…

The survey covers AI-empowered sensing modalities including motion, wireless, vision, acoustic, multi-modal, ear-bud, and GenAI-assisted sensing. The computing section covers on-device inference engines, on-device learning, methods of training that partition workloads among heterogeneous accelerators, offloading of privacy functions, federated learning that distributes workloads while preserving anonymity, integration with LLMs, and AI-empowered agents. Connection technologies discussed include Internet over Wi-Fi and over cellular/mobile networks, visible light communication systems, LoRa (long-range chirp spread-spectrum connections), and wide-area networks.
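To make the federated learning idea concrete, here is a minimal sketch of federated averaging (FedAvg) in Python/NumPy, the kind of scheme the survey’s computing section describes: edge devices train on private local data and share only model weights with a coordinating server. The linear model, synthetic device data, and hyperparameters are illustrative assumptions, not code from the survey.

```python
# Minimal FedAvg sketch: devices keep raw data local, share only weights.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of plain gradient descent on one device's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, devices):
    """One communication round: only model weights leave each device."""
    updates = [local_update(global_w, X, y) for X, y in devices]
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    # Average the local models, weighted by each device's dataset size.
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five simulated edge devices, each with private data
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):  # twenty federated rounds
    w = fedavg_round(w, devices)
print("learned weights:", w)  # approaches [2, -1]; no raw data was shared
```

The design point is that the server only ever sees weight vectors, never raw sensor data; production AIoT systems layer compression, secure aggregation, and differential privacy on top of this basic loop.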

A sampling of domain-specific AIoT systems reviewed in the survey includes AIoT for healthcare and well-being, for smart speakers, for video streaming, for video analytics, for autonomous driving, for drones, for satellites, for agriculture, for biology, and for augmented, virtual, and mixed reality…(More)”.

Intellectual property issues in artificial intelligence trained on scraped data


OECD Report: “Recent technological advances in artificial intelligence (AI), especially the rise of generative AI, have raised questions regarding the intellectual property (IP) landscape. As the demand for AI training data surges, certain data collection methods give rise to concerns about the protection of IP and other rights. This report provides an overview of key issues at the intersection of AI and some IP rights. It aims to facilitate a greater understanding of data scraping — a primary method for obtaining the AI training data needed to develop many large language models. It analyses data scraping techniques, identifies key stakeholders, and surveys legal and regulatory responses worldwide. Finally, it offers preliminary considerations and potential policy approaches to help guide policymakers in navigating these issues, ensuring that AI’s innovative potential is unleashed while protecting IP and other rights…(More)”.
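One concrete, compliance-minded step that discussions like the report’s often point to is consulting a site’s robots.txt before scraping pages for a training corpus. Below is a minimal sketch using only the Python standard library; the URL and crawler name are hypothetical examples, not anything from the report.

```python
# Check a site's crawl directives before fetching a page for a corpus.
from urllib import robotparser

USER_AGENT = "example-research-crawler"  # hypothetical crawler name

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the site's robots.txt over the network

url = "https://example.com/articles/some-page.html"
if rp.can_fetch(USER_AGENT, url):
    print("robots.txt permits fetching", url)
else:
    print("robots.txt disallows fetching", url)
```

Note that robots.txt governs crawler etiquette, not copyright: a page a crawler may fetch can still be protected by IP and other rights, which is precisely the gap the report examines.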

Building AI for the pluralistic society


Paper by Aida Davani and Vinodkumar Prabhakaran: “Modern artificial intelligence (AI) systems rely on input from people. Human feedback helps train models to perform useful tasks, guides them toward safe and responsible behavior, and is used to assess their performance. While hailing recent AI advances, we should also ask: which humans are we actually talking about? For AI to be most beneficial, it should reflect and respect the diverse tapestry of values, beliefs, and perspectives present in the pluralistic world in which we live, not just a single “average” or majority viewpoint. Diversity in perspectives is especially relevant when AI systems perform subjective tasks, such as deciding whether a response will be perceived as helpful, offensive, or unsafe. For instance, what one value system deems offensive may be perfectly acceptable within another set of values.

Since divergence in perspectives often aligns with socio-cultural and demographic lines, preferentially capturing certain groups’ perspectives over others in data may result in disparities in how well AI systems serve different social groups. For instance, we previously demonstrated that simply taking a majority vote over human annotations may obfuscate valid divergence in perspectives across social groups, inadvertently marginalizing minority perspectives and yielding models that perform less reliably for groups marginalized in the data. How AI systems should deal with such diversity in perspectives depends on the context in which they are used. However, current models lack a systematic way to recognize and handle such contexts.
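As a toy illustration of that majority-vote failure mode (not the authors’ code; the annotator groups and labels below are synthetic), consider nine annotations of a single text item:

```python
# Majority vote vs. per-group tallies over synthetic annotations.
from collections import Counter

# Each annotation: (annotator_group, label) for one text item.
annotations = [
    ("group_a", "offensive"), ("group_a", "offensive"), ("group_a", "offensive"),
    ("group_a", "offensive"), ("group_a", "offensive"), ("group_a", "offensive"),
    ("group_b", "acceptable"), ("group_b", "acceptable"), ("group_b", "acceptable"),
]

majority = Counter(label for _, label in annotations).most_common(1)[0][0]
print("majority label:", majority)  # 'offensive' -- group_b's view disappears

# A per-group view keeps the divergence visible instead of erasing it.
by_group = {}
for group, label in annotations:
    by_group.setdefault(group, Counter())[label] += 1
for group, counts in by_group.items():
    total = sum(counts.values())
    print(group, {lbl: f"{n/total:.0%}" for lbl, n in counts.items()})
```

The majority label erases group_b entirely, while the per-group tallies preserve the disagreement as a signal; pluralistic approaches work from the latter rather than collapsing to the former.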

With this in mind, here we describe our ongoing efforts in pursuit of capturing diverse perspectives and building AI for the pluralistic society in which we live… (More)”.