A.I. Is Prompting an Evolution, Not an Extinction, for Coders


Article by Steve Lohr: “John Giorgi uses artificial intelligence to make artificial intelligence.

The 29-year-old computer scientist creates software for a health care start-up that records and summarizes patient visits for doctors, freeing them from hours spent typing up clinical notes.

To do so, Mr. Giorgi has his own timesaving helper: an A.I. coding assistant. He taps a few keys and the software tool suggests the rest of the line of code. It can also recommend changes, fetch data, identify bugs and run basic tests. Even though the A.I. makes some mistakes, it saves him up to an hour many days.

“I can’t imagine working without it now,” Mr. Giorgi said.

That sentiment is increasingly common among software developers, who are at the forefront of adopting A.I. agents, assistant programs tailored to help employees do their jobs in fields including customer service and manufacturing. The rapid improvement of the technology has been accompanied by dire warnings that A.I. could soon automate away millions of jobs — and software developers have been singled out as prime targets.

But the outlook for software developers is more likely evolution than extinction, according to experienced software engineers, industry analysts and academics. For decades, better tools have automated some coding tasks, but the demand for software and the people who make it has only increased.

A.I., they say, will accelerate that trend and level up the art and craft of software design.

“The skills software developers need will change significantly, but A.I. will not eliminate the need for them,” said Arnal Dayaratna, an analyst at IDC, a technology research firm. “Not anytime soon anyway.”

The outlook for software engineers offers a window into the impact that generative A.I. — the kind behind chatbots like OpenAI’s ChatGPT — is likely to have on knowledge workers across the economy, from doctors and lawyers to marketing managers and financial analysts. Predictions about the technology’s consequences vary widely, from wiping out whole swaths of the work force to hyper-charging productivity as an elixir for economic growth…(More)”.

Advanced Flood Hub features for aid organizations and governments


Announcement by Alex Diaz: “Floods continue to devastate communities worldwide, and many are pursuing advancements in AI-driven flood forecasting, enabling faster, more efficient detection and response. Over the past few years, Google Research has focused on harnessing AI modeling and satellite imagery to dramatically improve the reliability of flood forecasting — while working with partners to expand coverage for people in vulnerable communities around the world.

Today, we’re rolling out new advanced features in Flood Hub designed to allow experts to understand flood risk in a given region via inundation history maps, and to understand how a given flood forecast on Flood Hub might propagate throughout a river basin. With the inundation history maps, Flood Hub expert users can view flood risk areas in high resolution on the map, regardless of whether a flood is currently occurring. This is useful in cases where our flood forecasting does not include real-time inundation maps, and for pre-planning of humanitarian work. You can find more explanations about the inundation history maps and more in the Flood Hub Help Center…(More)”.

Patients’ Trust in Health Systems to Use Artificial Intelligence


Paper by Paige Nong and Jodyn Platt: “The growth and development of artificial intelligence (AI) in health care introduces a new set of questions about patient engagement and whether patients trust systems to use AI responsibly and safely. The answer to this question is embedded in patients’ experiences seeking care and trust in health systems. Meanwhile, the adoption of AI technology outpaces efforts to analyze patient perspectives, which are critical to designing trustworthy AI systems and ensuring patient-centered care.

We conducted a national survey of US adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. We also examined variables that may be associated with these attitudes, including knowledge of AI, trust, and experiences of discrimination in health care….Most respondents reported low trust in their health care system to use AI responsibly (65.8%) and low trust that their health care system would make sure an AI tool would not harm them (57.7%)…(More)”.

Regulatory Markets: The Future of AI Governance


Paper by Gillian K. Hadfield and Jack Clark: “Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. The paper proposes regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers’ stated objectives…(More)”.

The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence


Handbook edited by Nathalie A. Smuha: “…provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI’s impact on society and how it should be regulated…(More)”.

AI Upgrades the Internet of Things


Article by R. Colin Johnson: “Artificial Intelligence (AI) is renovating the fast-growing Internet of Things (IoT) by migrating AI innovations, including deep neural networks, Generative AI, and large language models (LLMs), from power-hungry datacenters to the low-power Artificial Intelligence of Things (AIoT). At the network’s edge there are already billions of connected devices today, with a trillion more predicted by 2035 (according to Arm, which licenses many of the processor designs these devices use).

The emerging details of this AIoT development period got a boost from ACM Transactions on Sensor Networks, which recently accepted for publication “Artificial Intelligence of Things: A Survey,” a paper authored by Mi Zhang of Ohio State University and collaborators at Michigan State University, the University of Southern California, and the University of California, Los Angeles. The survey is an in-depth reference to the latest AIoT research…

The survey addresses the subject of AIoT with AI-empowered sensing modalities including motion, wireless, vision, acoustic, multi-modal, ear-bud, and GenAI-assisted sensing. The computing section covers on-device inference engines, on-device learning, methods of training by partitioning workloads among heterogeneous accelerators, offloading privacy functions, federated learning that distributes workloads while preserving anonymity, integration with LLMs, and AI-empowered agents. Connection technologies discussed include Internet over Wi-Fi and over cellular/mobile networks, visible light communication systems, LoRa (long-range chirp spread-spectrum connections), and wide-area networks.
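Among the computing techniques the survey covers, federated learning is perhaps the most distinctive to AIoT: devices train on local data and share only model updates with a server, which averages them. As an illustration only (not code from the survey), here is a minimal sketch of the federated averaging idea, using a simple linear model in place of a real on-device neural network; all names and parameters are hypothetical:

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    # One client's on-device training step: gradient descent on a
    # least-squares linear model (a stand-in for a real network).
    # The raw data never leaves the device.
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    # Server step: aggregate only the returned model weights,
    # weighted by each client's data size (the FedAvg pattern).
    sizes = np.array([len(y) for _, y in clients])
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Synthetic demo: five clients, each holding private data drawn
# from the same underlying linear relationship.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(50):          # 50 communication rounds
    w = federated_average(w, clients)
```

After enough rounds the global model converges toward the shared underlying relationship even though the server never sees any client's raw data — the privacy-preserving property the survey highlights.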

A sampling of domain-specific AIoTs reviewed in the survey include AIoT systems for healthcare and well-being, for smart speakers, for video streaming, for video analytics, for autonomous driving, for drones, for satellites, for agriculture, for biology, and for artificial reality, virtual reality, and mixed reality…(More)”.


Intellectual property issues in artificial intelligence trained on scraped data


OECD Report: “Recent technological advances in artificial intelligence (AI), especially the rise of generative AI, have raised questions regarding the intellectual property (IP) landscape. As the demand for AI training data surges, certain data collection methods give rise to concerns about the protection of IP and other rights. This report provides an overview of key issues at the intersection of AI and some IP rights. It aims to facilitate a greater understanding of data scraping — a primary method for obtaining the AI training data needed to develop many large language models. It analyses data scraping techniques, identifies key stakeholders, and reviews legal and regulatory responses worldwide. Finally, it offers preliminary considerations and potential policy approaches to help guide policymakers in navigating these issues, ensuring that AI’s innovative potential is unleashed while protecting IP and other rights…(More)”.

Building AI for the pluralistic society


Paper by Aida Davani and Vinodkumar Prabhakaran: “Modern artificial intelligence (AI) systems rely on input from people. Human feedback helps train models to perform useful tasks, guides them toward safe and responsible behavior, and is used to assess their performance. While hailing recent advances in AI, we should also ask: which humans are we actually talking about? For AI to be most beneficial, it should reflect and respect the diverse tapestry of values, beliefs, and perspectives present in the pluralistic world in which we live, not just a single “average” or majority viewpoint. Diversity in perspectives is especially relevant when AI systems perform subjective tasks, such as deciding whether a response will be perceived as helpful, offensive, or unsafe. For instance, what one value system deems offensive may be perfectly acceptable within another set of values.

Since divergence in perspectives often aligns with socio-cultural and demographic lines, preferentially capturing certain groups’ perspectives over others in data may result in disparities in how well AI systems serve different social groups. For instance, we previously demonstrated that simply taking a majority vote from human annotations may obfuscate valid divergence in perspectives across social groups, inadvertently marginalizing minority perspectives and yielding systems that perform less reliably for the groups marginalized in the data. How AI systems should deal with such diversity in perspectives depends on the context in which they are used. However, current models lack a systematic way to recognize and handle such contexts.

With this in mind, here we describe our ongoing efforts in pursuit of capturing diverse perspectives and building AI for the pluralistic society in which we live… (More)”.

AI crawler wars threaten to make the web more closed for everyone


Article by Shayne Longpre: “We often take the internet for granted. It’s an ocean of information at our fingertips — and it simply works. But this system relies on swarms of “crawlers” — bots that roam the web, visit millions of websites every day, and report what they see. This is how Google powers its search engine, how Amazon sets competitive prices, and how Kayak aggregates travel listings. Beyond the world of commerce, crawlers are essential for monitoring web security, enabling accessibility tools, and preserving historical archives. Academics, journalists, and civil society organizations also rely on them to conduct crucial investigative research.

Crawlers are endemic. Now representing half of all internet traffic, they will soon outpace human traffic. This unseen subway of the web ferries information from site to site, day and night. And as of late, they serve one more purpose: Companies such as OpenAI use web-crawled data to train their artificial intelligence systems, like ChatGPT. 

Understandably, websites are now fighting back for fear that this invasive species — AI crawlers — will help displace them. But there’s a problem: this pushback is also threatening the transparency and open borders of the web that allow non-AI applications to flourish. Unless we are thoughtful about how we fix this, the web will increasingly be fortified with logins, paywalls, and access tolls that inhibit not just AI but the biodiversity of real users and useful crawlers…(More)”.
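The most common form this pushback takes is the web's long-standing crawler-control convention, robots.txt. As a hedged illustration (a hypothetical site's file, not one from the article), a publisher can bar known AI-training crawlers such as OpenAI's GPTBot or Common Crawl's CCBot while still admitting search indexers:

```text
# Hypothetical robots.txt: block AI-training crawlers,
# keep search indexing open.

# OpenAI's training-data crawler
User-agent: GPTBot
Disallow: /

# Common Crawl's crawler
User-agent: CCBot
Disallow: /

# Search indexing remains allowed
User-agent: Googlebot
Allow: /
```

Note that robots.txt is purely advisory — compliance is voluntary — which is one reason sites are escalating to the logins, paywalls, and access tolls the article warns about.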

Sandboxes for AI


Report by Datasphere Initiative: “The Sandboxes for AI report explores the role of regulatory sandboxes in the development and governance of artificial intelligence. Originally presented as a working paper at the Global Sandbox Forum Inaugural Meeting in July 2024, the report was further refined through expert consultations and an online roundtable in December 2024. It examines sandboxes that have been announced, are under development, or have been completed, identifying common patterns in their creation, timing, and implementation. By providing insights into why and how regulators and companies should consider AI sandboxes, the report serves as a strategic guide for fostering responsible innovation.

In a rapidly evolving AI landscape, traditional regulatory processes often struggle to keep pace with technological advancements. Sandboxes offer a flexible and iterative approach, allowing policymakers to test and refine AI governance models in a controlled environment. The report identifies 66 AI, data, or technology-related sandboxes globally, with 31 specifically designed for AI innovation across 44 countries. These initiatives focus on areas such as machine learning, data-driven solutions, and AI governance, helping policymakers address emerging challenges while ensuring ethical and transparent AI development…(More)”.