Human-Centered AI


Book edited by Catherine Régis, Jean-Louis Denis, Maria Luciana Axente, and Atsuo Kishimoto: “Artificial intelligence (AI) permeates our lives in a growing number of ways. Relying solely on traditional, technology-driven approaches won’t suffice to develop and deploy that technology in a way that truly enhances human experience. A new concept is desperately needed to reach that goal. That concept is Human-Centered AI (HCAI).

With 29 captivating chapters, this book delves deep into the realm of HCAI. In Section I, it demystifies HCAI, exploring cutting-edge trends and approaches in its study, including the moral landscape of Large Language Models. Section II looks at how HCAI is viewed in different institutions—like the justice system, health system, and higher education—and how it could affect them. It examines how crafting HCAI could lead to better work. Section III offers practical insights and successful strategies to transform HCAI from theory to reality, for example, examining how regulatory sandboxes could ensure the development of age-appropriate AI for kids. Finally, decision-makers and practitioners provide invaluable perspectives throughout the book, showcasing the real-world significance of its articles beyond academia.

Authored by experts from a variety of backgrounds, sectors, disciplines, and countries, this engaging book offers a fascinating exploration of Human-Centered AI. Whether you’re new to the subject or not, a decision-maker, a practitioner or simply an AI user, this book will help you gain a better understanding of HCAI’s impact on our societies, and of why and how AI should really be developed and deployed in a human-centered future…(More)”.

DC launched an AI tool for navigating the city’s open data


Article by Kaela Roeder: “In a move echoing local governments’ increasing attention to generative artificial intelligence across the country, the nation’s capital now aims to make navigating its open data easier through a new public beta pilot.

DC Compass, launched in March, uses generative AI to answer user questions and create maps from open data sets, ranging from the district’s population to what different trees are planted in the city. The Office of the Chief Technology Officer (OCTO) partnered with the geographic information system (GIS) technology company Esri, which has an office in Vienna, Virginia, to create the new tool.

This debut follows Mayor Muriel Bowser’s signing of DC’s AI Values and Strategic Plan in February. The order requires agencies to assess if using AI is in alignment with the values it sets forth, including that there’s a clear benefit to people; a plan for “meaningful accountability” for the tool; and transparency, sustainability, privacy and equity at the forefront of deployment.

These values are key when launching something like DC Compass, said Michael Rupert, the interim chief technology officer for digital services at the Office of the Chief Technology Officer.

“The way Mayor Bowser rolled out the mayor’s order and this value statement, I think gives residents and businesses a little more comfort that we aren’t just writing a check and seeing what happens,” Rupert said. “That we’re actually methodically going about it in a responsible way, both morally and fiscally.”…(More)”.

DC COMPASS IN ACTION. (SCREENSHOT/COURTESY OCTO)

The Potential of Artificial Intelligence for the SDGs and Official Statistics


Report by Paris21: “Artificial Intelligence (AI) and its impact on people’s lives is growing rapidly. AI is already leading to significant developments from healthcare to education, which can contribute to the efficient monitoring and achievement of the Sustainable Development Goals (SDGs), a call to action to address the world’s greatest challenges. AI is also raising concerns because, if not addressed carefully, its risks may outweigh its benefits. As a result, AI is garnering increasing attention from National Statistical Offices (NSOs) and the official statistics community as they are challenged to produce more comprehensive, timely, and high-quality data for decision-making with limited resources in a rapidly changing world of data and technologies and in light of complex and converging global issues from pandemics to climate change. This paper has been prepared as an input to the “Data and AI for Sustainable Development: Building a Smarter Future” Conference, organized in partnership with The Partnership in Statistics for Development in the 21st Century (PARIS21), the World Bank and the International Monetary Fund (IMF). Building on case studies that examine the use of AI by NSOs, the paper presents the benefits and risks of AI with a focus on NSO operations related to sustainable development. The objective is to spark discussions and to initiate a dialogue around how AI can be leveraged to inform decisions and take action to better monitor and achieve sustainable development, while mitigating its risks…(More)”.

Generative AI in Journalism


Report by Nicholas Diakopoulos et al: “The introduction of ChatGPT by OpenAI in late 2022 captured the imagination of the public—and the news industry—with the potential of generative AI to upend how people create and consume media. Generative AI is a type of artificial intelligence technology that can create new content, such as text, images, audio, video, or other media, based on the data it has been trained on and according to written prompts provided by users. ChatGPT is the chat-based user interface that made the power and potential of generative AI salient to a wide audience, reaching 100 million users within two months of its launch.

Although similar technology had been around, by late 2022 it was suddenly working, spurring its integration into various products and presenting not only a host of opportunities for productivity and new experiences but also some serious concerns about accuracy, provenance and attribution of source information, and the increased potential for creating misinformation.

This report serves as a snapshot of how the news industry has grappled with the initial promises and challenges of generative AI towards the end of 2023. The sample of participants reflects how some of the more savvy and experienced members of the profession are reacting to the technology.

Based on participants’ responses, the authors found that generative AI is already changing work structure and organization, even as it triggers ethical concerns around use. Here are some key takeaways:

  • Applications in News Production. The most predominant current use cases for generative AI include various forms of textual content production, information gathering and sensemaking, multimedia content production, and business uses.
  • Changing Work Structure and Organization. There are a host of new roles emerging to grapple with the changes introduced by generative AI including for leadership, editorial, product, legal, and engineering positions.
  • Work Redesign. There is an unmet opportunity to design new interfaces to support journalistic work with generative AI, in particular to enable the human oversight needed for the efficient and confident checking and verification of outputs…(More)”.

Data Authenticity, Consent, and Provenance for AI Are All Broken: What Will It Take to Fix Them?


Article by Shayne Longpre et al: “New AI capabilities are owed in large part to massive, widely sourced, and underdocumented training data collections. Dubious collection practices have spurred crises in data transparency, authenticity, consent, privacy, representation, bias, copyright infringement, and the overall development of ethical and trustworthy AI systems. In response, AI regulation is emphasizing the need for training data transparency to understand AI model limitations. Based on a large-scale analysis of the AI training data landscape and existing solutions, we identify the missing infrastructure to facilitate responsible AI development practices. We explain why existing tools for data authenticity, consent, and documentation alone are unable to solve the core problems facing the AI community, and outline how policymakers, developers, and data creators can facilitate responsible AI development, through universal data provenance standards…(More)”.

AI and the Future of Government: Unexpected Effects and Critical Challenges


Policy Brief by Tiago C. Peixoto, Otaviano Canuto, and Luke Jordan: “Based on observable facts, this policy paper explores some of the less-acknowledged yet critically important ways in which artificial intelligence (AI) may affect the public sector and its role. Our focus is on those areas where AI’s influence might be understated currently, but where it has substantial implications for future government policies and actions.

We identify four main areas of impact that could redefine the public sector role, require new answers from it, or both. These areas are the emergence of a new language-based digital divide, jobs displacement in the public administration, disruptions in revenue mobilization, and declining government responsiveness.

This discussion not only identifies critical areas but also underscores the importance of transcending conventional approaches in tackling them. As we examine these challenges, we shed light on their significance, seeking to inform policymakers and stakeholders about the nuanced ways in which AI may quietly, yet profoundly, alter the public sector landscape…(More)”.

AI for Good: Applications in Sustainability, Humanitarian Action, and Health


Book by Juan M. Lavista Ferres and William B. Weeks: “…an insightful and fascinating discussion of how one of the world’s most recognizable software companies is tackling intractable social problems with the power of artificial intelligence (AI). In the book, you’ll learn about how climate change, illness and disease, and challenges to fundamental human rights are all being fought using replicable methods and reusable AI code.

The authors also provide:

  • Easy-to-follow, non-technical explanations of what AI is and how it works
  • Examinations of how healthcare is being improved, climate change is being addressed, and humanitarian aid is being facilitated around the world with AI
  • Discussions of the future of AI in the realm of social benefit organizations and efforts

An essential guide to impactful social change with artificial intelligence, AI for Good is a must-read resource for technical and non-technical professionals interested in AI’s social potential, as well as policymakers, regulators, NGO professionals, and non-profit volunteers…(More)”.

The Cambridge Handbook of Facial Recognition in the Modern State


Book edited by Rita Matulionyte and Monika Zalnieriute: “In situations ranging from border control to policing and welfare, governments are using automated facial recognition technology (FRT) to collect taxes, prevent crime, police cities and control immigration. FRT involves the processing of a person’s facial image, usually for identification, categorisation or counting. This ambitious handbook brings together a diverse group of legal, computer, communications, and social and political science scholars to shed light on how FRT has been developed, used by public authorities, and regulated in different jurisdictions across five continents. Informed by their experiences working on FRT across the globe, chapter authors analyse the increasing deployment of FRT in public and private life. The collection argues for the passage of new laws, rules, frameworks, and approaches to prevent harms of FRT in the modern state and advances the debate on scrutiny of power and accountability of public authorities which use FRT…(More)”.

AI Accountability Policy Report


Report by NTIA: “Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.

Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm….


The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs…(More)”.

Graphic showing the AI Accountability Chain model

A.I.-Generated Garbage Is Polluting Our Culture


Article by Erik Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.