The AI That Could Heal a Divided Internet


Article by Billy Perrigo: “In the 1990s and early 2000s, technologists made the world a grand promise: new communications technologies would strengthen democracy, undermine authoritarianism, and lead to a new era of human flourishing. But today, few people would agree that the internet has lived up to that lofty goal. 

Today, on social media platforms, content tends to be ranked by how much engagement it receives. Over the last two decades, politics, the media, and culture have all been reshaped to meet a single, overriding incentive: posts that provoke an emotional response often rise to the top.

Efforts to improve the health of online spaces have long focused on content moderation, the practice of detecting and removing bad content. Tech companies hired workers and built AI to identify hate speech, incitement to violence, and harassment. That worked imperfectly, but it stopped the worst toxicity from flooding our feeds. 

There was one problem: while these AIs helped remove the bad, they didn’t elevate the good. “Do you see an internet that is working, where we are having conversations that are healthy or productive?” asks Yasmin Green, the CEO of Google’s Jigsaw unit, which was founded in 2010 with a remit to address threats to open societies. “No. You see an internet that is driving us further and further apart.”

What if there were another way? 

Jigsaw believes it has found one. On Monday, the Google subsidiary revealed a new set of AI tools, or classifiers, that can score posts based on the likelihood that they contain good content: Is a post nuanced? Does it contain evidence-based reasoning? Does it share a personal story, or foster human compassion? By returning a numerical score (from 0 to 1) representing the likelihood of a post containing each of those virtues and others, these new AI tools could allow the designers of online spaces to rank posts in a new way. Instead of posts that receive the most likes or comments rising to the top, platforms could—in an effort to foster a better community—choose to put the most nuanced comments, or the most compassionate ones, first…(More)”.
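
To make the ranking mechanics concrete, here is a minimal Python sketch of how a platform might reorder posts by such attribute scores. The attribute names, the 0-to-1 scores, and the weights below are illustrative assumptions, not Jigsaw's actual classifiers or API:

```python
# Illustrative sketch: re-ranking posts by per-attribute quality scores
# rather than engagement. The 0-1 values below are hypothetical stand-ins
# for the likelihoods a classifier like those described above might return.

POSTS = {
    "You're wrong and everyone knows it.":
        {"nuance": 0.05, "compassion": 0.02, "personal_story": 0.01},
    "I used to think that too, until my own experience changed my view.":
        {"nuance": 0.60, "compassion": 0.70, "personal_story": 0.90},
    "Both sides have a point; the evidence cuts both ways.":
        {"nuance": 0.85, "compassion": 0.30, "personal_story": 0.05},
}

def quality_score(scores: dict, weights: dict) -> float:
    """Collapse per-attribute likelihoods into a single ranking score."""
    return sum(weights.get(attr, 0.0) * value for attr, value in scores.items())

# A community that wants to surface compassion and nuance ahead of raw
# engagement might weight the attributes like this (hypothetical values):
WEIGHTS = {"nuance": 0.4, "compassion": 0.4, "personal_story": 0.2}

for post in sorted(POSTS, key=lambda p: quality_score(POSTS[p], WEIGHTS), reverse=True):
    print(f"{quality_score(POSTS[post], WEIGHTS):.2f}  {post}")
```

In practice the scores would come from model calls rather than a lookup table, and a platform could blend such a quality signal with engagement signals rather than replace them outright.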

United against algorithms: a primer on disability-led struggles against algorithmic injustice


Report by Georgia van Toorn: “Algorithmic decision-making (ADM) poses urgent concerns regarding the rights and entitlements of people with disability from all walks of life. As ADM systems become increasingly embedded in government decision-making processes, there is a heightened risk of harm, such as unjust denial of benefits or inadequate support, accentuated by the expanding reach of state surveillance.

ADM systems have far-reaching impacts on disabled lives and life chances. Despite this, they are often designed without the input of people with lived experience of disability, for purposes that do not align with the goals of full rights, participation, and justice for disabled people.

This primer explores how people with disability are collectively responding to the threats posed by algorithmic, data-driven systems – specifically their public sector applications. It provides an introductory overview of the topic, exploring the approaches, obstacles, and actions taken by people with disability in their ‘algoactivist’ struggles…(More)”.

The impact of generative artificial intelligence on socioeconomic inequalities and policy making


Paper by Valerio Capraro et al: “Generative artificial intelligence, including chatbots like ChatGPT, has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the probable impacts of generative AI on four critical domains: work, education, health, and information. Our goal is to warn about how generative AI could worsen existing inequalities while illuminating directions for using AI to resolve pervasive social problems. Generative AI in the workplace can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning but may widen the digital divide. In healthcare, it improves diagnostics and accessibility but could deepen pre-existing inequalities. For information, it democratizes content creation and access but also dramatically expands the production and proliferation of misinformation. Each section covers a specific topic, evaluates existing research, identifies critical gaps, and recommends research directions. We conclude with a section highlighting the role of policymaking to maximize generative AI’s potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We contend that these policies should promote shared prosperity through the advancement of generative AI. We suggest several concrete policies to encourage further research and debate. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI…(More)”.

The tech industry can’t agree on what open-source AI means. That’s a problem.


Article by Edd Gent: “Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models.

Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions. 

But there’s a fundamental problem—no one can agree on what “open-source AI” means. 

On the face of it, open-source AI promises a future where anyone can take part in the technology’s development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what even is it? What makes an AI model open source, and what disqualifies it?

The answers could have significant ramifications for the future of the technology. Until the tech industry has settled on a definition, powerful companies can easily bend the concept to suit their own needs, and it could become a tool to entrench the dominance of today’s leading players.

Entering this fray is the Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source. Founded in 1998, the nonprofit is the custodian of the Open Source Definition, a widely accepted set of rules that determine whether a piece of software can be considered open source. 

Now, the organization has assembled a 70-strong group of researchers, lawyers, policymakers, activists, and representatives from big tech companies like Meta, Google, and Amazon to come up with a working definition of open-source AI…(More)”.

New Jersey is turning to AI to improve the job search process


Article by Beth Simone Noveck: “Americans are experiencing some conflicting feelings about AI.

While people are flocking to new roles like prompt engineer and AI ethicist, the technology is also predicted to put many jobs at risk, including those of computer programmers, data scientists, graphic designers, writers, and lawyers.

Little wonder, then, that a national survey by the Heldrich Center for Workforce Development found an overwhelming majority of Americans (66%) believe that they “will need more technological skills to achieve their career goals.” One thing is certain: Workers will need to train for change. And in a world of misinformation-filled social media platforms, it is increasingly important for trusted public institutions to provide reliable, data-driven resources.

In New Jersey, we’ve tried doing just that by collaborating with workers, including many with disabilities, to design technology that will support better decision-making around training and career change. Investing in similar public AI-powered tools could help support better consumer choice across various domains. When a public entity designs, controls and implements AI, there is a far greater likelihood that this powerful technology will be used for good.

In New Jersey, the public can find reliable, independent, unbiased information about training and upskilling on the state’s new MyCareer website, which uses AI to make personalized recommendations about your career prospects and the training you will need to be ready for a high-growth, in-demand job…(More)”.

Global AI governance: barriers and pathways forward 


Paper by Huw Roberts, Emmie Hine, Mariarosaria Taddeo, Luciano Floridi: “This policy paper is a response to the growing calls for ambitious new international institutions for AI. It maps the geopolitical and institutional barriers to stronger global AI governance and considers potential pathways forward in light of these constraints. We argue that a promising foundation of international regimes focused on AI governance is emerging, but the centrality of AI to interstate competition, dysfunctional international institutions and disagreement over policy priorities problematizes substantive cooperation. We propose strengthening the existing weak ‘regime complex’ of international institutions as the most desirable and realistic path forward for global AI governance. Strengthening coordination between, and the capacities of, existing institutions supports mutually reinforcing policy change, which, if enacted properly, can lead to catalytic change across the various policy areas where AI has an impact. It also facilitates the flexible governance needed for rapidly evolving technologies.

To make this argument, we outline key global AI governance processes in the next section. In the third section, we analyse how first- and second-order cooperation problems in international relations apply to AI. In the fourth section, we assess potential routes for advancing global AI governance, and we conclude by providing recommendations on how to strengthen the weak AI regime complex…(More)”.

Human-Centered AI


Book edited by Catherine Régis, Jean-Louis Denis, Maria Luciana Axente, and Atsuo Kishimoto: “Artificial intelligence (AI) permeates our lives in a growing number of ways. Relying solely on traditional, technology-driven approaches won’t suffice to develop and deploy that technology in a way that truly enhances human experience. A new concept is desperately needed to reach that goal. That concept is Human-Centered AI (HCAI).

With 29 captivating chapters, this book delves deep into the realm of HCAI. In Section I, it demystifies HCAI, exploring cutting-edge trends and approaches in its study, including the moral landscape of Large Language Models. Section II looks at how HCAI is viewed in different institutions—like the justice system, health system, and higher education—and how it could affect them. It examines how crafting HCAI could lead to better work. Section III offers practical insights and successful strategies to transform HCAI from theory to reality, for example, studying how using regulatory sandboxes could ensure the development of age-appropriate AI for kids. Finally, decision-makers and practitioners provide invaluable perspectives throughout the book, showcasing the real-world significance of its articles beyond academia.

Authored by experts from a variety of backgrounds, sectors, disciplines, and countries, this engaging book offers a fascinating exploration of Human-Centered AI. Whether you’re new to the subject or not, a decision-maker, a practitioner or simply an AI user, this book will help you gain a better understanding of HCAI’s impact on our societies, and of why and how AI should really be developed and deployed in a human-centered future…(More)”.

DC launched an AI tool for navigating the city’s open data


Article by Kaela Roeder: “In a move echoing local governments’ increasing attention to generative artificial intelligence across the country, the nation’s capital now aims to make navigating its open data easier through a new public beta pilot.

DC Compass, launched in March, uses generative AI to answer user questions and create maps from open data sets, ranging from the district’s population to which trees are planted in the city. The Office of the Chief Technology Officer (OCTO) partnered with the geographic information system (GIS) technology company Esri, which has an office in Vienna, Virginia, to create the new tool.

This debut follows Mayor Muriel Bowser’s signing of DC’s AI Values and Strategic Plan in February. The order requires agencies to assess whether using AI aligns with the values it sets forth, including that there’s a clear benefit to people; a plan for “meaningful accountability” for the tool; and transparency, sustainability, privacy and equity at the forefront of deployment.

These values are key when launching something like DC Compass, said Michael Rupert, OCTO’s interim chief technology officer for digital services.

“The way Mayor Bowser rolled out the mayor’s order and this value statement, I think, gives residents and businesses a little more comfort that we aren’t just writing a check and seeing what happens,” Rupert said. “That we’re actually methodically going about it in a responsible way, both morally and fiscally.”…(More)”.

[Screenshot: DC Compass in action. Courtesy OCTO]

The Potential of Artificial Intelligence for the SDGs and Official Statistics


Report by PARIS21: “Artificial Intelligence (AI) and its impact on people’s lives is growing rapidly. AI is already leading to significant developments from healthcare to education, which can contribute to the efficient monitoring and achievement of the Sustainable Development Goals (SDGs), a call to action to address the world’s greatest challenges. AI is also raising concerns because, if not addressed carefully, its risks may outweigh its benefits. As a result, AI is garnering increasing attention from National Statistical Offices (NSOs) and the official statistics community as they are challenged to produce more comprehensive, timely, and high-quality data for decision-making with limited resources, in a rapidly changing world of data and technologies, and in light of complex and converging global issues from pandemics to climate change.

This paper has been prepared as an input to the “Data and AI for Sustainable Development: Building a Smarter Future” Conference, organized in partnership with the Partnership in Statistics for Development in the 21st Century (PARIS21), the World Bank and the International Monetary Fund (IMF). Building on case studies that examine the use of AI by NSOs, the paper presents the benefits and risks of AI with a focus on NSO operations related to sustainable development. The objective is to spark discussions and to initiate a dialogue around how AI can be leveraged to inform decisions and take action to better monitor and achieve sustainable development, while mitigating its risks…(More)”.

Generative AI in Journalism


Report by Nicholas Diakopoulos et al: “The introduction of ChatGPT by OpenAI in late 2022 captured the imagination of the public—and the news industry—with the potential of generative AI to upend how people create and consume media. Generative AI is a type of artificial intelligence technology that can create new content, such as text, images, audio, video, or other media, based on the data it has been trained on and according to written prompts provided by users. ChatGPT is the chat-based user interface that made the power and potential of generative AI salient to a wide audience, reaching 100 million users within two months of its launch.

Although similar technology had been around, by late 2022 it was suddenly working, spurring its integration into various products and presenting not only a host of opportunities for productivity and new experiences but also some serious concerns about accuracy, provenance and attribution of source information, and the increased potential for creating misinformation.

This report serves as a snapshot of how the news industry has grappled with the initial promises and challenges of generative AI towards the end of 2023. The sample of participants reflects how some of the more savvy and experienced members of the profession are reacting to the technology.

Based on participants’ responses, the report finds that generative AI is already changing work structure and organization, even as it triggers ethical concerns around use. Here are some key takeaways:

  • Applications in News Production. The predominant current use cases for generative AI include various forms of textual content production, information gathering and sensemaking, multimedia content production, and business uses.
  • Changing Work Structure and Organization. A host of new roles is emerging to grapple with the changes introduced by generative AI, including leadership, editorial, product, legal, and engineering positions.
  • Work Redesign. There is an unmet opportunity to design new interfaces to support journalistic work with generative AI, in particular to enable the human oversight needed for the efficient and confident checking and verification of outputs…(More)”.