Guardrails: Guiding Human Decisions in the Age of AI


Book by Urs Gasser and Viktor Mayer-Schönberger: “When we make decisions, our thinking is informed by societal norms, “guardrails” that guide our decisions, like the laws and rules that govern us. But what are good guardrails in today’s world of overwhelming information flows and increasingly powerful technologies, such as artificial intelligence? Based on the latest insights from the cognitive sciences, economics, and public policy, Guardrails offers a novel approach to shaping decisions by embracing human agency in its social context.

In this visionary book, Urs Gasser and Viktor Mayer-Schönberger show how the quick embrace of technological solutions can lead to results we don’t always want, and they explain how society itself can provide guardrails more suited to the digital age, ones that empower individual choice while accounting for the social good, encourage flexibility in the face of changing circumstances, and ultimately help us to make better decisions as we tackle the most daunting problems of our times, such as global injustice and climate change.

Whether we change jobs, buy a house, or quit smoking, thousands of decisions large and small shape our daily lives. Decisions drive our economies, seal the fate of democracies, create war or peace, and affect the well-being of our planet. Guardrails challenges the notion that technology should step in where our own decision making fails, laying out a surprisingly human-centered set of principles that can create new spaces for better decisions and a more equitable and prosperous society…(More)”.

Collective action for responsible AI in health


OECD Report: “Artificial intelligence (AI) will have profound impacts across health systems, transforming health care, public health, and research. Responsible AI can accelerate efforts toward health systems being more resilient, sustainable, equitable, and person-centred. This paper provides an overview of the background and current state of artificial intelligence in health, along with perspectives on the opportunities, risks, and barriers to success. The paper proposes several areas for policy-makers to explore in order to advance a future of responsible AI in health that is adaptable to change, respects individuals, champions equity, and achieves better health outcomes for all.

The areas to be explored relate to trust, capacity building, evaluation, and collaboration. This reflects a recognition that the primary forces needed to unlock the value of artificial intelligence are people-based, not technical…(More)”.

AI’s big rift is like a religious schism


Article by Henry Farrell: “…Henri de Saint-Simon, a French utopian, proposed a new religion, worshipping the godlike force of progress, with Isaac Newton as its chief saint. He believed that humanity’s sole uniting interest, “the progress of the sciences”, should be directed by the “elect of humanity”, a 21-member “Council of Newton”. Friedrich Hayek, a 20th-century economist, later gleefully described how this ludicrous “religion of the engineers” collapsed into a welter of feuding sects.

Today, the engineers of artificial intelligence (AI) are experiencing their own religious schism. One sect worships progress, canonising Hayek himself. The other is gripped by terror of godlike forces. Their battle has driven practical questions to the margins of debate…(More)”.

Medical AI could be ‘dangerous’ for poorer nations, WHO warns


Article by David Adam: “The introduction of health-care technologies based on artificial intelligence (AI) could be “dangerous” for people in lower-income countries, the World Health Organization (WHO) has warned.

The organization, which today issued a report describing new guidelines on large multi-modal models (LMMs), says it is essential that uses of the developing technology are not shaped only by technology companies and those in wealthy countries. If models aren’t trained on data from people in under-resourced places, those populations might be poorly served by the algorithms, the agency says.

“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” Alain Labrique, the WHO’s director for digital health and innovation, said at a media briefing today.

The WHO issued its first guidelines on AI in health care in 2021, but the rapid rise in the power and availability of LMMs prompted the organization to update them less than three years later. Also called generative AI, these models, including the one that powers the popular ChatGPT chatbot, process and produce text, videos and images…(More)”.

Facial Recognition: Current Capabilities, Future Prospects, and Governance


A National Academies of Sciences, Engineering, and Medicine study: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

A tale of two cities: one real, one virtual


Joy Lo Dico in the Financial Times: “In recent years, digital city-building has become a legitimate part of urban planning. Barcelona, Cambridge and Helsinki are among a number of European cities exploring how copies of themselves could prove useful in making their built environments sharper, faster, cleaner and greener.

What exists in real life is being rendered a second time in the digital space: creating a library of the past, an eagle’s-eye view of the present and, potentially, a vision of the future.

One of the most striking projects has been happening in Ukraine, where technology company Skeiron has, since 2022, been mapping the country’s monuments, under threat from bombing.

The project #SaveUkrainianHeritage has recorded 60 buildings, from the St Sofia Cathedral in Kyiv and the Chernivtsi National University — both Unesco world heritage sites — to wooden churches across the country, something Skeiron’s co-founder Yurii Prepodobnyi mentions with pride. There are thousands of them. “Some are only 20 or 30 square metres,” he says. “But Ukrainian churches keep Ukrainian identity.”

With laser measurements, drone photography and photogrammetry — the art of stitching photographs together — Prepodobnyi and his team can produce highly detailed 3D models.
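
To make the technique concrete: photogrammetry pipelines begin by finding the same physical points in overlapping photographs, which a structure-from-motion solver then triangulates into 3D geometry. The sketch below shows only that first matching step, using OpenCV's ORB features; the library, parameters and file names are illustrative assumptions, since the article does not describe Skeiron's actual toolchain.

```python
# A minimal sketch of the feature-matching step at the heart of photogrammetry:
# locating the same physical points in two overlapping drone photos.
import cv2

# Two overlapping photographs of the same facade (placeholder paths).
img1 = cv2.imread("facade_view_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("facade_view_2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors in each image.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Each good match is a candidate correspondence that a structure-from-motion
# solver can later triangulate into a 3D point of the model.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences for triangulation")
```

Repeated across hundreds of photographs, and anchored by laser measurements, such correspondences are what the highly detailed 3D models described above are built from.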

They have even managed to recreate the exterior of the Mariupol drama theatre, destroyed in the early days of the Ukraine war, after putting out a call for photographs and drone footage.

Another project, in Pompeii, has been using similar digital techniques to capture the evolution of excavations into a 3D model. The Pompeii I. 14 Project, led by Tulane University and Indiana State University, takes the process of excavating buildings within one block of Pompeii, Insula 14, and turns it into a digital representation. Using laser measurements, iPad Pros, a consumer drone and handheld cameras, a space can be measured to within a couple of millimetres. What is relayed back is a visual record of how a room changes over thousands of years, as the debris of the volcanic eruption and the layers of life that came before are revealed…(More)”.

Ground Truths Are Human Constructions


Article by Florian Jaton: “Artificial intelligence algorithms are human-made, cultural constructs, something I saw first-hand as a scholar and technician embedded with AI teams for 30 months. Among the many concrete practices and materials these algorithms need in order to come into existence are sets of numerical values that enable machine learning. These referential repositories are often called “ground truths,” and when computer scientists construct or use these datasets to design new algorithms and attest to their efficiency, the process is called “ground-truthing.”

Understanding how ground-truthing works can reveal inherent limitations of algorithms—how they enable the spread of false information, pass biased judgments, or otherwise erode society’s agency—and this could also catalyze more thoughtful regulation. As long as ground-truthing remains clouded and abstract, society will struggle to prevent algorithms from causing harm and to optimize algorithms for the greater good.

Ground-truth datasets define AI algorithms’ fundamental goal of reliably predicting and generating a specific output—say, an image with requested specifications that resembles other input, such as web-crawled images. In other words, ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them…(More)”.
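
As an illustration of the practice Jaton describes: a ground-truth dataset is nothing more than a set of human-assigned labels, used both to fit a model and to certify its accuracy. The minimal sketch below assumes a scikit-learn workflow with invented toy data; the article itself names no specific tooling.

```python
# A minimal sketch of "ground-truthing": human-assigned labels serve both as
# training signal and as the yardstick for the algorithm's claimed efficiency.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical annotated examples: feature vectors plus the labels a team of
# annotators agreed on. Their judgments ARE the ground truth; the dataset is
# deliberately constructed, not discovered.
features = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1], [0.3, 0.7], [0.7, 0.3]]
labels = [1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.5, random_state=0, stratify=labels
)

model = LogisticRegression().fit(X_train, y_train)

# "Attesting efficiency" means scoring predictions against a held-out slice of
# the same human-made ground truth, so the metric inherits its limitations.
print("accuracy vs. ground truth:", accuracy_score(y_test, model.predict(X_test)))
```

The point of the sketch is that every score reported for such a model is relative to labels people chose to assign, which is exactly where the sociocultural fingerprints enter.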

Generative AI for economic research: Use cases and implications for economists  


Paper by Anton Korinek: “…This article describes use cases of modern generative AI for interested economic researchers, based on the author’s exploration of the space. The main emphasis is on LLMs, which are the type of generative AI currently most useful for research. I have categorized their use cases into six areas: ideation and feedback, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions for how to take advantage of each of these capabilities and demonstrate them using specific examples. Moreover, I classify the capabilities of the most commonly used LLMs from experimental to highly useful to provide an overview. My hope is that this paper will be a useful guide both for researchers starting to use generative AI and for expert users who want to explore new use cases beyond those they already know, taking advantage of the rapidly growing capabilities of LLMs. The online resources associated with this paper are available at the journal website and will provide semi-annual updates on the capabilities and use cases of the most advanced generative AI tools for economic research. In addition, they offer a guide on “How do I start?” as well as a page with “Useful Resources on Generative AI for Economists.”…(More)”.
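
For readers wondering what these use cases look like in practice, below is a minimal sketch of the ideation-and-feedback pattern via OpenAI's Python client. The client library, model name and prompt are illustrative assumptions, not examples taken from the paper; Korinek's own demonstrations span several different LLMs.

```python
# A minimal sketch of the "ideation and feedback" use case: asking an LLM to
# play referee on a research design. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever model you have access to
    messages=[
        {"role": "system",
         "content": "You are a critical referee for an economics working paper."},
        {"role": "user",
         "content": "Give three objections a referee might raise to using "
                    "minimum-wage changes as an instrument for labor costs."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern, with different prompts, covers the other five areas; for data analysis or coding, the model's reply is a starting point to verify, not an answer to trust.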

The Branding Dilemma of AI: Steering Towards Efficient Regulation


Blog by Zeynep Engin: “…Undoubtedly, the term ‘Artificial Intelligence’ has captured the public imagination, proving to be an excellent choice from a marketing standpoint (particularly serving the marketing goals of big AI tech companies). However, this has not been without its drawbacks. The field has experienced several ‘AI winters’ when lofty promises failed to translate into real-world outcomes. More critically, this term has anthropomorphized what are, at their core, high-dimensional statistical optimization processes. This representation has obscured their true nature and the extent of their potential. Moreover, as computing capacities have expanded exponentially, the ability of these systems to process large datasets quickly and precisely, identifying patterns autonomously, has often been misinterpreted as evidence of human-like or even superhuman intelligence. Consequently, AI systems have been elevated to almost mystical status, perceived as incomprehensible and, thus, uncontrollable by humans…

A profound shift in the discourse surrounding AI is urgently necessary. The quest to replicate or surpass human intelligence, while technologically fascinating, does not fully encapsulate the field’s true essence and progress. Indeed, AI has seen significant advances, uncovering a vast array of functionalities. However, its core strength still lies in computational speed and precision — a mechanical prowess. The ‘magic’ of AI truly unfolds when this computational capacity intersects with the wealth of real-world data generated by human activities and the environment, transforming human directives into computational actions. Essentially, we are now outsourcing complex processing tasks to machines, moving beyond crafting bespoke solutions for each problem in favour of leveraging the vast computational resources we have. This transition does not yield an ‘artificial intelligence’, but poses a new challenge to human intelligence in the knowledge creation cycle: the responsibility to formulate the ‘right’ questions and vigilantly monitor the outcomes of such intricate processing, ensuring the mitigation of any potential adverse impacts…(More)”.
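
Engin's phrase 'high-dimensional statistical optimization processes' can be made concrete in a few lines. The toy sketch below, written in NumPy and not drawn from the blog, fits a 50-dimensional linear model by plain gradient descent: the kind of mechanical, iterative adjustment that the 'intelligence' branding tends to obscure.

```python
# A toy example of what "learning" mechanically is: iterative numerical
# optimization of a statistical model, here least squares in 50 dimensions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))              # 200 observations, 50 features
true_w = rng.normal(size=50)                # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(50)                            # the model's entire "knowledge"
learning_rate = 0.05
for _ in range(500):
    residual = X @ w - y                    # how wrong the current guesses are
    gradient = 2 * X.T @ residual / len(y)  # direction of steepest error growth
    w -= learning_rate * gradient           # adjust: pure numerical bookkeeping

print("mean squared error:", float(np.mean((X @ w - y) ** 2)))
```

Nothing in the loop formulates a question or judges whether the answer matters; as the post argues, those responsibilities stay with the humans directing the computation.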

In shaping AI policy, stories about social impacts are just as important as expert information


Blog by Daniel S. Schiff and Kaylyn Jackson Schiff: “Will artificial intelligence (AI) save the world or destroy it? Will it lead to the end of manual labor and an era of leisure and luxury, or to more surveillance and job insecurity? Is it the start of a revolution in innovation that will transform the economy for the better? Or does it represent a novel threat to human rights?

Irrespective of what turns out to be the truth, what our key policymakers believe about these questions matters. It will shape how they think about the underlying problems that AI policy aims to address, and which solutions are appropriate. …In late 2021, we ran a study to better understand the impact of policy narratives on the behavior of policymakers. We focused on US state legislators,…

In our analysis, we found something surprising. We measured whether legislators were more likely to engage with a message featuring a narrative or one featuring expert information, assessing engagement by whether they clicked on a given fact sheet or story, clicked to register for the webinar, or attended it.

Despite the importance attached to technical expertise in AI circles, we found that narratives were at least as persuasive as expert information. Receiving a narrative emphasizing, say, growing competition between the US and China, or the wrongful arrest of Robert Williams due to a facial-recognition error, led to a 30 percent increase in legislator engagement compared to legislators who received only basic information about the civil society organization. These narratives were just as effective as more neutral, fact-based information about AI with accompanying fact sheets…(More)”.