Facial Recognition: Current Capabilities, Future Prospects, and Governance


A National Academies of Sciences, Engineering, and Medicine study: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

A tale of two cities: one real, one virtual


Joy Lo Dico in the Financial Times: “In recent years, digital city-building has become a legitimate part of urban planning. Barcelona, Cambridge and Helsinki are among a number of European cities exploring how copies of themselves could prove useful in making their built environments sharper, faster, cleaner and greener.

What exists in real life is being rendered a second time in the digital space: creating a library of the past, an eagle’s-eye view of the present and, potentially, a vision of the future.

One of the most striking projects has been happening in Ukraine, where technology company Skeiron has, since 2022, been mapping the country’s monuments, under threat from bombing.

The project #SaveUkrainianHeritage has recorded 60 buildings, from the St Sofia Cathedral in Kyiv and the Chernivtsi National University — both Unesco world heritage sites — to wooden churches across the country, something Skeiron’s co-founder Yurii Prepodobnyi mentions with pride. There are thousands of them. “Some are only 20 or 30 square metres,” he says. “But Ukrainian churches keep Ukrainian identity.”

With laser measurements, drone photography and photogrammetry — the art of stitching photographs together — Prepodobnyi and his team can produce highly detailed 3D models.

They have even managed to recreate the exterior of the Mariupol drama theatre, destroyed in the early days of the Ukraine war, after calling for photographs and drone footage.

Another project, in Pompeii, has been using similar digital techniques to capture the evolution of excavations into a 3D model. The Pompeii I. 14 Project, led by Tulane University and Indiana State University, takes the process of excavating buildings within one block of Pompeii, Insula 14, and turns it into a digital representation. Using laser measurements, iPad Pros, a consumer drone and handheld cameras, a space can be measured to within a couple of millimetres. What is relayed back along the stream is a visual record of how a room changes over thousands of years, as the debris, volcanic eruption and layers of life that went before are revealed…(More)”.

Ground Truths Are Human Constructions


Article by Florian Jaton: “Artificial intelligence algorithms are human-made, cultural constructs, something I saw first-hand as a scholar and technician embedded with AI teams for 30 months. Among the many concrete practices and materials these algorithms need in order to come into existence are sets of numerical values that enable machine learning. These referential repositories are often called “ground truths,” and when computer scientists construct or use these datasets to design new algorithms and attest to their efficiency, the process is called “ground-truthing.”

Understanding how ground-truthing works can reveal inherent limitations of algorithms—how they enable the spread of false information, pass biased judgments, or otherwise erode society’s agency—and this could also catalyze more thoughtful regulation. As long as ground-truthing remains clouded and abstract, society will struggle to prevent algorithms from causing harm and to optimize algorithms for the greater good.

Ground-truth datasets define AI algorithms’ fundamental goal of reliably predicting and generating a specific output—say, an image with requested specifications that resembles other input, such as web-crawled images. In other words, ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them…(More)”.
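The excerpt's central claim — that ground truths are constructed, and that an algorithm's measured "efficiency" is relative to whichever reference labels a team happened to build — can be made concrete with a small sketch. The labels, items, and classifier output below are entirely invented for illustration; the point is only that the same model scores differently against two equally "authoritative" ground truths.

```python
# Toy illustration of ground-truthing: an algorithm's measured quality is
# defined entirely by a human-constructed reference dataset.
# All items and labels here are hypothetical.

def accuracy(predictions, ground_truth):
    """Fraction of items where the algorithm agrees with the reference labels."""
    hits = sum(predictions[k] == ground_truth[k] for k in ground_truth)
    return hits / len(ground_truth)

# Two labeling teams annotate the same four comments differently -- each set
# is "the ground truth" to whoever builds and benchmarks on it.
team_a_labels = {"c1": "toxic", "c2": "ok", "c3": "toxic", "c4": "ok"}
team_b_labels = {"c1": "ok",    "c2": "ok", "c3": "toxic", "c4": "toxic"}

# One fixed classifier output, evaluated against both references:
model_output = {"c1": "toxic", "c2": "ok", "c3": "toxic", "c4": "ok"}

print(accuracy(model_output, team_a_labels))  # 1.0 against team A's labels
print(accuracy(model_output, team_b_labels))  # 0.5 against team B's labels
```

The model itself never changes; only the human-made referential repository does — which is the sense in which the resulting algorithms "bear the sociocultural fingerprints of the teams that made them."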

Generative AI for economic research: Use cases and implications for economists


Paper by Anton Korinek: “…This article describes use cases of modern generative AI to interested economic researchers based on the author’s exploration of the space. The main emphasis is on LLMs, which are the type of generative AI that is currently most useful for research. I have categorized their use cases into six areas: ideation and feedback, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions for how to take advantage of each of these capabilities and demonstrate them using specific examples. Moreover, I classify the capabilities of the most commonly used LLMs from experimental to highly useful to provide an overview. My hope is that this paper will be a useful guide both for researchers starting to use generative AI and for expert users seeking new use cases beyond those they already know, so that they can take advantage of the rapidly growing capabilities of LLMs. The online resources associated with this paper are available at the journal website and will provide semi-annual updates on the capabilities and use cases of the most advanced generative AI tools for economic research. In addition, they offer a guide on “How do I start?” as well as a page with “Useful Resources on Generative AI for Economists.”…(More)”

The Branding Dilemma of AI: Steering Towards Efficient Regulation


Blog by Zeynep Engin: “…Undoubtedly, the term ‘Artificial Intelligence’ has captured the public imagination, proving to be an excellent choice from a marketing standpoint (particularly serving the marketing goals of big AI tech companies). However, this has not been without its drawbacks. The field has experienced several ‘AI winters’ when lofty promises failed to translate into real-world outcomes. More critically, this term has anthropomorphized what are, at their core, high-dimensional statistical optimization processes. Such representation has obscured their true nature and the extent of their potential. Moreover, as computing capacities have expanded exponentially, the ability of these systems to process large datasets quickly and precisely, identifying patterns autonomously, has often been misinterpreted as evidence of human-like or even superhuman intelligence. Consequently, AI systems have been elevated to almost mystical status, perceived as incomprehensible to humans and, thus, uncontrollable by humans…

A profound shift in the discourse surrounding AI is urgently necessary. The quest to replicate or surpass human intelligence, while technologically fascinating, does not fully encapsulate the field’s true essence and progress. Indeed, AI has seen significant advances, uncovering a vast array of functionalities. However, its core strength still lies in computational speed and precision — a mechanical prowess. The ‘magic’ of AI truly unfolds when this computational capacity intersects with the wealth of real-world data generated by human activities and the environment, transforming human directives into computational actions. Essentially, we are now outsourcing complex processing tasks to machines, moving beyond crafting bespoke solutions for each problem in favour of leveraging the vast computational resources we have. This transition does not yield an ‘artificial intelligence’, but poses a new challenge to human intelligence in the knowledge creation cycle: the responsibility to formulate the ‘right’ questions and vigilantly monitor the outcomes of such intricate processing, ensuring the mitigation of any potential adverse impacts…(More)”.

In shaping AI policy, stories about social impacts are just as important as expert information


Blog by Daniel S. Schiff and Kaylyn Jackson Schiff: “Will artificial intelligence (AI) save the world or destroy it? Will it lead to the end of manual labor and an era of leisure and luxury, or to more surveillance and job insecurity? Is it the start of a revolution in innovation that will transform the economy for the better? Or does it represent a novel threat to human rights?

Irrespective of what turns out to be the truth, what our key policymakers believe about these questions matters. It will shape how they think about the underlying problems that AI policy is aiming to address, and which solutions are appropriate. …In late 2021, we ran a study to better understand the impact of policy narratives on the behavior of policymakers. We focused on US state legislators,…

In our analysis, we found something surprising. We measured whether legislators were more likely to engage with a message featuring a narrative or one featuring expert information, which we assessed by whether they clicked on a given fact sheet or story, clicked to register for the webinar, or attended it.

Despite the importance attached to technical expertise in AI circles, we found that narratives were at least as persuasive as expert information. Receiving a narrative emphasizing, say, growing competition between the US and China, or the faulty arrest of Robert Williams due to facial recognition, led to a 30 percent increase in legislator engagement compared to legislators who only received basic information about the civil society organization. These narratives were just as effective as more neutral, fact-based information about AI with accompanying fact sheets…(More)”

Fairness and Machine Learning


Book by Solon Barocas, Moritz Hardt and Arvind Narayanan: “…introduces advanced undergraduate and graduate students to the intellectual foundations of this recently emergent field, drawing on a diverse range of disciplinary perspectives to identify the opportunities and hazards of automated decision-making. It surveys the risks in many applications of machine learning and provides a review of an emerging set of proposed solutions, showing how even well-intentioned applications may give rise to objectionable results. It covers the statistical and causal measures used to evaluate the fairness of machine learning models as well as the procedural and substantive aspects of decision-making that are core to debates about fairness, including a review of legal and philosophical perspectives on discrimination. This incisive textbook prepares students of machine learning to do quantitative work on fairness while reflecting critically on its foundations and its practical utility.

• Introduces the technical and normative foundations of fairness in automated decision-making
• Covers the formal and computational methods for characterizing and addressing problems
• Provides a critical assessment of their intellectual foundations and practical utility
• Features rich pedagogy and extensive instructor resources…(More)”
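One family of "statistical measures used to evaluate the fairness of machine learning models" that the blurb mentions can be sketched in a few lines. The example below computes the demographic parity difference — the gap in positive-decision rates across groups — on invented data; it is a minimal illustration of one such measure, not an excerpt from the book.

```python
# Minimal sketch of one statistical fairness measure: demographic parity
# difference, i.e. the gap in positive-decision rates between groups.
# The decision data below is hypothetical.

def positive_rate(decisions):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in positive-decision rates across groups (0 = parity)."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# 1 = loan approved, 0 = denied, split by a protected attribute:
decisions = {
    "group_a": [1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1],  # 50% approved
}

print(demographic_parity_difference(decisions))  # 0.25
```

Measures like this are purely observational; as the book's treatment of causal measures and legal perspectives suggests, a single statistic rarely settles whether a decision process is fair.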

Power to the standards


Report by Gergana Baeva, Michael Puntschuh and Matthieu Binder: “Standards and norms will be of central importance when it comes to the practical implementation of legal requirements for AI systems as they are developed and deployed.

Using expert interviews, our study “Power to the standards” documents the existing obstacles on the way to the standardization of AI. In addition to practical and technological challenges, questions of democratic policy arise. After all, requirements such as fairness or transparency are often regarded as criteria to be determined by the legislator, meaning that they are only partially susceptible to standardization.

Our study concludes that the targeted and comprehensive participation of civil society actors is particularly necessary in order to compensate for existing participation deficits within the standardization process…(More)”.

Rebalancing AI


Article by Daron Acemoglu and Simon Johnson: “Optimistic forecasts regarding the growth implications of AI abound. AI adoption could boost productivity growth by 1.5 percentage points per year over a 10-year period and raise global GDP by 7 percent ($7 trillion in additional output), according to Goldman Sachs. Industry insiders offer even more exuberant estimates, including a supposed 10 percent chance of an “explosive growth” scenario, with global output rising more than 30 percent a year.

All this techno-optimism draws on the “productivity bandwagon”: a deep-rooted belief that technological change—including automation—drives higher productivity, which raises net wages and generates shared prosperity.

Such optimism is at odds with the historical record and seems particularly inappropriate for the current path of “just let AI happen,” which focuses primarily on automation (replacing people). We must recognize that there is no singular, inevitable path of development for new technology. And, assuming that the goal is to sustainably improve economic outcomes for more people, what policies would put AI development on the right path, with greater focus on enhancing what all workers can do?…(More)”

What Will AI Do to Elections?


Article by Rishi Iyengar: “…Requests to X’s press team on how the platform was preparing for elections in 2024 yielded an automated response: “Busy now, please check back later”—a slight improvement from the initial Musk-era change where the auto-reply was a poop emoji.

X isn’t the only major social media platform with fewer content moderators. Meta, which owns Facebook, Instagram, and WhatsApp, has laid off more than 20,000 employees since November 2022—many of whom worked on trust and safety—while many YouTube employees working on misinformation policy were affected by layoffs at parent company Google.

There could scarcely be a worse time to skimp on combating harmful content online. More than 50 countries, including the world’s three biggest democracies and Taiwan, an increasingly precarious geopolitical hot spot, are expected to hold national elections in 2024. Seven of the world’s 10 most populous countries—Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States—will collectively send a third of the world’s population to the polls.

Elections, with their emotionally charged and often tribal dynamics, are where misinformation missteps come home to roost. If social media misinformation is the equivalent of yelling “fire” in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.

Katie Harbath prefers a different analogy, one that illustrates how nebulous and thorny the issues are and the sheer uncertainty surrounding them. “The metaphor I keep using is a kaleidoscope because there’s so many different aspects to this but depending how you turn the kaleidoscope, the pattern changes of what it’s going to look like,” she said in an interview in October. “And that’s how I feel about life post-2024. … I don’t know where in the kaleidoscope it’s going to land.”

Harbath has become something of an election whisperer to the tech industry, having spent a decade at Facebook from 2011 building the company’s election integrity efforts from scratch. She left in 2021 and founded Anchor Change, a public policy consulting firm that helps other platforms combat misinformation and prepare for elections in particular.

Had she been in her old job, Harbath said, her team would have completed risk assessments of global elections by late 2022 or early 2023 and then spent the rest of the year tailoring Meta’s products to them as well as setting up election “war rooms” where necessary. “Right now, we would be starting to move into execution mode.” She cautions against treating the resources that companies are putting into election integrity as a numbers game—“once you build some of those tools, maintaining them doesn’t take as many people”—but acknowledges that the allocation of resources reveals a company leadership’s priorities.

The companies insist they remain committed to election integrity. YouTube has “heavily invested in the policies and systems that help us successfully support elections around the world,” spokesperson Ivy Choi said in a statement. TikTok said it has a total of 40,000 safety professionals and works with 16 fact-checking organizations across 50 global languages. Meta declined to comment for this story, but a company representative directed Foreign Policy to a recent blog post by Nick Clegg, a former U.K. deputy prime minister who now serves as Meta’s head of global affairs. “We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” Clegg wrote in the post.

But there are other troubling signs. YouTube announced last June that it would stop taking down content spreading false claims about the 2020 U.S. election or past elections, and Meta quietly made a similar policy change to its political ad rules in 2022. And as past precedent has shown, the platforms tend to have even less cover outside the West, with major blind spots in local languages and context making misinformation and hate speech not only more pervasive but also more dangerous…(More)”.