What Will AI Do to Elections?


Article by Rishi Iyengar: “…Requests to X’s press team on how the platform was preparing for elections in 2024 yielded an automated response: “Busy now, please check back later”—a slight improvement from the initial Musk-era change where the auto-reply was a poop emoji.

X isn’t the only major social media platform with fewer content moderators. Meta, which owns Facebook, Instagram, and WhatsApp, has laid off more than 20,000 employees since November 2022—several of whom worked on trust and safety—while many YouTube employees working on misinformation policy were impacted by layoffs at parent company Google.

There could scarcely be a worse time to skimp on combating harmful content online. More than 50 countries, including the world’s three biggest democracies and Taiwan, an increasingly precarious geopolitical hot spot, are expected to hold national elections in 2024. Seven of the world’s 10 most populous countries—Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States—will collectively send a third of the world’s population to the polls.

Elections, with their emotionally charged and often tribal dynamics, are where misinformation missteps come home to roost. If social media misinformation is the equivalent of yelling “fire” in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.

Katie Harbath prefers a different analogy, one that illustrates how nebulous and thorny the issues are and the sheer uncertainty surrounding them. “The metaphor I keep using is a kaleidoscope because there’s so many different aspects to this but depending how you turn the kaleidoscope, the pattern changes of what it’s going to look like,” she said in an interview in October. “And that’s how I feel about life post-2024. … I don’t know where in the kaleidoscope it’s going to land.”

Harbath has become something of an election whisperer to the tech industry, having spent a decade at Facebook from 2011 building the company’s election integrity efforts from scratch. She left in 2021 and founded Anchor Change, a public policy consulting firm that helps other platforms combat misinformation and prepare for elections in particular.

Had she been in her old job, Harbath said, her team would have completed risk assessments of global elections by late 2022 or early 2023 and then spent the rest of the year tailoring Meta’s products to them as well as setting up election “war rooms” where necessary. “Right now, we would be starting to move into execution mode.” She cautions against treating the resources that companies are putting into election integrity as a numbers game—“once you build some of those tools, maintaining them doesn’t take as many people”—but acknowledges that the allocation of resources reveals a company leadership’s priorities.

The companies insist they remain committed to election integrity. YouTube has “heavily invested in the policies and systems that help us successfully support elections around the world,” spokesperson Ivy Choi said in a statement. TikTok said it has a total of 40,000 safety professionals and works with 16 fact-checking organizations across 50 global languages. Meta declined to comment for this story, but a company representative directed Foreign Policy to a recent blog post by Nick Clegg, a former U.K. deputy prime minister who now serves as Meta’s head of global affairs. “We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” Clegg wrote in the post.

But there are other troubling signs. YouTube announced last June that it would stop taking down content spreading false claims about the 2020 U.S. election or past elections, and Meta quietly made a similar policy change to its political ad rules in 2022. And as past precedent has shown, the platforms tend to have even less cover outside the West, with major blind spots in local languages and context making misinformation and hate speech not only more pervasive but also more dangerous…(More)”.

Forget technology — politicians pose the gravest misinformation threat


Article by Rasmus Nielsen: “This is set to be a big election year, including in India, Mexico, the US, and probably the UK. People will rightly be on their guard for misinformation, but much of the policy discussion on the topic ignores the most important source: members of the political elite.

As a social scientist working on political communication, I have spent years in these debates — which continue to be remarkably disconnected from what we know from research. Academic findings repeatedly underline the actual impact of politics, while policy documents focus persistently on the possible impact of new technologies.

Most recently, Britain’s National Cyber Security Centre (NCSC) has warned of how “AI-created hyper-realistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced”. This is similar to warnings from many other public authorities, which ignore the misinformation from the most senior levels of domestic politics. In the US, the Washington Post stopped counting after documenting at least 30,573 false or misleading claims made by Donald Trump as president. In the UK, the non-profit Full Fact has reported that as many as 50 MPs — including two prime ministers, cabinet ministers and shadow cabinet ministers — failed to correct false, unevidenced or misleading claims in 2022 alone, despite repeated calls to do so.

These are actual problems of misinformation, and the phenomenon is not new. Both George W Bush’s and Barack Obama’s administrations obfuscated on Afghanistan. Bush’s government and that of his UK counterpart Tony Blair advanced false and misleading claims in the run-up to the Iraq war. Prominent politicians have, over the years, denied the reality of human-induced climate change, proposed quack remedies for Covid-19, and so much more. These are examples of misinformation and, at their most egregious, of disinformation — defined as spreading false or misleading information for political advantage or profit.

This basic point is strikingly absent from many policy documents — the NCSC report, for example, has nothing to say about domestic politics. It is not alone. Take the US Surgeon General’s 2021 advisory on confronting health misinformation, which calls for a “whole-of-society” approach — and yet contains nothing on politicians and curiously omits the many misleading claims made by the sitting president during the pandemic, including touting hydroxychloroquine as a potential treatment…(More)”.

Introduction to Digital Humanism


Open access textbook edited by Hannes Werthner et al: “…introduces and defines digital humanism from a diverse range of disciplines. Following the 2019 Vienna Manifesto, the book calls for a digital humanism that describes, analyzes, and, most importantly, influences the complex interplay of technology and humankind, for a better society and life, fully respecting universal human rights.

The book is organized in three parts: Part I “Background” provides the multidisciplinary background needed to understand digital humanism in its philosophical, cultural, technological, historical, social, and economic dimensions. The goal is to present the necessary knowledge upon which an effective interdisciplinary discourse on digital humanism can be founded. Part II “Digital Humanism – a System’s View” focuses on an in-depth presentation and discussion of the main digital humanism concerns arising in current digital systems. The goal of this part is to make readers aware of and sensitive to these issues, including, e.g., the control and autonomy of AI systems, privacy and security, and the role of governance. Part III “Critical and Societal Issues of Digital Systems” delves into critical societal issues raised by advances in digital technologies. While the public debate in the past has often focused on them separately, especially when they became visible through sensational events, the aim here is to shed light on the entire landscape and show their interconnected relationships. This includes issues such as AI and ethics, fairness and bias, privacy and surveillance, and platform power and democracy.

This textbook is intended for students, teachers, and policymakers interested in digital humanism. It is designed for stand-alone or complementary courses in computer science, as well as curricula in science, engineering, humanities, and social sciences. Each chapter includes questions for students and an annotated reading list to dive deeper into the associated chapter material. The book aims to provide readers with as wide an exposure as possible to digital advances and their consequences for humanity. It includes constructive ideas and approaches that seek to ensure that our collective digital future is determined through human agency…(More)”.

Technology, Data and Elections: An Updated Checklist on the Election Cycle


Checklist by Privacy International: “In the last few years, electoral processes and related activities have undergone significant changes, driven by the development of digital technologies.

The use of personal data has redefined political campaigning and enabled the proliferation of political advertising tailor-made for audiences sharing specific characteristics or personalised to the individual. These new practices, combined with the platforms that enable them, create an environment that facilitates the manipulation of opinion and, in some cases, the exclusion of voters.

In parallel, governments are continuing to invest in modern infrastructure that is inherently data-intensive. Several states are turning to biometric voter registration and verification technologies, ostensibly to curtail fraud and vote manipulation. This modernisation often results in the development of nationwide databases containing masses of personal, sensitive information that require heightened safeguards and protection.

The number and nature of actors involved in the election process is also changing, and so are the relationships between electoral stakeholders. The introduction of new technologies, for example for purposes of voter registration and verification, often goes hand-in-hand with the involvement of private companies, a costly investment that is not without risk and requires robust safeguards to avoid abuse.

This new electoral landscape comes with many challenges that must be addressed in order to protect free and fair elections: a fact that is increasingly recognised by policymakers and regulatory bodies…(More)”.

Knightian Uncertainty


Paper by Cass R. Sunstein: “In 1921, John Maynard Keynes and Frank Knight independently insisted on the importance of making a distinction between uncertainty and risk. Keynes referred to matters about which “there is no scientific basis on which to form any calculable probability whatever.” Knight claimed that “Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated.” Knightian uncertainty exists when people cannot assign probabilities to imaginable outcomes. People might know that a course of action might produce bad outcomes A, B, C, D, and E, without knowing much or anything about the probability of each. Contrary to a standard view in economics, Knightian uncertainty is real. Dogs face Knightian uncertainty; horses and elephants face it; human beings face it; in particular, human beings who make policy, or develop regulations, sometimes face it. Knightian uncertainty poses challenging and unresolved issues for decision theory and regulatory practice. It bears on many problems, potentially including those raised by artificial intelligence. It is tempting to seek to eliminate the worst-case scenario (and thus to adopt the maximin rule), but serious problems arise if eliminating the worst-case scenario would (1) impose high risks and costs, (2) eliminate large benefits or potential “miracles,” or (3) create uncertain risks…(More)”.
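
To make the maximin rule concrete, here is a toy sketch (our illustration, not from the paper): with no probabilities to weight outcomes, each course of action is ranked by its worst imaginable payoff alone.

```python
# Illustrative only: hypothetical payoffs for imaginable outcomes of two
# policy options; under Knightian uncertainty no probabilities can be
# attached to any of them.

def maximin(actions: dict[str, list[float]]) -> str:
    """Pick the action whose worst-case payoff is least bad."""
    return max(actions, key=lambda name: min(actions[name]))

options = {
    # caps the downside but forgoes most of the upside
    "precautionary_regulation": [-2.0, 0.0, 1.0, 1.0, 2.0],
    # possible "miracle" (payoff 8) alongside a grave worst case (-9)
    "laissez_faire": [-9.0, 0.0, 2.0, 5.0, 8.0],
}

print(maximin(options))  # -> "precautionary_regulation"
```

Note how the rule discards everything about "laissez_faire" except its worst case, which is precisely the concern the paper flags: eliminating the worst-case scenario can also eliminate large benefits or potential “miracles”.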

The Transferability Question


Report by Geoff Mulgan: “How should we think about the transferability of ideas and methods? If something works in one place and one time, how do we know if it, or some variant of it, will work in another place or another time?

This – the transferability question – is one that many organisations face: businesses, from retailers and taxi firms to restaurants and accountants, wanting to expand to other regions or countries; governments wanting to adopt and adapt policies from elsewhere; and professions like doctors, wanting to know whether a kind of surgery, or a smoking cessation programme, will work in another context…

Here I draw on this literature to suggest not so much a generalisable method as an approach that starts by asking four basic questions of any promising idea:

  • SPREAD: has the idea already spread to diverse contexts and been shown to work?  
  • ESSENTIALS: do we know what the essentials are, the crucial ingredients that make it effective?  
  • EASE: how easy is it to adapt or adopt (in other words, how many other things need to change for it to be implemented successfully)? 
  • RELEVANCE: how relevant is the evidence (or how similar is the context of evidence to the context of action)? 

Asking these questions is not only a protection against the vice of hoping that you can just ‘cut and paste’ an idea from elsewhere, but also an encouragement to be hungry for good ideas that can be adopted or adapted.
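
As a rough illustration of how such a screening might be recorded (this rubric is our sketch, not Mulgan’s method), the four questions can be captured in a small data structure whose weakest answer flags where an idea needs the closest scrutiny:

```python
# Hypothetical rubric for the four transferability questions; the
# yes/partial/no scoring convention is invented for illustration.
from dataclasses import dataclass

SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

@dataclass
class TransferAssessment:
    spread: str      # SPREAD: has it worked in diverse contexts?
    essentials: str  # ESSENTIALS: do we know the crucial ingredients?
    ease: str        # EASE: how much else must change to implement it?
    relevance: str   # RELEVANCE: how similar is the evidence context to ours?

    def weakest_link(self) -> tuple[str, float]:
        answers = vars(self)  # field name -> judgment
        weakest = min(answers, key=lambda q: SCORES[answers[q]])
        return weakest, SCORES[answers[weakest]]

idea = TransferAssessment(spread="yes", essentials="partial",
                          ease="no", relevance="partial")
print(idea.weakest_link())  # -> ('ease', 0.0): too much must change first
```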

I conclude by arguing that it is healthy for any society or government to assume that there are good ideas that could be adopted or adapted; it’s healthy to cultivate a hunger to learn; healthy to understand methods for analysing what aspects of an idea or model could be transferable; and that there is great value in having institutions that are good at promoting and spreading ideas, at adoption and adaptation as well as innovation…(More)”.

The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now


Book by Hilke Schellmann: “Based on exclusive information from whistleblowers, internal documents, and real-world test results, Emmy award-winning Wall Street Journal contributor Hilke Schellmann delivers a shocking and illuminating exposé on the next civil rights issue of our time: how AI has already taken over the workplace and shapes our future.

Hilke Schellmann is an Emmy award-winning investigative reporter, Wall Street Journal and Guardian contributor, and journalism professor at NYU. In The Algorithm, she investigates the rise of artificial intelligence (AI) in the world of work. AI is now being used to decide who has access to an education, who gets hired, who gets fired, and who receives a promotion. Drawing on exclusive information from whistleblowers, internal documents, and real-world tests, Schellmann discovers that many of the algorithms making high-stakes decisions are biased, racist, and do more harm than good. Algorithms are on the brink of dominating our lives and threaten our human future—if we don’t fight back.

Schellmann takes readers on a journalistic detective story, testing algorithms that have secretly analyzed job candidates’ facial expressions and tone of voice. She investigates algorithms that scan our online activity, including Twitter and LinkedIn, to construct personality profiles à la Cambridge Analytica. Her reporting reveals how employers track their employees’ locations and keystrokes, access everything on their screens, and, during meetings, analyze group discussions to diagnose problems in a team. Even universities are now using predictive analytics for admission offers and financial aid…(More)”

Charting the Emerging Geography of AI


Article by Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi: “Given the high stakes of this race, which countries are in the lead? Which are gaining on the leaders? How might this hierarchy shape the future of AI? Identifying AI-leading countries is not straightforward, as data, knowledge, algorithms, and models can, in principle, cross borders. Even the U.S.–China rivalry is complicated by the fact that AI researchers from the two countries cooperate — and more so than researchers from any other pair of countries. Open-source models are out there for everyone to use, with licensing accessible even for cutting-edge models. Nonetheless, AI development benefits from scale economies and, as a result, is geographically clustered as many significant inputs are concentrated and don’t cross borders that easily….

Rapidly accumulating pools of data in digital economies around the world are clearly one of the critical drivers of AI development. In 2019, we introduced the idea of “gross data product” of countries determined by the volume, complexity, and accessibility of data consumed alongside the number of active internet users in the country. For this analysis, we recognized that gross data product is an essential asset for AI development — especially for generative AI, which requires massive, diverse datasets — and updated the 2019 analyses as a foundation, adding drivers that are critical for AI development overall. That essential data layer makes the index introduced here distinct from other indicators of AI “vibrancy” or measures of global investments, innovations, and implementation of AI…(More)”.
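
As a hedged sketch of how a composite index of this kind can be assembled (the drivers, scaling, and figures below are invented for illustration and are not the authors’ methodology), each driver is first normalized to a common scale, and a geometric mean ensures that weakness on any single driver, such as data accessibility, drags the overall score down:

```python
# Invented illustration of a composite "data layer" score; not the
# authors' gross data product formula.
import math

def normalize(value: float, lo: float, hi: float) -> float:
    """Min-max scale a raw driver onto [0, 1]."""
    return (value - lo) / (hi - lo)

def data_layer_score(drivers: dict[str, float],
                     bounds: dict[str, tuple[float, float]]) -> float:
    """Geometric mean of normalized drivers: a near-zero driver
    (e.g., largely inaccessible data) pulls the whole score toward zero."""
    scaled = [normalize(v, *bounds[k]) for k, v in drivers.items()]
    return math.prod(scaled) ** (1 / len(scaled))

# Hypothetical country figures (all made up for the example).
drivers = {"volume_exabytes": 7.2, "complexity": 0.6,
           "accessibility": 0.4, "internet_users_bn": 0.8}
bounds = {"volume_exabytes": (0.0, 10.0), "complexity": (0.0, 1.0),
          "accessibility": (0.0, 1.0), "internet_users_bn": (0.0, 1.5)}

print(round(data_layer_score(drivers, bounds), 2))  # -> 0.55
```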

Measuring Global Migration: Towards Better Data for All


Book by Frank Laczko, Elisa Mosler Vidal, Marzia Rango: “This book focuses on how to improve the collection, analysis and responsible use of data on global migration and international mobility. While migration remains a topic of great policy interest for governments around the world, there is a serious lack of reliable, timely, disaggregated and comparable data on it, and often insufficient safeguards to protect migrants’ information. Meanwhile, vast amounts of data about the movement of people are being generated in real time due to new technologies, but these have not yet been fully captured and utilized by migration policymakers, who often do not have enough data to inform their policies and programmes. The lack of migration data has been internationally recognized; the Global Compact for Safe, Orderly and Regular Migration urges all countries to improve data on migration to ensure that policies and programmes are “evidence-based”, but does not spell out how this could be done.

This book examines both the technical issues associated with improving data on migration and the wider political challenges of how countries manage the collection and use of migration data. The first part of the book discusses how much we really know about international migration based on existing data, and key concepts and approaches which are often used to measure migration. The second part of the book examines what measures could be taken to improve migration data, highlighting examples of good practice from around the world in recent years, across a range of different policy areas, such as health, climate change and sustainable development more broadly.

Written by leading experts on international migration data, this book is the perfect guide for students, policymakers and practitioners looking to understand more about the existing evidence base on migration and what can be done to improve it…(More)”. (See also: Big Data For Migration Alliance).

New group aims to professionalize AI auditing


Article by Louise Matsakis: “The newly formed International Association of Algorithmic Auditors (IAAA) is hoping to professionalize the sector by creating a code of conduct for AI auditors, training curriculums, and eventually, a certification program.

Over the last few years, lawmakers and researchers have repeatedly proposed the same solution for regulating artificial intelligence: require independent audits. But the industry remains a wild west; there are only a handful of reputable AI auditing firms and no established guardrails for how they should conduct their work.

Yet several jurisdictions, including New York City, have passed laws requiring tech firms to commission independent audits. The idea is that AI firms should have to demonstrate their algorithms work as advertised, the same way companies need to prove they haven’t fudged their finances.

Since ChatGPT was released last year, a troubling norm has taken hold in the AI industry: that it’s perfectly acceptable to evaluate your own models in-house.

Leading startups like OpenAI and Anthropic regularly publish research about the AI systems they’re developing, including the potential risks. But they rarely commission independent audits, let alone publish the results, making it difficult for anyone to know what’s really happening under the hood…(More)”.