What Will AI Do to Elections?


Article by Rishi Iyengar: “…Requests to X’s press team on how the platform was preparing for elections in 2024 yielded an automated response: “Busy now, please check back later”—a slight improvement from the initial Musk-era change where the auto-reply was a poop emoji.

X isn’t the only major social media platform with fewer content moderators. Meta, which owns Facebook, Instagram, and WhatsApp, has laid off more than 20,000 employees since November 2022—several of whom worked on trust and safety—while many YouTube employees working on misinformation policy were impacted by layoffs at parent company Google.

There could scarcely be a worse time to skimp on combating harmful content online. More than 50 countries, including the world’s three biggest democracies and Taiwan, an increasingly precarious geopolitical hot spot, are expected to hold national elections in 2024. Seven of the world’s 10 most populous countries—Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States—will collectively send a third of the world’s population to the polls.

Elections, with their emotionally charged and often tribal dynamics, are where misinformation missteps come home to roost. If social media misinformation is the equivalent of yelling “fire” in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.

Katie Harbath prefers a different analogy, one that illustrates how nebulous and thorny the issues are and the sheer uncertainty surrounding them. “The metaphor I keep using is a kaleidoscope because there’s so many different aspects to this but depending how you turn the kaleidoscope, the pattern changes of what it’s going to look like,” she said in an interview in October. “And that’s how I feel about life post-2024. … I don’t know where in the kaleidoscope it’s going to land.”

Harbath has become something of an election whisperer to the tech industry, having spent a decade at Facebook from 2011 building the company’s election integrity efforts from scratch. She left in 2021 and founded Anchor Change, a public policy consulting firm that helps other platforms combat misinformation and prepare for elections in particular.

Had she been in her old job, Harbath said, her team would have completed risk assessments of global elections by late 2022 or early 2023 and then spent the rest of the year tailoring Meta’s products to them as well as setting up election “war rooms” where necessary. “Right now, we would be starting to move into execution mode.” She cautions against treating the resources that companies are putting into election integrity as a numbers game—“once you build some of those tools, maintaining them doesn’t take as many people”—but acknowledges that the allocation of resources reveals a company leadership’s priorities.

The companies insist they remain committed to election integrity. YouTube has “heavily invested in the policies and systems that help us successfully support elections around the world,” spokesperson Ivy Choi said in a statement. TikTok said it has a total of 40,000 safety professionals and works with 16 fact-checking organizations across 50 global languages. Meta declined to comment for this story, but a company representative directed Foreign Policy to a recent blog post by Nick Clegg, a former U.K. deputy prime minister who now serves as Meta’s head of global affairs. “We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” Clegg wrote in the post.

But there are other troubling signs. YouTube announced last June that it would stop taking down content spreading false claims about the 2020 U.S. election or past elections, and Meta quietly made a similar policy change to its political ad rules in 2022. And as precedent has shown, the platforms tend to have even less coverage outside the West, with major blind spots in local languages and context making misinformation and hate speech not only more pervasive but also more dangerous…(More)”.

How can Mixed Reality and AI improve emergency medical care?


Springwise: “Mixed reality (MR) refers to technologies that create immersive computer-generated environments in which parts of the physical and virtual environment are combined. With potential applications that range from education and engineering to entertainment, the market for MR is forecast to record revenues of just under $25 billion by 2032. Now, in a ground-breaking partnership, Singapore-based company Mediwave is teaming up with Sri Lanka’s 1990 Suwa Seriya to deploy MR and artificial intelligence (AI) to create a fully connected ambulance.

1990 Suwa Seriya is Sri Lanka’s national pre-hospital emergency ambulance service, which boasts response times that surpass even those of some services in developed countries. The innovative ambulance it has deployed uses Mediwave’s integrated Emergency Response Suite, which combines the latest communications equipment with internet-of-things (IoT) and augmented reality (AR) capabilities to enhance the efficiency of the emergency response ecosystem.

The connected ambulance ensures swift response times and digitises critical processes, while specialised care can be provided remotely through a Microsoft HoloLens. The technology enables Emergency Medical Technicians (EMTs) – staff who man ambulances in Sri Lanka – to connect with physicians at the Emergency Command and Control Centre. These physicians help the EMTs provide care during the so-called ‘golden hour’ of medical emergencies – the concept that rapid clinical investigation and care within 60 minutes of a traumatic injury is essential for a positive patient outcome…

Other applications of extended reality in the Springwise library include holograms that are used to train doctors, virtual environments for treating phobias, and an augmented reality contact lens…(More)”.

Technology, Data and Elections: An Updated Checklist on the Election Cycle


Checklist by Privacy International: “In the last few years, electoral processes and related activities have undergone significant changes, driven by the development of digital technologies.

The use of personal data has redefined political campaigning and enabled the proliferation of political advertising tailor-made for audiences sharing specific characteristics or personalised to the individual. These new practices, combined with the platforms that enable them, create an environment that facilitates the manipulation of opinion and, in some cases, the exclusion of voters.

In parallel, governments are continuing to invest in modern infrastructure that is inherently data-intensive. Several states are turning to biometric voter registration and verification technologies, ostensibly to curtail fraud and vote manipulation. This modernisation often results in the development of nationwide databases containing masses of sensitive personal information, which require heightened safeguards and protection.

The number and nature of actors involved in the election process are also changing, as are the relationships between electoral stakeholders. The introduction of new technologies, for example for purposes of voter registration and verification, often goes hand-in-hand with the involvement of private companies, a costly investment that is not without risk and requires robust safeguards to avoid abuse.

This new electoral landscape comes with many challenges that must be addressed in order to protect free and fair elections: a fact that is increasingly recognised by policymakers and regulatory bodies…(More)”.

Charting the Emerging Geography of AI


Article by Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi: “Given the high stakes of this race, which countries are in the lead? Which are gaining on the leaders? How might this hierarchy shape the future of AI? Identifying AI-leading countries is not straightforward, as data, knowledge, algorithms, and models can, in principle, cross borders. Even the U.S.–China rivalry is complicated by the fact that AI researchers from the two countries cooperate — and more so than researchers from any other pair of countries. Open-source models are out there for everyone to use, with licensing accessible even for cutting-edge models. Nonetheless, AI development benefits from scale economies and, as a result, is geographically clustered as many significant inputs are concentrated and don’t cross borders that easily….

Rapidly accumulating pools of data in digital economies around the world are clearly one of the critical drivers of AI development. In 2019, we introduced the idea of “gross data product” of countries determined by the volume, complexity, and accessibility of data consumed alongside the number of active internet users in the country. For this analysis, we recognized that gross data product is an essential asset for AI development — especially for generative AI, which requires massive, diverse datasets — and updated the 2019 analyses as a foundation, adding drivers that are critical for AI development overall. That essential data layer makes the index introduced here distinct from other indicators of AI “vibrancy” or measures of global investments, innovations, and implementation of AI…(More)”.
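The index construction described above can be sketched as a toy calculation. This is purely illustrative: the four drivers come from the excerpt, but the equal weighting, the min-max normalisation, and all the country figures are assumptions for the sketch, not the authors' actual methodology.

```python
# Hypothetical sketch of a "gross data product"-style composite index.
# Drivers (volume, complexity, accessibility, active users) follow the
# article; weighting, scaling, and figures are illustrative assumptions.

def normalise(values):
    """Min-max scale a dict of country -> raw value into [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all values match
    return {c: (v - lo) / span for c, v in values.items()}

def data_index(drivers, weights=None):
    """Combine per-driver country scores into one composite score."""
    names = list(drivers)
    weights = weights or {n: 1.0 / len(names) for n in names}  # equal weights
    scaled = {n: normalise(drivers[n]) for n in names}
    countries = drivers[names[0]].keys()
    return {c: sum(weights[n] * scaled[n][c] for n in names)
            for c in countries}

# Made-up figures for three hypothetical countries A, B, C:
drivers = {
    "volume":        {"A": 90, "B": 60, "C": 30},
    "complexity":    {"A": 70, "B": 80, "C": 40},
    "accessibility": {"A": 50, "B": 90, "C": 70},
    "active_users":  {"A": 85, "B": 65, "C": 55},
}

scores = data_index(drivers)
```

With these toy inputs, country B edges out A because it leads on two of the four drivers; the point is only that the ranking depends on both the driver values and the chosen weights.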

Measuring Global Migration: Towards Better Data for All


Book by Frank Laczko, Elisa Mosler Vidal, Marzia Rango: “This book focuses on how to improve the collection, analysis and responsible use of data on global migration and international mobility. While migration remains a topic of great policy interest for governments around the world, there is a serious lack of reliable, timely, disaggregated and comparable data on it, and often insufficient safeguards to protect migrants’ information. Meanwhile, vast amounts of data about the movement of people are being generated in real time due to new technologies, but these have not yet been fully captured and utilized by migration policymakers, who often do not have enough data to inform their policies and programmes. The lack of migration data has been internationally recognized; the Global Compact for Safe, Orderly and Regular Migration urges all countries to improve data on migration to ensure that policies and programmes are “evidence-based”, but does not spell out how this could be done.

This book examines both the technical issues associated with improving data on migration and the wider political challenges of how countries manage the collection and use of migration data. The first part of the book discusses how much we really know about international migration based on existing data, and key concepts and approaches which are often used to measure migration. The second part of the book examines what measures could be taken to improve migration data, highlighting examples of good practice from around the world in recent years, across a range of different policy areas, such as health, climate change and sustainable development more broadly.

Written by leading experts on international migration data, this book is the perfect guide for students, policymakers and practitioners looking to understand more about the existing evidence base on migration and what can be done to improve it…(More)”. (See also: Big Data For Migration Alliance).

A synthesis of evidence for policy from behavioral science during COVID-19


Paper by Kai Ruggeri et al: “Scientific evidence regularly guides policy decisions, with behavioural science increasingly part of this process. In April 2020, an influential paper proposed 19 policy recommendations (‘claims’) detailing how evidence from behavioural science could contribute to efforts to reduce impacts and end the COVID-19 pandemic. Here we assess 747 pandemic-related research articles that empirically investigated those claims. We report the scale of evidence and whether evidence supports them to indicate applicability for policymaking. Two independent teams, involving 72 reviewers, found evidence for 18 of 19 claims, with both teams finding evidence supporting 16 (89%) of those 18 claims. The strongest evidence supported claims that anticipated culture, polarization and misinformation would be associated with policy effectiveness. Claims suggesting that trusted leaders and positive social norms increased adherence to behavioural interventions also had strong empirical support, as did appealing to social consensus or bipartisan agreement. Targeted language in messaging yielded mixed effects, and there were no effects for highlighting individual benefits or protecting others. No available evidence existed to assess any distinct differences in effects between using the terms ‘physical distancing’ and ‘social distancing’. Analysis of the 463 papers containing data showed generally large samples: 418 involved human participants, with a mean sample size of 16,848 (median 1,699). This statistical power underscores the improved suitability of behavioural science research for informing policy decisions. Furthermore, by implementing a standardized approach to evidence selection and synthesis, we highlight broader implications for advancing scientific evidence in policy formulation and prioritization…(More)”

Digital Epidemiology after COVID-19: impact and prospects


Paper by Sara Mesquita, Lília Perfeito, Daniela Paolotti, and Joana Gonçalves-Sá: “Epidemiology and Public Health have increasingly relied on structured and unstructured data, collected inside and outside of typical health systems, to study, identify, and mitigate diseases at the population level. Focusing on infectious disease, we review how Digital Epidemiology (DE) was at the beginning of 2020 and how it was changed by the COVID-19 pandemic, in both nature and breadth. We argue that DE will become a progressively useful tool as long as its potential is recognized and its risks are minimized. Therefore, we expand on the current views and present a new definition of DE that, by highlighting the statistical nature of the datasets, helps in identifying possible biases. We offer some recommendations to reduce inequity and threats to privacy and argue in favour of complex multidisciplinary approaches to tackling infectious diseases…(More)”

The case for adaptive and end-to-end policy management


Article by Pia Andrews: “Why should we reform how we do policy? Simple. Because the gap between policy design and delivery has become the biggest barrier to delivering good public services and policy outcomes and is a challenge most public servants experience daily, directly or indirectly.

This gap hasn’t always existed: policy design and delivery were separated as part of the New Public Management reforms in the ’90s. When you also consider the accelerating rate of change, the increasing cadence of emergencies, and the massive speed and scale of new technologies, you could argue that end-to-end policy reform is our most urgent problem to solve.

Policy teams globally have been exploring new design methods like human-centred design, test-driven iteration (agile), and multi-disciplinary teams that get policy end users in the room (eg, NSW Policy Lab). There has also been an increased focus on improving policy evaluation across the world (eg, the Australian Centre for Evaluation). In both cases, I’m delighted to see innovative approaches being normalised across the policy profession, but it has become obvious that improving design and/or evaluation is still far from sufficient to drive better (or more humane) policy outcomes in an ever-changing world. It is not only the current systemic inability to detect and respond to unintended consequences that emerge but the lack of policy agility that perpetuates issues even long after they might be identified.

Below I outline four current challenges for policy management and a couple of potential solutions, as something of a discussion starter.

Problem 1) The separation of (and mutual incomprehension between) policy design, delivery and the public

The lack of multi-disciplinary policy design, combined with a set-and-forget approach to policy, combined with delivery teams being left to interpret policy instructions without support, combined with a gap and interpretation inconsistency between policy modelling systems and policy delivery systems, all combined with a lack of feedback loops in improving policy over time, has led to a series of black holes throughout the process. Tweaking the process as it currently stands will not fix the black holes. We need a more holistic model for policy design, delivery and management…(More)”.

After USTR’s Move, Global Governance of Digital Trade Is Fraught with Unknowns


Article by Patrick Leblond: “On October 25, the United States announced at the World Trade Organization (WTO) that it was dropping its support for provisions meant to promote the free flow of data across borders. Also abandoned were efforts, within the continuing negotiations on international e-commerce (the so-called Joint Statement Initiative process), to protect the source code in applications and algorithms.

According to the Office of the US Trade Representative (USTR): “In order to provide enough policy space for those debates to unfold, the United States has removed its support for proposals that might prejudice or hinder those domestic policy considerations.” In other words, the domestic regulation of data, privacy, artificial intelligence, online content and the like, seems to have taken precedence over unhindered international digital trade, which the United States previously strongly defended in trade agreements such as the Trans-Pacific Partnership (TPP) and the Canada-United States-Mexico Agreement (CUSMA)…

One pathway for the future sees the digital governance noodle bowl getting bigger and messier. In this scenario, international digital trade suffers. Agreements continue proliferating but remain ineffective at fostering cross-border digital trade: either they remain hortatory with attempts at cooperation on non-strategic issues, or no one pays attention to the binding provisions because business can’t keep up and governments want to retain their “policy space.” After all, why has there not yet been any dispute launched based on binding provisions in a digital trade agreement (either on its own or as part of a larger trade deal) when there has been increasing digital fragmentation?

The other pathway leads to the creation of a new international standards-setting and governance body (call it an International Digital Standards Board), like those that exist for banking and finance. Countries that are members of such an international organization and effectively apply the commonly agreed standards become part of a single digital area where they can conduct cross-border digital trade without impediments. This is the only way to realize the G7’s “data free flow with trust” vision, originally proposed by Japan…(More)”.

Steering Responsible AI: A Case for Algorithmic Pluralism


Paper by Stefaan G. Verhulst: “In this paper, I examine questions surrounding AI neutrality through the prism of existing literature and scholarship about mediation and media pluralism. Such traditions, I argue, provide a valuable theoretical framework for how we should approach the (likely) impending era of AI mediation. In particular, I suggest examining further the notion of algorithmic pluralism. Contrasting this notion with the dominant idea of algorithmic transparency, I seek to describe what algorithmic pluralism may be, and present both its opportunities and challenges. Implemented thoughtfully and responsibly, I argue, algorithmic or AI pluralism has the potential to sustain the diversity, multiplicity, and inclusiveness that are so vital to democracy…(More)”.