Rebalancing AI


Article by Daron Acemoglu and Simon Johnson: “Optimistic forecasts regarding the growth implications of AI abound. AI adoption could boost productivity growth by 1.5 percentage points per year over a 10-year period and raise global GDP by 7 percent ($7 trillion in additional output), according to Goldman Sachs. Industry insiders offer even more excited estimates, including a supposed 10 percent chance of an “explosive growth” scenario, with global output rising more than 30 percent a year.

All this techno-optimism draws on the “productivity bandwagon”: a deep-rooted belief that technological change—including automation—drives higher productivity, which raises net wages and generates shared prosperity.

Such optimism is at odds with the historical record and seems particularly inappropriate for the current path of “just let AI happen,” which focuses primarily on automation (replacing people). We must recognize that there is no singular, inevitable path of development for new technology. And, assuming that the goal is to sustainably improve economic outcomes for more people, what policies would put AI development on the right path, with greater focus on enhancing what all workers can do?…(More)”

What Will AI Do to Elections?


Article by Rishi Iyengar: “…Requests to X’s press team on how the platform was preparing for elections in 2024 yielded an automated response: “Busy now, please check back later”—a slight improvement from the initial Musk-era change where the auto-reply was a poop emoji.

X isn’t the only major social media platform with fewer content moderators. Meta, which owns Facebook, Instagram, and WhatsApp, has laid off more than 20,000 employees since November 2022—several of whom worked on trust and safety—while many YouTube employees working on misinformation policy were impacted by layoffs at parent company Google.

There could scarcely be a worse time to skimp on combating harmful content online. More than 50 countries, including the world’s three biggest democracies and Taiwan, an increasingly precarious geopolitical hot spot, are expected to hold national elections in 2024. Seven of the world’s 10 most populous countries—Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States—will collectively send a third of the world’s population to the polls.

Elections, with their emotionally charged and often tribal dynamics, are where misinformation missteps come home to roost. If social media misinformation is the equivalent of yelling “fire” in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.

Katie Harbath prefers a different analogy, one that illustrates how nebulous and thorny the issues are and the sheer uncertainty surrounding them. “The metaphor I keep using is a kaleidoscope because there’s so many different aspects to this but depending how you turn the kaleidoscope, the pattern changes of what it’s going to look like,” she said in an interview in October. “And that’s how I feel about life post-2024. … I don’t know where in the kaleidoscope it’s going to land.”

Harbath has become something of an election whisperer to the tech industry, having spent a decade at Facebook from 2011 building the company’s election integrity efforts from scratch. She left in 2021 and founded Anchor Change, a public policy consulting firm that helps other platforms combat misinformation and prepare for elections in particular.

Had she been in her old job, Harbath said, her team would have completed risk assessments of global elections by late 2022 or early 2023 and then spent the rest of the year tailoring Meta’s products to them as well as setting up election “war rooms” where necessary. “Right now, we would be starting to move into execution mode.” She cautions against treating the resources that companies are putting into election integrity as a numbers game—“once you build some of those tools, maintaining them doesn’t take as many people”—but acknowledges that the allocation of resources reveals a company leadership’s priorities.

The companies insist they remain committed to election integrity. YouTube has “heavily invested in the policies and systems that help us successfully support elections around the world,” spokesperson Ivy Choi said in a statement. TikTok said it has a total of 40,000 safety professionals and works with 16 fact-checking organizations across 50 global languages. Meta declined to comment for this story, but a company representative directed Foreign Policy to a recent blog post by Nick Clegg, a former U.K. deputy prime minister who now serves as Meta’s head of global affairs. “We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” Clegg wrote in the post.

But there are other troubling signs. YouTube announced last June that it would stop taking down content spreading false claims about the 2020 U.S. election or past elections, and Meta quietly made a similar policy change to its political ad rules in 2022. And as past precedent has shown, the platforms tend to have even less cover outside the West, with major blind spots in local languages and context making misinformation and hate speech not only more pervasive but also more dangerous…(More)”.

The 2010 Census Confidentiality Protections Failed, Here’s How and Why


Paper by John M. Abowd, et al: “Using only 34 published tables, we reconstruct five variables (census block, sex, age, race, and ethnicity) in the confidential 2010 Census person records. Using the 38-bin age variable tabulated at the census block level, at most 20.1% of reconstructed records can differ from their confidential source on even a single value for these five variables. Using only published data, an attacker can verify that all records in 70% of all census blocks (97 million people) are perfectly reconstructed. The tabular publications in Summary File 1 thus have prohibited disclosure risk similar to the unreleased confidential microdata. Reidentification studies confirm that an attacker can, within blocks with perfect reconstruction accuracy, correctly infer the actual census response on race and ethnicity for 3.4 million vulnerable population uniques (persons with nonmodal characteristics) with 95% accuracy, the same precision as the confidential data achieve and far greater than statistical baselines. The flaw in the 2010 Census framework was the assumption that aggregation prevented accurate microdata reconstruction, justifying weaker disclosure limitation methods than were applied to 2010 Census public microdata. The framework used for 2020 Census publications defends against attacks that are based on reconstruction, as we also demonstrate here. Finally, we show that alternatives to the 2020 Census Disclosure Avoidance System with similar accuracy (enhanced swapping) also fail to protect confidentiality, and those that partially defend against reconstruction attacks (incomplete suppression implementations) destroy the primary statutory use case: data for redistricting all legislatures in the country in compliance with the 1965 Voting Rights Act…(More)”.
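The reconstruction logic the paper describes can be illustrated on a toy census block. The sketch below is purely illustrative and vastly simplified: the block, attribute bins, and published tables are all invented, and the real attack solves the analogous constraint problem over 34 Summary File 1 tables and millions of blocks. The point it demonstrates is the paper's core one: when enough block-level marginal tables are published, often only one set of person records is consistent with all of them.

```python
from itertools import combinations_with_replacement
from collections import Counter

SEXES = ["F", "M"]
AGE_BINS = ["0-17", "18-64", "65+"]
RECORD_TYPES = [(s, a) for s in SEXES for a in AGE_BINS]

# Hypothetical published tables for one toy block of 3 residents
# (stand-ins for real tables such as Summary File 1's sex-by-age table).
published = {
    "total": 3,
    "by_sex": {"F": 1, "M": 2},
    "by_age": {"0-17": 1, "18-64": 2},
    "male_by_age": {"0-17": 1, "18-64": 1},  # finer table: age bin within sex
}

def consistent(block):
    """Check a candidate multiset of person records against every published table."""
    sex_counts = Counter(s for s, _ in block)
    age_counts = Counter(a for _, a in block)
    male_ages = Counter(a for s, a in block if s == "M")
    return (sex_counts == Counter(published["by_sex"])
            and age_counts == Counter(published["by_age"])
            and male_ages == Counter(published["male_by_age"]))

# Enumerate every possible multiset of records of the right size and keep
# those that reproduce all published tables exactly.
solutions = [block
             for block in combinations_with_replacement(RECORD_TYPES,
                                                        published["total"])
             if consistent(block)]

print(solutions)  # exactly one solution => the block is perfectly reconstructed
```

Here the male-by-age table pins down both male records, so the remaining age count forces the female record: the solution set has size one, and the block's microdata are fully recovered from aggregates alone. Scaled up, this is why the paper argues that aggregation by itself was never a disclosure safeguard.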

How can Mixed Reality and AI improve emergency medical care?


Springwise: “Mixed reality (MR) refers to technologies that create immersive computer-generated environments in which parts of the physical and virtual environment are combined. With potential applications that range from education and engineering to entertainment, the market for MR is forecast to record revenues of just under $25 billion by 2032. Now, in a ground-breaking partnership, Singapore-based company Mediwave is teaming up with Sri Lanka’s 1990 Suwa Seriya to deploy MR and artificial intelligence (AI) to create a fully connected ambulance.

1990 Suwa Seriya is Sri Lanka’s national pre-hospital emergency ambulance service, which boasts response times that surpass even some services in developed countries. The innovative ambulance it has deployed uses Mediwave’s integrated Emergency Response Suite, which combines the latest communications equipment with internet-of-things (IoT) and AR capabilities to enhance the efficiency of the emergency response ecosystem.

The connected ambulance ensures swift response times and digitises critical processes, while specialised care can be provided remotely through a Microsoft HoloLens. The technology enables Emergency Medical Technicians (EMTs) – staff who man ambulances in Sri Lanka – to connect with physicians at the Emergency Command and Control Centre. These physicians help the EMTs provide care during the so-called ‘golden hour’ of medical emergencies – the concept that rapid clinical investigation and care within 60 minutes of a traumatic injury is essential for a positive patient outcome…

Other applications of extended reality in the Springwise library include holograms that are used to train doctors, virtual environments for treating phobias, and an augmented reality contact lens…(More)”.

Technology, Data and Elections: An Updated Checklist on the Election Cycle


Checklist by Privacy International: “In the last few years, electoral processes and related activities have undergone significant changes, driven by the development of digital technologies.

The use of personal data has redefined political campaigning and enabled the proliferation of political advertising tailor-made for audiences sharing specific characteristics or personalised to the individual. These new practices, combined with the platforms that enable them, create an environment that facilitates the manipulation of opinion and, in some cases, the exclusion of voters.

In parallel, governments are continuing to invest in modern infrastructure that is inherently data-intensive. Several states are turning to biometric voter registration and verification technologies ostensibly to curtail fraud and vote manipulation. This modernisation often results in the development of nationwide databases containing masses of personal, sensitive information, that require heightened safeguards and protection.

The number and nature of actors involved in the election process is also changing, and so are the relationships between electoral stakeholders. The introduction of new technologies, for example for purposes of voter registration and verification, often goes hand-in-hand with the involvement of private companies, a costly investment that is not without risk and requires robust safeguards to avoid abuse.

This new electoral landscape comes with many challenges that must be addressed in order to protect free and fair elections: a fact that is increasingly recognised by policymakers and regulatory bodies…(More)”.

Uses and Purposes of Beneficial Ownership Data


Report by Andres Knobel: “This report describes more than 30 uses and purposes of beneficial ownership data (beyond anti-money laundering) for a whole-of-government approach. It covers 5 cases for exposing corruption, 6 cases for protecting democracy, the rule of law and national assets, 9 cases for exposing tax abuse, 4 cases for exposing fraud and administrative violations, 3 cases for protecting the environment and mitigating climate change, 5 cases for ensuring fair market conditions and 4 cases for creating fairer societies…(More)”.

The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now


Book by Hilke Schellmann: “Based on exclusive information from whistleblowers, internal documents, and real-world test results, Emmy-award-winning Wall Street Journal contributor Hilke Schellmann delivers a shocking and illuminating exposé on the next civil rights issue of our time: how AI has already taken over the workplace and shapes our future.
 

Hilke Schellmann is an Emmy-award-winning investigative reporter, Wall Street Journal and Guardian contributor, and journalism professor at NYU. In The Algorithm, she investigates the rise of artificial intelligence (AI) in the world of work. AI is now being used to decide who has access to an education, who gets hired, who gets fired, and who receives a promotion. Drawing on exclusive information from whistleblowers, internal documents and real‑world tests, Schellmann discovers that many of the algorithms making high‑stakes decisions are biased, racist, and do more harm than good. Algorithms are on the brink of dominating our lives and threaten our human future—if we don’t fight back. 
 
Schellmann takes readers on a journalistic detective story testing algorithms that have secretly analyzed job candidates’ facial expressions and tone of voice. She investigates algorithms that scan our online activity including Twitter and LinkedIn to construct personality profiles à la Cambridge Analytica. Her reporting reveals how employers track the location of their employees, the keystrokes they make, access everything on their screens and, during meetings, analyze group discussions to diagnose problems in a team. Even universities are now using predictive analytics for admission offers and financial aid…(More)”

Charting the Emerging Geography of AI


Article by Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi: “Given the high stakes of this race, which countries are in the lead? Which are gaining on the leaders? How might this hierarchy shape the future of AI? Identifying AI-leading countries is not straightforward, as data, knowledge, algorithms, and models can, in principle, cross borders. Even the U.S.–China rivalry is complicated by the fact that AI researchers from the two countries cooperate — and more so than researchers from any other pair of countries. Open-source models are out there for everyone to use, with licensing accessible even for cutting-edge models. Nonetheless, AI development benefits from scale economies and, as a result, is geographically clustered as many significant inputs are concentrated and don’t cross borders that easily….

Rapidly accumulating pools of data in digital economies around the world are clearly one of the critical drivers of AI development. In 2019, we introduced the idea of “gross data product” of countries determined by the volume, complexity, and accessibility of data consumed alongside the number of active internet users in the country. For this analysis, we recognized that gross data product is an essential asset for AI development — especially for generative AI, which requires massive, diverse datasets — and updated the 2019 analyses as a foundation, adding drivers that are critical for AI development overall. That essential data layer makes the index introduced here distinct from other indicators of AI “vibrancy” or measures of global investments, innovations, and implementation of AI…(More)”.
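The article does not publish the formula behind its index, so the following is only a loose sketch of one standard way such composite indices are built: min-max normalise each driver across countries, then average the normalised scores. Every country name, driver name, and number below is invented for illustration.

```python
# Hypothetical raw driver values per country (not real measurements):
# data volume, data complexity, data accessibility, active internet users (millions).
drivers = {
    "CountryA": {"volume": 9.0, "complexity": 7.0, "accessibility": 8.0, "users_m": 300},
    "CountryB": {"volume": 6.0, "complexity": 9.0, "accessibility": 5.0, "users_m": 800},
    "CountryC": {"volume": 3.0, "complexity": 4.0, "accessibility": 9.0, "users_m": 100},
}

def minmax(values):
    """Rescale a list of values to [0, 1] across countries."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

names = list(drivers)
keys = ["volume", "complexity", "accessibility", "users_m"]

# Normalise each driver across countries, then average the four scores
# with equal weights (the real index may weight drivers differently).
normalised = {k: minmax([drivers[n][k] for n in names]) for k in keys}
index = {n: sum(normalised[k][i] for k in keys) / len(keys)
         for i, n in enumerate(names)}
ranking = sorted(index, key=index.get, reverse=True)
print(ranking)
```

Equal weighting and min-max scaling are only one design choice; geometric means or expert-set weights are common alternatives, and the rescaling step is why a country can lead on raw user numbers yet trail the composite.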

Measuring Global Migration: Towards Better Data for All


Book by Frank Laczko, Elisa Mosler Vidal, Marzia Rango: “This book focuses on how to improve the collection, analysis and responsible use of data on global migration and international mobility. While migration remains a topic of great policy interest for governments around the world, there is a serious lack of reliable, timely, disaggregated and comparable data on it, and often insufficient safeguards to protect migrants’ information. Meanwhile, vast amounts of data about the movement of people are being generated in real time due to new technologies, but these have not yet been fully captured and utilized by migration policymakers, who often do not have enough data to inform their policies and programmes. The lack of migration data has been internationally recognized; the Global Compact for Safe, Orderly and Regular Migration urges all countries to improve data on migration to ensure that policies and programmes are “evidence-based”, but does not spell out how this could be done.

This book examines both the technical issues associated with improving data on migration and the wider political challenges of how countries manage the collection and use of migration data. The first part of the book discusses how much we really know about international migration based on existing data, and key concepts and approaches which are often used to measure migration. The second part of the book examines what measures could be taken to improve migration data, highlighting examples of good practice from around the world in recent years, across a range of different policy areas, such as health, climate change and sustainable development more broadly.

Written by leading experts on international migration data, this book is the perfect guide for students, policymakers and practitioners looking to understand more about the existing evidence base on migration and what can be done to improve it…(More)”. (See also: Big Data For Migration Alliance).

New group aims to professionalize AI auditing


Article by Louise Matsakis: “The newly formed International Association of Algorithmic Auditors (IAAA) is hoping to professionalize the sector by creating a code of conduct for AI auditors, training curriculums, and eventually, a certification program.

Over the last few years, lawmakers and researchers have repeatedly proposed the same solution for regulating artificial intelligence: require independent audits. But the industry remains a wild west; there are only a handful of reputable AI auditing firms and no established guardrails for how they should conduct their work.

Yet several jurisdictions, including New York City, have passed laws mandating that tech firms commission independent audits. The idea is that AI firms should have to demonstrate their algorithms work as advertised, the same way companies need to prove they haven’t fudged their finances.

Since ChatGPT was released last year, a troubling norm has taken hold in the AI industry: that it’s perfectly acceptable to evaluate your own models in-house.

Leading startups like OpenAI and Anthropic regularly publish research about the AI systems they’re developing, including the potential risks. But they rarely commission independent audits, let alone publish the results, making it difficult for anyone to know what’s really happening under the hood…(More)”..(More)”