From Digital Sovereignty to Digital Agency


Article by Akash Kapur: “In recent years, governments have increasingly pursued variants of digital sovereignty to regulate and control the global digital ecosystem. The pursuit of AI sovereignty represents the latest iteration in this quest. 

Digital sovereignty may offer certain benefits, but it also poses undeniable risks, including the possibility of undermining the very goals of autonomy and self-reliance that nations are seeking. These risks are particularly pronounced for smaller nations with less capacity, which might do better in a revamped, more inclusive, multistakeholder system of digital governance. 

Organizing digital governance around agency rather than sovereignty offers the possibility of such a system. Rather than reinforce the primacy of nations, digital agency asserts the rights, priorities, and needs not only of sovereign governments but also of the constituent parts—the communities and individuals—they purport to represent.

Three cross-cutting principles underlie the concept of digital agency: recognizing stakeholder multiplicity, enhancing the latent possibilities of technology, and promoting collaboration. These principles lead to three action areas that offer a guide for digital policymakers: reinventing institutions, enabling edge technologies, and building human capacity to ensure technical capacity…(More)”.

What is ‘sovereign AI’ and why is the concept so appealing (and fraught)?


Article by John Letzing: “Denmark unveiled its own artificial intelligence supercomputer last month, funded by the proceeds of wildly popular Danish weight-loss drugs like Ozempic. It’s now one of several sovereign AI initiatives underway, which one CEO believes can “codify” a country’s culture, history, and collective intelligence – and become “the bedrock of modern economies.”

That particular CEO, Jensen Huang, happens to run a company selling the sort of chips needed to pursue sovereign AI – that is, to construct a domestic vintage of the technology, informed by troves of homegrown data and powered by the computing infrastructure necessary to turn that data into a strategic reserve of intellect…

It’s not surprising that countries are forging expansive plans to put their own stamp on AI. But big-ticket supercomputers and other costly resources aren’t feasible everywhere.

Training a large language model has gotten a lot more expensive lately; the funds required for the necessary hardware, energy, and staff may soon top $1 billion. Meanwhile, geopolitical friction over access to the advanced chips necessary for powerful AI systems could further warp the global playing field.

Even for countries with abundant resources and access, there are “sovereignty traps” to consider. Governments pushing ahead on sovereign AI could risk undermining global cooperation meant to ensure the technology is put to use in transparent and equitable ways. That might make it a lot less safe for everyone.

An example: a place using AI systems trained on a local set of values for its security may readily flag behaviour out of sync with those values as a threat…(More)”.

How public-private partnerships can ensure ethical, sustainable and inclusive AI development


Article by Rohan Sharma: “Artificial intelligence (AI) has the potential to solve some of today’s most pressing societal challenges – from climate change to healthcare disparities – but it could also exacerbate existing inequalities if not developed and deployed responsibly.

The rapid pace of AI development, growing awareness of AI’s societal impact and the urgent need to harness AI for positive change make bridging the ‘AI divide’ essential now. Public-private partnerships (PPPs) can play a crucial role in ensuring AI is developed ethically, sustainably and inclusively by leveraging the strengths of multiple stakeholders across sectors and regions…

To bridge the AI divide effectively, collaboration among governments, private companies, civil society and other stakeholders is crucial. PPPs unite these stakeholders’ strengths to ensure AI is developed ethically, sustainably, and inclusively.

1. Bridging the resource and expertise gap

By combining public oversight and private innovation, PPPs bridge resource and expertise gaps. Governments offer funding, regulations and access to public data; companies contribute technical expertise, creativity and market solutions. This collaboration accelerates AI technologies for social good.

Singapore’s National AI Strategy 2.0, for instance, exemplifies how PPPs drive ethical AI development. By bringing together over one hundred experts from academia, industry and government, Singapore is building a trusted AI ecosystem focused on global challenges like health and climate change. Empowering citizens and businesses to use AI responsibly, Singapore demonstrates how PPPs create inclusive AI systems, serving as a model for others.

2. Fostering cross-border collaboration

AI development is a global endeavour, but countries vary in expertise and resources. PPPs facilitate international knowledge sharing, technology transfer and common ethical standards, ensuring AI benefits are distributed globally, rather than concentrated in a few regions or companies.

3. Ensuring multi-stakeholder engagement

Inclusive AI development requires involving not just public and private sectors, but also civil society organizations and local communities. Engaging these groups in PPPs brings diverse perspectives to AI design and deployment, integrating ethical, social and cultural considerations from the start.

These approaches underscore the value of PPPs in driving AI development through diverse expertise, shared resources and international collaboration…(More)”.

AI Analysis of Body Camera Videos Offers a Data-Driven Approach to Police Reform


Article by Ingrid Wickelgren: “But unless something tragic happens, body camera footage generally goes unseen. “We spend so much money collecting and storing this data, but it’s almost never used for anything,” says Benjamin Graham, a political scientist at the University of Southern California.

Graham is among a small number of scientists who are reimagining this footage as data rather than just evidence. Their work leverages advances in natural language processing, which relies on artificial intelligence, to automate the analysis of video transcripts of citizen-police interactions. The findings have enabled police departments to spot policing problems, find ways to fix them and determine whether the fixes improve behavior.

Only a small number of police agencies have opened their databases to researchers so far. But if this footage were analyzed routinely, it would be a “real game changer,” says Jennifer Eberhardt, a Stanford University psychologist, who pioneered this line of research. “We can see beat-by-beat, moment-by-moment how an interaction unfolds.”

In papers published over the past seven years, Eberhardt and her colleagues have examined body camera footage to reveal how police speak to white and Black people differently and what type of talk is likely to either gain a person’s trust or portend an undesirable outcome, such as handcuffing or arrest. The findings have refined and enhanced police training. In a study published in PNAS Nexus in September, the researchers showed that the new training changed officers’ behavior…(More)”.

The Age of the Average


Article by Olivier Zunz: “The age of the average emerged from the engineering of high mass consumption during the second industrial revolution of the late nineteenth century, when tinkerers in industry joined forces with scientists to develop new products and markets. The division of labor between them became irrelevant as industrial innovation rested on advances in organic chemistry, the physics of electricity, and thermodynamics. Working together, these industrial engineers and managers created the modern mass market that penetrated all segments of society from the middle out. Thus, in the heyday of the Gilded Age, at the height of the inequality pitting robber barons against the “common man,” was born, unannounced but increasingly present, the “average American.” It is in searching for the average consumer that American business managers at the time drew a composite portrait of an imagined individual. Here was a person nobody ever met or knew, merely a statistical conceit, who nonetheless felt real.

This new character was not uniquely American. Forces at work in America were also operative in Europe, albeit to a lesser degree. Thus, Austrian novelist Robert Musil, who died in 1942, reflected on the average man in his unfinished modernist masterpiece, The Man Without Qualities. In the middle of his narrative, Musil paused for a moment to give a definition of the word average: “What each one of us as laymen calls, simply, the average [is] a ‘something,’ but nobody knows exactly what…. the ultimate meaning turns out to be something arrived at by taking the average of what is basically meaningless” but “[depending] on [the] law of large numbers.” This, I think, is a powerful definition of the American social norm in the “age of the average”: a meaningless something made real, or seemingly real, by virtue of its repetition. Economists called this average person the “representative individual” in their models of the market. Their complex simplification became an agreed-upon norm, at once a measure of performance and an attainable goal. It was not intended to suggest that all people are alike. As William James once approvingly quoted an acquaintance of his, “There is very little difference between one man and another; but what little there is, is very important.” And that remained true in the age of the average…(More)”

The Death of “Deliverism”


Article by Deepak Bhargava, Shahrzad Shams and Harry Hanbury: “How could it be that the largest-ever recorded drop in childhood poverty had next to no political resonance?

One of us became intrigued by this question when he walked into a graduate class one evening in 2021 and received unexpected and bracing lessons about the limits of progressive economic policy from his students.

Deepak had worked on various efforts to secure expanded income support for a long time—and was part of a successful push over two decades earlier to increase the child tax credit, a rare win under the George W. Bush presidency. His students were mostly working-class adults of color with full-time jobs, and many were parents. Knowing that the newly expanded child tax credit would be particularly helpful to his students, he entered the class elated. The money had started to hit people’s bank accounts, and he was eager to hear about how the extra income would improve their lives. He asked how many of them had received the check. More than half raised their hands. Then he asked those students whether they were happy about it. Not one hand went up.

Baffled, Deepak asked why. One student gave voice to the vibe, asking, “What’s the catch?” As the class unfolded, students shared that they had not experienced government as a benevolent force. They assumed that the money would be recaptured later with penalties. It was, surely, a trap. And of course, in light of centuries of exploitation and deceit—in criminal justice, housing, and safety net systems—working-class people of color are not wrong to mistrust government bureaucracies and institutions. The real passion in the class that night, and many nights, was about crime and what it was like to take the subway at night after class. These students were overwhelmingly progressive on economic and social issues, but many of their everyday concerns were spoken to by the right, not the left.

The American Rescue Plan’s temporary expansion of the child tax credit lifted more than 2 million children out of poverty, resulting in an astounding 46 percent reduction in child poverty. Yet the policy’s lapse sparked almost no political response, either from its champions or its beneficiaries. Democrats hardly campaigned on the remarkable achievement they had just delivered, and the millions of parents impacted by the policy did not seem to feel that it made much difference in their day-to-day lives. Even those who experienced the greatest benefit from the expanded child tax credit appeared unmoved by the policy. In fact, during the same time span in which monthly deposits landed in beneficiaries’ bank accounts, the percentage of Black voters—a group that especially benefited from the policy—who said their lives had improved under the Biden Administration actually declined…(More)”.

‘We were just trying to get it to work’: The failure that started the internet


Article by Scott Nover: “At the height of the Cold War, Charley Kline and Bill Duvall were two bright-eyed engineers on the front lines of one of technology’s most ambitious experiments. Kline, a 21-year-old graduate student at the University of California, Los Angeles (UCLA), and Duvall, a 29-year-old systems programmer at Stanford Research Institute (SRI), were working on a system called Arpanet, short for the Advanced Research Projects Agency Network. Funded by the US Department of Defense, the project aimed to create a network that could directly share data without relying on telephone lines. Instead, this system used a method of data delivery called “packet switching” that would later form the basis for the modern internet.

It was the first test of a technology that would change almost every facet of human life. But before it could work, you had to log in.

Kline sat at his keyboard between the lime-green walls of UCLA’s Boelter Hall Room 3420, prepared to connect with Duvall, who was working at a computer halfway across the state of California. But Kline didn’t even make it all the way through the word “L-O-G-I-N” before Duvall told him over the phone that his system had crashed. Thanks to that error, the first “message” that Kline sent Duvall on that autumn day in 1969 was simply the letters “L-O”…(More)”.

Inside the New Nonprofit AI Initiatives Seeking to Aid Teachers and Farmers in Rural Africa


Article by Andrew R. Chow: “Over the past year, rural farmers in Malawi have been seeking advice about their crops and animals from a generative AI chatbot. These farmers ask questions in Chichewa, their native tongue, and the app, Ulangizi, responds in kind, using conversational language based on information taken from the government’s agricultural manual. “In the past we could wait for days for agriculture extension workers to come and address whatever problems we had on our farms,” Maron Galeta, a Malawian farmer, told Bloomberg. “Just a touch of a button we have all the information we need.”

The nonprofit behind the app, Opportunity International, hopes to bring similar AI-based solutions to other impoverished communities. In February, Opportunity ran an acceleration incubator for humanitarian workers across the world to pitch AI-based ideas and then develop them alongside mentors from institutions like Microsoft and Amazon. On October 30, Opportunity announced the three winners of this program: free-to-use apps that aim to help African farmers with crop and climate strategy, teachers with lesson planning, and school leaders with administration management. The winners will each receive about $150,000 in funding to pilot the apps in their communities, with the goal of reaching millions of people within two years. 

Greg Nelson, the CTO of Opportunity, hopes that the program will show the power of AI to level playing fields for those who previously faced barriers to accessing knowledge and expertise. “Since the mobile phone, this is the biggest democratizing change that we have seen in our lifetime,” he says…(More)”.

Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers


Article by Scharon Harding: “A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search’s reliance on Reddit and Google’s AI Overview.

In May, Google launched AI Overviews in the US, an experimental feature that populates the top of Google Search results with a summarized answer based on an AI model built into Google’s web rankings. When Google first debuted AI Overviews, it quickly became apparent that the feature needed work on accuracy and its ability to properly summarize information from online sources. AI Overviews are “built to only show information that is backed up by top web results,” Liz Reid, VP and head of Google Search, wrote in a May blog post. But as my colleague Benj Edwards pointed out at the time, that setup could contribute to inaccurate, misleading, or even dangerous results: “The design is based on the false assumption that Google’s page-ranking algorithm favors accurate results and not SEO-gamed garbage.”

As Edwards alluded to, many have complained about Google Search results’ quality declining in recent years, as SEO spam and, more recently, AI slop float to the top of searches. As a result, people often turn to the Reddit hack to make Google results more helpful. By adding “site:reddit.com” to their search queries, users can hone their search to more easily find answers from real people. Google seems to understand the value of Reddit and signed an AI training deal with the company that’s reportedly worth $60 million per year…(More)”.

South Korea leverages open government data for AI development


Article by Si Ying Thian: “In South Korea, open government data is powering artificial intelligence (AI) innovations in the private sector.

Take the case of TTCare, which may be the world’s first mobile application to analyse eye and skin disease symptoms in pets.

Image caption: AI Hub allows users to search by industry, data format and year, with data sets surfaced for the search term “pet”. (Image: AI Hub, courtesy of Baek)

The AI model was trained on about one million pieces of data – half from the government-led AI Hub and the rest collected by the firm itself, according to the Korean newspaper Donga.

AI Hub is an integrated platform set up by the government to support the country’s AI infrastructure.

TTCare’s CEO Heo underlined the importance of government-led AI training data in improving the model’s ability to diagnose symptoms. The firm’s training data is currently accessible through AI Hub, and any Korean citizen can download or use it.

Pushing the boundaries of open data

Over the years, South Korea has consistently ranked at the top of the OECD’s Open, Useful, and Re-usable data (OURdata) Index.

The government has been pushing the boundaries of what it can do with open data – going beyond simply making data usable by providing application programming interfaces (APIs), which make it easier for users to tap on open government data to power their apps and services.

There is now rising interest from public sector agencies to tap on such data to train AI models, said South Korea’s National Information Society Agency (NIA)’s Principal Manager, Dongyub Baek, although this is still at an early stage.

Baek sits in NIA’s open data department, which handles policies, infrastructure such as the National Open Data Portal, as well as impact assessments of the government initiatives…(More)”