Uganda’s Sweeping Surveillance State Is Built on National ID Cards


Article by Olivia Solon: “Uganda has spent hundreds of millions of dollars in the past decade on biometric tools that document a person’s unique physical characteristics, such as their face, fingerprints and irises, to form the basis of a comprehensive identification system. While the system is central to many of the state’s everyday functions, as Museveni has grown increasingly authoritarian over nearly four decades in power, it has also become a powerful mechanism for surveilling politicians, journalists, human rights advocates and ordinary citizens, according to dozens of interviews and hundreds of pages of documents obtained and analyzed by Bloomberg and nonprofit investigative newsroom Lighthouse Reports.

It’s a cautionary tale for any country considering establishing a biometric identity system without rigorous checks and balances and input from civil society. Dozens of global south countries have adopted this approach as part of an effort to meet sustainable development goals from the UN, which considers having a legal identity to be a fundamental human right. But, despite billions of dollars of investment, with backing from organizations including the World Bank, those identity systems haven’t always lived up to expectations. In many cases, the key problem is the failure to register large swathes of the population, leading to exclusion from public services. But in other places, like Uganda, inclusion in the system has been weaponized for surveillance purposes.

A year-long investigation by Bloomberg and Lighthouse Reports sheds new light on the ways in which Museveni’s regime has built and deployed this system to target opponents and consolidate power. It shows how the underlying software and data sets are easily accessed by individuals at all levels of law enforcement, despite official claims to the contrary. It also highlights, in some cases for the first time, how senior government and law enforcement officials have used these tools to target individuals deemed to pose a political threat…(More)”.

We don’t need an AI manifesto — we need a constitution


Article by Vivienne Ming: “Loans drive economic mobility in America, even as they’ve been a historically powerful tool for discrimination. I’ve worked on multiple projects to reduce that bias using AI. What I learnt, however, is that even if an algorithm works exactly as intended, it is still solely designed to optimise the financial returns to the lender who paid for it. The loan application process is already impenetrable to most, and now your hopes for home ownership or small business funding are dying in a 50-millisecond computation…

In law, the right to a lawyer and judicial review are a constitutional guarantee in the US and an established civil right throughout much of the world. These are the foundations of your civil liberties. When algorithms act as an expert witness, testifying against you but immune to cross examination, these rights are not simply eroded — they cease to exist.

People aren’t perfect. Neither ethics training for AI engineers nor legislation by woefully uninformed politicians can change that simple truth. I don’t need to assume that Big Tech chief executives are bad actors or that large companies are malevolent to understand that what is in their self-interest is not always in mine. The framers of the US Constitution recognised this simple truth and sought to leverage human nature for a greater good. The Constitution didn’t simply assume people would always act towards that greater good. Instead it defined a dynamic mechanism — self-interest and the balance of power — that would force compromise and good governance. Its vision of treating people as real actors rather than better angels produced one of the greatest frameworks for governance in history.

Imagine you were offered an AI-powered test for post-partum depression. My company developed that very test and it has the power to change your life, but you may choose not to use it for fear that we might sell the results to data brokers or activist politicians. You have a right to our AI acting solely for your health. It was for this reason I founded an independent non-profit, The Human Trust, that holds all of the data and runs all of the algorithms with sole fiduciary responsibility to you. No mother should have to choose between a life-saving medical test and her civil rights…(More)”.

The Human Rights Data Revolution


Briefing by Domenico Zipoli: “… explores the evolving landscape of digital human rights tracking tools and databases (DHRTTDs). It discusses their growing adoption for monitoring, reporting, and implementing human rights globally, while also pinpointing the challenge of insufficient coordination and knowledge sharing among these tools’ developers and users. Drawing from insights of over 50 experts across multiple sectors gathered during two pivotal roundtables organized by the GHRP in 2022 and 2023, this new publication critically evaluates the impact and future of DHRTTDs. It integrates lessons and challenges from these discussions, along with targeted research and interviews, to guide the human rights community in leveraging digital advancements effectively…(More)”.

Murky Consent: An Approach to the Fictions of Consent in Privacy Law


Paper by Daniel J. Solove: “Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious….(More)”. See also: The Urgent Need to Reimagine Data Consent

The Secret Life of Data


Book by Aram Sinnreich and Jesse Gilbert: “…explore the many unpredictable, and often surprising, ways in which data surveillance, AI, and the constant presence of algorithms impact our culture and society in the age of global networks. The authors build on this basic premise: no matter what form data takes, and what purpose we think it’s being used for, data will always have a secret life. How this data will be used, by other people in other times and places, has profound implications for every aspect of our lives—from our intimate relationships to our professional lives to our political systems.

With the secret uses of data in mind, Sinnreich and Gilbert interview dozens of experts to explore a broad range of scenarios and contexts—from the playful to the profound to the problematic. Unlike most books about data and society that focus on the short-term effects of our immense data usage, The Secret Life of Data focuses primarily on the long-term consequences of humanity’s recent rush toward digitizing, storing, and analyzing every piece of data about ourselves and the world we live in. The authors advocate for “slow fixes” regarding our relationship to data, such as creating new laws and regulations, ethics and aesthetics, and models of production for our datafied society.

Cutting through the hype and hopelessness that so often inform discussions of data and society, The Secret Life of Data clearly and straightforwardly demonstrates how readers can play an active part in shaping how digital technology influences their lives and the world at large…(More)”.

The False Choice Between Digital Regulation and Innovation


Paper by Anu Bradford: “This Article challenges the common view that more stringent regulation of the digital economy inevitably compromises innovation and undermines technological progress. This view, vigorously advocated by the tech industry, has shaped the public discourse in the United States, where the country’s thriving tech economy is often associated with a staunch commitment to free markets. US lawmakers have also traditionally embraced this perspective, which explains their hesitancy to regulate the tech industry to date. The European Union has chosen another path, regulating the digital economy with stringent data privacy, antitrust, content moderation, and other digital regulations designed to shape the evolution of the tech economy towards European values around digital rights and fairness. According to the EU’s critics, this far-reaching tech regulation has come at the cost of innovation, explaining the EU’s inability to nurture tech companies and compete with the US and China in the tech race.

However, this Article argues that the association between digital regulation and technological progress is considerably more complex than what the public conversation, US lawmakers, tech companies, and several scholars have suggested to date. For this reason, the existing technological gap between the US and the EU should not be attributed to the laxity of American laws and the stringency of European digital regulation. Instead, this Article shows there are more foundational features of the American legal and technological ecosystem that have paved the way for US tech companies’ rise to global prominence—features that the EU has not been able to replicate to date. By severing tech regulation from its allegedly adverse effect on innovation, this Article seeks to advance a more productive scholarly conversation on the costs and benefits of digital regulation. It also directs governments deliberating tech policy away from a false choice between regulation and innovation while drawing their attention to a broader set of legal and institutional reforms that are necessary for tech companies to innovate and for digital economies and societies to thrive…(More)”.

Human-Centered AI


Book edited by Catherine Régis, Jean-Louis Denis, Maria Luciana Axente, and Atsuo Kishimoto: “Artificial intelligence (AI) permeates our lives in a growing number of ways. Relying solely on traditional, technology-driven approaches won’t suffice to develop and deploy that technology in a way that truly enhances human experience. A new concept is desperately needed to reach that goal. That concept is Human-Centered AI (HCAI).

With 29 captivating chapters, this book delves deep into the realm of HCAI. In Section I, it demystifies HCAI, exploring cutting-edge trends and approaches in its study, including the moral landscape of Large Language Models. Section II looks at how HCAI is viewed in different institutions—like the justice system, health system, and higher education—and how it could affect them. It examines how crafting HCAI could lead to better work. Section III offers practical insights and successful strategies to transform HCAI from theory to reality, for example, how regulatory sandboxes could help ensure the development of age-appropriate AI for kids. Finally, decision-makers and practitioners provide invaluable perspectives throughout the book, showcasing the real-world significance of its articles beyond academia.

Authored by experts from a variety of backgrounds, sectors, disciplines, and countries, this engaging book offers a fascinating exploration of Human-Centered AI. Whether you’re new to the subject or not, a decision-maker, a practitioner or simply an AI user, this book will help you gain a better understanding of HCAI’s impact on our societies, and of why and how AI should really be developed and deployed in a human-centered future…(More)”.

The Non-Coherence Theory of Digital Human Rights


Book by Mart Susi: “…offers a novel non-coherence theory of digital human rights to explain the change in meaning and scope of human rights rules, principles, ideas and concepts, and the interrelationships and related actors, when moving from the physical domain into the online domain. The transposition into the digital reality can alter the meaning of well-established offline human rights to a wider or narrower extent, impacting core concepts such as transparency, legal certainty and foreseeability. Susi analyses the ‘loss in transposition’ of some core features of the rights to privacy and freedom of expression. The non-coherence theory is used to explore key human rights theoretical concepts, such as the network society approach, the capabilities approach, transversality, and self-normativity, and it is also applied to e-state and artificial intelligence, challenging the idea of the sameness of rights…(More)”.

Advancing Equitable AI in the US Social Sector


Article by Kelly Fitzsimmons: “…when developed thoughtfully and with equity in mind, AI-powered applications have great potential to help drive stronger and more equitable outcomes for nonprofits, particularly in the following three areas.

1. Closing the data gap. A widening data divide between the private and social sectors threatens to reduce the effectiveness of nonprofits that provide critical social services in the United States and leave those they serve without the support they need. As Kriss Deiglmeir wrote in a recent Stanford Social Innovation Review essay, “Data is a form of power. And the sad reality is that power is being held increasingly by the commercial sector and not by organizations seeking to create a more just, sustainable, and prosperous world.” AI can help break this trend by democratizing the process of generating and mobilizing data and evidence, thus making continuous research and development, evaluation, and data analysis more accessible to a wider range of organizations—including those with limited budgets and in-house expertise.

Take Quill.org, a nonprofit that provides students with free tools that help them build reading comprehension, writing, and language skills. Quill.org uses an AI-powered chatbot that asks students to respond to open-ended questions based on a piece of text. It then reviews student responses and offers suggestions for improvement, such as writing with clarity and using evidence to support claims. This technology makes high-quality critical thinking and writing support available to students and schools that might not otherwise have access to them. As Peter Gault, Quill.org’s founder and executive director, recently shared, “There are 27 million low-income students in the United States who struggle with basic writing and find themselves disadvantaged in school and in the workforce. … By using AI to provide students with immediate feedback on their writing, we can help teachers support millions of students on the path to becoming stronger writers, critical thinkers, and active members of our democracy.”…(More)”.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Article by Kashmir Hill: “Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.

So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car.

On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”…(More)”.