Law, AI, and Human Rights


Article by John Croker: “Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first example will now be familiar to many people in the UK: colloquially known as the ‘Post Office’ or ‘Horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to use AI to identify overpayments in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the UK legal system’s history. The second was labelled “a shameful chapter” in government administration in Australia; it led the government to unlawfully assert debts amounting to $1.763 billion against 433,000 Australians, and is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.

Both examples show that where technology and AI go wrong, the scale of the injustice can have unprecedented impacts across societies….(More)”.

Is Participatory Budgeting Coming to a Local Government Near You?


Article by Elizabeth Daigneau: “…It’s far from a new idea, and you’ve probably been reading about it for years, but participatory budgeting has slowly been growing since it was first introduced in the U.S. in Chicago in 2009. Many anticipate it is about to see a boom as billions of federal dollars continue to pour into local communities…

But with billions in federal dollars flowing into local communities through the American Rescue Plan Act (ARPA), the Infrastructure Investment and Jobs Act, and the Inflation Reduction Act, many experts think the time is ripe to adopt the tool.

“The stakes are high in restoring and rebuilding our nation’s crumbling civic, political and economic infrastructures,” wrote Hollie Russon Gilman and Lizbeth Lucero of New America’s Political Reform Program in a recent op-ed. “The long overdue improvements needed in America’s cities and counties call for remodeling how we govern and allocate federal funds across the country.”

ARPA dollars prompted the city of Cleveland to push for a participatory budgeting pilot. 

“Cleveland is a city that has one of the higher poverty rates for a city of their size in the United States. They have over 30 percent of their population living below the poverty line,” Kristania De Leon, co-executive director at the Participatory Budgeting Project, said on The Laura Flanders Show’s podcast last July. “So when they found out that they were getting American Rescue Plan Act funds allocated to their municipal government, they said, ‘Wait a minute, this is a huge influx of relatively flexible spending, where’s it going to go and who gets to have a say?’”

A community-led push culminated in a proposal by Cleveland Mayor Justin M. Bibb to the city council last year that $5 million in ARPA funds be allocated to pilot the first citywide participatory budgeting process in its history.

ARPA dollars also led Nashville’s city council to allocate $10 million this year to its participatory budgeting program, now in its third year.

In general, there have been several high-profile participatory budgeting projects in the last year. 

Seattle’s project claims to be the biggest participatory budgeting process ever in the United States. The city council earmarked approximately $30 million in the 2021 budget to run a participatory budgeting process. The goal is to spend the money on initiatives that reduce police violence, reduce crime, and create “true community safety through community-led safety programs and new investments.”

And in September, New York City Mayor Eric Adams announced the launch of the first-ever citywide participatory budgeting process. The program builds on a 2021 project that engaged residents of the 33 neighborhoods hardest hit by Covid-19 in a $1.3 million participatory budgeting process. The new program invites all New Yorkers, ages 11 and up, to decide how to spend $5 million of mayoral expense funding to address local community needs citywide…(More)”.

The Data Delusion


Jill Lepore at The New Yorker: “…The move from a culture of numbers to a culture of data began during the Second World War, when statistics became more mathematical, largely for the sake of becoming more predictive, which was necessary for wartime applications involving everything from calculating missile trajectories to cracking codes. “This was not data in search of latent truths about humanity or nature,” Wiggins and Jones write. “This was not data from small experiments, recorded in small notebooks. This was data motivated by a pressing need—to supply answers in short order that could spur action and save lives.” That work continued during the Cold War, as an instrument of the national-security state. Mathematical modelling, increased data-storage capacity, and computer simulation all contributed to pattern detection and prediction in classified intelligence work, military research, social science, and, increasingly, commerce.

Despite the benefit that these tools provided, especially to researchers in the physical and natural sciences—in the study of stars, say, or molecules—scholars in other fields lamented the distorting effect on their disciplines. In 1954, Claude Lévi-Strauss argued that social scientists needed “to break away from the hopelessness of the ‘great numbers’—the raft to which the social sciences, lost in an ocean of figures, have been helplessly clinging.” By then, national funding agencies had shifted their priorities. The Ford Foundation announced that although it was interested in the human mind, it was no longer keen on non-predictive research in fields like philosophy and political theory, deriding such disciplines as “polemical, speculative, and pre-scientific.” The best research would be, like physics, based on “experiment, the accumulation of data, the framing of general theories, attempts to verify the theories, and prediction.” Economics and political science became predictive sciences; other ways of knowing in those fields atrophied.

The digitization of human knowledge proceeded apace, with libraries turning books first into microfiche and microfilm and then—through optical character recognition, whose origins date to the nineteen-thirties—into bits and bytes. The field of artificial intelligence, founded in the nineteen-fifties, at first attempted to sift through evidence in order to identify the rules by which humans reason. This approach hit a wall, in a moment known as “the knowledge acquisition bottleneck.” The breakthrough came with advances in processing power and the idea of using the vast stores of data that had for decades been compounding in the worlds of both government and industry to teach machines to teach themselves by detecting patterns: machines, learning…(More)”.

China’s fake science industry: how ‘paper mills’ threaten progress


Article by Eleanor Olcott, Clive Cookson and Alan Smith at the Financial Times: “…Over the past two decades, Chinese researchers have become some of the world’s most prolific publishers of scientific papers. The Institute for Scientific Information, a US-based research analysis organisation, calculated that China produced 3.7mn papers in 2021 — 23 per cent of global output — just behind the 4.4mn total from the US.

At the same time, China has been climbing the ranks of the number of times a paper is cited by other authors, a metric used to judge output quality. Last year, China surpassed the US for the first time in the number of most cited papers, according to Japan’s National Institute of Science and Technology Policy, although that figure was flattered by multiple references to Chinese research that first sequenced the Covid-19 virus genome.

The soaring output has sparked concern in western capitals. Chinese advances in high-profile fields such as quantum technology, genomics and space science, as well as Beijing’s surprise hypersonic missile test two years ago, have amplified the view that China is marching towards its goal of achieving global hegemony in science and technology.

That concern is a part of a wider breakdown of trust in some quarters between western institutions and Chinese ones, with some universities introducing background checks on Chinese academics amid fears of intellectual property theft.

But experts say that China’s impressive output masks systemic inefficiencies and an underbelly of low-quality and fraudulent research. Academics complain about the crushing pressure to publish to gain prized positions at research universities…(More)”.

The limits of expert judgment: Lessons from social science forecasting during the pandemic


Article by Cendri Hutcherson and Michael Varnum: “Imagine being a policymaker at the beginning of the COVID-19 pandemic. You have to decide which actions to recommend, how much risk to tolerate and what sacrifices to ask your citizens to bear.

Who would you turn to for an accurate prediction about how people would react? Many would recommend going to the experts — social scientists. But we are here to tell you this would be bad advice.

As psychological scientists with decades of combined experience studying decision-making, wisdom, expert judgment and societal change, we hoped social scientists’ predictions would be accurate and useful. But we also had our doubts.

Our discipline has been undergoing a crisis due to failed study replications and questionable research practices. If basic findings can’t be reproduced in controlled experiments, how confident can we be that our theories can explain complex real-world outcomes?

To find out how well social scientists could predict societal change, we ran the largest forecasting initiative in our field’s history using predictions about change in the first year of the COVID-19 pandemic as a test case….

Our findings, detailed in peer-reviewed papers in Nature Human Behaviour and in American Psychologist, paint a sobering picture. Despite the causal nature of most theories in the social sciences, and the fields’ emphasis on prediction in controlled settings, social scientists’ forecasts were generally not very good.

In both papers, we found that experts’ predictions were generally no more accurate than those made by samples of the general public. Further, their predictions were often worse than predictions generated by simple statistical models.

Our studies did still give us reasons to be optimistic. First, forecasts were more accurate when teams had specific expertise in the domain they were making predictions in. If someone was an expert in depression, for example, they were better at predicting societal trends in depression.

Second, when teams were made up of scientists from different fields working together, they tended to do better at forecasting. Finally, teams that used simpler models to generate their predictions and made use of past data generally outperformed those that didn’t.

These findings suggest that, despite the poor performance of the social scientists in our studies, there are steps scientists can take to improve their accuracy at this type of forecasting….(More)”.
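To make the comparison concrete, here is a minimal sketch, with invented numbers, of the kind of baseline comparison described above: a simple statistical model (a linear trend extrapolated from past data) scored against expert point forecasts using mean absolute error.

```python
# Illustrative sketch only: all figures are invented, not from the studies above.
# It shows why a naive trend model can outperform expert judgment on this metric.

def mean_absolute_error(predictions, actuals):
    """Average absolute gap between predicted and observed values."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def linear_trend_forecast(history, steps):
    """Extrapolate the average step-to-step change in `history` forward."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + slope * (i + 1) for i in range(steps)]

past_wellbeing = [70, 69, 68, 67]   # hypothetical monthly index, pre-pandemic
actual = [66, 65, 64]               # what "actually" happened (invented)
expert = [60, 58, 55]               # experts predicted a steep collapse
baseline = linear_trend_forecast(past_wellbeing, 3)

print(mean_absolute_error(expert, actual))    # experts: large error
print(mean_absolute_error(baseline, actual))  # simple trend model: small error
```

The expert forecasts here miss badly because they extrapolate a dramatic shift; the trend model simply assumes the recent past continues, which in this toy case matches the outcome.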

What We Gain from More Behavioral Science in the Global South


Article by Pauline Kabitsis and Lydia Trupe: “In recent years, the field has been critiqued for applying behavioral science at the margins, settling for small but statistically significant effect sizes. Critics have argued that by focusing our efforts on nudging individuals to increase their 401(k) contributions or to reduce their so-called carbon footprint, we have ignored the systemic drivers of important challenges, such as fundamental flaws in the financial system and corporate responsibility for climate change. As Michael Hallsworth points out, however, the field may not be willfully ignoring these deeper challenges, but rather investing in areas of change that are likely easier to move, measure, and secure funding.

It’s been our experience working in the Global South that nudge-based solutions can provide short-term gains within current systems, but for lasting impact a focus beyond individual-level change is required. This is because the challenges in the Global South typically involve fundamental problems, like enabling women’s reproductive choice, combating intimate partner violence and improving food security among the world’s most vulnerable populations.

Our work at Common Thread focuses on improving behaviors related to health, like encouraging those persistently left behind to get vaccinated, and enabling Ukrainian refugees in Poland to access health and welfare services. We use a behavioral model that considers not just the individual biases that impact people’s behaviors, but the structural, social, interpersonal, and even historical context that triggers these biases and inhibits health seeking behaviors…(More)”.

Analyzing Big Data on a Shoestring Budget


Article by Toshiko Kaneda and Lori S. Ashford: “Big data has opened a new world for demographers and public health scientists to explore, to gain insights into social and health phenomena using the myriad digital traces we leave behind in our daily lives. But is analyzing big data practical and affordable? Researchers and organizations who have not made the leap might wonder: Do we need a lot more funding? Supercomputers? Armies of data scientists?

Three studies, presented recently in a PRB Demography Talk, show the feasibility of conducting research on a proverbial shoestring—using big data that are publicly, freely available to anyone with a personal computer and Wi-Fi connection.

Study 1: Can Google data help measure health care access more accurately?

The first study, presented by Luis Gabriel Cuervo of the Universitat Autònoma de Barcelona and the AMORE project, used Google mobility data to assess the effect of traffic congestion on people’s ability to access health services in Cali, Colombia, a city of 2.3 million. The study aimed to improve how health care accessibility is measured and communicated, to inform urban and health services planning.

Cuervo assembled a multidisciplinary research team, including mobility experts, to examine travel times from where people live to urgent and frequently used health services. The team used Google’s Distance Matrix API, which provides travel times and distances between origins and destinations, accounting for changing traffic conditions. The data are generated from Google Maps on people’s cell phones.

Combining this information with census and health services data, the study measured travel times repeatedly and revealed significant inequality by sociodemographic characteristics. On typical days, 60% of the city’s population lived more than 15 minutes by car from emergency care, with those in the poorest neighborhoods facing the longest travel times and a greater impact from traffic congestion.
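As a rough illustration of the kind of calculation involved, the sketch below computes the population share living more than 15 minutes from emergency care, given per-zone travel times. The zones, populations, and times are invented for illustration; the actual study drew real travel times from the Distance Matrix API (which requires an API key) and combined them with census data.

```python
# Hypothetical sketch: given each residential zone's population and its travel
# time (minutes) to the nearest emergency department, compute the share of the
# population beyond a threshold. All figures are illustrative, not study data.

def share_beyond_threshold(zones, threshold_min=15):
    """zones: list of (population, travel_time_minutes) tuples."""
    total = sum(pop for pop, _ in zones)
    beyond = sum(pop for pop, t in zones if t > threshold_min)
    return beyond / total

zones = [
    (120_000, 8),   # central district, close to hospitals
    (300_000, 18),  # peripheral district
    (90_000, 25),   # outlying district with the longest travel times
]

print(f"{share_beyond_threshold(zones):.0%} live >15 min from emergency care")
```

Repeating this calculation across times of day (peak vs. off-peak traffic) is what lets a study like this quantify how congestion widens the accessibility gap between neighborhoods.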

Studies 2 and 3: Can Google data help predict changes in birth rates and examine excess deaths from COVID-19 related shutdowns?

In another study, Joshua Wilde of the Max Planck Institute for Demographic Research (MPIDR) and Portland State University asked: Can Google search data predict whether COVID-related shutdowns will lead to a baby boom or bust? In 2020, early in the pandemic, Wilde and his team constructed a forecasting model based on volumes of Google searches with keywords related to conception, pregnancy, childbirth, and economic stability. Their thinking was that if searches increased sharply for keywords such as “pregnancy test” and “missed period,” one might expect higher birth rates seven to nine months later. On the other hand, prior research had associated unemployment with lower birth rates—so if unemployment-related searches climbed, one might expect a baby bust….(More)”.
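The intuition can be sketched as a toy model (not the authors' actual specification): start from a baseline birth forecast and adjust it using relative changes in fertility-related and unemployment-related search volumes. The weights and index values below are assumptions for illustration only.

```python
# Illustrative toy model: nudge a baseline birth forecast up or down based on
# Google search volume indices (current volume relative to a pre-pandemic
# baseline of 1.0). Weights are invented, not estimated from any data.

def adjust_birth_forecast(baseline_births, fertility_index, unemployment_index,
                          fertility_weight=0.5, unemployment_weight=0.3):
    """Return an adjusted forecast of births seven to nine months out."""
    fertility_effect = fertility_weight * (fertility_index - 1.0)        # more searches -> more births
    unemployment_effect = -unemployment_weight * (unemployment_index - 1.0)  # more searches -> fewer births
    return baseline_births * (1.0 + fertility_effect + unemployment_effect)

# "Pregnancy test" searches down 10%, unemployment searches up 40%:
print(round(adjust_birth_forecast(100_000, 0.9, 1.4)))  # model predicts a bust
```

In a real model the weights would be estimated from historical search and birth data rather than assumed.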

The Right To Be Free From Automation


Essay by Ziyaad Bhorat: “Is it possible to free ourselves from automation? The idea sounds fanciful, if not outright absurd. Industrial and technological development have reached a planetary level, and automation, as the general substitution or augmentation of human work with artificial tools capable of completing tasks on their own, is the bedrock of all the technologies designed to save, assist and connect us. 

From industrial lathes to OpenAI’s ChatGPT, automation is one of the most groundbreaking achievements in the history of humanity. As a consequence of the human ingenuity and imagination involved in automating our tools, the sky is quite literally no longer a limit. 

But in thinking about our relationship to automation in contemporary life, my unease has grown. And I’m not alone — America’s Blueprint for an AI Bill of Rights and the European Union’s GDPR both express skepticism of automated tools and systems: the former warns against the “use of technology, data and automated systems in ways that threaten the rights of the American public”; the latter protects the “right not to be subject to a decision based solely on automated processing.” 

If we look a little deeper, we find this uneasy language in other places where people have been guarding three important abilities against automated technologies. Historically, we have found these abilities so important that we now include them in various contemporary rights frameworks: the right to work, the right to know and understand the source of the things we consume, and the right to make our own decisions. Whether we like it or not, therefore, communities and individuals are already asserting the importance of protecting people from the ubiquity of automated tools and systems.

Consider the case of one of South Africa’s largest retailers, Pick n Pay, which in 2016 tried to introduce self-checkout technology in its retail stores. In post-Apartheid South Africa, trade unions are immensely powerful and unemployment persistently high, so any retail firm that wants to introduce technology that might affect the demand for labor faces huge challenges. After the country’s largest union federation threatened to boycott the new Pick n Pay machines, the company scrapped its pilot. 

As the sociologist Christopher Andrews writes in “The Overworked Consumer,” self-checkout technology is by no means a universally good thing. Firms that introduce it must deal with new forms of theft, maintenance burdens and bottlenecks, while customers end up doing more work themselves. These issues are in addition to the ill fortunes of displaced workers…(More)”.

How Democracy Can Win


Essay by Samantha Power: “…At the core of democratic theory and practice is respect for the dignity of the individual. But among the biggest errors many democracies have made since the Cold War is to view individual dignity primarily through the prism of political freedom without being sufficiently attentive to the indignity of corruption, inequality, and a lack of economic opportunity.

This was not a universal blind spot: a number of political figures, advocates, and individuals working at the grassroots level to advance democratic progress presciently argued that economic inequality could fuel the rise of populist leaders and autocratic governments that pledged to improve living standards even as they eroded freedoms. But too often, the activists, lawyers, and other members of civil society who worked to strengthen democratic institutions and protect civil liberties looked to labor movements, economists, and policymakers to address economic dislocation, wealth inequality, and declining wages rather than building coalitions to tackle these intersecting problems.

Democracy suffered as a result. Over the past two decades, as economic inequality rose, polls showed that people in rich and poor countries alike began to lose faith in democracy and worry that young people would end up worse off than they were, giving populists and ethno­nationalists an opening to exploit grievances and gain a political foothold on every continent.

Moving forward, we must look at all economic programming that respects democratic norms as a form of democracy assistance. When we help democratic leaders provide vaccines to their people, bring down inflation or high food prices, send children to school, or reopen markets after a natural disaster, we are demonstrating—in a way that a free press or vibrant civil society cannot always do—that democracy delivers. And we are making it less likely that autocratic forces will take advantage of people’s economic hardship.

Nowhere is that task more important today than in societies that have managed to elect democratic reformers or throw off autocratic or antidemocratic rule through peaceful mass protests or successful political movements. These democratic bright spots are incredibly fragile. Unless reformers solidify their democratic and economic gains quickly, populations understandably grow impatient, especially if they feel that the risks they took to upend the old order have not yielded tangible dividends in their own lives. Such discontent allows opponents of democratic rule—often aided by external autocratic regimes—to wrest back control, reversing reforms and snuffing out dreams of rights-regarding self-government…(More)”.

Your Data Is Diminishing Your Freedom


Interview by David Marchese: “It’s no secret — even if it hasn’t yet been clearly or widely articulated — that our lives and our data are increasingly intertwined, almost indistinguishable. To be able to function in modern society is to submit to demands for ID numbers, for financial information, for filling out digital fields and drop-down boxes with our demographic details. Such submission, in all senses of the word, can push our lives in very particular and often troubling directions. It’s only recently, though, that I’ve seen someone try to work through the deeper implications of what happens when our data — and the formats it’s required to fit — become an inextricable part of our existence, like a new limb or organ to which we must adapt. “I don’t want to claim we are only data and nothing but data,” says Colin Koopman, chairman of the philosophy department at the University of Oregon and the author of “How We Became Our Data.” “My claim is you are your data, too.” Which at the very least means we should be thinking about this transformation beyond the most obvious data-security concerns. “We’re strikingly lackadaisical,” says Koopman, who is working on a follow-up book, tentatively titled “Data Equals,” “about how much attention we give to: What are these data showing? What assumptions are built into configuring data in a given way? What inequalities are baked into these data systems? We need to be doing more work on this.”

Can you explain more what it means to say that we have become our data? Because a natural reaction to that might be, well, no, I’m my mind, I’m my body, I’m not numbers in a database — even if I understand that those numbers in that database have real bearing on my life.

The claim that we are data can also be taken as a claim that we live our lives through our data in addition to living our lives through our bodies, through our minds, through whatever else. I like to take a historical perspective on this. If you wind the clock back a couple hundred years or go to certain communities, the pushback wouldn’t be, “I’m my body”; the pushback would be, “I’m my soul.” We have these evolving perceptions of our self. I don’t want to deny anybody that, yeah, you are your soul. My claim is that your data has become something that is increasingly inescapable and certainly inescapable in the sense of being obligatory for your average person living out their life. There’s so much of our lives that are woven through or made possible by various data points that we accumulate around ourselves — and that’s interesting and concerning. It now becomes possible to say: “These data points are essential to who I am. I need to tend to them, and I feel overwhelmed by them. I feel like it’s being manipulated beyond my control.” A lot of people have that relationship to their credit score, for example…(More)”.