The Moral Economy of High-Tech Modernism


Essay by Henry Farrell and Marion Fourcade: “While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes…(More)”.

The Incredible Challenge of Counting Every Global Birth and Death


Jeneen Interlandi at The New York Times: “…The world’s wealthiest nations are awash in so much personal data that data theft has become a lucrative business and its protection a common concern. From such a vantage point, it can be difficult to even fathom the opposite — a lack of any identifying information at all — let alone grapple with its implications. But the undercounting of human lives is pervasive, data scientists say. The resulting ills are numerous and consequential, and recent history is littered with missed opportunities to solve the problem.

More than two decades ago, 147 nations rallied around the Millennium Development Goals, the United Nations’ bold new plan for halving extreme poverty, curbing childhood mortality and conquering infectious diseases like malaria and H.I.V. The health goals became the subject of countless international summits and steady news coverage, ultimately spurring billions of dollars in investment from the world’s wealthiest nations, including the United States. But a fierce debate quickly ensued. Critics said that health officials at the United Nations and elsewhere had almost no idea what the baseline conditions were in many of the countries they were trying to help. They could not say whether maternal mortality was increasing or decreasing, or how many people were being infected with malaria, or how fast tuberculosis was spreading. In a 2004 paper, the World Health Organization’s former director of evidence, Chris Murray, and other researchers described the agency’s estimates as “serial guessing.” Without that baseline data, progress toward any given goal — to halve hunger, for example — could not be measured…(More)”.

Seize the Future by Harnessing the Power of Data


Essay by Kriss Deiglmeier: “…Data is a form of power. And the sad reality is that power is being held increasingly by the commercial sector and not by organizations seeking to create a more just, sustainable, and prosperous world. A year into my tenure as the chief global impact officer at Splunk, I became consumed with the new era driven by data. Specifically, I was concerned with the emerging data divide, which I defined as “the disparity between the expanding use of data to create commercial value, and the comparatively weak use of data to solve social and environmental challenges.”…

To effectively address the emerging data future, the social impact sector must build an entire impact data ecosystem for this moment in time—and the next moment in time. The way to do that is by investing in those areas where we currently lag the commercial sector. Consider the following gaps:

  • Often because of underfunding, nonprofits lack the financial and technical resources they need to make full use of data.
  • Next to the commercial sector’s deep bench of technical and data talent, the social sector’s is a desert.
  • While the sector is rich with output and service-delivery data, that data is locked away or is unusable in its current form.
  • The sector lacks living data platforms (collaboratives and data refineries) that can make use of sector-wide data in a way that helps improve service delivery, maximize impact, and create radical innovation.

The harsh realities of the sector’s disparate data skills, infrastructure, and competencies show the dire current state. For the impact sector to transition to a place of power, it must jump without hesitation into the arena of the Data Age—and invest time, talent, and money in filling in these gaps.

Regardless of our lagging position, the social sector has both an incredible opportunity and a unique capacity to drive the power of data into the emerging and unimaginable. The good news is that there’s pivotal work already happening in the sector that is making it easier to build the kind of impact data ecosystem needed to join the Data Age. The framing and terms used to describe this work are many—data for good, data science for impact, open data, public interest technology, data lakes, ethical data, and artificial intelligence ethics.

These individual pieces, while important, are not enough. To fully exploit the power of data for a more just, sustainable, and prosperous world, we need to be bold enough to build the full ecosystem and not be satisfied with piecemeal work. To do that we should begin by looking at the assets that we have and build on those.

People. There are dedicated leaders in the field of social innovation who are committed to using data for impact and who have been doing that for many years. We need to support them by investing in their work at scale. The list of people leading the way is constantly growing, but to name a few: Stefaan G. Verhulst, Joy Buolamwini, Jim Fruchterman, Katara McCarty, Geoff Mulgan, Rediet Abebe, Jason Saul, and Jake Porway….(More)”.

Data is power — it’s time we act like it


Article by Danil Mikhailov: “Almost 82% of NGOs in low- and middle-income countries cite a lack of funding as their biggest barrier to adopting digital tools for social impact. What’s more, data.org’s 2023 data for social impact, or DSI, report, Accelerate Aspirations: Moving Together to Achieve Systems Change, found that when it comes to financial support, funders overlook the power of advanced data strategies to address longer-term systemic solutions — instead focusing on short-term, project-based outcomes.

That’s a real problem as we look to deploy powerful, data-driven interventions to solve some of today’s biggest crises — from shifting demographics to rising inequality to pandemics to our global climate emergency. Given the urgent challenges our world faces, pilots, one-offs, and underresourced program interventions are no longer acceptable.

It’s time we — as funders, academics, and purpose-driven data practitioners — acknowledge that data is power. And how do we truly harness that power? We must look toward innovative, diverse, equitable, and collaborative funding and partnership models to meet the incredible potential of data for social impact or risk the success of systems-level solutions that lead to long-term impact…(More)”.

Law, AI, and Human Rights


Article by John Croker: “Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first example will be familiar now to many people in the UK: colloquially known as the ‘post office’ or ‘horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to utilise AI to identify overpayment in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the UK legal system’s history. The second example was labelled “a shameful chapter” in government administration in Australia and led to the government unlawfully asserting debts amounting to $1.763 billion against 433,000 Australians, and is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.

Both examples show that where technology and AI go wrong, the scale of the injustice can result in unprecedented impacts across societies….(More)”.

Is Participatory Budgeting Coming to a Local Government Near You?


Article by Elizabeth Daigneau: “…It’s far from a new idea, and you’ve probably been reading about it for years, but participatory budgeting has slowly been growing since it was first introduced in the U.S. in Chicago in 2009. Many anticipate it is about to see a boom as billions of federal dollars continue to pour into local communities…

But with the influx to local communities of billions in federal dollars through the American Rescue Plan Act (ARPA), the Infrastructure Investment and Jobs Act, and the Inflation Reduction Act, many experts think the time is ripe to adopt the tool.

“The stakes are high in restoring and rebuilding our nation’s crumbling civic, political and economic infrastructures,” wrote Hollie Russon Gilman and Lizbeth Lucero of New America’s Political Reform Program in a recent op-ed. “The long overdue improvements needed in America’s cities and counties call for remodeling how we govern and allocate federal funds across the country.”

ARPA dollars prompted the city of Cleveland to push for a participatory budgeting pilot. 

“Cleveland is a city that has one of the higher poverty rates for a city of their size in the United States. They have over 30 percent of their population living below the poverty line,” Kristania De Leon, co-executive director at the Participatory Budgeting Project, said on The Laura Flanders Show’s podcast last July. “So when they found out that they were getting American Rescue Plan Act funds allocated to their municipal government, they said, ‘Wait a minute, this is a huge influx of relatively flexible spending, where’s it going to go and who gets to have a say?’”

A community-led push culminated in a proposal by Cleveland Mayor Justin M. Bibb to the city council last year that $5 million in ARPA funds be allocated to pilot the first citywide participatory budgeting process in the city’s history.

ARPA dollars also spurred Nashville’s city council to allocate $10 million this year to its participatory budgeting program, which is in its third year.

In general, there have been several high-profile participatory budgeting projects in the last year. 

Seattle’s project claims to be the biggest participatory budgeting process ever in the United States. The city council earmarked approximately $30 million in the 2021 budget to run a participatory budgeting process. The goal is to spend the money on initiatives that reduce police violence, reduce crime, and focus on “creating true community safety through community-led safety programs and new investments.”

And in September, New York City Mayor Eric Adams announced the launch of the first-ever citywide participatory budgeting process. The program builds on a 2021 project that engaged residents of the 33 neighborhoods hardest hit by Covid-19 in a $1.3 million participatory budgeting process. The new program invites all New Yorkers, ages 11 and up, to decide how to spend $5 million of mayoral expense funding to address local community needs citywide…(More)”.

The Data Delusion


Jill Lepore at The New Yorker: “…The move from a culture of numbers to a culture of data began during the Second World War, when statistics became more mathematical, largely for the sake of becoming more predictive, which was necessary for wartime applications involving everything from calculating missile trajectories to cracking codes. “This was not data in search of latent truths about humanity or nature,” Wiggins and Jones write. “This was not data from small experiments, recorded in small notebooks. This was data motivated by a pressing need—to supply answers in short order that could spur action and save lives.” That work continued during the Cold War, as an instrument of the national-security state. Mathematical modelling, increased data-storage capacity, and computer simulation all contributed to the pattern detection and prediction in classified intelligence work, military research, social science, and, increasingly, commerce.

Despite the benefit that these tools provided, especially to researchers in the physical and natural sciences—in the study of stars, say, or molecules—scholars in other fields lamented the distorting effect on their disciplines. In 1954, Claude Lévi-Strauss argued that social scientists need “to break away from the hopelessness of the ‘great numbers’—the raft to which the social sciences, lost in an ocean of figures, have been helplessly clinging.” By then, national funding agencies had shifted their priorities. The Ford Foundation announced that although it was interested in the human mind, it was no longer keen on non-predictive research in fields like philosophy and political theory, deriding such disciplines as “polemical, speculative, and pre-scientific.” The best research would be, like physics, based on “experiment, the accumulation of data, the framing of general theories, attempts to verify the theories, and prediction.” Economics and political science became predictive sciences; other ways of knowing in those fields atrophied.

The digitization of human knowledge proceeded apace, with libraries turning books first into microfiche and microfilm and then—through optical character recognition, whose origins date to the nineteen-thirties—into bits and bytes. The field of artificial intelligence, founded in the nineteen-fifties, at first attempted to sift through evidence in order to identify the rules by which humans reason. This approach hit a wall, in a moment known as “the knowledge acquisition bottleneck.” The breakthrough came with advances in processing power and the idea of using the vast stores of data that had for decades been compounding in the worlds of both government and industry to teach machines to teach themselves by detecting patterns: machines, learning…(More)”.

China’s fake science industry: how ‘paper mills’ threaten progress


Article by Eleanor Olcott, Clive Cookson and Alan Smith at the Financial Times: “…Over the past two decades, Chinese researchers have become some of the world’s most prolific publishers of scientific papers. The Institute for Scientific Information, a US-based research analysis organisation, calculated that China produced 3.7mn papers in 2021 — 23 per cent of global output — just behind the 4.4mn total from the US.

At the same time, China has been climbing the ranks of the number of times a paper is cited by other authors, a metric used to judge output quality. Last year, China surpassed the US for the first time in the number of most cited papers, according to Japan’s National Institute of Science and Technology Policy, although that figure was flattered by multiple references to Chinese research that first sequenced the Covid-19 virus genome.

The soaring output has sparked concern in western capitals. Chinese advances in high-profile fields such as quantum technology, genomics and space science, as well as Beijing’s surprise hypersonic missile test two years ago, have amplified the view that China is marching towards its goal of achieving global hegemony in science and technology.

That concern is a part of a wider breakdown of trust in some quarters between western institutions and Chinese ones, with some universities introducing background checks on Chinese academics amid fears of intellectual property theft.

But experts say that China’s impressive output masks systemic inefficiencies and an underbelly of low-quality and fraudulent research. Academics complain about the crushing pressure to publish to gain prized positions at research universities…(More)”.

The limits of expert judgment: Lessons from social science forecasting during the pandemic


Article by Cendri Hutcherson and Michael Varnum: “Imagine being a policymaker at the beginning of the COVID-19 pandemic. You have to decide which actions to recommend, how much risk to tolerate and what sacrifices to ask your citizens to bear.

Who would you turn to for an accurate prediction about how people would react? Many would recommend going to the experts — social scientists. But we are here to tell you this would be bad advice.

As psychological scientists with decades of combined experience studying decision-making, wisdom, expert judgment and societal change, we hoped social scientists’ predictions would be accurate and useful. But we also had our doubts.

Our discipline has been undergoing a crisis due to failed study replications and questionable research practices. If basic findings can’t be reproduced in controlled experiments, how confident can we be that our theories can explain complex real-world outcomes?

To find out how well social scientists could predict societal change, we ran the largest forecasting initiative in our field’s history using predictions about change in the first year of the COVID-19 pandemic as a test case….

Our findings, detailed in peer-reviewed papers in Nature Human Behaviour and in American Psychologist, paint a sobering picture. Despite the causal nature of most theories in the social sciences, and the fields’ emphasis on prediction in controlled settings, social scientists’ forecasts were generally not very good.

In both papers, we found that experts’ predictions were generally no more accurate than those made by samples of the general public. Further, their predictions were often worse than predictions generated by simple statistical models.

Our studies did still give us reasons to be optimistic. First, forecasts were more accurate when teams had specific expertise in the domain they were making predictions in. If someone was an expert in depression, for example, they were better at predicting societal trends in depression.

Second, when teams were made up of scientists from different fields working together, they tended to do better at forecasting. Finally, teams that used simpler models to generate their predictions and made use of past data generally outperformed those that didn’t.

These findings suggest that, despite the poor performance of the social scientists in our studies, there are steps scientists can take to improve their accuracy at this type of forecasting….(More)”.
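To make the comparison concrete: a “simple statistical model that makes use of past data” of the kind the authors describe can be as basic as a linear trend fitted to historical observations and extrapolated forward. The sketch below illustrates that idea with invented monthly values; it is only an illustration and is not drawn from the studies themselves.

```python
# Illustrative sketch only: a minimal "simple statistical model" that uses
# past data -- a straight-line trend fitted to a historical series and
# extrapolated one step ahead. All values below are hypothetical.
import numpy as np

# Hypothetical monthly values of some societal indicator over the past year.
past = np.array([62.0, 61.5, 61.8, 60.9, 60.2, 59.8,
                 59.5, 59.9, 59.1, 58.7, 58.4, 58.0])
months = np.arange(len(past))

# Fit a first-degree polynomial (a straight line) to the series.
slope, intercept = np.polyfit(months, past, deg=1)

# Extrapolate the fitted trend to the next, unobserved month.
trend_forecast = slope * len(past) + intercept

# Even simpler baseline: carry the last observation forward.
naive_forecast = past[-1]

print(f"linear-trend forecast for next month: {trend_forecast:.1f}")
print(f"last-value forecast for next month:   {naive_forecast:.1f}")
```

A forecast like this uses nothing but the historical series itself, which is why such models make a natural baseline against which expert judgment can be compared.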

What We Gain from More Behavioral Science in the Global South


Article by Pauline Kabitsis and Lydia Trupe: “In recent years, the field has been critiqued for applying behavioral science at the margins, settling for small but statistically significant effect sizes. Critics have argued that by focusing our efforts on nudging individuals to increase their 401(k) contributions or to reduce their so-called carbon footprint, we have ignored the systemic drivers of important challenges, such as fundamental flaws in the financial system and corporate responsibility for climate change. As Michael Hallsworth points out, however, the field may not be willfully ignoring these deeper challenges, but rather investing in areas of change that are likely easier to move, measure, and secure funding for.

It’s been our experience working in the Global South that nudge-based solutions can provide short-term gains within current systems, but for lasting impact a focus beyond individual-level change is required. This is because the challenges in the Global South typically involve fundamental problems, like enabling women’s reproductive choice, combatting intimate partner violence and improving food security among the world’s most vulnerable populations.

Our work at Common Thread focuses on improving behaviors related to health, like encouraging those persistently left behind to get vaccinated, and enabling Ukrainian refugees in Poland to access health and welfare services. We use a behavioral model that considers not just the individual biases that impact people’s behaviors, but the structural, social, interpersonal, and even historical context that triggers these biases and inhibits health seeking behaviors…(More)”.