The Moral Economy of High-Tech Modernism


Essay by Henry Farrell and Marion Fourcade: “While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes…(More)”.

Protecting the integrity of survey research


Paper by Jamieson, Kathleen Hall, et al: “Although polling is not irredeemably broken, changes in technology and society create challenges that, if not addressed well, can threaten the quality of election polls and other important surveys on topics such as the economy. This essay describes some of these challenges and recommends remediations to protect the integrity of all kinds of survey research, including election polls. These 12 recommendations specify ways that survey researchers, and those who use polls and other public-oriented surveys, can increase the accuracy and trustworthiness of their data and analyses. Many of these recommendations align practice with the scientific norms of transparency, clarity, and self-correction. The transparency recommendations focus on improving disclosure of factors that affect the nature and quality of survey data. The clarity recommendations call for more precise use of terms such as “representative sample” and clear description of survey attributes that can affect accuracy. The recommendation about correcting the record urges the creation of a publicly available, professionally curated archive of identified technical problems and their remedies. The paper also calls for development of better benchmarks and for additional research on the effects of panel conditioning. Finally, the authors suggest ways to help people who want to use or learn from survey research understand the strengths and limitations of surveys and distinguish legitimate and problematic uses of these methods…(More)”.

The Incredible Challenge of Counting Every Global Birth and Death


Jeneen Interlandi at The New York Times: “…The world’s wealthiest nations are awash in so much personal data that data theft has become a lucrative business and its protection a common concern. From such a vantage point, it can be difficult to even fathom the opposite — a lack of any identifying information at all — let alone grapple with its implications. But the undercounting of human lives is pervasive, data scientists say. The resulting ills are numerous and consequential, and recent history is littered with missed opportunities to solve the problem.

More than two decades ago, 147 nations rallied around the Millennium Development Goals, the United Nations’ bold new plan for halving extreme poverty, curbing childhood mortality and conquering infectious diseases like malaria and H.I.V. The health goals became the subject of countless international summits and steady news coverage, ultimately spurring billions of dollars in investment from the world’s wealthiest nations, including the United States. But a fierce debate quickly ensued. Critics said that health officials at the United Nations and elsewhere had almost no idea what the baseline conditions were in many of the countries they were trying to help. They could not say whether maternal mortality was increasing or decreasing, or how many people were being infected with malaria, or how fast tuberculosis was spreading. In a 2004 paper, the World Health Organization’s former director of evidence, Chris Murray, and other researchers described the agency’s estimates as “serial guessing.” Without that baseline data, progress toward any given goal — to halve hunger, for example — could not be measured…(More)”.

Why Data for and about Children Needs Attention at the World Data Forum: The Vital Role of Partnerships


Blog by Stefaan Verhulst, Eugenia Olliaro, Danzhen You, Estrella Lajom, and Daniel Shephard: “Issues surrounding children and data are rarely given the thoughtful and dedicated attention they deserve. An increasingly large amount of data is being collected about children, often without a framework to determine whether those data are used responsibly. At the same time, even as the volume of data increases, there remain substantial areas of missing data when it comes to children. This is especially true for children on the move and those who have been marginalized by conflict, displacement, or environmental disasters. There is also a risk that patterns of data collection mirror existing forms of exclusion, thereby perpetuating inequalities that exist along, for example, dimensions of indigeneity and gender.

This year’s World Data Forum, to be held in Hangzhou, China, offers an opportunity to unpack these challenges and consider solutions, such as through new forms of partnerships. The Responsible Data for Children (RD4C) initiative offers one important model for such a partnership. Formed between The GovLab and UNICEF, the initiative seeks to produce guidance, tools, and leadership to support the responsible handling of data for and about children across the globe. It addresses the unique vulnerabilities that face children, identifying shortcomings in the existing data ecology and pointing toward some possible solutions…(More)”.

Seize the Future by Harnessing the Power of Data


Essay by Kriss Deiglmeier: “…Data is a form of power. And the sad reality is that power is being held increasingly by the commercial sector and not by organizations seeking to create a more just, sustainable, and prosperous world. A year into my tenure as the chief global impact officer at Splunk, I became consumed with the new era driven by data. Specifically, I was concerned with the emerging data divide, which I defined as “the disparity between the expanding use of data to create commercial value, and the comparatively weak use of data to solve social and environmental challenges.”…

To effectively address the emerging data future, the social impact sector must build an entire impact data ecosystem for this moment in time—and the next moment in time. The way to do that is by investing in those areas where we currently lag the commercial sector. Consider the following gaps:

  • Nonprofits are ill-equipped with the financial and technical resources they need to make full use of data, often due to underfunding.
  • Compared with the commercial sector, the social impact sector’s pool of technical and data talent is a desert.
  • While the sector is rich with output and service-delivery data, that data is locked away or is unusable in its current form.
  • The sector lacks living data platforms (collaboratives and data refineries) that can make use of sector-wide data in a way that helps improve service delivery, maximize impact, and create radical innovation.

The harsh realities of the sector’s disparate data skills, infrastructure, and competencies show the dire current state. For the impact sector to transition to a place of power, it must jump without hesitation into the arena of the Data Age—and invest time, talent, and money in filling in these gaps.

Regardless of our lagging position, the social sector has both an incredible opportunity and a unique capacity to drive the power of data into the emerging and unimaginable. The good news is that there’s pivotal work already happening in the sector that is making it easier to build the kind of impact data ecosystem needed to join the Data Age. The framing and terms used to describe this work are many—data for good, data science for impact, open data, public interest technology, data lakes, ethical data, and artificial intelligence ethics.

These individual pieces, while important, are not enough. To fully exploit the power of data for a more just, sustainable, and prosperous world, we need to be bold enough to build the full ecosystem and not be satisfied with piecemeal work. To do that we should begin by looking at the assets that we have and build on those.

People. There are dedicated leaders in the field of social innovation who are committed to using data for impact and who have been doing that for many years. We need to support them by investing in their work at scale. The list of people leading the way is constantly growing, but to name a few: Stefaan G. Verhulst, Joy Buolamwini, Jim Fruchterman, Katara McCarty, Geoff Mulgan, Rediet Abebe, Jason Saul, and Jake Porway….(More)”.

Data is power — it’s time we act like it


Article by Danil Mikhailov: “Almost 82% of NGOs in low- and middle-income countries cite a lack of funding as their biggest barrier to adopting digital tools for social impact. What’s more, data.org’s 2023 data for social impact, or DSI, report, Accelerate Aspirations: Moving Together to Achieve Systems Change, found that when it comes to financial support, funders overlook the power of advanced data strategies to address longer-term systemic solutions — instead focusing on short-term, project-based outcomes.

That’s a real problem as we look to deploy powerful, data-driven interventions to solve some of today’s biggest crises — from shifting demographics to rising inequality to pandemics to our global climate emergency. Given the urgent challenges our world faces, pilots, one-offs, and under-resourced program interventions are no longer acceptable.

It’s time we — as funders, academics, and purpose-driven data practitioners — acknowledge that data is power. And how do we truly harness that power? We must look toward innovative, diverse, equitable, and collaborative funding and partnership models to meet the incredible potential of data for social impact or risk the success of systems-level solutions that lead to long-term impact…(More)”.

How Design is Governance


Essay by Amber Case: “At a fundamental level, all design is governance. We encounter inconveniences like this coffee shop every day, both offline and in the apps we use. But it’s not enough to say it’s the result of bad design. It’s also a result of governance decisions made on behalf of the customers during the design process.

Michel Foucault talked about governance as structuring the field of action for others. Governance is the processes, systems, and principles through which a group, organization, or society is managed and controlled.

Design not only shapes how a product or service will be used, but also restricts or frustrates people’s existing or emergent choices, even when they’re not a user themselves. My neighbor at the cafe, who now has a Mac power cord snaked under her feet, can attest to that.

In a coffee shop, we’re lucky that we can move chairs around or talk with other customers. But when it comes to apps, most people cannot move buttons on interfaces. We’re stuck.

When we create designs, we’re basically defining what is possible or at least highly encouraged within the context of our products. We’re also defining what is discouraged.

To illustrate, let’s revisit this same cafe from a governance perspective…(More)”.

Could a Global “Wicked Problems Agency” Incentivize Data Sharing?


Paper by Susan Ariel Aaronson: “Global data sharing could help solve “wicked” problems (problems such as climate change, terrorism and global poverty that no one knows how to solve without creating further problems). There is no one or best way to address wicked problems because they have many different causes and manifest in different contexts. By mixing vast troves of data, policy makers and researchers may find new insights and strategies to address these complex problems. National and international government agencies and large corporations generally control the use of such data, and the world has made little progress in encouraging cross-sectoral and international data sharing. This paper proposes a new international cloud-based organization, the “Wicked Problems Agency,” to catalyze both data sharing and data analysis in the interest of mitigating wicked problems. This organization would work to prod societal entities — firms, individuals, civil society groups and governments — to share and analyze various types of data. The Wicked Problems Agency could provide a practical example of how data sharing can yield both economic and public good benefits…(More)”.

The Data Delusion


Jill Lepore at The New Yorker: “…The move from a culture of numbers to a culture of data began during the Second World War, when statistics became more mathematical, largely for the sake of becoming more predictive, which was necessary for wartime applications involving everything from calculating missile trajectories to cracking codes. “This was not data in search of latent truths about humanity or nature,” Wiggins and Jones write. “This was not data from small experiments, recorded in small notebooks. This was data motivated by a pressing need—to supply answers in short order that could spur action and save lives.” That work continued during the Cold War, as an instrument of the national-security state. Mathematical modelling, increased data-storage capacity, and computer simulation all contributed to the pattern detection and prediction in classified intelligence work, military research, social science, and, increasingly, commerce.

Despite the benefit that these tools provided, especially to researchers in the physical and natural sciences—in the study of stars, say, or molecules—scholars in other fields lamented the distorting effect on their disciplines. In 1954, Claude Lévi-Strauss argued that social scientists need “to break away from the hopelessness of the ‘great numbers’—the raft to which the social sciences, lost in an ocean of figures, have been helplessly clinging.” By then, national funding agencies had shifted their priorities. The Ford Foundation announced that although it was interested in the human mind, it was no longer keen on non-predictive research in fields like philosophy and political theory, deriding such disciplines as “polemical, speculative, and pre-scientific.” The best research would be, like physics, based on “experiment, the accumulation of data, the framing of general theories, attempts to verify the theories, and prediction.” Economics and political science became predictive sciences; other ways of knowing in those fields atrophied.

The digitization of human knowledge proceeded apace, with libraries turning books first into microfiche and microfilm and then—through optical character recognition, whose origins date to the nineteen-thirties—into bits and bytes. The field of artificial intelligence, founded in the nineteen-fifties, at first attempted to sift through evidence in order to identify the rules by which humans reason. This approach hit a wall, in a moment known as “the knowledge acquisition bottleneck.” The breakthrough came with advances in processing power and the idea of using the vast stores of data that had for decades been compounding in the worlds of both government and industry to teach machines to teach themselves by detecting patterns: machines, learning…(More)”.

The limits of expert judgment: Lessons from social science forecasting during the pandemic


Article by Cendri Hutcherson and Michael Varnum: “Imagine being a policymaker at the beginning of the COVID-19 pandemic. You have to decide which actions to recommend, how much risk to tolerate and what sacrifices to ask your citizens to bear.

Who would you turn to for an accurate prediction about how people would react? Many would recommend going to the experts — social scientists. But we are here to tell you this would be bad advice.

As psychological scientists with decades of combined experience studying decision-making, wisdom, expert judgment and societal change, we hoped social scientists’ predictions would be accurate and useful. But we also had our doubts.

Our discipline has been undergoing a crisis due to failed study replications and questionable research practices. If basic findings can’t be reproduced in controlled experiments, how confident can we be that our theories can explain complex real-world outcomes?

To find out how well social scientists could predict societal change, we ran the largest forecasting initiative in our field’s history, using predictions about change in the first year of the COVID-19 pandemic as a test case…

Our findings, detailed in peer-reviewed papers in Nature Human Behaviour and in American Psychologist, paint a sobering picture. Despite the causal nature of most theories in the social sciences, and the fields’ emphasis on prediction in controlled settings, social scientists’ forecasts were generally not very good.

In both papers, we found that experts’ predictions were generally no more accurate than those made by samples of the general public. Further, their predictions were often worse than predictions generated by simple statistical models.

Our studies did still give us reasons to be optimistic. First, forecasts were more accurate when teams had specific expertise in the domain they were making predictions in. If someone was an expert in depression, for example, they were better at predicting societal trends in depression.

Second, when teams were made up of scientists from different fields working together, they tended to do better at forecasting. Finally, teams that used simpler models to generate their predictions and made use of past data generally outperformed those that didn’t.

These findings suggest that, despite the poor performance of the social scientists in our studies, there are steps scientists can take to improve their accuracy at this type of forecasting…(More)”.