Trust with integrity: Harnessing the integrity dividends of digital government for reducing corruption in developing countries


Paper by Carlos Santiso: “Does digitalization reduce corruption? What are the benefits of data-driven digital government innovations to strengthen public integrity and advance the Sustainable Development Goals? While the correlation between digitalization and corruption is well established, there is less actionable evidence on the effects of specific digitalization reforms on different types of corruption and the policy channels through which they operate. This paper unbundles the integrity dividends of digital reforms that the pandemic has accelerated. It analyses the rise of integrity-tech and integrity analytics in the anticorruption space, deployed by data-savvy integrity institutions. It also assesses the broader integrity dividends of government digitalization for cutting red tape, reducing discretion and increasing transparency in government services and social transfers. It argues that digital government can be an effective anticorruption strategy, with subtler yet deeper effects. There nevertheless need to be greater synergies between digital reforms and anticorruption strategies….(More)”.

Crowdsourcing research questions in science


Paper by Susanne Beck, Tiare-Maria Brasseur, Marion Poetz and Henry Sauermann: “Scientists are increasingly crossing the boundaries of the professional system by involving the general public (the crowd) directly in their research. However, this crowd involvement tends to be confined to empirical work and it is not clear whether and how crowds can also be involved in conceptual stages such as formulating the questions that research is trying to address. Drawing on five different “paradigms” of crowdsourcing and related mechanisms, we first discuss potential merits of involving crowds in the formulation of research questions (RQs). We then analyze data from two crowdsourcing projects in the medical sciences to describe key features of RQs generated by crowd members and compare the quality of crowd contributions to that of RQs generated in the conventional scientific process. We find that the majority of crowd contributions are problem restatements that can be useful to assess problem importance but provide little guidance regarding potential causes or solutions. At the same time, crowd-generated research questions frequently cross disciplinary boundaries by combining elements from different fields within and especially outside medicine. Using evaluations by professional scientists, we find that the average crowd contribution has lower novelty and potential scientific impact than professional research questions, but comparable practical impact. Crowd contributions outperform professional RQs once we apply selection mechanisms at the level of individual contributors or across contributors. Our findings advance research on crowd and citizen science, crowdsourcing and distributed knowledge production, as well as the organization of science. We also inform ongoing policy debates around the involvement of citizens in research in general, and agenda setting in particular…(More)”.

Facial Recognition Plan from IRS Raises Big Concerns


Article by James Hendler: “The U.S. Internal Revenue Service is planning to require citizens to create accounts with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with ID.me to authenticate the identities of people accessing services.

The IRS’s move is aimed at cutting down on identity theft, a crime that affects millions of Americans. The IRS, in particular, has reported a number of tax filings from people claiming to be others, and fraud in many of the programs that were administered as part of the American Rescue Plan has been a major concern to the government.

The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits. The reaction has prompted the IRS to revisit its decision.

As a computer science researcher and the chair of the Global Technology Policy Council of the Association for Computing Machinery, I have been involved in exploring government use of facial recognition technology, both its applications and its potential flaws. A great number of concerns have been raised over the general use of this technology in policing and other government functions, often focused on whether the inaccuracy of these algorithms can have discriminatory effects. In the case of ID.me, there are other issues involved as well….(More)”.

Financing Models for Digital Ecosystems


Paper by Rahul Matthan, Prakhar Misra and Harshita Agrawal: “This paper explores various financing models for the digital ecosystem within the Indian setup. It uses the market/non-market failure distinction and applies it to different parts of the ecosystem, outlined in the Open Digital Ecosystems framework. It identifies which form of financing — public, private and philanthropic — is suitable for the relevant component of the digital world — data registries, exchanges, open stacks, marketplaces, co-creation platforms, and information access portals. Finally, it treats philanthropic financing as a special case of financing mechanisms available and analyses their pros and cons in the Indian digital ecosystem…(More)”.

Data Innovation in Demography, Migration and Human Mobility


Report by Bosco, C., Grubanov-Boskovic, S., Iacus, S., Minora, U., Sermi, F. and Spyratos, S.: “With the consolidation of the culture of evidence-based policymaking, the availability of data has become central for policymakers. Nowadays, innovative data sources offer the opportunity to describe demographic, mobility- and migration-related phenomena more accurately by making available large volumes of real-time and spatially detailed data. At the same time, however, data innovation has brought up new challenges (ethics, privacy, data governance models, data quality) for citizens, statistical offices, policymakers and the private sector.

Focusing on the fields of demography, mobility and migration studies, the aim of this report is to assess the current state of utilisation of data innovation in the scientific literature as well as to identify areas in which data innovation has the most concrete potential for policymaking. For that purpose, this study has reviewed more than 300 articles and scientific reports, as well as numerous tools, that employed non-traditional data sources for demographic, human mobility or migration research. The specific findings of our report contribute to a discussion on a) how innovative data is used with respect to traditional data sources; b) domains in which innovative data have the highest potential to contribute to policymaking; c) prospects for an innovative data transition towards systematic contribution to official statistics and policymaking…(More)”. See also Big Data for Migration Alliance.

Dignity in a Digital Age: Making Tech Work for All of Us


Book by Congressman Ro Khanna: “… offers a revolutionary roadmap for closing America’s digital divide and extending greater economic prosperity to all. In Khanna’s vision, “just as people can move to technology, technology can move to people. People need not be compelled to move from one place to another to reap the benefits offered by technological progress” (from the foreword by Amartya Sen, Nobel Laureate in Economics).

In the digital age, unequal access to technology and the revenue it creates is one of the most pressing issues facing the United States. There is an economic gulf between those who have struck gold in the tech industry and those left behind by the digital revolution; a geographic divide between those in the coastal tech industry and those in the heartland whose jobs have been automated; and existing inequalities in technological access—students without computers, rural workers with spotty WiFi, and plenty of workers without the luxury to work from home.

Dignity in a Digital Age tackles these challenges head-on and imagines how the digital economy can create opportunities for people all across the country without uprooting them. Congressman Ro Khanna of Silicon Valley offers a vision for democratizing digital innovation to build economically vibrant and inclusive communities. Rather than letting tech reshape our economy on its own terms, Representative Khanna argues, we must channel those powerful forces toward creating a more healthy, equal, and democratic society.

Born into an immigrant family, Khanna understands how economic opportunity can change the course of a person’s life. Anchored by an approach Khanna refers to as “progressive capitalism,” he shows how democratizing access to tech can strengthen every sector of the economy and culture. By expanding technological jobs nationwide through public and private partnerships, we can close the wealth gap in America and begin to repair the fractured, distrusting relationships that have plagued our country for far too long.

Moving deftly between storytelling, policy, and some of the country’s greatest thinkers in political philosophy and economics, Khanna presents a bold vision we can’t afford to ignore. Dignity in a Digital Age is a roadmap to how we can seek dignity for every American in an era in which technology shapes every aspect of our lives…(More)”.

Sample Truths


Christopher Beha at Harper’s Magazine: “…How did we ever come to believe that surveys of this kind could tell us something significant about ourselves?

One version of the story begins in the middle of the seventeenth century, after the Thirty Years’ War left the Holy Roman Empire a patchwork of sovereign territories with uncertain borders, contentious relationships, and varied legal conventions. The resulting “weakness and need for self-definition,” the French researcher Alain Desrosières writes, created a demand among local rulers for “systematic cataloging.” This generally took the form of descriptive reports. Over time the proper methods and parameters of these reports became codified, and thus was born the discipline of Statistik: the systematic study of the attributes of a state.

As Germany was being consolidated in the nineteenth century, “certain officials proposed using the formal, detailed framework of descriptive statistics to present comparisons between the states” by way of tables in which “the countries appeared in rows, and different (literary) elements of the description appeared in columns.” In this way, a single feature, such as population or climate, could be easily removed from its context. Statistics went from being a method for creating a holistic description of one place to what Desrosières calls a “cognitive space of equivalence.” Once this change occurred, it was only a matter of time before the descriptions themselves were put into the language of equivalence, which is to say, numbers.

The development of statistical reasoning was central to the “project of legibility,” as the anthropologist James C. Scott calls it, ushered in by the rise of nation-states. Strong centralized governments, Scott writes in Seeing Like a State, required that local communities be made “legible,” their features abstracted to enable management by distant authorities. In some cases, such “state simplifications” occurred at the level of observation. Cadastral maps, for example, ignored local land-use customs, focusing instead on the points relevant to the state: How big was each plot, and who was responsible for paying taxes on it?

But legibility inevitably requires simplifying the underlying facts, often through coercion. The paradigmatic example here is postrevolutionary France. For administrative purposes, the country was divided into dozens of “departments” of roughly equal size whose boundaries were drawn to break up culturally cohesive regions such as Normandy and Provence. Local dialects were effectively banned, and use of the new, highly rational metric system was required. (As many commentators have noted, this work was a kind of domestic trial run for colonialism.)

One thing these centralized states did not need to make legible was their citizens’ opinions—on the state itself, or anything else for that matter. This was just as true of democratic regimes as authoritarian ones. What eventually helped bring about opinion polling was the rise of consumer capitalism, which created the need for market research.

But expanding the opinion poll beyond questions like “Pepsi or Coke?” required working out a few kinks. As the historian Theodore M. Porter notes, pollsters quickly learned that “logically equivalent forms of the same question produce quite different distributions of responses.” This fact might have led them to doubt the whole undertaking. Instead, they “enforced a strict discipline on employees and respondents,” instructing pollsters to “recite each question with exactly the same wording and in a specified order.” Subjects were then made “to choose one of a small number of packaged statements as the best expression of their opinions.”

This approach has become so familiar that it may be worth noting how odd it is to record people’s opinions on complex matters by asking them to choose among prefabricated options. Yet the method has its advantages. What it sacrifices in accuracy it makes up in pseudoscientific precision and quantifiability. Above all, the results are legible: the easiest way to be sure you understand what a person is telling you is to put your own words in his mouth.

Scott notes a kind of Heisenberg principle at work in state simplifications: “They frequently have the power to transform the facts they take note of.” This is another advantage to multiple-choice polling. If people are given a narrow range of opinions, they may well think that those are the only options available, and in choosing one, they may well accept it as wholly their own. Even those of us who reject the stricture of these options for ourselves are apt to believe that they fairly represent the opinions of others. One doesn’t have to be a postmodern relativist to suspect that what’s going on here is as much the construction of a reality as the depiction of one….(More)”.

Guide for Policymakers on Making Transparency Meaningful


Report by CDT: “In 2020, the Minneapolis police used a unique kind of warrant to investigate vandalism of an AutoZone store during the protests over the murder of George Floyd by a police officer. This “geofence” warrant required Google to turn over data on all users within a certain geographic area around the store at a particular time — which would have included not only the vandal, but also protesters, bystanders, and journalists. 

It was only several months later that the public learned of the warrant, because Google notified a user that his account information was subject to the warrant, and the user told reporters. And it was not until a year later — when Google first published a transparency report with data about geofence warrants — that the public learned the total number of geofence warrants Google receives from U.S. authorities and of a recent “explosion” in their use. New York lawmakers introduced a bill to forbid geofence warrants because of concerns they could be used to target protesters, and, in light of Google’s transparency report, some civil society organizations are calling for them to be banned, too.

Technology company transparency matters, as this example shows. Transparency about governmental and company practices that affect users’ speech, access to information, and privacy from government surveillance online helps us understand and check the ways in which tech companies and governments wield power and affect people’s human rights. 

Policymakers are increasingly proposing transparency measures as part of their efforts to regulate tech companies, both in the United States and around the world. But what exactly do we mean when we talk about transparency when it comes to technology companies like social networks, messaging services, and telecommunications firms? A new report from CDT, Making Transparency Meaningful: A Framework for Policymakers, maps and describes four distinct categories of technology company transparency:

  1. Transparency reports that provide aggregated data and qualitative information about moderation actions, disclosures, and other practices concerning user generated content and government surveillance; 
  2. User notifications about government demands for their data and moderation of their content; 
  3. Access to data held by intermediaries for independent researchers, public policy advocates, and journalists; and 
  4. Public-facing analysis, assessments, and audits of technology company practices with respect to user speech and privacy from government surveillance. 

Different forms of transparency are useful for different purposes or audiences, and they also give rise to varying technical, legal, and practical challenges. Making Transparency Meaningful is designed to help policymakers and advocates understand the potential benefits and tradeoffs that come with each form of transparency. This report addresses key questions raised by proposed legislation in the United States and Europe that seeks to mandate one or more of these types of transparency and thereby hold tech companies and governments more accountable….(More)”.

Leveraging Non-Traditional Data For The Covid-19 Socioeconomic Recovery Strategy


Article by Deepali Khanna: “To this end, it is opportune to ask the following questions: Can we harness the power of data routinely collected by companies—including transportation providers, mobile network operators, social media networks and others—for the public good? Can we bridge the data gap to give governments access to data, insights and tools that can inform national and local response and recovery strategies?

There is increasing recognition that traditional and non-traditional data should be seen as complementary resources. Non-traditional data can bring significant benefits in bridging existing data gaps but must still be calibrated against benchmarks based on established traditional data sources. These traditional datasets are widely seen as reliable because they are subject to stringent, established international and national standards. However, they are often limited in frequency and granularity, especially in low- and middle-income countries, given the cost and time required to collect such data. For example, official economic indicators such as GDP, household consumption and consumer confidence may be available only at the national or regional level, with quarterly updates…

In the Philippines, UNDP, with support from The Rockefeller Foundation and the government of Japan, recently set up the Pintig Lab: a multidisciplinary network of data scientists, economists, epidemiologists, mathematicians and political scientists, tasked with supporting data-driven crisis response and development strategies. In early 2021, the Lab conducted a study which explored how household spending on consumer-packaged goods, or fast-moving consumer goods (FMCGs), can be used to assess the socioeconomic impact of Covid-19 and identify heterogeneities in the pace of recovery across households in the Philippines. The Philippine National Economic and Development Authority is now in the process of incorporating this data into its GDP forecasting, as an additional input to its predictive models for consumption. Further, this data can be combined with other non-traditional datasets such as credit card or mobile wallet transactions, and machine learning techniques for higher-frequency GDP nowcasting, to allow for more nimble and responsive economic policies that can both absorb and anticipate the shocks of crisis….(More)”.
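The nowcasting idea described above can be sketched in a few lines. The toy example below is purely illustrative: all numbers are invented, and real pipelines such as the Pintig Lab's use far richer data and models. It simply fits a regression of quarterly GDP growth on a faster-arriving spending proxy, then uses the latest proxy reading to produce an early estimate before official figures are released.

```python
# Toy sketch of GDP "nowcasting" from a high-frequency spending proxy.
# All figures are hypothetical; this only illustrates the mechanism.

def fit_ols(x, y):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
            sum((xi - mean_x) ** 2 for xi in x)
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical history: quarterly FMCG spending growth (%) vs GDP growth (%).
fmcg_growth = [1.2, -0.8, -5.1, 2.3, 3.0, 1.1]
gdp_growth  = [1.0, -0.5, -4.0, 1.8, 2.4, 0.9]

intercept, slope = fit_ols(fmcg_growth, gdp_growth)

# "Nowcast": the spending proxy arrives months before official GDP, so the
# fitted relationship yields an early estimate for the current quarter.
latest_fmcg = 1.5
nowcast = intercept + slope * latest_fmcg
print(f"GDP growth nowcast: {nowcast:.2f}%")
```

In practice, nowcasting models use many predictors observed at mixed frequencies, but the design choice is the same: trade a little accuracy for a large gain in timeliness.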

Automation exacts a toll in inequality


Rana Foroohar at The Financial Times: “When humans compete with machines, wages go down and jobs go away. But, ultimately, new categories of better work are created. The mechanisation of agriculture in the first half of the 20th century, or advances in computing and communications technology in the 1950s and 1960s, for example, went hand in hand with strong, broadly shared economic growth in the US and other developed economies.

But, in later decades, something in this relationship began to break down. Since the 1980s, we’ve seen the robotics revolution in manufacturing; the rise of software in everything; the consumer internet and the internet of things; and the growth of artificial intelligence. But during this time trend GDP growth in the US has slowed, inequality has risen and many workers — particularly, men without college degrees — have seen their real earnings fall sharply.

Globalisation and the decline of unions have played a part. But so has technological job disruption. That issue is beginning to get serious attention in Washington. In particular, politicians and policymakers are homing in on the work of MIT professor Daron Acemoglu, whose research shows that mass automation is no longer a win-win for both capital and labour. He testified at a US House of Representatives select committee hearing in November that automation — the substitution of machines and algorithms for tasks previously performed by workers — is responsible for 50-70 per cent of the economic disparities experienced between 1980 and 2016.

Why is this happening? Basically, while the automation of the early 20th century and the post-1945 period “increased worker productivity in a diverse set of industries and created myriad opportunities for them”, as Acemoglu said in his testimony, “what we’ve experienced since the mid 1980s is an acceleration in automation and a very sharp deceleration in the introduction of new tasks”. Put simply, he added, “the technological portfolio of the American economy has become much less balanced, and in a way that is highly detrimental to workers and especially low-education workers.”

What’s more, some things we are automating these days aren’t so economically beneficial. Consider those annoying computerised checkout stations in drug stores and groceries that force you to self-scan your purchases. They may save retailers a bit in labour costs, but they are hardly the productivity enhancer of, say, a self-driving combine harvester. Cecilia Rouse, chair of the White House’s Council of Economic Advisers, spoke for many when she told a Council on Foreign Relations event that she’d rather “stand in line [at the pharmacy] so that someone else has a job — it may not be a great job, but it is a job — and where I actually feel like I get better assistance.”

Still, there’s no holding back technology. The question is how to make sure more workers can capture its benefits. In her “Virtual Davos” speech a couple of weeks ago, Treasury secretary Janet Yellen pointed out that recent technologically driven productivity gains might exacerbate rather than mitigate inequality. She pointed to the fact that, while the “pandemic-induced surge in telework” will ultimately raise US productivity by 2.7 per cent, the gains will accrue mostly to upper income, white-collar workers, just as online learning has been better accessed and leveraged by wealthier, white students.

Education is where the rubber meets the road in fixing technology-driven inequality. As Harvard researchers Claudia Goldin and Lawrence Katz have shown, when the relationship between education and technology gains breaks down, tech-driven prosperity is no longer as widely shared. This is why the Biden administration has been pushing investments into community college, apprenticeships and worker training…(More)”.