The State of Digital Democracy Isn’t As Dire As It Seems


Richard Gibson at the Hedgehog Review: “American society is prone, political theorist Langdon Winner wrote in 2005, to “technological euphoria,” each bout of which is inevitably followed by a period of letdown and reassessment. Perhaps in part for this reason, reviewing the history of digital democracy feels like watching the same movie over and over again. Even Winner’s point has that quality: He first made it in the mid-eighties and has repeated it in every decade since. In the same vein, Warren Yoder, longtime director of the Public Policy Center of Mississippi, responded to the Pew survey by arguing that we have reached the inevitable “low point” with digital technology—as “has happened many times in the past with pamphleteers, muckraking newspapers, radio, deregulated television.” (“Things will get better,” Yoder cheekily adds, “just in time for a new generational crisis beginning soon after 2030.”)

So one threat the present techlash poses is to obscure the ways that digital technology in fact serves many of the functions the visionaries imagined. We now take for granted the vast array of “Gov Tech”—meaning internal government digital upgrades—that makes our democracy go. We have become accustomed to the numerous government services that citizens can avail themselves of with a few clicks, a process spearheaded by the Clinton-Gore administration. We forget how revolutionary Howard Dean’s “Internet campaign” was in the 2004 Democratic primaries, establishing the Internet-based model of campaigning that all presidential candidates use to coordinate volunteer efforts and conduct fundraising, in both cases pulling new participants into the democratic process.

An honest assessment of the current state of digital democracy would acknowledge that the good jostles with the bad and the ugly. Social media has become the new hotspot for Rheingold’s “disinformocracy.” The president’s toxic tweeting continues, though Twitter has recently attempted to provide more oversight. At the same time, digital media have played a conspicuous role in the protests following George Floyd’s death, from the phone used to record his murder to the apps and Google docs used by the organizers of protests. The protests, too, have sparked fresh debate about facial recognition software (rightly one of the major concerns in the Pew report), leading Amazon to announce in June that it was “pausing” police use of its technology for one year. The city of Boston has made a similar move. Senator Sherrod Brown’s Data Accountability and Transparency Act of 2020, now circulating in draft form, would also limit the federal government’s use of “facial surveillance technology.”

We thus need to avoid summary judgments at this still-early date in the ongoing history of digital democracy. In a superb research paper on “The Internet and Engaged Citizenship” commissioned by the American Academy of Arts and Sciences last year, the political scientist David Karpf wisely concludes that the incredible velocity of “Internet Time” befuddles our attempts to state flatly what has or hasn’t happened to democratic practices and participation in our times. The 2016 election has rightly put many observers on guard. Yet there is a danger in living headline-by-headline. We must not forget how volatile the tech scene remains. That fact leads to Karpf’s hopeful conclusion: “The Internet of 2019 is not a finished product. The choices made by technologists, investors, policy-makers, lawyers, and engaged citizens will all shape what the medium becomes next.” The same can be said about digital technology in 2020: The landscape is still evolving….(More)”.

The ambitious effort to piece together America’s fragmented health data


Nicole Wetsman at The Verge: “From the early days of the COVID-19 pandemic, epidemiologist Melissa Haendel knew that the United States was going to have a data problem. There didn’t seem to be a national strategy to control the virus, and cases were springing up in sporadic hotspots around the country. With such a patchwork response, nationwide information about the people who got sick would probably be hard to come by.

Other researchers around the country were pinpointing similar problems. In Seattle, Adam Wilcox, the chief analytics officer at UW Medicine, was reaching out to colleagues. The city was the first US COVID-19 hotspot. “We had 10 times the data, in terms of just raw testing, than other areas,” he says. He wanted to share that data with other hospitals, so they would have that information on hand before COVID-19 cases started to climb in their area. Everyone wanted to get as much data as possible in the hands of as many people as possible, so they could start to understand the virus.

Haendel was in a good position to help make that happen. She’s the chair of the National Center for Data to Health (CD2H), a National Institutes of Health program that works to improve collaboration and data sharing within the medical research community. So one week in March, just after she’d started working from home and pulled her 10th grader out of school, she started trying to figure out how to use existing data-sharing projects to help fight this new disease.

The solution Haendel and CD2H landed on sounds simple: a centralized, anonymous database of health records from people who tested positive for COVID-19. Researchers could use the data to figure out why some people get very sick and others don’t, how conditions like cancer and asthma interact with the disease, and which treatments end up being effective.

But in the United States, building that type of resource isn’t easy. “The US healthcare system is very fragmented,” Haendel says. “And because we have no centralized healthcare, that makes it also the case that we have no centralized healthcare data.” Hospitals, citing privacy concerns, don’t like to give out their patients’ health data. Even if hospitals agree to share, they all use different ways of storing information. At one institution, the classification “female” could go into a record as one, and “male” could go in as two — and at the next, they’d be reversed….(More)”.
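That coding mismatch is, at bottom, a harmonization problem: before records from different hospitals can be pooled, each site’s local codes have to be translated into one shared vocabulary, which is why efforts like this typically map everything to a common data model. A minimal sketch of the idea, using hypothetical site names, field names, and code values rather than anything from the actual CD2H pipeline, might look like this:

```python
# Illustrative sketch only -- not the actual CD2H/N3C implementation.
# Site names, field names, and code values below are hypothetical.

SITE_SEX_CODES = {
    "hospital_a": {1: "female", 2: "male"},   # site A stores "female" as 1
    "hospital_b": {1: "male", 2: "female"},   # site B codes it the other way around
}

def harmonize(site: str, raw: dict) -> dict:
    """Translate one site's raw record into the shared representation."""
    sex_map = SITE_SEX_CODES[site]
    return {
        "patient_id": raw["patient_id"],      # assumed already de-identified upstream
        "sex": sex_map[raw["sex_code"]],      # local integer code -> shared label
        "covid_positive": bool(raw["covid_test_result"]),
    }

# The same underlying facts, coded differently at each site,
# become identical records once harmonized:
print(harmonize("hospital_a", {"patient_id": "a-001", "sex_code": 1, "covid_test_result": 1}))
print(harmonize("hospital_b", {"patient_id": "b-042", "sex_code": 2, "covid_test_result": 1}))
```

The lookup table is the easy part; what makes a national database hard is agreeing on the shared vocabulary and then building and validating mappings like this for thousands of fields at every participating institution.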

Science Philanthropy and Societal Responsibility: A Match Made for the 21st Century


Blog by Evan S. Michelson: “The overlapping crises the world has experienced in 2020 make clear that resources from multiple sectors — government, private sector, and philanthropy — need to be deployed at multiple scales to better address societal challenges. In particular, science philanthropy has stepped up, helping to advance COVID-19 vaccine development, identify solutions to climate change, and make the tools of scientific inquiry more widely available.

As I write in my recently published book, Philanthropy and the Future of Science and Technology (Routledge, 2020), this linkage between science philanthropy and societal responsibility is one that needs to be continually strengthened and advanced as global challenges become more intertwined and as the relationship between science and society becomes more complex. In fact, science philanthropies have an important, yet often overlooked, role in raising the profile of the societal responsibility of research. One way to better understand the role science philanthropies can and should play in society is to draw on the responsible research and innovation (RRI) framework, a concept developed by scholars from fields such as science & technology policy and science & technology studies. Depending on its configuration, the RRI framework has roughly three core dimensions: anticipatory research that is forward-looking and in search of new discoveries, deliberative and inclusive approaches that better engage and integrate members of the public with the research process, and the adoption of reflexive and responsive dispositions by funders (along with those conducting research) to ensure that societal and public values are accounted for and integrated at the outset of a research effort.

Philanthropies that fund research can more explicitly consider this perspective — even just a little bit — when making their funding decisions, thereby helping to better infuse whatever support they provide for individuals, institutions, and networks with attention to broader societal concerns. For instance, doing so not only highlights the need for science philanthropies to identify and support high-quality early career researchers who are pursuing new avenues of science and technology research, but it also raises considerations of diversity, equity, and inclusion as equally important decision-making criteria for funding. The RRI framework also suggests that foundations working in science and technology should not only help to bring together networks of individual scholars and their host institutions, but that the horizon of such collaborations should be actively extended to include practitioners, decision-makers, users, and communities affected by such investigations. Philanthropies can take a further step and reflexively apply these perspectives to how they operate, how they set their strategies and grantmaking priorities, or even how they directly manage scientific research infrastructure, which some philanthropies have even begun to do within their own institutions….(More)”.

Evaluating the fake news problem at the scale of the information ecosystem


Paper by Jennifer Allen, Baird Howland, Markus Mobius, David Rothschild and Duncan J. Watts: “Fake news,” broadly defined as false or misleading information masquerading as legitimate news, is frequently asserted to be pervasive online with serious consequences for democracy. Using a unique multimode dataset that comprises a nationally representative sample of mobile, desktop, and television consumption, we refute this conventional wisdom on three levels. First, news consumption of any sort is heavily outweighed by other forms of media consumption, comprising at most 14.2% of Americans’ daily media diets. Second, to the extent that Americans do consume news, it is overwhelmingly from television, which accounts for roughly five times as much news consumption as online. Third, fake news comprises only 0.15% of Americans’ daily media diet. Our results suggest that the origins of public misinformedness and polarization are more likely to lie in the content of ordinary news or the avoidance of news altogether than in overt fakery….(More)”.
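A back-of-envelope way to relate those figures (this arithmetic is an illustration based on the numbers quoted above, not a result stated in the excerpt): with fake news at 0.15% of the overall media diet and news of any kind at no more than 14.2% of it, fake news works out to roughly one percent of news consumption itself.

\[
\frac{0.15\%}{14.2\%} \approx 0.011 \approx 1\%
\]

And since 14.2% is an upper bound on the news share, that one percent is a floor rather than a ceiling.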

Behavioral nudges reduce failure to appear for court


Paper by Alissa Fishbane, Aurelie Ouss and Anuj K. Shah: “Each year, millions of Americans fail to appear in court for low-level offenses, and warrants are then issued for their arrest. In two field studies in New York City, we make critical information salient by redesigning the summons form and providing text message reminders. These interventions reduce failures to appear by 13-21% and lead to 30,000 fewer arrest warrants over a 3-year period. In lab experiments, we find that while criminal justice professionals see failures to appear as relatively unintentional, laypeople believe they are more intentional. These lay beliefs reduce support for policies that make court information salient and increase support for punishment. Our findings suggest that criminal justice policies can be made more effective and humane by anticipating human error in unintentional offenses….(More)”.

Reset: Reclaiming the Internet for Civil Society


Book by Ronald Deibert: “Digital technologies have given rise to a new machine-based civilization that is increasingly linked to a growing number of social and political maladies. Accountability is weak and insecurity is endemic, creating disturbing opportunities for exploitation.

Drawing from the cutting-edge research of the Citizen Lab, the world-renowned digital security research group which he founded and directs, Ronald J. Deibert exposes the impacts of this communications ecosystem on civil society. He tracks a mostly unregulated surveillance industry, innovations in technologies of remote control, superpower policing practices, dark PR firms, and highly profitable hack-for-hire services feeding off rivers of poorly secured personal data. Deibert also unearths how dependence on social media and its expanding universe of consumer electronics creates immense pressure on the natural environment. In order to combat authoritarian practices, environmental degradation, and rampant electronic consumerism, he urges restraints on tech platforms and governments to reclaim the internet for civil society…(More)”.

Investigation of Competition in Digital Markets


Press Release: “The House Judiciary Committee’s Antitrust Subcommittee today released the findings of its more than 16-month-long investigation into the state of competition in the digital economy, especially the challenges presented by the dominance of Apple, Amazon, Google, and Facebook and their business practices.

The report, entitled Investigation of Competition in the Digital Marketplace: Majority Staff Report and Recommendations, totals more than 400 pages, marking the culmination of an investigation that included seven congressional hearings, the production of nearly 1.3 million internal documents and communications, submissions from 38 antitrust experts, and interviews with more than 240 market participants, former employees of the investigated platforms, and other individuals.

“As they exist today, Apple, Amazon, Google, and Facebook each possess significant market power over large swaths of our economy. In recent years, each company has expanded and exploited their power of the marketplace in anticompetitive ways,” said Judiciary Committee Chairman Jerrold Nadler (NY-10) and Antitrust Subcommittee Chairman David N. Cicilline (RI-01) in a joint statement. “Our investigation leaves no doubt that there is a clear and compelling need for Congress and the antitrust enforcement agencies to take action that restores competition, improves innovation, and safeguards our democracy. This Report outlines a roadmap for achieving that goal.”

After outlining the challenges presented due to the market domination of Amazon, Apple, Google, and Facebook, the report walks through a series of possible remedies to (1) restore competition in the digital economy, (2) strengthen the antitrust laws, and (3) reinvigorate antitrust enforcement.

The slate of recommendations includes:

  • Structural separations to prohibit platforms from operating in lines of business that depend on or interoperate with the platform;
  • Prohibiting platforms from engaging in self-preferencing;
  • Requiring platforms to make their services compatible with competing networks to allow for interoperability and data portability;
  • Mandating that platforms provide due process before taking action against market participants;
  • Establishing a standard to proscribe strategic acquisitions that reduce competition;
  • Improvements to the Clayton Act, the Sherman Act, and the Federal Trade Commission Act, to bring these laws into line with the challenges of the digital economy;
  • Eliminating anticompetitive forced arbitration clauses;
  • Strengthening the Federal Trade Commission (FTC) and the Antitrust Division of the Department of Justice;
  • And promoting greater transparency and democratization of the antitrust agencies….(More)”.

Metrics at Work: Journalism and the Contested Meaning of Algorithms


Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders.

Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among the media groups examined—their editorial goals, technological tools, and even office furniture. Yet she uncovers crucial and paradoxical differences in how American and French journalists understand audience analytics and how these affect the news produced in each country. American journalists routinely disregard traffic numbers and primarily rely on the opinion of their peers to define journalistic quality. Meanwhile, French journalists fixate on internet traffic and view these numbers as a sign of their resonance in the public sphere. Christin offers cultural and historical explanations for these disparities, arguing that distinct journalistic traditions structure how journalists make sense of digital measurements in the two countries.

Contrary to the popular belief that analytics and algorithms are globally homogenizing forces, Metrics at Work shows that computational technologies can have surprisingly divergent ramifications for work and organizations worldwide….(More)”.

The secret to building a smart city that’s antiracist


Article by Eliza McCullough: “….Instead of a smart city model that extracts from, surveils, and displaces poor people of color, we need a democratic model that allows community members to decide how technological infrastructure operates and to ensure the equitable distribution of benefits. Doing so will allow us to create cities defined by inclusion, shared ownership, and shared prosperity.

In 2016, for example, Barcelona launched its Digital City Plan, which aims to empower residents with control of the technology used in their communities. The document incorporates over 8,000 proposals from residents and includes plans for open source software, government ownership of all ICT infrastructure, and a pilot platform to help citizens maintain control over their personal data. As a result, the city now has free applications that allow residents to easily propose city development ideas, actively participate in city council meetings, and choose how their data is shared.

In the U.S., we need a framework for tech sovereignty that incorporates a racial equity approach: In a racist society, race neutrality facilitates continued exclusion and exploitation of people of color. Digital Justice Lab in Toronto illustrates one critical element of this kind of approach: access to information. In 2018, the organization gave community groups a series of grants to hold public events that shared resources and information about digital rights. Their collaborative approach intentionally focuses on the specific needs of people of color and other marginalized groups.

The turn toward intensified surveillance infrastructure in the midst of the coronavirus outbreak makes the need to adopt such practices all the more crucial. Democratic tech models that uplift marginalized populations provide us the chance to build a city that is just and open to everyone….(More)”.

The Cruel New Era of Data-Driven Deportation


Article by Alvaro M. Bedoya: “For a long time, mass deportations were a small-data affair, driven by tips, one-off investigations, or animus-driven hunches. But beginning under George W. Bush, and expanding under Barack Obama, ICE leadership started to reap the benefits of Big Data. The centerpiece of that shift was the “Secure Communities” program, which gathered the fingerprints of arrestees at local and state jails across the nation and compared them with immigration records. That program quickly became a major driver for interior deportations. But ICE wanted more data. The agency had long tapped into driver address records through law enforcement networks. Eyeing the breadth of DMV databases, agents began to ask state officials to run face recognition searches on driver photos against the photos of undocumented people. In Utah, for example, ICE officers requested hundreds of face searches starting in late 2015. Many immigrants avoid contact with any government agency, even the DMV, but they can’t go without heat, electricity, or water; ICE aimed to find them, too. So, that same year, ICE paid for access to a private database that includes the addresses of customers from 80 national and regional electric, cable, gas, and telephone companies.

Amid this bonanza, at least, the Obama administration still acknowledged red lines. Some data were too invasive, some uses too immoral. Under Donald Trump, these limits fell away.

In 2017, breaking with prior practice, ICE started to use data from interviews with scared, detained kids and their relatives to find and arrest more than 500 sponsors who stepped forward to take in the children. At the same time, ICE announced a plan for a social media monitoring program that would use artificial intelligence to automatically flag 10,000 people per month for deportation investigations. (It was scuttled only when computer scientists helpfully indicated that the proposed system was impossible.) The next year, ICE secured access to 5 billion license plate scans from public parking lots and roadways, a hoard that tracks the drives of 60 percent of Americans—an initiative blocked by Department of Homeland Security leadership four years earlier. In August, the agency cut a deal with Clearview AI, whose technology identifies people by comparing their faces not to millions of driver photos, but to 3 billion images from social media and other sites. This is a new era of immigrant surveillance: ICE has transformed from an agency that tracks some people sometimes to an agency that can track anyone at any time….(More)”.