EU and US legislation seek to open up digital platform data


Article by Brandie Nonnecke and Camille Carlton: “Despite the potential societal benefits of granting independent researchers access to digital platform data, such as promotion of transparency and accountability, online platform companies have few legal obligations to do so and potentially stronger business incentives not to. Without legally binding mechanisms that provide greater clarity on what and how data can be shared with independent researchers in privacy-preserving ways, platforms are unlikely to share the breadth of data necessary for robust scientific inquiry and public oversight.

Here, we discuss two notable legislative efforts aimed at opening up platform data: the Digital Services Act (DSA), recently approved by the European Parliament, and the Platform Accountability and Transparency Act (PATA), recently proposed by several US senators. Although these laws could support researchers’ access to data, they could also fall short in many ways, highlighting the complex challenges in mandating data access for independent research and oversight.

As large platforms take on increasingly influential roles in our online social, economic, and political interactions, there is a growing demand for transparency and accountability through mandated data disclosures. Research insights from platform data can help, for example, to understand unintended harms of platform use on vulnerable populations, such as children and marginalized communities; identify coordinated foreign influence campaigns targeting elections; and support public health initiatives, such as documenting the spread of anti-vaccine mis- and disinformation…(More)”.

Metrics at Work: Journalism and the Contested Meaning of Algorithms


Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders.

Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among the media groups examined—their editorial goals, technological tools, and even office furniture. Yet she uncovers crucial and paradoxical differences in how American and French journalists understand audience analytics and how these affect the news produced in each country. American journalists routinely disregard traffic numbers and primarily rely on the opinion of their peers to define journalistic quality. Meanwhile, French journalists fixate on internet traffic and view these numbers as a sign of their resonance in the public sphere. Christin offers cultural and historical explanations for these disparities, arguing that distinct journalistic traditions structure how journalists make sense of digital measurements in the two countries.

Contrary to the popular belief that analytics and algorithms are globally homogenizing forces, Metrics at Work shows that computational technologies can have surprisingly divergent ramifications for work and organizations worldwide…(More)”.

A 680,000-person megastudy of nudges to encourage vaccination in pharmacies


Paper by Katherine L. Milkman et al: “Encouraging vaccination is a pressing policy problem. To assess whether text-based reminders can encourage pharmacy vaccination and what kinds of messages work best, we conducted a megastudy. We randomly assigned 689,693 Walmart pharmacy patients to receive one of 22 different text reminders using a variety of different behavioral science principles to nudge flu vaccination or to a business-as-usual control condition that received no messages. We found that the reminder texts that we tested increased pharmacy vaccination rates by an average of 2.0 percentage points, or 6.8%, over a three-month follow-up period. The most effective messages reminded patients that a flu shot was waiting for them and delivered reminders on multiple days. The top-performing intervention included two texts delivered three days apart and communicated to patients that a vaccine was “waiting for you.” Neither experts nor laypeople anticipated that this would be the best-performing treatment, underscoring the value of simultaneously testing many different nudges in a highly powered megastudy….(More)”.
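The two effect sizes reported above (a 2.0-percentage-point absolute lift described as a 6.8% relative lift) imply a control-group vaccination rate, which a quick back-of-the-envelope calculation can recover. This is an illustrative sanity check on the reported numbers, not a figure taken from the paper itself:

```python
# Sanity check of the reported effect sizes (illustrative only).
# If a 2.0-percentage-point absolute lift corresponds to a 6.8% relative
# lift over control, the implied control-group vaccination rate is
# absolute_lift / relative_lift.
absolute_lift_pp = 2.0   # percentage points, as reported
relative_lift = 0.068    # 6.8%, as reported

implied_control_rate_pp = absolute_lift_pp / relative_lift
print(f"Implied control-group vaccination rate: {implied_control_rate_pp:.1f}%")
```

The implied baseline works out to roughly 29%, consistent with the two figures describing the same underlying effect.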

Society won’t trust A.I. until business earns that trust


Article by François Candelon, Rodolphe Charme di Carlo and Steven D. Mills: “…The concept of a social license—which was born when the mining industry, and other resource extractors, faced opposition to projects worldwide—differs from the other rules governing A.I.’s use. Academics such as Leeora Black and John Morrison, in the book The Social License: How to Keep Your Organization Legitimate, define the social license as “the negotiation of equitable impacts and benefits in relation to its stakeholders over the near and longer term. It can range from the informal, such as an implicit contract, to the formal, like a community benefit agreement.” 

The social license isn’t a document like a government permit; it’s a form of acceptance that companies must gain through consistent and trustworthy behavior as well as stakeholder interactions. Thus, a social license for A.I. will be a socially constructed perception that a company has secured the right to use the technology for specific purposes in the markets in which it operates. 

Companies cannot award themselves social licenses; they will have to win them by proving they can be trusted. As Morrison argued in 2014, just as the technical capability to dig a mine does not guarantee community acceptance, the fact that an A.I.-powered solution is technologically feasible doesn’t mean that society will find its use morally and ethically acceptable. And losing the social license will have dire consequences, as natural resource companies, such as Shell and BP, have learned in the past…(More)”

Still Muted: The Limited Participatory Democracy of Zoom Public Meetings


Paper by Katherine Levine Einstein: “Recent research has demonstrated that participants in public meetings are unrepresentative of their broader communities. Some suggest that reducing barriers to meeting attendance can improve participation, while others believe doing so will produce minimal changes. The COVID-19 pandemic shifted public meetings online, potentially reducing the time costs associated with participating. We match participants at online public meetings with administrative data to learn whether: (1) online participants are representative of their broader communities and (2) representativeness improves relative to in-person meetings. We find that participants in online forums are quite similar to those in in-person ones. They are similarly unrepresentative of residents in their broader communities and similarly overwhelmingly opposed to the construction of new housing. These results suggest important limitations to public meeting reform. Future research should continue to unpack whether reforms might prove more effective at redressing inequalities in an improved economic and public health context…(More)”.

Data Federalism


Article by Bridget A. Fahey: “Private markets for individual data have received significant and sustained attention in recent years. But data markets are not for the private sector alone. In the public sector, the federal government, states, and cities gather data no less intimate and on a scale no less profound. And our governments have realized what corporations have: It is often easier to obtain data about their constituents from one another than to collect it directly. As in the private sector, these exchanges have multiplied the data available to every level of government for a wide range of purposes, complicated data governance, and created a new source of power, leverage, and currency between governments.

This Article provides an account of this vast and rapidly expanding intergovernmental marketplace in individual data. In areas ranging from policing and national security to immigration and public benefits to election management and public health, our governments exchange data both by engaging in individual transactions and by establishing “data pools” to aggregate the information they each have and diffuse access across governments. Understanding the breadth of this distinctly modern practice of data federalism has descriptive, doctrinal, and normative implications.

In contrast to conventional cooperative federalism programs, Congress has largely declined to structure and regulate intergovernmental data exchange. And in Congress’s absence, our governments have developed unorthodox cross-governmental administrative institutions to manage data flows and oversee data pools, and these sprawling, unwieldy institutions are as important as the usual cooperative initiatives to which federalism scholarship typically attends.

Data exchanges can also go wrong, and courts are not prepared to navigate the ways that data is both at risk of being commandeered and ripe for use as coercive leverage. I argue that the constitutional doctrines governing commandeering and coercion can and should be adapted to police the exchange of data. I finally place data federalism in a normative frame and argue that data is a form of governmental power so unlike the paradigmatic ones our federalism is believed to distribute that it has the potential to unsettle federalism in both function and theory…(More)”.

Bringing Open Source to the Global Lab Bench


Article by Julieta Arancio and Shannon Dosemagen: “In 2015, Richard Bowman, an optics scientist, began experimenting with 3D printing a microscope as a single piece in order to reduce the time and effort of reproducing the design. Soon after, he started the OpenFlexure project, an open-license 3D-printed microscope. The project quickly took over his research agenda and grew into a global community of hundreds of users and developers, including professional scientists, hobbyists, community scientists, clinical researchers, and teachers. Anyone with access to a 3D printer can download open-source files from the internet to create microscopes that can be used for doing soil science research, detecting diseases such as malaria, or teaching microbiology, among other things. Today, the project is supported by a core team at the Universities of Bath and Cambridge in the United Kingdom, as well as in Tanzania by the Ifakara Health Institute and Bongo Tech & Research Labs, an engineering company. 

OpenFlexure is one of many open science hardware projects that are championed by the Gathering for Open Science Hardware (GOSH), a transnational network of open science hardware advocates. Although there are differences in practice, open hardware projects operate on similar principles to open-source software, and they span disciplines ranging from nanotechnology to environmental monitoring. GOSH defines the field as “any piece of hardware used for scientific investigations that can be obtained, assembled, used, studied, modified, shared, and sold by anyone. It includes standard lab equipment as well as auxiliary materials, such as sensors, biological reagents, analog and digital electronic components.” Compared to an off-the-shelf microscope, which may cost thousands of dollars, an OpenFlexure microscope may cost a few hundred. By being significantly cheaper and easier to maintain, open hardware enables more people in more places to do science….(More)”.

Rehashing the Past: Social Equity, Decentralized Apps & Web 3.0


Opening blog by Jeffrey R. Yost of new series on Blockchain and Society: “Blockchain is a powerful technology whose roots reach back three decades to a 1991 paper on (immutable) timestamping of digital content. This paper, by Bellcore’s Stuart Haber and W. Scott Stornetta, along with key (in both senses) cryptography research by a half dozen future Turing Award winners, the Nobel of computer science (W. Diffie, M. Hellman, R. Rivest, A. Shamir, L. Adleman, S. Micali), and others, provided critical foundations for Bitcoin, blockchain, Non-Fungible Tokens (NFTs), and Decentralized Autonomous Organizations (DAOs). This inaugural post of Blockchain and Society seeks to address and analyze the history, sociology, and political economy of blockchain and cryptocurrency. Subsequent blogs will dive deeper into individual themes and topics on crypto’s sociocultural and political economy contexts….(More)”.

Artificial Intelligence Bias and Discrimination: Will We Pull the Arc of the Moral Universe Towards Justice?


Paper by Emile Loza de Siles: “In 1968, the Reverend Martin Luther King Jr. foresaw the inevitability of society’s eventual triumph over the deep racism of his time, a stain that continues to cast its destructive, oppressive pall today. From the pulpit of the nation’s church, Dr. King said, “We shall overcome because the arc of the moral universe is long but it bends toward justice”. More than 40 years later, Eric Holder, the first African American United States Attorney General, agreed, but only if people acting with conviction exert themselves to pull that arc towards justice.

With artificial intelligence (AI) bias and discrimination rampant, the need to pull the moral arc towards algorithmic justice is urgent. This article offers empowering clarity by conceptually bifurcating AI bias problems into AI bias engineering and organisational AI governance problems, revealing proven legal development pathways to protect against the corrosive harms of AI bias and discrimination…(More)”.

Facial Recognition Plan from IRS Raises Big Concerns


Article by James Hendler: “The U.S. Internal Revenue Service is planning to require citizens to create accounts with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with ID.me to authenticate the identities of people accessing services.

The IRS’s move is aimed at cutting down on identity theft, a crime that affects millions of Americans. The IRS, in particular, has reported a number of tax filings from people claiming to be others, and fraud in many of the programs that were administered as part of the American Rescue Plan has been a major concern to the government.

The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits. The reaction has prompted the IRS to revisit its decision.

As a computer science researcher and the chair of the Global Technology Policy Council of the Association for Computing Machinery, I have been involved in exploring some of the issues with government use of facial recognition technology, both its use and its potential flaws. There have been a great number of concerns raised over the general use of this technology in policing and other government functions, often focused on whether the accuracy of these algorithms can have discriminatory effects. In the case of ID.me, there are other issues involved as well….(More)”.