Turning the Principle of Participation into Practice: Empowering Parents to Engage on Data and Tech


Guest Blog by Elizabeth Laird at Responsible Data for Children: “Two years into the pandemic, questions about parental rights in schools have taken center stage in public debates, particularly in school board meetings and state houses across the United States. Not surprisingly, this extends to the use of data and technology in schools.

CDT recently released research finding that parental concerns about the protection of student privacy and security have risen, growing from 60% in February 2021 to 69% in July 2021. We also found that, far from being ambivalent, parents and students expressed eagerness to play a role in decisions about technology and data but indicated that these desires are going unmet. Most parents and students want to be consulted, but few have been asked for input: 93% of surveyed parents feel that schools should engage them regarding how student data is collected and used, but only 44% say their school has asked for their input on these issues.

While much of this debate has focused on the United States and similar countries, these issues have global resonance as all families have a stake in how their children are educated. Engaging students and families has always been an important component of primary and secondary education, from involving parents in their children’s individual experiences to systemic decision-making; however, there is significant room for improvement, especially as it relates to the use of education data and technology. Done well, community engagement (aligned with the Participatory principle in the Responsible Data for Children (RD4C) initiative) is a two-way, mutually beneficial partnership between public agencies and community members in which questions and concerns are identified, discussed, and decided jointly. It benefits public agencies by building trust, helping them achieve their mission, and minimizing risks, including community pushback. It helps communities by assisting agencies to better meet community needs and increasing transparency and accountability.

To assist education practitioners in improving their community engagement efforts, CDT recently released guidance that focuses on four important steps…(More)”.

End the State Monopoly on Facts


Essay by Adam J. White: “…This Covid-era dynamic has accelerated broader trends toward the consolidation of informational power among a few centralized authorities. And it has further deformed the loose set of institutions and norms that Jonathan Rauch, in a 2018 National Affairs article, identified as Western civilization’s “constitution of knowledge.” This is an arrangement in science, journalism, and the courts in which “any hypothesis can be floated” but “can join reality only insofar as it persuades people after withstanding vigorous questioning and criticism.” The more that Americans delegate the hard work of developing and distributing information to a small number of regulatory institutions, the less capable we all will be of correcting the system’s mistakes — and the more likely the system will be to make mistakes in the first place.

In a 1999 law review article, Timur Kuran and Cass Sunstein warned of availability cascades, a process in which activists promote factual assertions and narratives that, in a self-reinforcing dynamic, become more plausible the more widely available they are and can eventually overwhelm the public’s perception. The Covid-19 era has been marked by the opposite problem: unavailability cascades, in which media institutions and social media platforms swiftly erase disfavored narratives and dissenting contentions from the marketplace of ideas, making them seem implausible by their near unavailability. Such cascades occur because legacy media and social media platforms have come to rely overwhelmingly, even exclusively, on federal regulatory agencies’ factual assertions and the pronouncements of a small handful of other favored institutions, such as the World Health Organization, as the gold standard of facts. But availability and unavailability cascades, even when intended in good faith to prevent the spread of disinformation among the public, risk misinforming the very people they purport to inform. A more diverse and vibrant ecosystem of informational institutions would disincentivize the platforms’ and media’s reflexive, cascading reactions to dissenting views.
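
The feedback loop Kuran and Sunstein describe is simple enough to sketch. Below is a toy simulation of an availability cascade (our own construction with arbitrary parameters, not a model from the 1999 article): a claim’s plausibility rises with its availability, and its availability rises with its plausibility, so even a weakly supported claim can snowball. Flip the sign of the feedback and you get the unavailability variant, in which suppression makes a claim seem implausible.

```python
# Toy availability cascade: plausibility tracks exposure, and exposure
# grows with plausibility. All parameters are arbitrary and illustrative.
def availability_cascade(rounds: int = 8, availability: float = 0.05) -> None:
    for t in range(rounds):
        plausibility = 0.9 * availability  # exposure breeds belief
        # belief breeds repetition, which widens exposure further
        availability = min(1.0, availability + 0.5 * plausibility)
        print(f"round {t}: availability={availability:.2f}, "
              f"plausibility={plausibility:.2f}")

availability_cascade()
```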

This second problem — the concentration of informational power — exacerbates the first one: how to counterbalance the executive branch’s power after an emergency. In order for Congress, the courts, and other governing institutions to reassert their own constitutional roles after the initial weeks and months of crisis, they (and the public) need credible sources of information outside the administration itself. An informational ecosystem not overweighted so heavily toward administrative agencies, one that benefits more from the independent contributions of experts in universities, think tanks, journalism, and other public and private institutions, would improve the quality of information that it produces. It would also be less susceptible to the reflexively partisan skepticism that has become endemic in the polarization of modern president-centric government…(More)”.

Algorithm vs. Algorithm


Paper by Cary Coglianese and Alicia Lai: “Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias. A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making. The human brain operates algorithmically through complex neural networks. And when humans make collective decisions, they operate via algorithms too—those reflected in legislative, judicial, and administrative processes. Yet these human algorithms undeniably fail and are far from transparent.

On an individual level, human decision-making suffers from memory limitations, fatigue, cognitive biases, and racial prejudices, among other problems. On an organizational level, humans succumb to groupthink and free-riding, along with other collective dysfunctionalities. As a result, human decisions will in some cases prove far more problematic than their digital counterparts. Digital algorithms, such as machine learning, can improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent. Still, when deciding whether to deploy digital algorithms to perform tasks currently completed by humans, public officials should proceed with care on a case-by-case basis. They should consider both whether a particular use would satisfy the basic preconditions for successful machine learning and whether it would in fact lead to demonstrable improvements over the status quo. The question about the future of public administration is not whether digital algorithms are perfect. Rather, it is a question of which will work better: human algorithms or digital ones….(More)”.

This Is the Difference Between a Family Surviving and a Family Sinking


Article by Bryce Covert: “…The excitement around policymaking is almost always in the moments after ink dries on a bill creating something new. But if a benefit fails to reach the people it’s designed for, it may as well not exist at all. Making government benefits more accessible and efficient doesn’t usually get the spotlight. But it’s often the difference between a family getting what it needs to survive and falling into hardship and destitution. It’s the glue of our democracy.

President Biden appears to have taken note of this. Late last year, he issued an executive order meant to improve the “customer experience and service delivery” of the entire federal government. He put forward some ideas, including moving Social Security benefit claims and passport renewals online, reducing paperwork for student loan forgiveness, and certifying low-income people for all the assistance they qualify for at once, rather than making them seek out benefits program by program. More important, he shifted the focus of government toward whether or not the customers — that’s us — are having a good experience getting what we deserve.

It’s a direction all lawmakers, from the federal level down to counties and cities, should follow.

One of the biggest barriers to government benefits is all of the red tape to untangle, particularly for programs that serve low-income people. They were the ones wrangling with the I.R.S.’s nonfiler portal while others got their payments automatically. Benefits delivered through the tax code, which flow so easily that many people don’t think of them as government benefits at all, mostly help the already well-off. Programs for the poor, on the other hand, tend to be bloated with barriers like income tests, work requirements and in-person interviews. It’s not just about applying once, either; many require people to continually recertify, going through the process over and over again.

The hassle doesn’t just cost time and effort. It comes with a psychological cost. “You get mad at the D.M.V. because it takes hours to do something that should only take minutes,” Pamela Herd, a sociologist at Georgetown, said. “These kind of stresses can be really large when you’re talking about people who are on a knife’s edge in terms of their ability to pay their rent or feed their children.”…(More)”.

The Behavioral Code


Book by Benjamin van Rooij and Adam Fine: “Why do most Americans wear seatbelts but continue to speed even though speeding fines are higher? Why could park rangers reduce theft by removing “no stealing” signs? Why was a man who stole three golf clubs sentenced to 25 years in prison?

Some laws radically change behavior whereas others are consistently ignored and routinely broken. And yet we keep relying on harsh punishment against crime despite its continued failure.

Professors Benjamin van Rooij and Adam Fine draw on decades of research to uncover the behavioral code: the root causes and hidden forces that drive human behavior and our responses to society’s laws. In doing so, they present the first accessible analysis of behavioral jurisprudence, which will fundamentally alter how we understand the connection between law and human behavior.

The Behavioral Code offers a necessary and different approach to battling crime and injustice that is based in understanding the science of human misconduct—rather than relying on our instinctual drive to punish as a way to shape behavior. The book reveals the behavioral code’s hidden role through illustrative examples like:

   • The illusion of the US’s beloved tax refund
   • German walls that “pee back” at public urinators
   • The $1,000 monthly “good behavior” reward that reduced gun violence
   • Uber’s backdoor “Greyball” app that helped the company evade Seattle’s taxi regulators
   • A $2.3 billion legal settlement against Pfizer that revealed how whistleblower protections fail to reduce corporate malfeasance
   • A toxic organizational culture playing a core role in Volkswagen’s emissions cheating scandal
   • How Peter Thiel helped Hulk Hogan sue Gawker into oblivion…(More)”.

Shared Measures: Collective Performance Data Use in Collaborations


Paper by Alexander Kroll: “Traditionally, performance metrics and data have been used to hold organizations accountable. But public service provision is not merely hierarchical anymore. Increasingly, we see partnerships among government agencies, private or nonprofit organizations, and civil society groups. Such collaborations may also use goals, measures, and data to manage group efforts; however, the application of performance practices here will likely follow a different logic. This Element introduces the concepts of “shared measures” and “collective data use” to add collaborative, relational elements to existing performance management theory. It draws on a case study of collaboratives in North Carolina that were established to develop community responses to the opioid epidemic. To explain the use of shared performance measures and data within these collaboratives, this Element studies the role of factors such as group composition, participatory structures, social relationships, distributed leadership, group culture, and value congruence…(More)”.

EU and US legislation seek to open up digital platform data


Article by Brandie Nonnecke and Camille Carlton: “Despite the potential societal benefits of granting independent researchers access to digital platform data, such as promotion of transparency and accountability, online platform companies have few legal obligations to do so and potentially stronger business incentives not to. Without legally binding mechanisms that provide greater clarity on what and how data can be shared with independent researchers in privacy-preserving ways, platforms are unlikely to share the breadth of data necessary for robust scientific inquiry and public oversight.

Here, we discuss two notable legislative efforts aimed at opening up platform data: the Digital Services Act (DSA), recently approved by the European Parliament, and the Platform Accountability and Transparency Act (PATA), recently proposed by several US senators. Although these legislative efforts could support researchers’ access to data, they could also fall short in many ways, highlighting the complex challenges in mandating data access for independent research and oversight.

As large platforms take on increasingly influential roles in our online social, economic, and political interactions, there is a growing demand for transparency and accountability through mandated data disclosures. Research insights from platform data can help, for example, to understand unintended harms of platform use on vulnerable populations, such as children and marginalized communities; identify coordinated foreign influence campaigns targeting elections; and support public health initiatives, such as documenting the spread of antivaccine mis- and disinformation…(More)”.
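
Neither bill specifies a technical mechanism, but one family of techniques often discussed under the “privacy-preserving” banner is differential privacy. The sketch below is purely illustrative (the function and its parameters are our hypothetical choices; nothing in the DSA or PATA mandates this approach): a platform releases an aggregate count to researchers with noise calibrated so that little can be inferred about any single user.

```python
import numpy as np

# Illustrative sketch only: neither the DSA nor PATA prescribes this mechanism.
def private_count(true_count: int, epsilon: float = 0.5,
                  sensitivity: int = 1) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon.

    Adding or removing any one user changes the true count by at most
    `sensitivity`, so the noisy release bounds what can be learned about
    any individual (epsilon-differential privacy).
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g., how many users engaged with a flagged piece of election content
print(private_count(12_345))
```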

Metrics at Work: Journalism and the Contested Meaning of Algorithms


Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders.

Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among the media groups examined—their editorial goals, technological tools, and even office furniture. Yet she uncovers crucial and paradoxical differences in how American and French journalists understand audience analytics and how these affect the news produced in each country. American journalists routinely disregard traffic numbers and primarily rely on the opinion of their peers to define journalistic quality. Meanwhile, French journalists fixate on internet traffic and view these numbers as a sign of their resonance in the public sphere. Christin offers cultural and historical explanations for these disparities, arguing that distinct journalistic traditions structure how journalists make sense of digital measurements in the two countries.

Contrary to the popular belief that analytics and algorithms are globally homogenizing forces, Metrics at Work shows that computational technologies can have surprisingly divergent ramifications for work and organizations worldwide…(More)”.

A 680,000-person megastudy of nudges to encourage vaccination in pharmacies


Paper by Katherine L. Milkman et al.: “Encouraging vaccination is a pressing policy problem. To assess whether text-based reminders can encourage pharmacy vaccination and what kinds of messages work best, we conducted a megastudy. We randomly assigned 689,693 Walmart pharmacy patients to receive one of 22 different text reminders using a variety of behavioral science principles to nudge flu vaccination, or to a business-as-usual control condition that received no messages. We found that the reminder texts we tested increased pharmacy vaccination rates by an average of 2.0 percentage points, or 6.8%, over a three-month follow-up period. The most effective messages reminded patients that a flu shot was waiting for them and delivered reminders on multiple days. The top-performing intervention included two texts delivered three days apart and communicated to patients that a vaccine was “waiting for you.” Neither experts nor laypeople anticipated that this would be the best-performing treatment, underscoring the value of simultaneously testing many different nudges in a highly powered megastudy….(More)”.
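
As a sanity check, the two headline figures pin down the control group’s vaccination rate; here is a back-of-the-envelope sketch (our arithmetic from the quoted figures, not a number reported in the paper):

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
lift_pp = 2.0          # average lift, in percentage points
relative_lift = 0.068  # the same lift expressed as a relative increase

implied_baseline = lift_pp / relative_lift  # ~29.4% vaccination in control
treated_rate = implied_baseline + lift_pp   # ~31.4% among texted patients
print(f"implied baseline: {implied_baseline:.1f}%, treated: {treated_rate:.1f}%")
```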

Society won’t trust A.I. until business earns that trust


Article by François Candelon, Rodolphe Charme di Carlo, and Steven D. Mills: “…The concept of a social license—which was born when the mining industry, and other resource extractors, faced opposition to projects worldwide—differs from the other rules governing A.I.’s use. Academics such as Leeora Black and John Morrison, in the book The Social License: How to Keep Your Organization Legitimate, define the social license as “the negotiation of equitable impacts and benefits in relation to its stakeholders over the near and longer term. It can range from the informal, such as an implicit contract, to the formal, like a community benefit agreement.” 

The social license isn’t a document like a government permit; it’s a form of acceptance that companies must gain through consistent and trustworthy behavior as well as stakeholder interactions. Thus, a social license for A.I. will be a socially constructed perception that a company has secured the right to use the technology for specific purposes in the markets in which it operates. 

Companies cannot award themselves social licenses; they will have to win them by proving they can be trusted. As Morrison argued in 2014, just as with the capability to dig a mine, the fact that an A.I.-powered solution is technologically feasible doesn’t mean that society will find its use morally and ethically acceptable. And losing the social license will have dire consequences, as natural resource companies, such as Shell and BP, have learned in the past…(More)”.