Toward a 21st Century National Data Infrastructure: Mobilizing Information for the Common Good


Report by National Academies of Sciences, Engineering, and Medicine: “Historically, the U.S. national data infrastructure has relied on the operations of the federal statistical system and the data assets that it holds. Throughout the 20th century, federal statistical agencies aggregated survey responses of households and businesses to produce information about the nation and diverse subpopulations. The statistics created from such surveys provide most of what people know about the well-being of society, including health, education, employment, safety, housing, and food security. The surveys also contribute to an infrastructure for empirical social- and economic-sciences research. Research using survey-response data, with strict privacy protections, led to important discoveries about the causes and consequences of important societal challenges and also informed policymakers. Like other infrastructure, people can easily take these essential statistics for granted. Only when they are threatened do people recognize the need to protect them…(More)”.

Americans Can’t Consent to Companies’ Use of Their Data


A Report from the Annenberg School for Communication: “Consent has always been a central part of Americans’ interactions with the commercial internet. Federal and state laws, as well as decisions from the Federal Trade Commission (FTC), require either implicit (“opt out”) or explicit (“opt in”) permission from individuals for companies to take and use data about them. Genuine opt out and opt in consent requires that people have knowledge about commercial data-extraction practices as well as a belief they can do something about them. As we approach the 30th anniversary of the commercial internet, the latest Annenberg national survey finds that Americans have neither. High percentages of Americans don’t know, admit they don’t know, and believe they can’t do anything about basic practices and policies around companies’ use of people’s data…
High levels of frustration, concern, and fear compound Americans’ confusion: 80% say they have little control over how marketers can learn about them online; 80% agree that what companies know about them from their online behaviors can harm them. These and related discoveries from our survey paint a picture of an unschooled and admittedly incapable society that rejects the internet industry’s insistence that people will accept tradeoffs for benefits and despairs of its inability to predictably control its digital life in the face of powerful corporate forces. At a time when individual consent lies at the core of key legal frameworks governing the collection and use of personal information, our findings describe an environment where genuine consent may not be possible….The aim of this report is to chart the particulars of Americans’ lack of knowledge about the commercial use of their data and their “dark resignation” in connection to it. Our goal is also to raise questions and suggest solutions about public policies that allow companies to gather, analyze, trade, and otherwise benefit from information they extract from large populations of people who are uninformed about how that information will be used and deeply concerned about the consequences of its use. In short, we find that informed consent at scale is a myth, and we urge policymakers to act with that in mind.”…(More)”.

AI-Ready Open Data


Explainer by Sean Long and Tom Romanoff: “Artificial intelligence and machine learning (AI/ML) have the potential to create applications that tackle societal challenges from human health to climate change. These applications, however, require data to power AI model development and implementation. Government’s vast amount of open data can fill this gap: McKinsey estimates that open data can help unlock $3 trillion to $5 trillion in economic value annually across seven sectors. But for open data to fuel innovations in academia and the private sector, the data must be both easy to find and use. While Data.gov makes it simpler to find the federal government’s open data, researchers still spend up to 80% of their time preparing data into a usable, AI-ready format. As Intel warns, “You’re not AI-ready until your data is.”

In this explainer, the Bipartisan Policy Center provides an overview of existing efforts across the federal government to improve the AI readiness of its open data. We answer the following questions:

  • What is AI-ready data?
  • Why is AI-ready data important to the federal government’s AI agenda?
  • Where is AI-ready data being applied across federal agencies?
  • How could AI-ready data become the federal standard?…(More)”.

Privacy Decisions are not Private: How the Notice and Choice Regime Induces us to Ignore Collective Privacy Risks and what Regulation should do about it


Paper by Christopher Jon Sprigman and Stephan Tontrup: “For many reasons the current notice and choice privacy framework fails to empower individuals in effectively making their own privacy choices. In this Article we offer evidence from three novel experiments showing that at the core of this failure is a cognitive error. Notice and choice caters to a heuristic that people employ to make privacy decisions. This heuristic is meant to judge trustworthiness in face-to-face situations. In the online context, it distorts privacy decision-making and leaves potential disclosers vulnerable to exploitation.

From our experimental evidence exploring the heuristic’s effect, we conclude that privacy law must become more behaviorally aware. Specifically, privacy law must be redesigned to intervene in the cognitive mechanisms that keep individuals from making better privacy decisions. A behaviorally-aware privacy regime must centralize, standardize and simplify the framework for making privacy choices.

To achieve these goals, we propose a master privacy template which requires consumers to define their privacy preferences in advance—doing so avoids presenting the consumer with a concrete counterparty, and this, in turn, prevents them from applying the trust heuristic and reduces many other biases that affect privacy decision-making. Our data show that blocking the heuristic enables consumers to consider relevant privacy cues and be considerate of externalities their privacy decisions cause.

The master privacy template provides a much more effective platform for regulation. Through the master template the regulator can set the standard for automated communication between user clients and website interfaces, a facility which we expect to enhance enforcement and competition about privacy terms…(More)”.

Government Audits


Paper by Martina Cuneo, Jetson Leder-Luis & Silvia Vannutelli: “Audits are a common mechanism used by governments to monitor public spending. In this paper, we discuss the effectiveness of auditing with theory and empirics. In our model, the value of audits depends on both the underlying presence of abuse and the government’s ability to observe it and enforce punishments, making auditing most effective in middling state-capacity environments. Consistent with this theory, we survey all the existing credibly causal studies and show that government audits seem to have positive effects mostly in middle-state-capacity environments like Brazil. We present new empirical evidence from American city governments, a high-capacity and low-impropriety environment. Using a previously unexplored threshold in federal audit rules and a dynamic regression discontinuity framework, we estimate the effects of these audits on American city finance and find no marginal effect of audits…(More)”.

How ChatGPT Hijacks Democracy


Article by Nathan E. Sanders and Bruce Schneier: “…But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes — not through voting, but through lobbying.

ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees.

Automatically generated comments aren’t a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers…(More)”

Data Brokers and the Sale of Americans’ Mental Health Data


Report by Joanne Kim: “This report includes findings from a two-month-long study of data brokers and data on U.S. individuals’ mental health conditions. The report aims to make more transparent the data broker industry and its processes for selling and exchanging mental health data about depressed and anxious individuals. The research is critical as more depressed and anxious individuals utilize personal devices and software-based health-tracking applications (many of which are not protected by the Health Insurance Portability and Accountability Act), often unknowingly putting their sensitive mental health data at risk. This report finds that the industry appears to lack a set of best practices for handling individuals’ mental health data, particularly in the areas of privacy and buyer vetting. It finds that there are data brokers which advertise and are willing and able to sell data concerning Americans’ highly sensitive mental health information. It concludes by arguing that the largely unregulated and black-box nature of the data broker industry, its buying and selling of sensitive mental health data, and the lack of clear consumer privacy protections in the U.S. necessitate a comprehensive federal privacy law or, at the very least, an expansion of HIPAA’s privacy protections alongside bans on the sale of mental health data on the open market…(More)”.

Who lives in rural America? How data shapes (and misshapes) conceptions of diversity in rural America


CORI Blog: “Racial and ethnic diversity is one of the most commonly misunderstood aspects of rural America.

National media depictions of white farmers and ranchers in the West and Midwest, white coal miners in Appalachia, or the “white working class” living in rural communities reinforce the misconception that rural areas are homogeneously white. It is a misconception that ignores that 86 of the 100 most marginalized counties in the country are rural, 60 of which are located in Tribal lands or Southern regions with large Black populations. It is a misconception that renders invisible the 14 million Black, Hispanic or Latino, Asian, Native, and multiracial people who live in rural America (2020 census-nonmetro plus).

It is a misconception that holds significant consequences.

Misunderstandings of diversity in rural America can inhibit efforts to support programming and policies designed to increase the ability of rural communities to thrive. For rural communities to thrive, national, state, and local leaders need to take efforts to systematically address racial and ethnic inequities that limit the freedom, safety, and opportunity of rural people of color.

There is an imperative to better understand who lives in rural America today. In just the past few years, billions of public and private dollars have been committed to building a more equitable economy. The Infrastructure Investment and Jobs Act (IIJA), the CHIPS Act, and the Inflation Reduction Act (IRA) have committed hundreds of billions of dollars that will be invested by federal agencies and state and local governments in healthcare, housing, energy, and economic development.

As part of these efforts, the Biden administration has ordered federal agencies to prioritize advancing racial equity in the design of these programs and the distribution of resources. Similarly, companies and philanthropy have made racial equity commitments of more than $200 billion. With these public and private commitments, hundreds of billions of dollars will be invested in the coming years with a specific focus on addressing racial equity.

Yet, if these historic investments are not informed by an accurate understanding of rural demographics and how these communities have evolved over time in response to government policies and settler-influenced power shifts, then we risk excluding rural communities and people of color from the critical resources that are needed to strengthen communities and economies that serve everyone.

In Part I of the second story in our Rural Aperture Project, we seek to explain how and why such flawed conceptions of rural America exist…(More)”.

Civic Freedom in an Age of Diversity


Book edited by Dimitrios Karmis and Jocelyn Maclure: “James Tully is one of the world’s most influential political philosophers at work today. Over the past thirty years – first with Strange Multiplicity (1995), and more fully with Public Philosophy in a New Key (2008) and On Global Citizenship (2014) – Tully has developed a distinctive approach to the study of political philosophy, democracy, and active citizenship for a deeply diverse world and a de-imperializing age.

Civic Freedom in an Age of Diversity explores, elucidates, and questions Tully’s innovative approach, methods, and concepts, providing both a critical assessment of Tully’s public philosophy and an exemplification of the dialogues of reciprocal elucidation that are central to Tully’s approach. Since the role of public philosophy is to address public affairs, the contributors consider public philosophy in the context of pressing issues and recent civic struggles such as: crises of democracy and citizenship in the Western world; global citizenship; civil disobedience and non-violence; Indigenous self-determination; nationalism and federalism in multinational states; protest movements in Turkey and Quebec; supranational belonging in the European Union; struggles over equity in academia; and environmental decontamination, decolonization, and cultural restoration in Akwesasne….(More)”

Americans Don’t Understand What Companies Can Do With Their Personal Data — and That’s a Problem


Press Release by the Annenberg School for Communication: “Have you ever had the experience of browsing for an item online, only to then see ads for it everywhere? Or watching a TV program, and suddenly your phone shows you an ad related to the topic? Marketers clearly know a lot about us, but the extent of what they know, how they know it, and what they’re legally allowed to know can feel awfully murky. 

In a new report, “Americans Can’t Consent to Companies’ Use of Their Data,” researchers asked a nationally representative group of more than 2,000 Americans to answer a set of questions about digital marketing policies and how companies can and should use their personal data. Their aim was to determine if current “informed consent” practices are working online. 

They found that the great majority of Americans don’t understand the fundamentals of internet marketing practices and policies, and that many feel incapable of consenting to how companies use their data. As a result, the researchers say, Americans can’t truly give informed consent to digital data collection.

The survey revealed that 56% of American adults don’t understand the term “privacy policy,” often believing it means that a company won’t share their data with third parties without permission. In fact, many of these policies state that a company can share or sell any data it gathers about site visitors with other websites or companies.

Perhaps because so many Americans find internet privacy impossible to comprehend — with “opting-out” or “opting-in,” biometrics, and VPNs — they don’t trust what is being done with their digital data. Eighty percent of Americans believe that what companies know about them can cause them harm.

“People don’t feel that they have the ability to protect their data online — even if they want to,” says lead researcher Joseph Turow, Robert Lewis Shayon Professor of Media Systems & Industries at the Annenberg School for Communication at the University of Pennsylvania….(More)”