Understanding the Crisis in Institutional Trust


Essay by Jacob Harold: “Institutions are patterns of relationship. They form essential threads of our social contract. But those threads are fraying. In the United States, individuals’ trust in major institutions has declined 22 percentage points since 1979.

Institutions face a range of profound challenges. A long-overdue reckoning with the history of racial injustice has highlighted how many institutions reflect patterns of inequity. Technology platforms have supercharged access to information but also reinforced bubbles of interpretation. Anti-elite sentiment has evolved into anti-institutional rebellion.

These forces are affecting institutions of all kinds—from disciplines like journalism to traditions like the nuclear family. This essay focuses on a particular type of institution: organizations. The decline in trust in organizations has practical implications: trust is essential to the day-to-day work of an organization—whether an elite university, a traffic court, or a corner store. The stakes for society are hard to overstate. Organizations “organize” much of our society, culture, and economy.

This essay is meant to offer background for ongoing conversations about the crisis in institutional trust. It does not claim to offer a solution; instead, it lays out the parts of the problem as a step toward shared solutions.

It is not possible to isolate the question of institutional trust from other trends. The institutional trust crisis is intertwined with broader issues of polarization, gridlock, fragility, and social malaise. Figure 1 maps out eight adjacent issues. Some of these may be seen as drivers of the institutional trust crisis, others as consequences of it. Most are both.

Figure 1: Eight issues adjacent to the institutional trust crisis

This essay considers trust as a form of information. It is data about the external perceptions of institutions. Declining trust can thus be seen as society teaching itself. Viewing a decline in trust as information reframes the challenge. Sometimes, institutions may “deserve” some of the mistrust directed at them. In those cases, the information can serve as a direct corrective…(More)”.

Mechanisms for Researcher Access to Online Platform Data


Status Report by the EU/USA: “Academic and civil society research on prominent online platforms has become a crucial way to understand the information environment and its impact on our societies. Scholars across the globe have leveraged application programming interfaces (APIs) and web crawlers to collect public user-generated content and advertising content on online platforms to study societal issues ranging from technology-facilitated gender-based violence to the impact of media on mental health for children and youth. Yet, a changing landscape of platforms’ data access mechanisms and policies has created uncertainty and difficulty for critical research projects.


The United States and the European Union have a shared commitment to advance data access for researchers, in line with the high-level principles on access to data from online platforms for researchers announced at the EU-U.S. Trade and Technology Council (TTC) Ministerial Meeting in May 2023. Since the launch of the TTC, the EU Digital Services Act (DSA) has gone into effect, requiring providers of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to provide increased transparency into their services. The DSA includes provisions on transparency reports, terms and conditions, and explanations for content moderation decisions. Among those, two provisions provide important access to publicly available content on platforms:


• DSA Article 40.12 requires providers of VLOPs/VLOSEs to provide academic and civil society researchers with data that is “publicly accessible in their online interface.”
• DSA Article 39 requires providers of VLOPs/VLOSEs to maintain a public repository of advertisements.

The announcements related to new researcher access mechanisms mark an important development and opportunity to better understand the information environment. This status report summarizes a subset of mechanisms that are available to European and/or United States researchers today, following, in part, VLOPs’ and VLOSEs’ measures to comply with the DSA. The report aims to showcase the existing access modalities and to encourage the use of these mechanisms to study the impact of online platforms’ design and decisions on society. The list of mechanisms reviewed is included in the Appendix…(More)”
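
For researchers, access through these mechanisms generally takes the form of queries against a platform-provided endpoint. Below is a minimal sketch of pulling entries from an Article 39-style public ad repository; the URL, parameters, and response fields are hypothetical stand-ins, since each VLOP and VLOSE defines its own API, schema, and authentication.

```python
import requests

# Hypothetical endpoint: each VLOP/VLOSE exposes its own ad-repository
# interface, so the URL, parameters, and fields below are illustrative.
AD_REPOSITORY_URL = "https://platform.example/api/ad-repository"

def fetch_public_ads(query: str, country: str = "US", limit: int = 100) -> list[dict]:
    """Pull entries from a public ad repository of the kind DSA Article 39
    requires providers of VLOPs/VLOSEs to maintain."""
    response = requests.get(
        AD_REPOSITORY_URL,
        params={"q": query, "country": country, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("ads", [])

if __name__ == "__main__":
    for ad in fetch_public_ads("election"):
        print(ad.get("advertiser"), ad.get("first_shown"), ad.get("spend_range"))
```

Real repositories layer pagination, rate limits, and researcher vetting on top of this basic pattern.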

DC launched an AI tool for navigating the city’s open data


Article by Kaela Roeder: “In a move echoing local governments’ increasing attention to generative artificial intelligence across the country, the nation’s capital now aims to make navigating its open data easier through a new public beta pilot.

DC Compass, launched in March, uses generative AI to answer user questions and create maps from open data sets, ranging from the district’s population to the different kinds of trees planted in the city. The Office of the Chief Technology Officer (OCTO) partnered with the geographic information system (GIS) technology company Esri, which has an office in Vienna, Virginia, to create the new tool.

This debut follows Mayor Muriel Bowser’s signing of DC’s AI Values and Strategic Plan in February. The order requires agencies to assess whether using AI aligns with the values it sets forth, including that there’s a clear benefit to people; a plan for “meaningful accountability” for the tool; and transparency, sustainability, privacy and equity at the forefront of deployment.

These values are key when launching something like DC Compass, said Michael Rupert, the interim chief technology officer for digital services at OCTO.

“The way Mayor Bowser rolled out the mayor’s order and this value statement, I think gives residents and businesses a little more comfort that we aren’t just writing a check and seeing what happens,” Rupert said. “That we’re actually methodically going about it in a responsible way, both morally and fiscally.”…(More)”.


DC COMPASS IN ACTION. (SCREENSHOT/COURTESY OCTO)
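
OCTO has not published DC Compass’s internals, but the general pattern the article describes (a generative model translating a resident’s question into a structured query against an open data service) can be sketched. Everything below is hypothetical: the endpoint URL, the field names, and the keyword lookup standing in for the language model.

```python
import requests

# Hypothetical feature-service endpoint and field names; DC's actual open
# data services and DC Compass's question-routing logic may differ.
TREES_QUERY_URL = "https://opendata.example/arcgis/rest/services/Trees/FeatureServer/0/query"

def question_to_query(question: str) -> tuple[str, dict]:
    """Stand-in for the generative-AI step: map a natural-language question
    to a dataset endpoint and query parameters. A production system would
    delegate this mapping to a language model."""
    if "tree" in question.lower():
        return TREES_QUERY_URL, {
            "where": "1=1",
            "outFields": "COMMON_NAME",
            "resultRecordCount": 10,
            "f": "json",
        }
    raise ValueError("No dataset mapping for this question")

def answer(question: str) -> list[str]:
    url, params = question_to_query(question)
    payload = requests.get(url, params=params, timeout=30).json()
    return [f["attributes"]["COMMON_NAME"] for f in payload.get("features", [])]

if __name__ == "__main__":
    print(answer("What different trees are planted in the city?"))
```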

The Unintended Consequences of Data Standardization


Article by Cathleen Clerkin: “The benefits of data standardization within the social sector—and indeed just about any industry—are multiple, important, and undeniable. Access to the same type of data over time makes it possible to track progress and increases accountability. For example, over the last 20 years, my organization, Candid, has tracked grantmaking by the largest foundations to assess changes in giving trends. The data allowed us to demonstrate philanthropy’s disinvestment in historically Black colleges and universities. Data standardization also creates opportunities for benchmarking—allowing individuals and organizations to assess how they stack up to their colleagues and competitors. Moreover, large amounts of standardized data can help predict trends in the sector. Finally—and perhaps most importantly to the social sector—data standardization invariably reduces the significant reporting burdens placed on nonprofits.

Yet, for all of its benefits, data is too often proposed as a universal cure that will allow us to unequivocally determine the success of social change programs and processes. The reality is far more complex and nuanced. Left unchecked, the unintended consequences of data standardization pose significant risks to achieving a more effective, efficient, and equitable social sector…(More)”.

Creating an Integrated System of Data and Statistics on Household Income, Consumption, and Wealth: Time to Build


Report by the National Academies: “Many federal agencies provide data and statistics on inequality and related aspects of household income, consumption, and wealth (ICW). However, because the information provided by these agencies is often produced using different concepts, underlying data, and methods, the resulting estimates of poverty, inequality, mean and median household income, consumption, and wealth, as well as other statistics, do not always tell a consistent or easily interpretable story. Measures also differ in their accuracy, timeliness, and relevance, making it difficult to address such questions as the effects of the Great Recession on household finances or of the Covid-19 pandemic and the ensuing relief efforts on household income and consumption. The presence of multiple, sometimes conflicting statistics at best muddies the waters of policy debates and, at worst, enables advocates with different policy perspectives to cherry-pick their preferred set of estimates. Achieving an integrated system of relevant, high-quality, and transparent household ICW data and statistics should go far to reduce disagreement about who has how much, and from what sources. Further, such data are essential to advance research on economic wellbeing and to ensure that policies are well targeted to achieve societal goals…(More)”.
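
To make the report’s point concrete: even simple summary statistics diverge depending on the income concept used. A toy sketch with illustrative numbers, showing how the same five households yield different mean, median, and Gini figures when transfers are or are not counted as income.

```python
import statistics

def gini(incomes: list[float]) -> float:
    """Gini coefficient via the mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(x - y) for x in incomes for y in incomes) / (n * n)
    return mad / (2 * mean)

# Illustrative household incomes: market income vs. income including
# a flat $8,000 transfer (hypothetical numbers, not survey data).
market = [12_000, 35_000, 52_000, 78_000, 250_000]
post_transfer = [x + 8_000 for x in market]

for label, data in [("market income", market), ("post-transfer income", post_transfer)]:
    print(f"{label}: mean={statistics.mean(data):,.0f}  "
          f"median={statistics.median(data):,.0f}  gini={gini(data):.3f}")
```

The two rows describe the same households, yet an analyst citing market-income inequality and one citing post-transfer inequality will report different numbers, which is exactly the kind of divergence an integrated ICW system is meant to reconcile.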

AI Accountability Policy Report


Report by NTIA: “Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.

Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm….


The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs…(More)”.

Graphic showing the AI Accountability Chain model

A.I.-Generated Garbage Is Polluting Our Culture


Article by Erik Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.
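
The measurement behind those numbers is, at its core, a year-over-year comparison of word frequencies in review corpora. A minimal sketch on toy data follows; the study itself used more careful distributional estimates, so this is illustrative only.

```python
import re

# Marker words the article cites as LLM favorites.
MARKERS = ["meticulous", "commendable", "intricate"]

def rate(texts: list[str], word: str) -> float:
    """Occurrences of `word` per token across a corpus of reviews."""
    tokens = [t for text in texts for t in re.findall(r"[a-z]+", text.lower())]
    return tokens.count(word) / max(len(tokens), 1)

def usage_ratios(current: list[str], prior: list[str]) -> dict[str, float]:
    """How many times more frequent each marker is this year vs. last."""
    return {w: rate(current, w) / max(rate(prior, w), 1e-12) for w in MARKERS}

# Toy corpora standing in for two years of conference peer reviews.
reviews_2022 = ["A meticulous baseline; commendable effort; intricate design."]
reviews_2023 = ["Meticulous work, meticulous analysis; commendable and "
                "commendable throughout; intricate, intricate, intricate."]

print(usage_ratios(reviews_2023, reviews_2022))
```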

Facial Recognition Technology: Current Capabilities, Future Prospects, and Governance


Report by the National Academies of Sciences, Engineering, and Medicine: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

Bring on the Policy Entrepreneurs


Article by Erica Goldman: “Teaching early-career researchers the skills to engage in the policy arena could prepare them for a lifetime of high-impact engagement and invite new perspectives into the democratic process.

In the first six months of the COVID-19 pandemic, the scientific literature worldwide was flooded with research articles, letters, reviews, notes, and editorials related to the virus. One study estimates that a staggering 23,634 unique documents were published between January 1 and June 30, 2020, alone.

Making sense of that emerging science was an urgent challenge. As governments all over the world scrambled to get up-to-date guidelines to hospitals and information to an anxious public, Australia stood apart in its readiness to engage scientists and decisionmakers collaboratively. The country used what was called a “living evidence” approach to synthesizing new information, making it available—and helpful—in real time.

Each week during the pandemic, the Australian National COVID‑19 Clinical Evidence Taskforce came together to evaluate changes in the scientific literature base. They then spoke with a single voice to the Australian clinical community so clinicians had rapid, evidence-based, and nationally agreed-upon guidelines to provide the clarity they needed to care for people with COVID-19.

This new model for consensus-aligned, evidence-based decisionmaking helped Australia navigate the pandemic and build trust in the scientific enterprise, but it did not emerge overnight. It took years of iteration and effort to get the living evidence model ready to meet the moment; the crisis of the pandemic opened a policy window that living evidence was poised to surge through. Australia’s example led the World Health Organization and the United Kingdom’s National Institute for Health and Care Excellence to move toward making living evidence models a pillar of decisionmaking for all their health care guidelines. On its own, this is an incredible story, but it also reveals a tremendous amount about how policies get changed…(More)”.

Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity


Report by Erik Brynjolfsson, Adam Thierer and Daron Acemoglu: “Experts and the media tend to overestimate technology’s negative impact on employment. Case studies suggest that technology-induced unemployment fears are often exaggerated, as evidenced by the McKinsey Global Institute reversing its AI forecasts and by growth in jobs that had been predicted to be at high risk of automation.

Flexible work arrangements, technical recertification, and creative apprenticeship models offer real-time learning and adaptable skills development to prepare workers for future labor market and technological changes.

AI can potentially generate new employment opportunities, but the complex transition for workers displaced by automation—marked by the need for retraining and credentialing—indicates that the productivity benefits may not adequately compensate for job losses, particularly among low-skilled workers.

Instead of resorting to conflictual relationships, labor unions in the US must work with employers to support firm automation while simultaneously advocating for worker skill development, creating a competitive business enterprise built on strong worker representation similar to that found in Germany…(More)”.