Paper by Amory Gethin & Vincent Pons: “Recent social movements stand out by their spontaneous nature and lack of stable leadership, raising doubts on their ability to generate political change. This article provides systematic evidence on the effects of protests on public opinion and political attitudes. Drawing on a database covering the quasi-universe of protests held in the United States, we identify 14 social movements that took place from 2017 to 2022, covering topics related to environmental protection, gender equality, gun control, immigration, national and international politics, and racial issues. We use Twitter data, Google search volumes, and high-frequency surveys to track the evolution of online interest, policy views, and vote intentions before and after the outset of each movement. Combining national-level event studies with difference-in-differences designs exploiting variation in local protest intensity, we find that protests generate substantial internet activity but have limited effects on political attitudes. Except for the Black Lives Matter protests following the death of George Floyd, which shifted views on racial discrimination and increased votes for the Democrats, we estimate precise null effects of protests on public opinion and electoral behavior…(More)”.
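To make the identification strategy concrete, here is a minimal sketch of the kind of two-way fixed-effects difference-in-differences regression described above, assuming a hypothetical county-week panel; the file and column names are made up, and this is not the authors' data or code.

```python
# Minimal difference-in-differences sketch in the spirit of the design
# described above. The panel file and every column name (county, week,
# protest_intensity, post, dem_vote_intent) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("county_week_panel.csv")  # hypothetical county-week panel

# Treatment: local protest intensity interacted with the post-movement period.
df["treated"] = df["protest_intensity"] * df["post"]

# County and week fixed effects absorb stable local traits and national
# shocks; the coefficient on `treated` is the DiD estimate, with standard
# errors clustered by county.
model = smf.ols("dem_vote_intent ~ treated + C(county) + C(week)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(result.params["treated"], result.bse["treated"])
```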
New Jersey is turning to AI to improve the job search process
Article by Beth Simone Noveck: “Americans are experiencing some conflicting feelings about AI.
While people are flocking to new roles like prompt engineer and AI ethicist, the technology is also predicted to put many jobs at risk, including computer programmers, data scientists, graphic designers, writers, and lawyers.
Little wonder, then, that a national survey by the Heldrich Center for Workforce Development found an overwhelming majority of Americans (66%) believe that they “will need more technological skills to achieve their career goals.” One thing is certain: Workers will need to train for change. And in a world of misinformation-filled social media platforms, it is increasingly important for trusted public institutions to provide reliable, data-driven resources.
In New Jersey, we’ve tried doing just that by collaborating with workers, including many with disabilities, to design technology that will support better decision-making around training and career change. Investing in similar public AI-powered tools could help support better consumer choice across various domains. When a public entity designs, controls and implements AI, there is a far greater likelihood that this powerful technology will be used for good.
In New Jersey, the public can find reliable, independent, unbiased information about training and upskilling on the state’s new MyCareer website, which uses AI to make personalized recommendations about your career prospects and the training you will need to be ready for a high-growth, in-demand job…(More)”.
Understanding the Crisis in Institutional Trust
Essay by Jacob Harold: “Institutions are patterns of relationship. They form essential threads of our social contract. But those threads are fraying. In the United States, individuals’ trust in major institutions has declined 22 percentage points since 1979.
Institutions face a range of profound challenges. A long-overdue reckoning with the history of racial injustice has highlighted how many institutions reflect patterns of inequity. Technology platforms have supercharged access to information but also reinforced bubbles of interpretation. Anti-elite sentiment has evolved into anti-institutional rebellion.
These forces are affecting institutions of all kinds—from disciplines like journalism to traditions like the nuclear family. This essay focuses on a particular type of institution: organizations. The decline in trust in organizations has practical implications: trust is essential to the day-to-day work of an organization—whether an elite university, a traffic court, or a corner store. The stakes for society are hard to overstate. Organizations “organize” much of our society, culture, and economy.
This essay is meant to offer background for ongoing conversations about the crisis in institutional trust. It does not claim to offer a solution; instead, it lays out the parts of the problem as a step toward shared solutions.
It is not possible to isolate the question of institutional trust from other trends. The institutional trust crisis is intertwined with broader issues of polarization, gridlock, fragility, and social malaise. Figure 1 maps out eight adjacent issues. Some of these may be seen as drivers of the institutional trust crisis, others as consequences of it. Most are both.
This essay considers trust as a form of information. It is data about the external perceptions of institutions. Declining trust can thus be seen as society teaching itself. Viewing a decline in trust as information reframes the challenge. Sometimes, institutions may “deserve” some of the mistrust that has been granted to them. In those cases, the information can serve as a direct corrective…(More)”.
Mechanisms for Researcher Access to Online Platform Data
Status Report by the EU/USA: “Academic and civil society research on prominent online platforms has become a crucial way to understand the information environment and its impact on our societies. Scholars across the globe have leveraged application programming interfaces (APIs) and web crawlers to collect public user-generated content and advertising content on online platforms to study societal issues ranging from technology-facilitated gender-based violence to the impact of media on mental health for children and youth. Yet, a changing landscape of platforms’ data access mechanisms and policies has created uncertainty and difficulty for critical research projects.
The United States and the European Union have a shared commitment to advance data access for researchers, in line with the high-level principles on access to data from online platforms for researchers announced at the EU-U.S. Trade and Technology Council (TTC) Ministerial Meeting in May 2023. Since the launch of the TTC, the EU Digital Services Act (DSA) has gone into effect, requiring providers of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to provide increased transparency into their services. The DSA includes provisions on transparency reports, terms and conditions, and explanations for content moderation decisions. Among those, two provisions provide important access to publicly available content on platforms:
• DSA Article 40.12 requires providers of VLOPs/VLOSEs to provide academic and civil society researchers with data that is “publicly accessible in their online interface.”
• DSA Article 39 requires providers of VLOPs/VLOSEs to maintain a public repository of advertisements.
The announcements related to new researcher access mechanisms mark an important development and opportunity to better understand the information environment. This status report summarizes a subset of mechanisms that are available to European and/or United States researchers today, following, in part, VLOPs’ and VLOSEs’ measures to comply with the DSA. The report aims to showcase the existing access modalities and to encourage the use of these mechanisms to study the impact of online platforms’ design and decisions on society. The list of mechanisms reviewed is included in the Appendix…(More)”
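As a purely hypothetical illustration of what researcher access to publicly available content can look like in practice, the sketch below pages through a placeholder research API; the endpoint URL, parameters, and response fields are invented for illustration and do not correspond to any real platform's interface.

```python
# Hypothetical sketch of collecting public posts through a platform
# research API of the kind DSA Article 40.12 contemplates. The endpoint,
# token, parameters, and response fields are all placeholders.
import requests

BASE_URL = "https://platform.example/research/v1/posts"  # placeholder endpoint
API_KEY = "YOUR_RESEARCH_ACCESS_TOKEN"  # issued under an access program

def fetch_public_posts(query: str, max_pages: int = 5) -> list[dict]:
    """Page through publicly accessible posts matching a search query."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    posts, cursor = [], None
    for _ in range(max_pages):
        resp = requests.get(
            BASE_URL,
            headers=headers,
            params={"q": query, "cursor": cursor},
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        posts.extend(payload.get("data", []))
        cursor = payload.get("next_cursor")  # placeholder pagination field
        if not cursor:
            break
    return posts

print(len(fetch_public_posts("youth mental health")))
```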
DC launched an AI tool for navigating the city’s open data
Article by Kaela Roeder: “In a move echoing local governments’ increasing attention toward generative artificial intelligence across the country, the nation’s capital now aims to make navigating its open data easier through a new public beta pilot.
DC Compass, launched in March, uses generative AI to answer user questions and create maps from open data sets, ranging from the district’s population to which trees are planted in the city. The Office of the Chief Technology Officer (OCTO) partnered with the geographic information system (GIS) technology company Esri, which has an office in Vienna, Virginia, to create the new tool.
This debut follows Mayor Muriel Bowser’s signing of DC’s AI Values and Strategic Plan in February. The order requires agencies to assess whether using AI is in alignment with the values it sets forth, including that there’s a clear benefit to people; a plan for “meaningful accountability” for the tool; and transparency, sustainability, privacy and equity at the forefront of deployment.
These values are key when launching something like DC Compass, said Michael Rupert, the interim chief technology officer for digital services at the Office of the Chief Technology Officer.
“The way Mayor Bowser rolled out the mayor’s order and this value statement, I think gives residents and businesses a little more comfort that we aren’t just writing a check and seeing what happens,” Rupert said. “That we’re actually methodically going about it in a responsible way, both morally and fiscally.”…(More)”.
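As a rough, assumption-laden sketch of the retrieval-plus-generation pattern a tool like DC Compass embodies, the snippet below pulls records from a placeholder open-data endpoint and assembles a grounded prompt for a generative model; the URL, fields, and prompt format are illustrative guesses, not OCTO's or Esri's implementation.

```python
# Illustrative sketch only: pair an open-data query with a generative
# model. The dataset URL is a placeholder and the endpoint is assumed to
# return a JSON list of records.
import json
import urllib.request

DATASET_URL = "https://opendata.example/api/trees.json"  # placeholder endpoint

def build_prompt(question: str) -> str:
    with urllib.request.urlopen(DATASET_URL, timeout=30) as resp:
        records = json.load(resp)[:50]  # cap the context passed to the model
    context = "\n".join(json.dumps(r) for r in records)
    return (
        "Answer the question using only the open-data records below.\n"
        f"Records:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The prompt would then be sent to a generative model; printed here instead.
print(build_prompt("Which tree species are most common in the district?"))
```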
The Unintended Consequences of Data Standardization
Article by Cathleen Clerkin: “The benefits of data standardization within the social sector—and indeed just about any industry—are multiple, important, and undeniable. Access to the same type of data over time lends the ability to track progress and increase accountability. For example, over the last 20 years, my organization, Candid, has tracked grantmaking by the largest foundations to assess changes in giving trends. The data allowed us to demonstrate philanthropy’s disinvestment in historically Black colleges and universities. Data standardization also creates opportunities for benchmarking—allowing individuals and organizations to assess how they stack up to their colleagues and competitors. Moreover, large amounts of standardized data can help predict trends in the sector. Finally—and perhaps most importantly to the social sector—data standardization invariably reduces the significant reporting burdens placed on nonprofits.
Yet, for all of its benefits, data is too often proposed as a universal cure that will allow us to unequivocally determine the success of social change programs and processes. The reality is far more complex and nuanced. Left unchecked, the unintended consequences of data standardization pose significant risks to achieving a more effective, efficient, and equitable social sector…(More)”.
Creating an Integrated System of Data and Statistics on Household Income, Consumption, and Wealth: Time to Build
Report by the National Academies: “Many federal agencies provide data and statistics on inequality and related aspects of household income, consumption, and wealth (ICW). However, because the information provided by these agencies is often produced using different concepts, underlying data, and methods, the resulting estimates of poverty, inequality, mean and median household income, consumption, and wealth, as well as other statistics, do not always tell a consistent or easily interpretable story. Measures also differ in their accuracy, timeliness, and relevance, so it is difficult to address such questions as the effects of the Great Recession on household finances or of the Covid-19 pandemic and the ensuing relief efforts on household income and consumption. The presence of multiple, sometimes conflicting statistics at best muddies the waters of policy debates and, at worst, enables advocates with different policy perspectives to cherry-pick their preferred set of estimates. Achieving an integrated system of relevant, high-quality, and transparent household ICW data and statistics should go far to reduce disagreement about who has how much, and from what sources. Further, such data are essential to advance research on economic wellbeing and to ensure that policies are well targeted to achieve societal goals…(More)”.
AI Accountability Policy Report
Report by NTIA: “Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.
Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm….
The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs…(More)”.
A.I.-Generated Garbage Is Polluting Our Culture
Article by Erik Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.
Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.
A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.
Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.
If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.
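The frequency comparisons quoted above reduce to a simple year-over-year rate ratio. A toy sketch, with two made-up "corpora" standing in for thousands of real reviews:

```python
# Toy sketch of the year-over-year word-frequency comparison described
# above. The two lists are stand-ins for corpora of thousands of reviews.
from collections import Counter

def rate_per_million(texts: list[str], word: str) -> float:
    tokens = [t.lower().strip('.,;:!?"') for text in texts for t in text.split()]
    return 1e6 * Counter(tokens)[word] / max(len(tokens), 1)

reviews_last_year = [
    "The experiments are solid and the proofs are meticulous in places.",
    "A clear paper with reasonable baselines and honest limitations.",
]
reviews_this_year = [
    "This meticulous and commendable study offers an intricate, meticulous analysis.",
    "A commendable, intricate contribution with meticulous experiments.",
]

for word in ("meticulous", "commendable", "intricate"):
    before = rate_per_million(reviews_last_year, word)
    after = rate_per_million(reviews_this_year, word)
    ratio = after / before if before else float("inf")
    print(f"{word}: {before:.0f} -> {after:.0f} per million tokens (x{ratio:.1f})")
```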
Facial Recognition Technology: Current Capabilities, Future Prospects, and Governance
Report by the National Academies of Sciences, Engineering, and Medicine: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.
This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.