What Does Information Integrity Mean for Democracies?


Article by Kamya Yadav and Samantha Lai: “Democracies around the world are encountering unique challenges with the rise of new technologies. Experts continue to debate how social media has impacted democratic discourse, pointing to how algorithmic recommendations, influence operations, and cultural changes in norms of communication alter the way people consume information. Meanwhile, developments in artificial intelligence (AI) surface new concerns over how the technology might affect voters’ decision-making process. Already, we have seen its increased use in relation to political campaigning.

In the run-up to Pakistan’s 2024 general elections, former Prime Minister Imran Khan used an artificially generated speech to campaign while imprisoned. Meanwhile, in the United States, a private company used an AI-generated imitation of President Biden’s voice in robocalls to discourage people from voting. In response, the Federal Communications Commission outlawed the use of AI-generated robocalls.

Evolving technologies present new threats. Disinformation, misinformation, and propaganda are all different faces of the same problem: Our information environment—the ecosystem in which we create, disseminate, receive, and process information—is not secure, and we lack coherent goals to direct policy actions. Formulating short-term, reactive policy to counter or mitigate the effects of disinformation or propaganda can only bring us so far. Beyond defending democracies from unending threats, we should also be looking at what it will take to strengthen them. This raises the question: How do we work toward building secure and resilient information ecosystems? How can policymakers and democratic governments identify policy areas that require further improvement and shape their actions accordingly?…(More)”.

Digital public infrastructure and public value: What is ‘public’ about DPI?


Paper by David Eaves, Mariana Mazzucato and Beatriz Vasconcellos: “Digital Public Infrastructures (DPI) are becoming increasingly relevant in the policy and academic domains, with DPI not just being regulated, but funded and created by governments, international organisations, philanthropies and the private sector. However, these transformations are not neutral; they have a direction. This paper addresses how to ensure that DPI is not only regulated but created and governed for the common good by maximising public value creation. Our analysis makes explicit which normative values may be associated with DPI development. We also argue that normative values are necessary but not sufficient for maximising public value creation with DPI, and that a more proactive role for the state and for governance is key. In this work, policymakers and researchers will find valuable frameworks for understanding where the value-creation elements of DPI come from and how to design DPI governance that maximises public value…(More)”.

Influence of public innovation laboratories on the development of public sector ambidexterity


Article by Christophe Favoreu et al: “Ambidexterity has become a major issue for public organizations as they manage increasingly strong contradictory pressures to optimize existing processes while innovating. Moreover, although public innovation laboratories are emerging, their influence on the development of ambidexterity remains largely unexplored. Our research aims to understand how innovation laboratories contribute to the formation of individual ambidexterity within the public sector. Drawing from three case studies, this research underscores the influence of these labs on public ambidexterity through the development of innovations by non-specialized actors and through the deployment and reuse of innovative managerial practices and techniques outside the innovation labs themselves…(More)”.

Bring on the Policy Entrepreneurs


Article by Erica Goldman: “Teaching early-career researchers the skills to engage in the policy arena could prepare them for a lifetime of high-impact engagement and invite new perspectives into the democratic process.

In the first six months of the COVID-19 pandemic, the scientific literature worldwide was flooded with research articles, letters, reviews, notes, and editorials related to the virus. One study estimates that a staggering 23,634 unique documents were published between January 1 and June 30, 2020, alone.

Making sense of that emerging science was an urgent challenge. As governments all over the world scrambled to get up-to-date guidelines to hospitals and information to an anxious public, Australia stood apart in its readiness to engage scientists and decisionmakers collaboratively. The country used what was called a “living evidence” approach to synthesizing new information, making it available—and helpful—in real time.

Each week during the pandemic, the Australian National COVID‑19 Clinical Evidence Taskforce came together to evaluate changes in the scientific literature base. They then spoke with a single voice to the Australian clinical community so clinicians had rapid, evidence-based, and nationally agreed-upon guidelines to provide the clarity they needed to care for people with COVID-19.

This new model for consensus-aligned, evidence-based decisionmaking helped Australia navigate the pandemic and build trust in the scientific enterprise, but it did not emerge overnight. It took years of iteration and effort to get the living evidence model ready to meet the moment; the crisis of the pandemic opened a policy window that living evidence was poised to surge through. Australia’s example led the World Health Organization and the United Kingdom’s National Institute for Health and Care Excellence to move toward making living evidence models a pillar of decisionmaking for all their health care guidelines. On its own, this is an incredible story, but it also reveals a tremendous amount about how policies get changed…(More)”.

Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity


Report by Erik Brynjolfsson, Adam Thierer and Daron Acemoglu: “Experts and the media tend to overestimate technology’s negative impact on employment. Case studies suggest that fears of technology-induced unemployment are often exaggerated, as evidenced by the McKinsey Global Institute revising its AI forecasts and by employment growth in jobs once predicted to be at high risk of automation.

Flexible work arrangements, technical recertification, and creative apprenticeship models offer real-time learning and adaptable skills development to prepare workers for future labor market and technological changes.

AI can potentially generate new employment opportunities, but the complex transition for workers displaced by automation—marked by the need for retraining and credentialing—indicates that the productivity benefits may not adequately compensate for job losses, particularly among low-skilled workers.

Instead of resorting to conflictual relationships, labor unions in the US must work with employers to support firm automation while simultaneously advocating for worker skill development, creating a competitive business enterprise built on strong worker representation similar to that found in Germany…(More)”.

How artificial intelligence can facilitate investigative journalism


Article by Luiz Fernando Toledo: “A few years ago, I worked on a project for a large Brazilian television channel whose objective was to analyze the profiles of more than 250 guardianship counselors in the city of São Paulo. These elected professionals have the mission of protecting the rights of children and adolescents in Brazil.

Critics had pointed out that some counselors did not have any expertise or prior experience working with young people and were only elected with the support of religious communities. The investigation sought to verify whether these elected counselors had professional training in working with children and adolescents or had any relationships with churches.

After requesting the counselors’ resumes through Brazil’s access to information law, a small team combed through each resume in depth—a laborious and time-consuming task. But today, this project might have required far less time and labor. Rapid developments in generative AI hold potential to significantly scale access and analysis of data needed for investigative journalism.

Many articles address the potential risks of generative AI for journalism and democracy, such as threats AI poses to the business model for journalism and its ability to facilitate the creation and spread of mis- and disinformation. No doubt there is cause for concern. But technology will continue to evolve, and it is up to journalists and researchers to understand how to use it in favor of the public interest.

I wanted to test how generative AI can help journalists, especially those who work with public documents and data. I tested several tools, including Ask Your PDF (which lets you ask questions of any document on your computer), Chatbase (which lets you create your own chatbot), and Document Cloud (which lets you upload documents and ask GPT-like questions of hundreds of documents simultaneously).

These tools make use of the same mechanism that operates OpenAI’s famous ChatGPT—large language models that create human-like text. But they analyze the user’s own documents rather than information on the internet, ensuring more accurate answers by using specific, user-provided sources…(More)”.
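The retrieval idea behind these tools can be sketched in a few lines: split a user-supplied document into chunks, score each chunk against the reporter’s question, and hand the best match to a language model as context. The sketch below is a deliberately simplified illustration that stops at the retrieval step and scores chunks by plain word overlap; real tools use embeddings and an LLM, and the resume text and question here are invented for illustration.

```python
# Toy sketch of the document-retrieval step behind "ask your documents"
# tools: find the chunk of a user-provided document that best matches a
# question. Real products replace word overlap with embeddings and pass
# the winning chunk to an LLM; this only illustrates the principle.

def split_into_chunks(text, size=12):
    """Split a document into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def best_chunk(question, document, size=12):
    """Return the chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    chunks = split_into_chunks(document, size)
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

# Invented example resume, standing in for a requested public record.
resume = (
    "Maria Silva worked five years as a social worker in Sao Paulo. "
    "She holds a degree in pedagogy and volunteered at a youth shelter."
)
print(best_chunk("Has she volunteered with youth programs?", resume))
```

Grounding answers in the retrieved chunk, rather than in whatever the model absorbed from the open internet, is what makes the responses checkable against the source document.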

Youth Media Literacy Program Fact Checking Manual


Internews: “As part of the USAID-funded Advancing Rights in Southern Africa Program (ARISA), Internews developed the Youth Media Literacy Program to enhance the digital literacy skills of young people. Drawing from university journalism students and young leaders from civil society organizations in Botswana, Eswatini, Lesotho, and South Africa, the program equipped 124 young people to apply critical thinking to online communication and to adopt improved digital hygiene and security practices. The Youth Media Literacy Program Fact Checking Manual was developed to provide additional support and tools to combat misinformation and disinformation and improve online behavior and security…(More)”.

How to Run a Public Records Audit with a Team of Students


Article by Lam Thuy Vo: “…The Markup (like many other organizations) uses public record requests as an important investigative tool, and we’ve published tips for fellow journalists on how to best craft their requests for specific investigations. But public record laws vary based on where government institutions are located. Generally, government institutions are required to release documents to anyone who requests them, except when information falls under a specific exemption, like information that invades an individual’s privacy or trade secrets. Federal institutions are governed by the Freedom of Information Act (FOIA), but state and local government agencies have their own state freedom of information laws, and they aren’t all identical.

Public record audits take a step back. By sending the same freedom of information (FOI) request to agencies around the country, audits can help journalists, researchers and everyday people track which agencies will release records and which may not, and whether they’re complying with state laws. According to the National Freedom of Information Coalition, “audits have led to legislative reforms and the establishment of ombudsman positions to represent the public’s interests.”

The basics of auditing are simple: Send the same FOI request to different government agencies, document how you followed up, and document the outcome. Here’s how we coordinated this process with student reporters…(More)”.
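The record-keeping those three steps require can be as simple as a spreadsheet with one row per agency. The sketch below shows one possible shape for that log and a compliance tally, written in Python; the agency names, dates, and outcome labels are invented for illustration, not taken from The Markup’s audit.

```python
# Minimal sketch of an FOI-audit log: one row per agency, recording when
# the identical request was sent, when it was followed up, and the
# outcome. All rows here are invented examples.
import csv
from io import StringIO

ROWS = [
    # (agency, request_sent, followed_up, outcome)
    ("Springfield Police Dept.", "2024-01-05", "2024-02-01", "released"),
    ("Shelbyville City Clerk",   "2024-01-05", "2024-02-01", "denied"),
    ("Capital City Schools",     "2024-01-05", "",           "no response"),
]

def compliance_rate(rows):
    """Share of audited agencies that released the requested records."""
    released = sum(1 for r in rows if r[3] == "released")
    return released / len(rows)

# Write the log as CSV so it can be shared among student reporters.
buf = StringIO()
writer = csv.writer(buf)
writer.writerow(["agency", "request_sent", "followed_up", "outcome"])
writer.writerows(ROWS)

print(f"{compliance_rate(ROWS):.0%} of agencies released records")
```

Because every agency gets the identical request on the same date, the outcome column becomes directly comparable across jurisdictions, which is what turns a pile of requests into an audit.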

Advancing Equitable AI in the US Social Sector


Article by Kelly Fitzsimmons: “…when developed thoughtfully and with equity in mind, AI-powered applications have great potential to help drive stronger and more equitable outcomes for nonprofits, particularly in the following three areas.

1. Closing the data gap. A widening data divide between the private and social sectors threatens to reduce the effectiveness of nonprofits that provide critical social services in the United States and leave those they serve without the support they need. As Kriss Deiglmeier wrote in a recent Stanford Social Innovation Review essay, “Data is a form of power. And the sad reality is that power is being held increasingly by the commercial sector and not by organizations seeking to create a more just, sustainable, and prosperous world.” AI can help reverse this trend by democratizing the process of generating and mobilizing data and evidence, thus making continuous research and development, evaluation, and data analysis more accessible to a wider range of organizations—including those with limited budgets and in-house expertise.

Take Quill.org, a nonprofit that provides students with free tools that help them build reading comprehension, writing, and language skills. Quill.org uses an AI-powered chatbot that asks students to respond to open-ended questions based on a piece of text. It then reviews student responses and offers suggestions for improvement, such as writing with clarity and using evidence to support claims. This technology makes high-quality critical thinking and writing support available to students and schools that might not otherwise have access to them. As Peter Gault, Quill.org’s founder and executive director, recently shared, “There are 27 million low-income students in the United States who struggle with basic writing and find themselves disadvantaged in school and in the workforce. … By using AI to provide students with immediate feedback on their writing, we can help teachers support millions of students on the path to becoming stronger writers, critical thinkers, and active members of our democracy.”…(More)”.

Power and Governance in the Age of AI


Reflections by several experts: “The best way to think about ChatGPT is as the functional equivalent of expensive private education and tutoring. Yes, there is a free version, but there is also a paid subscription that gets you access to the latest breakthroughs and a more powerful version of the model. More money gets you more power and privileged access. As a result, in my courses at Middlebury College this spring, I was obliged to include the following statement in my syllabus:

“Policy on the use of ChatGPT: You may all use the free version however you like and are encouraged to do so. For purposes of equity, use of the subscription version is forbidden and will be considered a violation of the Honor Code. Your professor has both versions and knows the difference. To ensure you are learning as much as possible from the course readings, careful citation will be mandatory in both your informal and formal writing.”

The United States fails to live up to its founding values when it supports a luxury brand-driven approach to educating its future leaders that is accessible to the privileged and a few select lottery winners. One such “winning ticket” student in my class this spring argued that the quality-education-for-all issue was of such importance for the future of freedom that he would trade his individual good fortune at winning an education at Middlebury College for the elimination of ALL elite education in the United States so that quality education could be a right rather than a privilege.

A democracy cannot function if the entire game seems to be rigged and bought by elites. This is true for the United States and for democracies in the making or under challenge around the world. Consequently, in partnership with other liberal democracies, the U.S. government must do whatever it can to render both public and private governance more transparent and accountable. We should not expect authoritarian states to help us uphold liberal democratic values, nor should we expect corporations to do so voluntarily…(More)”.