Paper by Ariel Karlinsky and Moses Shayo: “Governmental information manipulation has been hard to measure and study systematically. We hand-collect data from official and unofficial sources in 134 countries to estimate misreporting of Covid mortality during 2020–21. We find that between 45% and 55% of governments misreported the number of deaths. The lion’s share of misreporting cannot be attributed to a country’s capacity to accurately diagnose and report deaths. Contrary to some theoretical expectations, there is little evidence of governments exaggerating the severity of the pandemic. Misreporting is higher where governments face few social and institutional constraints, in countries holding elections, and in countries with a communist legacy…(More)”
Democracy and Artificial Intelligence: old problems, new solutions?
Discussion between Nardine Alnemr and Rob Weymouth: “…I see three big perspectives relevant to AI and democracy. You have the most conservative, mirroring the 80s and the 90s, still talking about the digital public sphere as if it’s distant from our lives. As if it’s something novel and inaccessible, which is not quite accurate anymore.
Then there’s the more optimistic and cautionary side of the spectrum. People who are excited about the technologies, but they’re not quite sure. They’re intrigued to see the potential and I think they’re optimistic because they overlook how these technologies connect to a broader context. How a lot of these technologies are driven by surveying and surveillance of the data and the communication that we produce. Exploitation of workers who do the filtering and cleaning work. The companies that profit out of this and make engineered election campaigns. So they’re cautious because of that, but still optimistic, because at the same time, they try to isolate it from that bigger context.
And finally, the most radical is something like Cesar Hidalgo’s proposal of augmented democracy…(More)”.
Could artificial intelligence benefit democracy?
Article by Brian Wheeler: “Each week sees a new set of warnings about the potential impact of AI-generated deepfakes – realistic video and audio of politicians saying things they never said – spreading confusion and mistrust among the voting public.
And in the UK, regulators, security services and government are battling to protect this year’s general election from malign foreign interference.
Less attention has been given to the possible benefits of AI.
But a lot of work is going on, often below the radar, to try to harness its power in ways that might enhance democracy rather than destroy it.
“While this technology does pose some important risks in terms of disinformation, it also offers some significant opportunities for campaigns, which we can’t ignore,” Hannah O’Rourke, co-founder of Campaign Lab, a left-leaning network of tech volunteers, says.
“Like all technology, what matters is how AI is actually implemented. Its impact will be felt in the way campaigners actually use it.”
Among other things, Campaign Lab runs training courses for Labour and Liberal Democrat campaigners on how to use ChatGPT (Chat Generative Pre-trained Transformer) to create the first draft of election leaflets.
It reminds them to edit the final product carefully, though, as large language models (LLMs) such as ChatGPT have a worrying tendency to “hallucinate” or make things up.
The group is also experimenting with chatbots to help train canvassers to have more engaging conversations on the doorstep.
AI is already embedded in everyday programs, from Microsoft Outlook to Adobe Photoshop, Ms O’Rourke says, so why not use it in a responsible way to free up time for more face-to-face campaigning?…
Conservative-supporting AI expert Joe Reeve is another young political campaigner convinced the new technology can transform things for the better.
He runs Future London, a community of “techno optimists” who use AI to seek answers to big questions such as “Why can’t I buy a house?” and, crucially, “Where’s my robot butler?”
In 2020, Mr Reeve founded Tory Techs, partly as a right-wing response to Campaign Lab.
The group has run programming sessions and explored how to use AI to hone Tory campaign messages but, Mr Reeve says, it now “mostly focuses on speaking with MPs in more private and safe spaces to help coach politicians on what AI means and how it can be a positive force”.
“Technology has an opportunity to make the world a lot better for a lot of people and that is regardless of politics,” he tells BBC News…(More)”.
Synthetic Politics: Preparing democracy for Generative AI
Report by Demos: “This year is a politically momentous one, with almost half the world voting in elections. Generative AI may revolutionise our political information environments by making them more effective, relevant, and participatory. But it is also possible that it will make them more manipulative, confusing, and dangerous. We’ve already seen AI-generated audio of politicians going viral and chatbots offering incorrect information about elections.
This report, produced in partnership with University College London, explores how synthetic content produced by generative AI poses risks to the core democratic values of truth, equality, and non-violence. It proposes two action plans for what private and public decision-makers should be doing to safeguard democratic integrity immediately and in the long run:
- In Action Plan 1, we consider the actions that should be urgently put in place to reduce the acute risks to democratic integrity presented by generative AI tools. This includes reducing the production and dissemination of harmful synthetic content and empowering users so that harmful impacts of synthetic content are reduced in the immediate term.
- In Action Plan 2, we set out a longer-term vision for how the fundamental risks to democratic integrity should be addressed. We explore the ways in which generative AI tools can help bolster equality, truth and non-violence, from enabling greater democratic participation to improving how key information institutions operate…(More)”.
Understanding the Crisis in Institutional Trust
Essay by Jacob Harold: “Institutions are patterns of relationship. They form essential threads of our social contract. But those threads are fraying. In the United States, individuals’ trust in major institutions has declined 22 percentage points since 1979.
Institutions face a range of profound challenges. A long-overdue reckoning with the history of racial injustice has highlighted how many institutions reflect patterns of inequity. Technology platforms have supercharged access to information but also reinforced bubbles of interpretation. Anti-elite sentiment has evolved into anti-institutional rebellion.
These forces are affecting institutions of all kinds—from disciplines like journalism to traditions like the nuclear family. This essay focuses on a particular type of institution: organizations. The decline in trust in organizations has practical implications: trust is essential to the day-to-day work of an organization—whether an elite university, a traffic court, or a corner store. The stakes for society are hard to overstate. Organizations “organize” much of our society, culture, and economy.
This essay is meant to offer background for ongoing conversations about the crisis in institutional trust. It does not claim to offer a solution; instead, it lays out the parts of the problem as a step toward shared solutions.
It is not possible to isolate the question of institutional trust from other trends. The institutional trust crisis is intertwined with broader issues of polarization, gridlock, fragility, and social malaise. Figure 1 maps out eight adjacent issues. Some of these may be seen as drivers of the institutional trust crisis, others as consequences of it. Most are both.
This essay considers trust as a form of information. It is data about the external perceptions of institutions. Declining trust can thus be seen as society teaching itself. Viewing a decline in trust as information reframes the challenge. Sometimes, institutions may “deserve” some of the mistrust that has been granted to them. In those cases, the information can serve as a direct corrective…(More)”.
This City Pilots Web3 Quadratic Funding for Public Infrastructure
Article by Makoto Takahiro: “The city of Split, Croatia is piloting an innovative system for deciding how to fund municipal infrastructure projects. Called “quadratic funding,” the mechanism aims to fairly account for both public and private preferences when allocating limited budget resources.
A coalition of organizations including BlockSplit, Funding the Commons, Gitcoin, and the City of Split launched the Municipal Quadratic Funding Initiative in September 2023. The project goals include implementing quadratic funding for prioritizing public spending, utilizing web3 tools to increase transparency and participation, and demonstrating the potential of these technologies to improve legacy processes.
If successful, the model could scale to other towns and cities or inspire additional quadratic funding experiments.
The partners believe that the transparency and configurability of blockchain systems make them well-suited to quadratic funding applications.
Quadratic funding mathematically accounts for the intensity of demand for public goods. Groups can create projects which individuals can support financially. The matching funds ultimately directed to each proposal are proportional to the square of the sum of the square roots of the individual contributions it received. This means that projects attracting larger numbers of smaller contributions can compete with those receiving fewer large donations.
In this way, quadratic funding aims to reflect both willingness to pay and breadth of support in funding decisions. It attempts to break the tendency toward capture, whereby influential groups lobby for their niche interests. The goal is a fairer allocation suited to the whole community’s preferences.
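The matching rule described above can be sketched in a few lines of Python. This is a simplified illustration of the standard quadratic funding (CLR) formula, not the Split initiative’s actual implementation; the project names and amounts are invented:

```python
import math

def quadratic_match(projects, matching_pool):
    """Distribute a matching pool using the quadratic funding formula:
    each project's raw match is (sum of sqrt(contribution))^2 minus the
    sum of contributions, then the pool is split pro rata."""
    raw = {}
    for name, contributions in projects.items():
        sum_sqrt = sum(math.sqrt(c) for c in contributions)
        raw[name] = sum_sqrt ** 2 - sum(contributions)
    total = sum(raw.values())
    if total == 0:
        return {name: 0.0 for name in projects}
    return {name: matching_pool * r / total for name, r in raw.items()}

# A project with many small donors out-matches one with a single large donor,
# even though both raised 100 in direct contributions:
matches = quadratic_match(
    {"park": [1.0] * 100, "lobby": [100.0]},  # hypothetical projects
    matching_pool=5000,
)
print(matches)  # → {'park': 5000.0, 'lobby': 0.0}
```

The example shows why breadth of support matters under this rule: 100 donors of 1 yield a raw match of 100² − 100 = 9,900, while a single donor of 100 yields 10² − 100 = 0.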
The initiative will build on open source quadratic funding infrastructure already deployed for other uses, such as funding public goods on Ethereum. Practical web3 tools can help the administration manage funding rounds and disburse awards…(More)”.
EBP+: Integrating science into policy evaluation using Evidential Pluralism
Article by Joe Jones, Alexandra Trofimov, Michael Wilde & Jon Williamson: “…While the need to integrate scientific evidence in policymaking is clear, there isn’t a universally accepted framework for doing so in practice. Orthodox evidence-based approaches take Randomised Controlled Trials (RCTs) as the gold standard of evidence. Others argue that social policy issues require theory-based methods to understand the complexities of policy interventions. These divisions may only further decrease trust in science at this critical time.
EBP+ offers a broader framework within which both orthodox and theory-based methods can sit. EBP+ also provides a systematic account of how to integrate and evaluate these different types of evidence. EBP+ can offer consistency and objectivity in policy evaluation, and could yield a unified approach that increases public trust in scientifically informed policy…
EBP+ is motivated by Evidential Pluralism, a philosophical theory of causal enquiry that has been developed over the last 15 years. Evidential Pluralism encompasses two key claims. The first, object pluralism, says that establishing that A is a cause of B (e.g., that a policy intervention causes a specific outcome) requires establishing both that A and B are appropriately correlated and that there is some mechanism which links the two and which can account for the extent of the correlation. The second claim, study pluralism, maintains that assessing whether A is a cause of B requires assessing both association studies (studies that repeatedly measure A and B, together with potential confounders, to measure their association) and mechanistic studies (studies of features of the mechanisms linking A to B), where available…(More)”.
Narratives Online. Shared Stories in Social Media
Book by Ruth Page: “Stories are shared by millions of people online every day. They post and re-post interactions as they re-tell and respond to large-scale mediated events. These stories are important as they can bring people together, or polarise them in opposing groups. Narratives Online explores this new genre – the shared story – and uses carefully chosen case-studies to illustrate the complex processes of sharing as they are shaped by four international social media contexts: Wikipedia, Facebook, Twitter and YouTube. Building on discourse analytic research, Ruth Page develops a new framework – ‘Mediated Narrative Analysis’ – to address the large scale, multimodal nature of online narratives, helping researchers interpret the micro- and macro-level politics that are played out in computer-mediated communication…(More)”.
What Does Information Integrity Mean for Democracies?
Article by Kamya Yadav and Samantha Lai: “Democracies around the world are encountering unique challenges with the rise of new technologies. Experts continue to debate how social media has impacted democratic discourse, pointing to how algorithmic recommendations, influence operations, and cultural changes in norms of communication alter the way people consume information. Meanwhile, developments in artificial intelligence (AI) surface new concerns over how the technology might affect voters’ decision-making process. Already, we have seen its increased use in relation to political campaigning.
In the run-up to Pakistan’s 2024 general elections, former Prime Minister Imran Khan used an artificially generated speech to campaign while imprisoned. Meanwhile, in the United States, a private company used an AI-generated imitation of President Biden’s voice to discourage people from voting. In response, the Federal Communications Commission outlawed the use of AI-generated robocalls.
Evolving technologies present new threats. Disinformation, misinformation, and propaganda are all different faces of the same problem: Our information environment—the ecosystem in which we disseminate, create, receive, and process information—is not secure, and we lack coherent goals to direct policy actions. Formulating short-term, reactive policy to counter or mitigate the effects of disinformation or propaganda can only bring us so far. Beyond defending democracies from unending threats, we should also be looking at what it will take to strengthen them. This raises the question: How do we work toward building secure and resilient information ecosystems? How can policymakers and democratic governments identify policy areas that require further improvement and shape their actions accordingly?…(More)”.
How artificial intelligence can facilitate investigative journalism
Article by Luiz Fernando Toledo: “A few years ago, I worked on a project for a large Brazilian television channel whose objective was to analyze the profiles of more than 250 guardianship counselors in the city of São Paulo. These elected professionals have the mission of protecting the rights of children and adolescents in Brazil.
Critics had pointed out that some counselors did not have any expertise or prior experience working with young people and were only elected with the support of religious communities. The investigation sought to verify whether these elected counselors had professional training in working with children and adolescents or had any relationships with churches.
After requesting the counselors’ resumes through Brazil’s access to information law, a small team combed through each resume in depth—a laborious and time-consuming task. But today, this project might have required far less time and labor. Rapid developments in generative AI hold potential to significantly scale access and analysis of data needed for investigative journalism.
Many articles address the potential risks of generative AI for journalism and democracy, such as threats AI poses to the business model for journalism and its ability to facilitate the creation and spread of mis- and disinformation. No doubt there is cause for concern. But technology will continue to evolve, and it is up to journalists and researchers to understand how to use it in favor of the public interest.
I wanted to test how generative AI can help journalists, especially those who work with public documents and data. I tested several tools, including Ask Your PDF (ask questions of any document on your computer), Chatbase (create your own chatbot), and Document Cloud (upload documents and ask GPT-like questions of hundreds of documents simultaneously).
These tools make use of the same mechanism that operates OpenAI’s famous ChatGPT—large language models that create human-like text. But they analyze the user’s own documents rather than information on the internet, ensuring more accurate answers by using specific, user-provided sources…(More)”.
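The retrieve-then-ask mechanism behind such tools can be sketched roughly as follows. This is a toy illustration under stated assumptions: real tools score document chunks with embedding models rather than word overlap, and the resume text and names here are invented:

```python
import re
from collections import Counter

def chunk(text, size=40):
    """Split a document into overlapping word chunks (half-chunk stride)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def best_chunks(question, chunks, k=2):
    """Rank chunks by word overlap with the question and return the top k.
    Production tools use embedding similarity, but the pipeline is the same:
    retrieve the most relevant passages, then pass them to the LLM."""
    q = Counter(tokenize(question))
    scored = sorted(
        chunks,
        key=lambda c: sum((Counter(tokenize(c)) & q).values()),
        reverse=True,
    )
    return scored[:k]

# Hypothetical resume text, standing in for a counselor's uploaded document:
resume = ("Maria Souza. Ten years of experience in child protection services. "
          "Degree in social work. Volunteer coordinator at a community church.")
context = best_chunks("Does the candidate have experience with children?",
                      chunk(resume, size=10))
# The retrieved chunks would then be sent to the model as context,
# grounding its answer in the user's own documents.
```

Because the model answers from retrieved passages of the supplied documents rather than from its general training data, this design is what lets these tools give more accurate, source-bound answers.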