The Government of Emergency: Vital Systems, Expertise, and the Politics of Security


Book by Stephen J. Collier and Andrew Lakoff: “From pandemic disease, to the disasters associated with global warming, to cyberattacks, today we face an increasing array of catastrophic threats. It is striking that, despite the diversity of these threats, experts and officials approach them in common terms: as future events that threaten to disrupt the vital, vulnerable systems upon which modern life depends.

The Government of Emergency tells the story of how this now taken-for-granted way of understanding and managing emergencies arose. Amid the Great Depression, World War II, and the Cold War, an array of experts and officials working in obscure government offices developed a new understanding of the nation as a complex of vital, vulnerable systems. They invented technical and administrative devices to mitigate the nation’s vulnerability, and organized a distinctive form of emergency government that would make it possible to prepare for and manage potentially catastrophic events.

Through these conceptual and technical inventions, Stephen Collier and Andrew Lakoff argue, vulnerability was defined as a particular kind of problem, one that continues to structure the approach of experts, officials, and policymakers to future emergencies…(More)”.

Navigating Trust in Society


Report by Coeuraj: “This report provides empirical evidence of existing levels of trust among the US population with regard to institutions and philanthropy—all shaped during a time of deep polarization and a global pandemic.

The source of the data is twofold: first, a year-over-year analysis of institutional trust, as measured by Global Web Index USA from more than 20,000 respondents; and second, an ad hoc, nationally representative survey conducted by one of Coeuraj’s data partners, AudienceNet, in the two weeks immediately preceding the 2021 United Nations General Assembly. This report presents the core findings that emerged from both research initiatives….(More)”.

The Biden Administration Embraces “Democracy Affirming Technologies”


Article by Marc Rotenberg: “…But amidst the ongoing struggle between declining democracies and emerging authoritarian governments, the Democracy Summit was notable for at least one new initiative – the support for democracy affirming technology. According to the White House, the initiative “aims to galvanize worldwide a new class of technologies” that can support democratic values.  The White House plan is to bring together innovators, investors, researchers, and entrepreneurs to “embed democratic values.”  The President’s top science advisor Eric Lander provided more detail. Democratic values, he said, include “privacy, freedom of expression, access to information, transparency, fairness, inclusion, and equity.”

In order to spur more rapid technological progress, the White House Office of Science and Technology Policy announced three Grand Challenges for Democracy-Affirming Technologies. They are:

  • A collaboration between U.S. and UK agencies to promote “privacy enhancing technologies” that “harness the power of data in a secure manner that protects privacy and intellectual property, enabling cross-border and cross-sector collaboration to solve shared challenges.”
  • Censorship circumvention tools, based on peer-to-peer techniques that enable content-sharing and communication without an Internet or cellular connection. The Open Technology Fund, an independent NGO, will invite international participants to compete on promising P2P technologies to counter Internet shutdowns.
  • A Global Entrepreneurship Challenge will seek to identify entrepreneurs who build and advance democracy-affirming technologies through a set of regional startup and scaleup competitions in countries spanning the democratic world. According to the White House, specific areas of innovation may include: data for policymaking, responsible AI and machine learning, fighting misinformation, and advancing government transparency and accessibility of government data and services.

USAID Administrator Samantha Power said her agency would spend $20 million annually to expand digital democracy work. “We’ll use these funds to help partner nations align their rules governing the use of technology with democratic principles and respect for human rights,” said the former U.S. Ambassador to the United Nations. Notably, Power also said the U.S. will take a closer look at export practices to “prevent technologies from falling into hands that would misuse them.” The U.S., along with Denmark, Norway, and Australia, will launch a new Export Controls and Human Rights Initiative. Power also seeks to align surveillance practices of democratic nations with the Universal Declaration of Human Rights….(More)”.

A Framework for Open Civic Design: Integrating Public Participation, Crowdsourcing, and Design Thinking


Paper by Brandon Reynante, Steven P. Dow, Narges Mahyar: “Civic problems are often too complex to solve through traditional top-down strategies. Various governments and civic initiatives have explored more community-driven strategies where citizens get involved with defining problems and innovating solutions. While certain people may feel more empowered, the public at large often does not have accessible, flexible, and meaningful ways to engage. Prior theoretical frameworks for public participation typically offer a one-size-fits-all model based on face-to-face engagement and fail to recognize the barriers faced by even the most engaged citizens. In this article, we explore a vision for open civic design where we integrate theoretical frameworks from public engagement, crowdsourcing, and design thinking to consider the role technology can play in lowering barriers to large-scale participation, scaffolding problem-solving activities, and providing flexible options that cater to individuals’ skills, availability, and interests. We describe our novel theoretical framework and analyze the key goals associated with this vision: (1) to promote inclusive and sustained participation in civics; (2) to facilitate effective management of large-scale participation; and (3) to provide a structured process for achieving effective solutions. We present case studies of existing civic design initiatives and discuss challenges, limitations, and future work related to operationalizing, implementing, and testing this framework…(More)”.

Law Enforcement and Technology: Using Social Media


Congressional Research Service Report: “As the ways in which individuals interact continue to evolve, social media has had an increasing role in facilitating communication and the sharing of content online—including moderated and unmoderated, user-generated content. Over 70% of U.S. adults are estimated to have used social media in 2021. Law enforcement has also turned to social media to help in its operations. Broadly, law enforcement relies on social media as a tool for information sharing as well as for gathering information to assist in investigations.


Social Media as a Communications Tool. Social media is one of many tools law enforcement can use to connect with the community. They may use it, for instance, to push out bulletins on wanted persons and establish tip lines to crowdsource potential investigative leads. It provides degrees of speed and reach unmatched by many other forms of communication law enforcement can use to connect with the public. Officials and researchers have highlighted social media as a tool that, if used properly, can enhance community policing.

Social Media and Investigations. Social media is one tool in agencies’ investigative toolkits to help establish investigative leads and assemble evidence on potential suspects. There are no federal laws that specifically govern law enforcement agencies’ use of information obtained from social media sites, but their ability to obtain or use certain information may be influenced by social media companies’ policies as well as law enforcement agencies’ own social media policies and the rules of criminal procedure. When individuals post content on social media platforms without audience restrictions, anyone— including law enforcement—can access this content without court authorization. However, some information that individuals post on social media may be restricted—by user choice or platform policies—in the scope of audience that may access it. In the instances where law enforcement does not have public access to information, they may rely on a number of tools and techniques, such as informants or undercover operations, to gain access to it. Law enforcement may also require social media platforms to provide access to certain restricted information through a warrant, subpoena, or other court order.

Social Media and Intelligence Gathering. The use of social media to gather intelligence has generated particular interest from policymakers, analysts, and the public. Social media companies have weighed in on the issue of social media monitoring by law enforcement, and some platforms have modified their policies to expressly prohibit their user data from being used by law enforcement to monitor social media. Law enforcement agencies themselves have reportedly grappled with the extent to which they should gather and rely on information and intelligence gleaned from social media. For instance, some observers have suggested that agencies may be reluctant to regularly analyze public social media posts because that could be viewed as spying on the American public and could subsequently chill free speech protected under the First Amendment…(More)”.

Interoperable, agile, and balanced


Brookings Paper on Rethinking technology policy and governance for the 21st century: “Emerging technologies are shifting market power and introducing a range of risks that can only be managed through regulation. Unfortunately, current approaches to governing technology are insufficient and fragmented, and lack focus on actionable goals. This paper proposes three tools that can be leveraged to support fit-for-purpose technology regulation for the 21st century: first, transparent and holistic policymaking levers that clearly communicate goals and identify trade-offs at the national and international levels; second, revamped efforts to collaborate across jurisdictions, particularly through standard-setting and evidence gathering of critical incidents; and third, a shift towards agile governance, whether acquired through the system, design, or both…(More)”.

A data-based participatory approach for health equity and digital inclusion: prioritizing stakeholders


Paper by Aristea Fotopoulou, Harriet Barratt, and Elodie Marandet: “This article starts from the premise that projects informed by data science can address social concerns, beyond prioritizing the design of efficient products or services. How can we bring the stakeholders and their situated realities back into the picture? It is argued that data-based, participatory interventions can improve health equity and digital inclusion while avoiding the pitfalls of top-down, technocratic methods. A participatory framework puts users, patients and citizens as stakeholders at the centre of the process, and can offer complex, sustainable benefits, which go beyond simply the experience of participation or the development of an innovative design solution. A significant benefit for example is the development of skills, which should not be seen as a by-product of the participatory processes, but a central element of empowering marginalized or excluded communities to participate in public life. By drawing from different examples in various domains, the article discusses what can be learnt from implementations of schemes using data science for social good, human-centric design, arts and wellbeing, to argue for a data-centric, creative and participatory approach to address health equity and digital inclusion in tandem…(More)”.

Data in Collective Impact: Focusing on What Matters


Article by Justin Piff: “One of the five conditions of collective impact, “shared measurement systems,” calls upon initiatives to identify and share key metrics of success that align partners toward a common vision. While the premise that data should guide shared decision-making is not unique to collective impact, its articulation 10 years ago as a necessary condition for collective impact catalyzed a focus on data use across the social sector. In the original article on collective impact in Stanford Social Innovation Review, the authors describe the benefits of using consistent metrics to identify patterns, make comparisons, promote learning, and hold actors accountable for success. While this vision for data collection remains relevant today, the field has developed a more nuanced understanding of how to make it a reality….

Here are four lessons from our work to help collective impact initiatives and their funders use data more effectively for social change.

1. Prioritize the Learning, Not the Data System

Those of us who are “data people” have espoused the benefits of shared data systems and common metrics too many times to recount. But a shared measurement system is only a means to an end, not an end in itself. Too often, new collective impact initiatives focus on creating the mythical, all-knowing data system—spending weeks, months, and even years researching or developing the perfect software that captures, aggregates, and computes data from multiple sectors. They let the perfect become the enemy of the good, as the pursuit of perfect data and technical precision inhibits meaningful action. And communities pay the price.

Using data to solve complex social problems requires more than a technical solution. Many communities in the US have more data than they know what to do with, yet they rarely spend time thinking about the data they actually need. Before building a data system, partners must focus on how they hope to use data in their work and identify the sources and types of data that can help them achieve their goals. Once those data are identified and collected, partners, residents, students, and others can work together to develop a shared understanding of what the data mean and move forward. In Connecticut, the Hartford Data Collaborative helps community agencies and leaders do just this. For example, it has matched programmatic data against Hartford Public Schools data and National Student Clearinghouse data to get a clear picture of postsecondary enrollment patterns across the community. The data also capture services provided to residents across multiple agencies and can be disaggregated by gender, race, and ethnicity to identify and address service gaps….(More)”.

Updated Selected Readings on Inaccurate Data, Half-Truths, Disinformation, and Mob Violence


By Fiona Cece, Uma Kalkar, Stefaan Verhulst, and Andrew J. Zahuranec

As part of an ongoing effort to contribute to current topics in data, technology, and governance, The GovLab’s Selected Readings series provides an annotated and curated collection of recommended works on themes such as open data, data collaboration, and civic technology.

In this edition, we reflect on the one-year anniversary of the January 6, 2021 Capitol Hill Insurrection and its implications for disinformation and the misuse of data in support of malicious objectives. This edition builds on the previous one, published last year, on misinformation’s effect on violence and riots. Readings are listed in alphabetical order. New additions are highlighted in green.

The mob attack on the US Congress was alarming and the result of various efforts to undermine the trust in and legitimacy of longstanding democratic processes and institutions. The use of inaccurate data, half-truths, and disinformation to spread hate and division is considered a key driver behind last year’s attack. Altering data to support conspiracy theories or challenging and undermining the credibility of trusted data sources to allow for alternative narratives to flourish, if left unchallenged, has consequences — including the increased acceptance and use of violence both offline and online.

The January 6th insurrection was unfortunately not a unique event, nor was it contained to the United States. While efforts to bring perpetrators of the attack to justice have been fruitful, much work remains to be done to address the willful dissemination of disinformation online. Below, we provide a curation of findings and readings that illustrate the global danger of inaccurate data, half-truths, and disinformation. In addition, The GovLab, in partnership with the OECD, has explored data-actionable questions around how disinformation can spread across and affect society, and ways to mitigate it. Learn more at disinformation.the100questions.org.

To suggest additional readings on this or any other topic, please email info@thelivinglib.org. All our Selected Readings can be found here.

Readings and Annotations

Al-Zaman, Md. Sayeed. “Digital Disinformation and Communalism in Bangladesh.” China Media Research 15, no. 2 (2019): 68–76.

  • Md. Sayeed Al-Zaman, Lecturer at Jahangirnagar University in Bangladesh, discusses how the country’s increasing number of “netizens” are being manipulated by online disinformation and inciting violence along religious lines. Social media helps quickly spread Anti-Hindu and Buddhist rhetoric, inflaming religious divisions between these groups and Bangladesh’s Muslim majority, impeding possibilities for “peaceful coexistence.”
  • Swaths of online information make it difficult to fact-check, and alluring stories that feed on people’s fear and anxieties are highly likely to be disseminated, leading to a spread of rumors across Bangladesh. Moreover, disruptors and politicians wield religion to target citizens’ emotionality and create violence.
  • Al-Zaman recounts two instances of digital disinformation and communalism. First, in 2016, following a Facebook post supposedly criticizing Islam, riots destroyed 17 temples and 100 houses in Nasrinagar and led to protests in neighboring villages. While the exact source of the disinformation post was never confirmed, a man was beaten and jailed for it despite a lack of robust evidence of his wrongdoing. Second, in 2012, after a Facebook post circulated an image of someone desecrating the Quran and tagged a Buddhist youth in the picture, 12 Buddhist monasteries and 100 houses in Ramu were destroyed. Through social media, a mob of over 6,000 people, including local Muslim community leaders, attacked the town of Ramu. Later investigation found that the image had been doctored and spread by an Islamic extremist group member in a coordinated attack, manipulating Islamic religious sentiment via fake news to target Buddhist minorities.

Banaji, Shakuntala, and Ram Bhat. “WhatsApp Vigilantes: An exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India.” London School of Economics and Political Science, 2019.

  • London School of Economics and Political Science Associate Professor Shakuntala Banaji and Researcher Ram Bhat articulate how discriminated groups (Dalits, Muslims, Christians, and Adivasis) have been targeted by peer-to-peer communications spreading allegations of bovine-related issues, child-snatching, and organ harvesting, culminating in violence against these groups with fatal consequences.
  • WhatsApp messages work in tandem with ideas, tropes, messages, and stereotypes already in the public domain, providing “verification” of fake news.
  • WhatsApp use is gendered, and users are predisposed to believe misinformation and spread misinformation, particularly if it targets a discriminated group that they already have negative and discriminatory feelings towards.
  • Among most WhatsApp users, civic trust is based on ideological, family, and community ties.
  • Restricting sharing, tracking, and reporting of misinformation using “beacon” features and imposing penalties on groups can serve to mitigate the harmful effects of fake news.

Funke, Daniel, and Susan Benkelman. “Misinformation is inciting violence around the world. And tech platforms don’t seem to have a plan to stop it.” Poynter, April 4, 2019.

  • Misinformation leading to violence has been on the rise worldwide. PolitiFact writer Daniel Funke and Susan Benkelman, former Director of Accountability Journalism at the American Press Institute, point to mob violence against Roma in France after rumors of kidnapping attempts circulated on Facebook and Snapchat; the immolation of two men in Puebla, Mexico, following fake news spread on WhatsApp of a gang of organ harvesters on the prowl; and false kidnapping claims sent through WhatsApp fueling lynch mobs in India.
  • Slow (re)action to fake news allows mis/disinformation to prey on vulnerable people and infiltrate society. Examples covered in the article discuss how fake news preys on older Americans who lack strong digital literacy. Virulent online rumors have made it difficult for citizens to separate fact from fiction during the Indian general election. Foreign adversaries like Russia are bribing Facebook users for their accounts in order to spread false political news in Ukraine.
  • The article notes that increases in violence caused by disinformation are doubly enabled by “a lack of proper law enforcement” and inaction by technology companies. Facebook, YouTube, and WhatsApp have no coordinated, comprehensive plans to fight fake news and attempt to shift responsibility to “fact-checking partners.” Troublingly, it appears that some platforms deliberately delay the removal of mis/disinformation to attract more engagement. Only when facing intense pressure from policymakers does it seem that these companies remove misleading information.

Kyaw, Nyi Nyi. “Facebooking in Myanmar: From Hate Speech to Fake News to Partisan Political Communication.” ISEAS — Yusof Ishak Institute, no. 36 (2019): 1–10.

  • In the past decade, the number of plugged-in Myanmar citizens has skyrocketed to 39% of the population. All of these 21 million internet users are active on Facebook, where much political rhetoric occurs. Widespread fake news disseminated through Facebook has led to an increase in anti-Muslim sentiment and the spread of misleading, inflammatory headlines.
  • Attempts to curtail fake news on Facebook are difficult. In Myanmar, a developing country where “the rule of law is weak,” monitoring and regulation on social media is not easily enforceable. Criticism from Myanmar and international governments and civil society organizations resulted in Facebook banning and suspending fake news accounts and pages and employing stricter, more invasive monitoring of citizen Facebook use — usually without their knowledge. However, despite Facebook’s key role in agitating and spreading fake news, no political or oversight bodies have “explicitly held the company accountable.”
  • Nyi Nyi Kyaw, Visiting Fellow at the Yusof Ishak Institute in Singapore, notes a cyber law initiative set in motion by the Myanmar government to strengthen social media monitoring methods but is wary of Myanmar’s “human and technological capacity” to enforce these regulations.

Lewandowsky, Stephan, and Sander van der Linden. “Countering Misinformation and Fake News Through Inoculation and Prebunking.” European Review of Social Psychology 32, no. 2 (2021): 348–384.

  • Researchers Stephan Lewandowsky and Sander van der Linden present a scan of conventional instances and tools to combat misinformation. They note the staying power and spread of sensational sound bites, especially in the political arena, and their real-life consequences on problems such as anti-vaccination campaigns, ethnically charged violence in Myanmar, and mob lynchings in India spurred by WhatsApp rumors.
  • To proactively stop misinformation, the authors introduce the psychological theory of “inoculation,” which forewarns people that they have been exposed to misinformation and alerts them to the ways by which they could be misled to make them more resilient to false information. The paper highlights numerous successes of inoculation in combating misinformation and presents it as a strategy to prevent disinformation-fueled violence.
  • The authors then discuss best strategies to deploy fake news inoculation and generate “herd” cognitive immunity in the face of microtargeting and filter bubbles online.

Osmundsen, Mathias, Alexander Bor, Peter Bjerregaard Vahlstrup, Anja Bechmann, and Michael Bang Petersen. “Partisan Polarization Is the Primary Psychological Motivation behind ‘Fake News’ Sharing on Twitter.” American Political Science Review 115, no. 3 (2021): 999–1015.

  • Mathias Osmundsen and colleagues explore the proliferation of fake news on digital platforms. Are those who share fake news “ignorant and lazy,” malicious actors, or playing political games online? Through a psychological mapping of over 2,000 Twitter users across 500,000 stories, the authors find that disruption and polarization fuel fake news dissemination more so than ignorance.
  • Given the increasingly polarized American landscape, spreading fake news can help spread “partisan feelings,” increase interparty social and political cohesion, and call supporters to incendiary and violent action. Thus, misinformation prioritizes usefulness in reaching end goals over the accuracy and veracity of information.
  • Overall, the authors find that those with low political awareness and media literacy are the least likely to share fake news. While older individuals were more likely to share fake news, the inability to distinguish real from fake information was not a major motivator of the spread of misinformation.
  • For the most part, those who share fake news are knowledgeable about the political sphere and online spaces. They are primarily motivated to ‘troll’ or create online disruption, or to further their partisan stance. In the United States, right-leaning individuals are more likely to follow fake news because they “must turn to more extreme news sources” to find information aligned with their politics, while left-leaning people can find more credible sources from liberal and centrist outlets.

Piazza, James A. “Fake news: the effects of social media disinformation on domestic terrorism.” Dynamics of Asymmetric Conflict (2021): 1-23.

  • James A. Piazza of Pennsylvania State University examines the role of online misinformation in driving distrust, political extremism, and political violence. He reviews some of the ongoing literature on online misinformation and disinformation in driving these and other adverse outcomes.
  • Using data on incidents of terrorism from the Global Terrorism Database and three independent measures of disinformation derived from the Digital Society Project, Piazza finds “disinformation propagated through online social media outlets is statistically associated with increases in domestic terrorism in affected countries. The impact of disinformation on terrorism is mediated, significantly and substantially, through increased political polarization.”
  • Piazza notes that his results support other literature that shows the real-world effects of online disinformation. He emphasizes the need for further research and investigation to better understand the issue.

Posetti, Julie, Nermine Aboulez, Kalina Bontcheva, Jackie Harrison, and Silvio Waisbord. “Online violence Against Women Journalists: A Global Snapshot of Incidence and Impacts.” United Nations Educational, Scientific and Cultural Organization, 2020.

  • The survey focuses on the incidence of, impacts of, and responses to online violence against women journalists resulting from “coordinated disinformation campaigns leveraging misogyny and other forms of hate speech.” There were 901 respondents, hailing from 125 countries and covering various ethnicities.
  • 73% of women journalists reported facing online violence and harassment in the course of their work, suggesting escalating gendered violence against women in online media.
  • The impact of COVID-19 and populist politics is evident in the gender-based harassment and disinformation campaigns, the source of which is traced to political actors (37%) or anonymous/troll accounts (57%).
  • Investigative reporting on gender issues, politics and elections, immigration and human rights abuses, or fake news itself seems to attract online retaliation and targeted disinformation campaigns against the reporters.

Rajeshwari, Rema. “Mob Lynching and Social Media.” Yale Journal of International Affairs, June 1, 2019.

  • District Police Chief of Jogulamba Gadwal, India, and Yale World Fellow (’17) Rema Rajeshwari writes about how misinformation and disinformation are becoming a growing problem and security threat in India. The fake news phenomenon has spread hatred, fueled sectarian tensions, and continues to diminish social trust in society.
  • One example of this can be found in Jogulamba Gadwal, where videos and rumors were spread throughout social media about how the Parthis, a stigmatized tribal group, were committing acts of violence in the village. This led to a series of mob attacks and killings — “thirty-three people were killed in sixty-nine mob attacks since January 2018 due to rumors” — that could be traced to rumors spread on social media.
  • More importantly, however, Rajeshwari elaborates on how self-regulation and local campaigns can be used as an effective intervention for mis/dis-information. As a police officer, Rajeshwari fought a battle that was both online and on the ground, including the formation of a group of “tech-savvy” cops who could monitor local social media content and flag inaccurate and/or malicious posts, and mobilizing local WhatsApp groups alongside village headmen who could encourage community members to not forward fake messages. These interventions effectively combined local traditions and technology to achieve an “early warning-focused deterrence.”

Taylor, Luke. “Covid-19 Misinformation Sparks Threats and Violence against Doctors in Latin America.” BMJ (2020): m3088.

  • Journalist Luke Taylor details the many incidents of how disinformation campaigns across Latin America have resulted in the mistreatment of health care workers during the Coronavirus pandemic. Examining case studies from Mexico and Colombia, Taylor finds that these mis/disinformation campaigns have resulted in health workers receiving death threats and being subject to acts of aggression.
  • One instance of this link between disinformation and acts of aggression is the 47 reported cases of aggression toward health workers in Mexico, along with 265 reported complaints against health workers. The National Council to Prevent Discrimination noted these acts were the result of a loss of trust in government and government institutions, which was further exacerbated by conspiracy theories that circulated on WhatsApp and other social media channels.
  • Another example of false narratives can be seen in Colombia, where a politician theorized that a “covid cartel” of doctors was admitting COVID-19 patients to ICUs in order to receive payments (e.g., a cash payment of ~17,000 Colombian pesos for every dead patient with a covid-19 diagnosis). This false narrative of doctors being incentivized to increase beds for COVID-19 patients quickly spread across social media platforms, leading many of those who were ill to avoid seeking care. This rumor also led to doctors in Colombia receiving death threats and acts of intimidation.

“The Danger of Fake News in Inflaming or Suppressing Social Conflict.” Center for Information Technology and Society — University of California Santa Barbara, n.d.

  • The article provides case studies of how fake news can be used to intensify social conflict for political gains (e.g., by distracting citizens from having a conversation about critical issues and undermining the democratic process).
  • The cases elaborated upon are 1) Pizzagate: a fake news story that linked human trafficking to a presidential candidate and a political party, and ultimately led to a shooting; 2) Russia’s Internet Research Agency: Russian agents created social media accounts to spread fake news that favored Donald Trump during the 2016 election, and even instigated online protests about social issues (e.g., a BLM protest); and 3) Cambridge Analytica: a British company that used unauthorized social media data for sensationalistic and inflammatory targeted US political advertisements.
  • Notably, it points out that fake news undermines a citizen’s ability to participate in the democratic process and make accurate decisions in important elections.

Tworek, Heidi. “Disinformation: It’s History.” Center for International Governance Innovation, July 14, 2021.

  • While some public narratives frame online disinformation and its influence on real-world violence as unprecedented and unparalleled, Professor Heidi Tworek of the University of British Columbia points out that “assumptions about the history of disinformation” have influenced (and continue to influence) policymaking to combat fake news. She argues that today’s seemingly unprecedented events are rooted in tactics similar to those of the past, noting, for example, how Finnish policymakers invested in a national communications strategy to fight foreign disinformation coming from Russia and the Soviet Union.
  • She emphasizes the power of learning from historical events to guide modern methods of fighting political misinformation. Connecting today’s concerns about election fraud, foreign interference, and conspiracy theories to those of the past, such as Soviet and American Cold War practices of “funding magazines [and] spreading rumors” to further anti-opposition sentiment and hatred, reinforces that disinformation is a long-standing problem.

Ward, Megan, and Jessica Beyer. “Vulnerable Landscapes: Case Studies of Violence and Disinformation.” Wilson Center, August 2019.

  • This article discusses instances where disinformation inflamed existing social, political, and ideological cleavages and ultimately caused violence. Specifically, it elaborates on cases from the US-Mexico border, India, Sri Lanka, and three Latin American elections.
  • Though the cases are meant to be illustrative and highlight the spread of disinformation globally, the violence in these cases was shown to be affected by the distinct social fabric of each place. Their findings lend credence to the idea that disinformation helped spark violence in places that were already vulnerable and tense.
  • Indeed, because disinformation can now be distributed so quickly via social media, coupled with declining trust in public institutions, low levels of media literacy, meager actions by social media companies, and government actors who exploit disinformation for political gain, such cases have risen globally. It is the interaction of factors such as distrust in traditional media and public institutions, lack of content moderation on social media, and ethnic divides that renders societies vulnerable and susceptible to violence.
  • One example is the US-Mexico border, where disinformation campaigns have built on pre-existing xenophobia and led to instances of mob violence and mass shootings. Inflamed by disinformation campaigns claiming that migrant caravans contained criminals (e.g., the invasion narratives often used to describe migrant caravans), the armed group United Constitutional Patriots (UCP) impersonated law enforcement and detained migrants at the US border, often turning them over to border officials. Members of the UCP have since been arrested by the FBI for impersonating law enforcement.

We welcome other sources we may have missed — please share any suggested additions with us at datastewards [at] thegovlab.org or The GovLab on Twitter.

Toward A Collaborative Smart City: A Play-Based Urban Living Laboratory in Boston


Paper by Eric Gordon, John Harlow, Melissa Teng & Elizabeth Christoferetti: “This article reports on an urban living laboratory that designed a suite of play-based prototypes in an attempt to “institution” collaborative smart city governance in the city of Boston. This project, called “Beta Blocks,” geographically defined “Exploration Zones” governed by local residents and business owners, who decided whether, where, and why to temporarily install technologies in the public realm. To recruit and facilitate the participation of Zone Advisory Group members, the authors fabricated a lavender, parking-space-sized, inflatable art exhibition (Beta Blob) that hosted a suite of public-facing activities. Although the composite model failed at “institutioning” itself into Boston’s government through this prototype, the discrete components succeeded in centering play in public learning situations and prototyping a model for collaborative governance between publics, and between the public and private sectors…(More)”.