Paper by Zizheng Yu, Emiliano Treré, and Tiziano Bonini: “This study explores how Chinese riders game the algorithm-mediated governing system of food delivery service platforms and how they mobilize WeChat to build solidarity networks to assist each other and better cope with the platform economy. We rely on 12 interviews with Chinese riders from 4 platforms (Meituan, Eleme, SF Express and Flash EX) in 5 cities, and draw on a 4-month online observation of 7 private WeChat groups. The article provides a detailed account of the gamification ranking and competition techniques employed by delivery platforms to drive the riders to achieve efficiency and productivity gains. Then, it critically explores how Chinese riders adapt and react to the algorithmic systems that govern their work by setting up private WeChat groups and developing everyday practices of resilience and resistance. This study demonstrates that Chinese riders working for food delivery platforms incessantly create a complex repertoire of tactics and develop hidden transcripts to resist the algorithmic control of digital platforms….(More)”.
Report commissioned by the European Data Protection Board: “The present report is part of a study analysing the implications for the work of the European Union (EU)/European Economic Area (EEA) data protection supervisory authorities (SAs) in relation to transfers of personal data to third countries after the Court of Justice of the European Union (CJEU) judgment C-311/18 in Data Protection Commissioner v. Facebook Ireland Ltd, Maximilian Schrems (Schrems II). Data controllers and processors may transfer personal data to third countries or international organisations only if the controller or processor has provided appropriate safeguards, and on the condition that enforceable data subject rights and effective legal remedies for data subjects are available.
Whereas it is the primary responsibility of data exporters and data importers to assess that the legislation of the country of destination enables the data importer to comply with any of the appropriate safeguards, SAs will play a key role when issuing further decisions on transfers to third countries. Hence, this report provides the European Data Protection Board (EDPB) and the SAs in the EEA/EU with information on the legislation and practice in China, India and Russia on their governments’ access to personal data processed by economic operators. The report contains an overview of the relevant information in order for the SAs to assess whether and to what extent legislation and practices in the abovementioned countries imply massive and/or indiscriminate access to personal data processed by economic operators…(More)”.
By Fiona Cece, Uma Kalkar, Stefaan Verhulst, and Andrew J. Zahuranec
As part of an ongoing effort to contribute to current topics in data, technology, and governance, The GovLab’s Selected Readings series provides an annotated and curated collection of recommended works on themes such as open data, data collaboration, and civic technology.
In this edition, we reflect on the one-year anniversary of the January 6, 2021 Capitol Hill Insurrection and the role disinformation and data misuse played in supporting malicious objectives. This selected reading builds on a previous edition, published last year, on misinformation’s effect on violence and riots. Readings are listed in alphabetical order.
The mob attack on the US Congress was alarming and the result of various efforts to undermine the trust in and legitimacy of longstanding democratic processes and institutions. The use of inaccurate data, half-truths, and disinformation to spread hate and division is considered a key driver behind last year’s attack. Altering data to support conspiracy theories or challenging and undermining the credibility of trusted data sources to allow for alternative narratives to flourish, if left unchallenged, has consequences — including the increased acceptance and use of violence both offline and online.
The January 6th insurrection was unfortunately not a unique event, nor was it contained to the United States. While efforts to bring perpetrators of the attack to justice have been fruitful, much work remains to be done to address the willful dissemination of disinformation online. Below, we provide a curation of findings and readings that illustrate the global danger of inaccurate data, half-truths, and disinformation. The GovLab, in partnership with the OECD, has also explored data-actionable questions around how disinformation can spread across and affect society, and ways to mitigate it. Learn more at disinformation.the100questions.org.
To suggest additional readings on this or any other topic, please email firstname.lastname@example.org. All our Selected Readings can be found here.
Readings and Annotations
- Md. Sayeed Al-Zaman, Lecturer at Jahangirnagar University in Bangladesh, discusses how the country’s increasing number of “netizens” are being manipulated by online disinformation and incited to violence along religious lines. Social media helps quickly spread anti-Hindu and anti-Buddhist rhetoric, inflaming religious divisions between these groups and Bangladesh’s Muslim majority and impeding possibilities for “peaceful coexistence.”
- The sheer volume of online information makes fact-checking difficult, and alluring stories that feed on people’s fears and anxieties are highly likely to be disseminated, spreading rumors across Bangladesh. Moreover, disruptors and politicians wield religion to target citizens’ emotions and incite violence.
- Al-Zaman recounts two instances of digital disinformation and communalism. First, in 2016, following a Facebook post supposedly criticizing Islam, riots destroyed 17 temples and 100 houses in Nasirnagar and led to protests in neighboring villages. While the exact source of the post was never confirmed, a man was beaten and jailed for it despite a lack of robust evidence of his wrongdoing. Second, in 2012, after a Facebook post circulated an image of a desecrated Quran and tagged a Buddhist youth in the picture, a mob of over 6,000 people, including local Muslim community leaders, attacked the town of Ramu, destroying 12 Buddhist monasteries and 100 houses. Later investigation found that the image had been doctored and spread by a member of an Islamic extremist group in a coordinated attack, manipulating Islamic religious sentiment via fake news to target Buddhist minorities.
Banaji, Shakuntala, and Ram Bhat. “WhatsApp Vigilantes: An exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India.” London School of Economics and Political Science, 2019.
- London School of Economics and Political Science Associate Professor Shakuntala Banaji and Researcher Ram Bhat articulate how discriminated groups (Dalits, Muslims, Christians, and Adivasis) have been targeted by peer-to-peer communications spreading allegations of bovine-related issues, child-snatching, and organ harvesting, culminating in violence against these groups with fatal consequences.
- WhatsApp messages work in tandem with ideas, tropes, messages, and stereotypes already in the public domain, providing “verification” of fake news.
- WhatsApp use is gendered, and users are predisposed to believe and spread misinformation, particularly when it targets a discriminated group towards which they already hold negative and discriminatory feelings.
- Among most WhatsApp users, civic trust is based on ideological, family, and community ties.
- Restricting sharing, tracking, and reporting of misinformation using “beacon” features and imposing penalties on groups can serve to mitigate the harmful effects of fake news.
- Misinformation leading to violence has been on the rise worldwide. PolitiFact writer Daniel Funke and Susan Benkelman, former Director of Accountability Journalism at the American Press Institute, point to mob violence against Roma in France after rumors of kidnapping attempts circulated on Facebook and Snapchat; the immolation of two men in Puebla, Mexico following fake news spread on WhatsApp of a gang of organ harvesters on the prowl; and false kidnapping claims sent through WhatsApp fueling lynch mobs in India.
- Slow (re)action to fake news allows mis/disinformation to prey on vulnerable people and infiltrate society. Examples covered in the article discuss how fake news preys on older Americans who lack strong digital literacy. Virulent online rumors have made it difficult for citizens to separate fact from fiction during the Indian general election. Foreign adversaries like Russia are bribing Facebook users for their accounts in order to spread false political news in Ukraine.
- The article notes that increases in violence caused by disinformation are doubly enabled by “a lack of proper law enforcement” and inaction by technology companies. Facebook, YouTube, and WhatsApp have no coordinated, comprehensive plans to fight fake news and attempt to shift responsibility to “fact-checking partners.” Troublingly, it appears that some platforms deliberately delay the removal of mis/disinformation to attract more engagement. Only once facing intense pressure from policymakers does it seem that these companies remove misleading information.
- In the past decade, the number of plugged-in Myanmar citizens has skyrocketed to 39% of the population. Nearly all of these 21 million internet users are active on Facebook, where much political rhetoric occurs. Widespread fake news disseminated through Facebook has led to an increase in anti-Muslim sentiment and the spread of misleading, inflammatory headlines.
- Attempts to curtail fake news on Facebook are difficult. In Myanmar, a developing country where “the rule of law is weak,” monitoring and regulation on social media is not easily enforceable. Criticism from Myanmar and international governments and civil society organizations resulted in Facebook banning and suspending fake news accounts and pages and employing stricter, more invasive monitoring of citizen Facebook use — usually without their knowledge. However, despite Facebook’s key role in agitating and spreading fake news, no political or oversight bodies have “explicitly held the company accountable.”
- Nyi Nyi Kyaw, Visiting Fellow at the Yusof Ishak Institute in Singapore, notes a cyber law initiative set in motion by the Myanmar government to strengthen social media monitoring methods but is wary of Myanmar’s “human and technological capacity” to enforce these regulations.
- Researchers Stephan Lewandowsky and Sander van der Linden present a scan of recent instances of misinformation and tools to combat it. They note the staying power and spread of sensational sound bites, especially in the political arena, and their real-life consequences for problems such as anti-vaccination campaigns, ethnically charged violence in Myanmar, and mob lynchings in India spurred by WhatsApp rumors.
- To proactively stop misinformation, the authors introduce the psychological theory of “inoculation,” which forewarns people that they have been exposed to misinformation and alerts them to the ways by which they could be misled to make them more resilient to false information. The paper highlights numerous successes of inoculation in combating misinformation and presents it as a strategy to prevent disinformation-fueled violence.
- The authors then discuss best strategies to deploy fake news inoculation and generate “herd” cognitive immunity in the face of microtargeting and filter bubbles online.
Osmundsen, Mathias, Alexander Bor, Peter Bjerregaard Vahlstrup, Anja Bechmann, and Michael Bang Petersen. “Partisan polarization is the primary psychological motivation behind ‘fake news’ sharing on Twitter.” American Political Science Review 115, no. 3 (2020): 999-1015.
- Mathias Osmundsen and colleagues explore the proliferation of fake news on digital platforms. Are those who share fake news “ignorant and lazy,” malicious actors, or playing political games online? Through a psychological mapping of over 2,000 Twitter users across 500,000 stories, the authors find that disruption and polarization fuel fake news dissemination more so than ignorance.
- Given the increasingly polarized American landscape, spreading fake news can help spread “partisan feelings,” increase intraparty social and political cohesion, and call supporters to incendiary and violent action. Thus, misinformation prioritizes usefulness in reaching end goals over accuracy and veracity of information.
- Overall, the authors find that those with low political awareness and media literacy are the least likely to share fake news. While older individuals were more likely to share fake news, an inability to distinguish real from fake information was not a major motivator of the spread of misinformation.
- For the most part, those who share fake news are knowledgeable about the political sphere and online spaces. They are primarily motivated to ‘troll’ or create online disruption, or to further their partisan stance. In the United States, right-leaning individuals are more likely to follow fake news because they “must turn to more extreme news sources” to find information aligned with their politics, while left-leaning people can find more credible sources from liberal and centrist outlets.
- James A. Piazza of Pennsylvania State University examines the role of online misinformation in driving distrust, political extremism, and political violence. He reviews some of the ongoing literature on online misinformation and disinformation in driving these and other adverse outcomes.
- Using data on incidents of terrorism from the Global Terrorism Database and three independent measures of disinformation derived from the Digital Society Project, Piazza finds “disinformation propagated through online social media outlets is statistically associated with increases in domestic terrorism in affected countries. The impact of disinformation on terrorism is mediated, significantly and substantially, through increased political polarization.”
- Piazza notes that his results support other literature that shows the real-world effects of online disinformation. He emphasizes the need for further research and investigation to better understand the issue.
Posetti, Julie, Nermine Aboulez, Kalina Bontcheva, Jackie Harrison, and Silvio Waisbord. “Online Violence Against Women Journalists: A Global Snapshot of Incidence and Impacts.” United Nations Educational, Scientific and Cultural Organization, 2020.
- The survey focuses on incidence, impacts, and responses to online violence against women journalists that is the result of “coordinated disinformation campaigns leveraging misogyny and other forms of hate speech.” There were 901 respondents, hailing from 125 countries and covering various ethnicities.
- 73% of women journalists reported facing online violence and harassment in the course of their work, suggesting escalating gendered violence against women in online media.
- The impact of COVID-19 and populist politics is evident in the gender-based harassment and disinformation campaigns, the source of which is traced to political actors (37%) or anonymous/troll accounts (57%).
- Investigative reporting on gender issues, politics and elections, immigration and human rights abuses, or fake news itself seems to attract online retaliation and targeted disinformation campaigns against the reporters.
- District Police Chief of Jogulamba Gadwal, India, and Yale World Fellow (’17) Rema Rajeshwari writes about how misinformation and disinformation are becoming a growing problem and security threat in India. The fake news phenomenon has spread hatred, fueled sectarian tensions, and continues to diminish social trust in society.
- One example of this can be found in Jogulamba Gadwal, where videos and rumors were spread throughout social media about how the Parthis, a stigmatized tribal group, were committing acts of violence in the village. This led to a series of mob attacks and killings — “thirty-three people were killed in sixty-nine mob attacks since January 2018 due to rumors” — that could be traced to rumors spread on social media.
- More importantly, however, Rajeshwari elaborates on how self-regulation and local campaigns can be used as an effective intervention for mis/dis-information. As a police officer, Rajeshwari fought a battle that was both online and on the ground, including the formation of a group of “tech-savvy” cops who could monitor local social media content and flag inaccurate and/or malicious posts, and mobilizing local WhatsApp groups alongside village headmen who could encourage community members to not forward fake messages. These interventions effectively combined local traditions and technology to achieve an “early warning-focused deterrence.”
- Journalist Luke Taylor details many incidents in which disinformation campaigns across Latin America have resulted in the mistreatment of health care workers during the coronavirus pandemic. Examining case studies from Mexico and Colombia, Taylor finds that these mis/disinformation campaigns have resulted in health workers receiving death threats and being subject to acts of aggression.
- One instance of this link between disinformation and aggression is Mexico, where 47 cases of aggression towards health workers were reported, alongside 265 complaints against health workers. The National Council to Prevent Discrimination noted these acts were the result of a loss of trust in government and government institutions, further exacerbated by conspiracy theories that circulated on WhatsApp and other social media channels.
- Another example of false narratives can be seen in Colombia, where a politician theorized that a “covid cartel” of doctors was admitting COVID-19 patients to ICUs in order to receive payments (e.g., a cash payment of ~17,000 Colombian pesos for every dead patient with a COVID-19 diagnosis). This false narrative of doctors being incentivized to fill beds with COVID-19 patients quickly spread across social media platforms, leading many of those who were ill to avoid seeking care. The rumor also led to doctors in Colombia receiving death threats and acts of intimidation.
- The article provides case studies of how fake news can be used to intensify social conflict for political gains (e.g., by distracting citizens from having a conversation about critical issues and undermining the democratic process).
- The cases elaborated upon are 1) Pizzagate: a fake news story that linked human trafficking to a presidential candidate and a political party, and ultimately led to a shooting; 2) Russia’s Internet Research Agency: Russian agents created social media accounts to spread fake news that favored Donald Trump during the 2016 election, and even instigated online protests about social issues (e.g., a BLM protest); and 3) Cambridge Analytica: a British company that used unauthorized social media data for sensationalistic and inflammatory targeted US political advertisements.
- Notably, it points out that fake news undermines a citizen’s ability to participate in the democratic process and make accurate decisions in important elections.
- While some public narratives frame online disinformation and its influence on real-world violence as “unprecedented and unparalleled” compared with occurrences in the past, Professor Heidi Tworek of the University of British Columbia points out that “assumptions about the history of disinformation” have influenced, and continue to influence, policymaking to combat fake news. She argues that today’s supposedly unprecedented events are rooted in tactics similar to those of the past, such as when Finnish policymakers invested in a national communications strategy to fight foreign disinformation coming from Russia and the Soviet Union.
- She emphasizes the power of learning from historical events to guide modern methods of fighting political misinformation. Connecting today’s concerns about election fraud, foreign interference, and conspiracy theories to those of the past, such as Cold War-era Soviet and American practices of “funding magazines [and] spreading rumors” to further anti-opposition sentiment and hatred, reinforces that disinformation is a long-standing problem.
- This article discusses instances where disinformation inflamed already existing social, political, and ideological cleavages, and ultimately caused violence. Specifically, it elaborates on instances from the US-Mexico border, India, Sri Lanka, and during the course of three Latin American elections.
- Though the cases are meant to be illustrative and highlight the spread of disinformation globally, the violence in these cases was shown to be affected by the distinct social fabric of each place. Their findings lend credence to the idea that disinformation helped spark violence in places that were already vulnerable and tense.
- Indeed, these cases have risen globally as disinformation distributed rapidly via social media interacts with declining trust in public institutions, low levels of media literacy, meager action by social media companies, and government actors who exploit disinformation for political gain. It is this interaction of factors, including distrust in traditional media, lack of content moderation on social media, and ethnic divides, that renders societies vulnerable and susceptible to violence.
- One example of this is at the US-Mexico border, where disinformation campaigns have built on pre-existing xenophobia and have led to instances of mob violence and mass shootings. Inflamed by disinformation campaigns claiming that migrant caravans contain criminals (e.g., the invasion narratives often used to describe migrant caravans), the armed group United Constitutional Patriots (UCP) impersonated law enforcement and detained migrants at the US border, often turning them over to border officials. The UCP’s leader has since been arrested by the FBI.
We welcome other sources we may have missed — please share any suggested additions with us at datastewards [at] thegovlab.org or The GovLab on Twitter.
Essay by Uma Kalkar, Andrew Young and Stefaan Verhulst in the Quarterly Inquiry of the Development Intelligence Lab: “…What are the key takeaways from our process and how does it relate to the Summit for Democracy? First, good questions serve as the bedrock for effective and data-driven decision-making across the governance ecosystem. Second, sourcing multidisciplinary and global experts allows us to paint a fuller picture of the hot-button issues and encourages a more nuanced understanding of priorities. Lastly, including the public as active participants in the process of designing questions can help to increase the legitimacy of and obtain a social impact for data efforts, as well as tap into the collective intelligence that exists across society….
A key focus for world leaders, civil society members, academics, and private sector representatives at the Summit for Democracy should not only be on how to promote open governance by democratising data and data science. It must also consider how we can democratise and improve the way we formulate and prioritise questions facing society. To paraphrase Albert Einstein’s famous quote: “If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask… for once I know the proper question, I could solve the problem in less than five minutes”….(More)”.
Book edited by Neeta Verma: “Technological innovations across the globe are bringing profound change to our society. Governments around the world are experiencing and embracing this technology-led shift. New platforms, emerging technologies, customizable products, and changing citizen demand and outlook towards government services are reshaping the whole journey. When it comes to the application of Information and Communication Technologies (ICT) in any sector, the Government of India has emerged as an early adopter of these technologies and has also focused on last-mile delivery of citizen-centric services.
Citizen Empowerment through Digital Transformation in Government takes us through the four-decade long transformational journey of various key sectors in India where ICT has played a major role in reimagining government services to citizens across the country. It touches upon the emergence of the National Informatics Centre as a premier technology institution of the Government of India and its collaborative efforts with the Central, State Governments, as well as the District level administration, to deliver best-in-class solutions.
Inspiring and informative, the book is filled with real-life transformation stories that have helped the people and the Government of India realize their vision of a digitally empowered nation…(More)”.
Article by Jiangyan Li, Juelin Yin, Wei Shi, and Xiwei Yi: “As corporate lists and awards that rank and recognize firms for superior social reputation have proliferated in recent years, the field of CSR is also replete with various types of awards given out to firms or CEOs, such as Fortune’s “Most Admired Companies” rankings and “Best 100 Companies to Work For” lists. Such awards serve to both reward and incentivize firms to become more dedicated to CSR. Prior research has primarily focused on the effects of awards on award-winning firms; however, the effectiveness and implications of such awards as incentives to non-winning firms remain understudied. Therefore, in the article “Keeping up with the Joneses: Role of CSR Awards in Incentivizing Non-Winners’ CSR,” published by Business & Society, we are curious about whether such CSR awards could successfully incentivize non-winning firms to catch up with their winning competitors.
Drawing on the awareness-motivation-capability (AMC) framework developed in the competitive dynamics literature, we use a sample of Chinese listed firms from 2009 to 2015 to investigate how competitors’ CSR award winning can influence focal firms’ CSR. The empirical results show that non-winning firms indeed improve their CSR after their competitors have won CSR awards. However, non-winning firms’ improvement in CSR may vary in different scenarios. For instance, media exposure can play an important informational role in reducing information asymmetries and inducing competitive actions among competitors; therefore, non-winning firms’ improvement in CSR is more salient when award-winning firms are more visible in the media. Meanwhile, when CSR award winners perform better financially, non-winners will be more motivated to respond to their competitors’ wins. Further, firms with a higher level of prior CSR are more capable of improving their CSR and therefore are more likely to respond to their competitors’ wins…(More)”.
Book by Rajesh Veeraraghavan: “How can development programs deliver benefits to marginalized citizens in ways that expand their rights and freedoms? Political will and good policy design are critical but often insufficient due to resistance from entrenched local power systems. In Patching Development, Rajesh Veeraraghavan presents an ethnography of one of the largest development programs in the world, the Indian National Rural Employment Guarantee Act (NREGA), and examines NREGA’s implementation in the South Indian state of Andhra Pradesh. He finds that the local system of power is extremely difficult to transform, not because of inertia, but because of coercive counter strategy from actors at the last mile and their ability to exploit information asymmetries. Upper-level NREGA bureaucrats in Andhra Pradesh do not possess the capacity to change the power axis through direct confrontation with local elites, but instead have relied on a continuous series of responses that react to local implementation and information, a process of patching development. “Patching development” is a top-down, fine-grained, iterative socio-technical process that makes local information about implementation visible through technology and enlists participation from marginalized citizens through social audits. These processes are neither neat nor orderly and have led to a contentious sphere where the exercise of power over documents, institutions and technology is intricate, fluid and highly situated. A highly original account with global significance, this book casts new light on the challenges and benefits of using information and technology in novel ways to implement development programs….(More)”.
Article by JS: “This week, thousands of Chinese tech workers are sharing information about their working schedules in an online spreadsheet. Their goal is to inform each other and new employees about overtime practices at different companies.
This initiative for work-schedule transparency, titled Working Time, has gone viral. As of Friday—just three days after the project launched—the spreadsheet has already had millions of views and over 6,000 entries. The creators also set up group chats on the Tencent-owned messaging platform QQ to invite discussion about the project—over 10,000 people have joined as participants.
This initiative comes after the explosive 996.ICU campaign of 2019, in which hundreds of thousands of tech workers in the country participated in an online effort to demand an end to the 72-hour work week—9am to 9pm, 6 days a week.
This year, multiple tech companies—with encouragement from the government—have ended overtime work practices that forced employees to work on Saturdays (or in some cases, alternating Saturdays). This has effectively ended 996, which was illegal to begin with. While an improvement, the data collected from this online spreadsheet shows that most tech workers still work long hours, either “1095” or “11105” (10am to 9pm or 11am to 10pm, 5 days a week). The spreadsheet also shows a non-negligible number of workers still working 6 days a week.
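The shorthand above packs a start hour (am), an end hour (pm), and days per week into one string. As a rough illustration of how such codes translate into weekly hours, here is a minimal sketch; the helper function is hypothetical and not part of the Working Time project, and it assumes the format described above with two-digit start hours (10, 11) read greedily:

```python
def weekly_hours(code: str) -> int:
    """Decode a schedule code like '996' (9am to 9pm, 6 days) into weekly hours.

    Assumed format: <start hour am><end hour pm><days per week>.
    """
    days = int(code[-1])          # last digit: working days per week
    rest = code[:-1]              # remaining digits: start and end hours
    if len(rest) >= 3:            # two-digit start hour, e.g. '1095', '11105'
        start, end = int(rest[:2]), int(rest[2:])
    else:                         # one-digit start hour, e.g. '996'
        start, end = int(rest[0]), int(rest[1:])
    return (end + 12 - start) * days  # end hour is pm, so add 12


print(weekly_hours("996"))    # 72-hour week decried by the 996.ICU campaign
print(weekly_hours("1095"))   # 55
print(weekly_hours("11105"))  # 55
```

Under this reading, both “1095” and “11105” come out to 55 hours a week, still well above a standard 40-hour schedule even after the retreat from 996.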
As with the 996.ICU campaign, the creators of this spreadsheet are using GitHub to circulate and share info about the project. The first commit was made on Tuesday, October 12th. Only a few days later, the repo has been starred over 9,500 times….(More)”.
Article by Shen Lu: “China’s National Congress passed the highly anticipated Personal Information Protection Law on Friday, a landmark piece of legislation that will provide Chinese citizens with significant privacy protections while also bolstering Beijing’s ambitions to set international norms in data protection.
China’s PIPL is not only key to Beijing’s vision for a next-generation digital economy; it is also likely to influence other countries currently adopting their own data protection laws.
The new law clearly draws inspiration from the European Union’s General Data Protection Regulation and, like its precursor, is an effort to respond to genuine grassroots demand for greater consumer privacy rights. But what distinguishes China’s PIPL from the GDPR and other laws on the books is its emphasis on national security, a broadly defined trump card that triggers data localization requirements and cross-border data flow restrictions….
The PIPL reinforces Beijing’s ambition to defend its digital sovereignty. If foreign entities “engage in personal information handling activities that violate the personal information rights and interests of citizens of the People’s Republic of China, or harm the national security or public interest of the People’s Republic of China,” China’s enforcement agencies may blacklist them, “limiting or prohibiting the provision of personal information to them.” And China may reciprocate against countries or regions that adopt “discriminatory prohibitions, limitations or other similar measures against the People’s Republic of China in the area of personal information protection.”…
Many Asian governments are in the process of writing or rewriting data protection laws. Vietnam, India, Pakistan and Sri Lanka have all inserted localization provisions in their respective data protection laws. “[The PIPL framework] can provide encouragement to countries that would be tempted to use the data protection law that includes data transfer provisions to add this national security component,” Girot said.
This new breed of data protection law could lead to a fragmented global privacy landscape. Localization requirements can be a headache for transnational tech companies, particularly cloud service providers. And the CAC, one of the data regulators in charge of implementing and enforcing the PIPL, is also tasked with implementing a national security policy, which could present a challenge to international cooperation….(More)”.
Nighat Dad at New Scientist: “The swift progress of the Taliban in Afghanistan has been truly shocking…Though the Taliban spokesperson Zabihullah Mujahid told the press conference that it wouldn’t be seeking “revenge” against people who had opposed them, many Afghan people are understandably still worried. On top of this, they — including those who worked with Western forces and international NGOs, as well as foreign journalists — have been unable to leave the country, as flight capacity has been taken over by Western countries evacuating their citizens.
As such, people have been attempting to move quickly to erase their digital footprints, built up during the 20 years of the previous US-backed governments. Some Afghan activists have been reaching out to me directly to help them put in place robust mobile security and asking how to trigger a mass deletion of their data.
The last time the Taliban was in power, social media barely existed and smartphones had yet to take off. Now, around 4 million people in Afghanistan regularly use social media. Yet, despite the huge rise of digital technologies, a comparative rise in digital security hasn’t happened.
There are few digital security resources that are suitable for people in Afghanistan to use. The leading guide on how to properly delete your digital history by Human Rights First is a brilliant place to start. But unfortunately it is only available in English and unofficially in Farsi. There are also some other guides available in Farsi thanks to the thriving community of tech enthusiasts who have been working for human rights activists living in Iran for years.
However, many of these guides will still be unintelligible for those in Afghanistan who speak Dari or Pashto, for example…
People in Afghanistan who worked with Western forces also face an impossible choice, as countries where they might seek asylum often require digital proof of their collaboration. Keep this evidence and they risk persecution from the Taliban; delete it and they may find their only way out no longer available.
Millions of people’s lives will now be vastly different due to the regime change. Digital security feels like one thing that could have been sorted out in advance. We are yet to see exactly how Taliban 2.0 will be different to that which went before. And while the so-called War on Terror appears to be over, I fear a digital terror offensive may just be beginning…(More)”.