Supporting Safer Digital Spaces


Report by Suzie Dunn, Tracy Vaillancourt and Heather Brittain: “Various forms of digital technology are being used to inflict significant harms online. This is a pervasive issue in online interactions, in particular with regard to technology-facilitated gender-based violence (TFGBV) and technology-facilitated violence (TFV) against LGBTQ+ people. This modern form of violence perpetuates gender inequality and discrimination against LGBTQ+ people and has significant impacts on its targets.

As part of a multi-year research project Supporting a Safer Internet (in partnership with the International Development Research Centre) exploring the prevalence and impacts of TFGBV experienced by women, transgender, gender non-conforming and gender-diverse people, as well as TFV against LGBTQ+ individuals, an international survey was conducted by Ipsos on behalf of the Centre for International Governance Innovation (CIGI). The survey examined the influence of gender and sexual orientation on people’s experiences with online harms, with a focus on countries in the Global South. Data was collected from 18,149 people of all genders in 18 countries.

The special report provides background information on TFGBV and TFV against LGBTQ+ people by summarizing some of the existing research on the topic. It then presents the quantitative data collected on people’s experiences with, and opinions on, online harms. A list of recommendations is provided for governments, technology companies, academics, researchers and civil society organizations on how they can contribute to addressing and ending TFV…(More)”

(Read the Supporting Safer Digital Spaces: Highlights here; read the French translation of the Highlights here.)

Detecting Human Rights Violations on Social Media during Russia-Ukraine War


Paper by Poli Nemkova, et al: “The present-day Russia-Ukraine military conflict has exposed the pivotal role of social media in enabling the transparent and unbridled sharing of information directly from the frontlines. In conflict zones where freedom of expression is constrained and information warfare is pervasive, social media has emerged as an indispensable lifeline. Anonymous social media platforms, as publicly available sources for disseminating war-related information, have the potential to serve as effective instruments for monitoring and documenting Human Rights Violations (HRV). Our research focuses on the analysis of data from Telegram, the leading social media platform for reading independent news in post-Soviet regions. We gathered a dataset of posts sampled from 95 public Telegram channels that cover politics and war news, which we have utilized to identify potential occurrences of HRV. Employing a mBERT-based text classifier, we have conducted an analysis to detect any mentions of HRV in the Telegram data. Our final approach yielded an F2 score of 0.71 for HRV detection, representing an improvement of 0.38 over the multilingual BERT base model. We release two datasets containing Telegram posts: (1) a large corpus with over 2.3 million posts and (2) a dataset annotated at the sentence level to indicate HRVs. The Telegram posts are in the context of the Russia-Ukraine war. We posit that our findings hold significant implications for NGOs, governments, and researchers by providing a means to detect and document possible human rights violations…(More)”. See also Data for Peace and Humanitarian Response? The Case of the Ukraine-Russia War
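The paper reports an F2 rather than F1 score, which is worth unpacking: F-beta with beta = 2 weights recall twice as heavily as precision, a sensible choice for HRV detection, where missing a true violation is costlier than a false alarm. A minimal sketch of the metric (the labels below are invented for illustration; they are not the authors' data or model outputs):

```python
def f_beta(y_true, y_pred, beta=2.0):
    """F-beta score for binary labels (1 = sentence mentions an HRV).

    beta > 1 favors recall; beta = 2 gives the F2 metric used in the paper.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical sentence-level predictions from a classifier:
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
print(round(f_beta(y_true, y_pred, beta=2.0), 3))  # → 0.75
```

With equal precision and recall (0.75 each here) F2 equals F1; the metrics diverge once the classifier trades one for the other.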

The Metaverse and Homeland Security


Report by Timothy Marler, Zara Fatima Abdurahaman, Benjamin Boudreaux, and Timothy R. Gulden: “The metaverse is an emerging concept and capability supported by multiple underlying emerging technologies, but its meaning and key characteristics can be unclear and will likely change over time. Thus, its relevance to some organizations, such as the U.S. Department of Homeland Security (DHS), can be unclear. This lack of clarity can lead to unmitigated threats and missed opportunities. It can also inhibit healthy public discourse and effective technology management generally. To help address these issues, this Perspective provides an initial review of the metaverse concept and how it might be relevant to DHS. As a critical first step with the analysis of any emerging technology, the authors review current definitions and identify key practical characteristics. Often, regardless of a precise definition, it is the fundamental capabilities that are central to discussion and management. Then, given a foundational understanding of what a metaverse entails, the authors summarize primary goals and relevant needs for DHS. Ultimately, in order to be relevant, technologies must align with actual needs for various organizations or users. By cross-walking exemplary DHS needs that stem from a variety of mission sets with pervasive characteristics of metaverses, the authors demonstrate that metaverses are, in fact, relevant to DHS. Finally, the authors identify specific threats and opportunities that DHS could proactively manage. Although this work focuses the discussion of threats and opportunities on DHS, it has broad implications. This work provides a foundation on which further discussions and research can build, minimizing disparities and discoordination in development and policy…(More)”.

How to decode modern conflicts with cutting-edge technologies


Blog by Mykola Blyzniuk: “In modern warfare, new technologies are increasingly being used to manipulate information and perceptions on the battlefield. This includes the use of deep fakes, or the malicious use of ICT (Information and Communication Technologies).

Likewise, emerging tech can be instrumental in documenting human rights violations, tracking the movement of troops and weapons, monitoring public sentiments and the effects of conflict on civilians, and exposing propaganda and disinformation.

The dual use of new technologies in modern warfare highlights the need for further investigation. Here are two examples of how they can be used to advance political analysis and situational awareness…

The world of Natural Language Processing (NLP) technology took a leap with a recent study on the Russia-Ukraine conflict by Uddagiri Sirisha and Bolem Sai Chandana of the School of Computer Science and Engineering at Vellore Institute of Technology Andhra Pradesh (VIT-AP) University in Amaravathi, Andhra Pradesh, India.

The researchers developed a novel artificial intelligence model to analyze whether a piece of text is positive, negative or neutral in tone. This new model, referred to as “ABSA-based ROBERTa-LSTM”, looks at not just the overall sentiment of a piece of text but also the sentiment towards specific aspects or entities mentioned in the text. The study took a pre-processed dataset of 484,221 tweets collected during April–May 2022 related to the Russia-Ukraine conflict and applied the model, resulting in a sentiment analysis accuracy of 94.7%, outperforming current techniques….(More)”.
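The distinction the study draws — overall tone versus sentiment toward specific aspects — can be illustrated with a deliberately simple toy. The lexicon scorer below is purely illustrative and bears no resemblance to the study's fine-tuned ROBERTa-LSTM neural model; the word lists and example tweet are invented. It shows only the core ABSA idea: the same text can be negative overall yet positive toward a particular entity.

```python
# Tiny hand-made lexicons -- an assumption of this sketch, not the study's method.
POSITIVE = {"support", "aid", "peace", "resilient"}
NEGATIVE = {"attack", "destroyed", "crisis", "losses"}

def polarity(tokens):
    """Classify a token list as positive/negative/neutral by lexicon counts."""
    score = sum(1 for w in tokens if w in POSITIVE) - \
            sum(1 for w in tokens if w in NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def aspect_sentiment(text, aspects, window=3):
    """Score each aspect from the words within `window` tokens of each mention."""
    tokens = text.lower().split()
    out = {}
    for aspect in aspects:
        idxs = [i for i, w in enumerate(tokens) if w == aspect]
        ctx = [w for i in idxs for w in tokens[max(0, i - window):i + window + 1]]
        out[aspect] = polarity(ctx)
    return out

tweet = "crisis deepens as infrastructure destroyed but allies pledge aid to kyiv"
print(polarity(tweet.split()))                                 # → negative
print(aspect_sentiment(tweet, ["infrastructure", "kyiv"]))
# → {'infrastructure': 'negative', 'kyiv': 'positive'}
```

The overall tone is negative, yet the aspect "kyiv" scores positive because supportive words dominate its local context — the kind of entity-level signal an ABSA model learns rather than hand-codes.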

Digital inclusion in peace processes – no silver bullet, but a major opportunity


Article by Peace Research Institute Oslo: “Digital inclusion is paving the way for women and other marginalized groups to participate in peace processes. Through digital platforms, those who are unable to participate in physical meetings, such as women with children, youth or persons with disabilities, can get their voices heard. However, digital technologies provide no silver bullet, and mitigating their risks requires careful context analysis and process design.

Women remain underrepresented in peace processes, and even in cases where they are included, they may have difficulty attending in-person meetings. Going beyond physical inclusion, digital inclusion offers a way to include a wider variety of people, views and interests in a peace process…

The most frequent aim of digital inclusion in peace processes is related to increased legitimacy and political support, as digital tools allow for wider participation, and a larger number and variety of voices to be heard. This, in turn, can increase the ownership of the process. Meetings, consultations and processes using easy and widely available technological platforms such as Zoom, Facebook and WhatsApp make participation easier for those who have often been excluded….

Digital technologies offer various functions for peacemaking and increased inclusion. Their utility can be seen in gathering, analysing and disseminating relevant data. For strategic communications, digital technologies offer tools to amplify and diversify messages. Additionally, they offer platforms for connecting actors and enabling collaboration between them…(More)”.

AI-assisted diplomatic decision-making during crises—Challenges and opportunities


Article by Neeti Pokhriyal and Till Koebe: “Recent academic works have demonstrated the efficacy of employing or integrating “non-traditional” data (e.g., social media, satellite imagery, etc.) for situational awareness tasks…

Despite these successes, we identify four critical challenges unique to the area of diplomacy that need to be considered by the growing AI and diplomacy community going ahead:

1. First, decisions during crises are almost always taken using limited or incomplete information. There may be deliberate misuse and obfuscation of data/signals between different parties involved. At the start of a crisis, information is usually limited and potentially biased, especially along socioeconomic and rural-urban lines as crises are known to exacerbate the vulnerabilities already existing in the populations. This requires AI tools to quantify and visualize calibrated uncertainty in their outputs in an appropriate manner.

2. Second, in many cases, human lives and livelihoods are at stake. Therefore, any forecast, reasoning, or recommendation provided by AI assistance needs to be explainable and transparent for authorized users, but also secure against unauthorized access as diplomatic information is often highly sensitive. The question of accountability in case of misleading AI assistance needs to be addressed beforehand.

3. Third, in complex situations with high stakes but limited information, cultural differences and value-laden judgment driven by personal experiences play a central role in diplomatic decision-making. This calls for the use of learning techniques that can incorporate domain knowledge and experience.

4. Fourth, diplomatic interests during crises are often multifaceted, resulting in deep mistrust in and strategic misuse of information. Social media data, when used for consular tasks, has been shown to be susceptible to various dis- and misinformation campaigns, some by the public, others by state actors for strategic manipulation…(More)”
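Point 1 above calls for AI tools that "quantify and visualize calibrated uncertainty." One standard diagnostic for this is the expected calibration error (ECE): bin a model's predicted probabilities and compare each bin's average confidence with the event's observed frequency. A minimal sketch, using invented probabilities and outcomes (not data from the article):

```python
def expected_calibration_error(probs, outcomes, n_bins=5):
    """Mean |confidence - accuracy| over probability bins, weighted by bin size.

    A well-calibrated model has ECE near 0: events it assigns 90% probability
    actually occur about 90% of the time.
    """
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    ece = 0.0
    for members in bins:
        if not members:
            continue
        conf = sum(p for p, _ in members) / len(members)  # mean predicted prob
        acc = sum(y for _, y in members) / len(members)   # observed frequency
        ece += len(members) / len(probs) * abs(conf - acc)
    return ece

# Hypothetical crisis-event forecasts: confident calls are right most of the time.
probs    = [0.9, 0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1]
outcomes = [1,   1,   1,   1,   0,   0,   0,   0,   0,   1  ]
print(round(expected_calibration_error(probs, outcomes), 2))  # → 0.1
```

A diplomat seeing an ECE of 0.1 knows the model's stated confidence overstates reality by roughly ten percentage points — exactly the kind of calibrated caveat the authors argue crisis decision-making requires.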

Five Enablers for a New Phase of Behavioral Science


Article by Michael Hallsworth: “Over recent weeks I’ve been sharing parts of a “manifesto” that tries to give a coherent vision for the future of applied behavioral science. Stepping back, if I had to identify a theme that comes through the various proposals, it would be the need for self-reflective practice.

Behavioral science has seen a tremendous amount of growth and interest over the last decade, largely focused on expanding its uses and methods. My sense is it’s ready for a new phase of maturity. That maturity involves behavioral scientists reflecting on the various ways that their actions are shaped by structural, institutional, environmental, economic, and historical factors.

I’m definitely not exempt from this need for self-reflection. There are times when I’ve focused on a cognitive bias when I should have been spending more time exploring the context and motivations for a decision instead. Sometimes I’ve homed in on a narrow slice of a problem that we can measure, even if that means dispensing with wider systemic effects and challenges. Once I spent a long time trying to apply the language of heuristics and biases to explain why people were failing to use the urgent care alternatives to hospital emergency departments, before realizing that their behavior was completely reasonable.     

The manifesto critiques things like this, but it doesn’t have all the answers. Because it tries to both cover a lot of ground and go into detail, many of the hard knots of implementation go unpicked. The truth is that writing reports and setting goals is the easy part. Turning those goals into practice is much tougher; as behavioral scientists know, there is often a gap between intention and action.

Right now, I and others don’t always realize the ambitions set out in the manifesto. Changing that is going to take time and effort, and it will involve the discomfort of disrupting familiar practices. Some have made public commitments in this direction; my organization is working on upgrading its practices in line with proposals around making predictions prior to implementation, strengthening RCTs to cope with complexity, and enabling people to use behavioral science, among others.


But changes by individual actors will not be enough. The big issue is that several of the proposals require coordination. For example, one of the key ideas is the need for more multisite studies that are well coordinated and have clear goals. Another prioritizes developing international professional networks to support projects in low- and middle-income countries…(More)”.

The Coming Age of AI-Powered Propaganda


Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI-powered “chatbot” that powers Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.

As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.

How the Digital Transformation Changed Geopolitics


Paper by Dan Ciuriak: “In the late 2000s, a set of connected technological developments – introduction of the iPhone, deep learning through stacked neural nets, and application of GPUs to neural nets – resulted in the generation of truly astronomical amounts of data and provided the tools to exploit it. As the world emerged from the Great Financial Crisis of 2008-2009, data was decisively transformed from a mostly valueless by-product – “data exhaust” – to the “new oil”, the essential capital asset of the data-driven economy, and the “new plutonium” when deployed in social and political applications. This economy featured steep economies of scale, powerful economies of scope, network externalities in many applications, and pervasive information asymmetry. Strategic commercial policies at the firm and national levels were incentivized by the newfound scope to capture economic rents, destabilizing the rules-based system for trade and investment. At the same time, the new disruptive general-purpose technologies built on the nexus of Big Data, machine learning and artificial intelligence reconfigured geopolitical rivalry in several ways: by shifting great power rivalry onto new and critical grounds on which none had a decisive established advantage; by creating new vulnerabilities to information warfare in societies, especially open societies; and by enhancing the tools for social manipulation and the promotion of political personality cults. Machine learning, which essentially industrialized the very process of learning, drove an acceleration in the pace of innovation, which precipitated industrial policies driven by the desire to capture first mover advantage and by the fear of falling behind.

These developments provide a unifying framework to understand the progressive unravelling of the US-led global system as the decade unfolded, despite the fact that all the major innovations that drove the transition were within the US sphere and the US enjoyed first mover advantages. This is in stark contrast to the previous major economic transition to the knowledge-based economy, in which case US leadership on the key innovations extended its dominance for decades and indeed powered its rise to its unipolar moment. The world did not respond well to the changed technological and economic conditions and hence we are at war: hot war, cold war, technological war, trade war, social war, and internecine political war. This paper focuses on the role of technological and economic conditions in shaping geopolitics, which is critical to understand if we are to respond to the current world disorder and to prepare to handle the coming transition in technological and economic conditions to yet another new economic era based on machine knowledge capital…(More)”.

Innovation Power: Why Technology Will Define the Future of Geopolitics


Essay by Eric Schmidt: “When Russian forces marched on Kyiv in February 2022, few thought Ukraine could survive. Russia had more than twice as many soldiers as Ukraine. Its military budget was more than ten times as large. The U.S. intelligence community estimated that Kyiv would fall within one to two weeks at most.

Outgunned and outmanned, Ukraine turned to one area in which it held an advantage over the enemy: technology. Shortly after the invasion, the Ukrainian government uploaded all its critical data to the cloud, so that it could safeguard information and keep functioning even if Russian missiles turned its ministerial offices into rubble. The country’s Ministry of Digital Transformation, which Ukrainian President Volodymyr Zelensky had established just two years earlier, repurposed its e-government mobile app, Diia, for open-source intelligence collection, so that citizens could upload photos and videos of enemy military units. With their communications infrastructure in jeopardy, the Ukrainians turned to Starlink satellites and ground stations provided by SpaceX to stay connected. When Russia sent Iranian-made drones across the border, Ukraine acquired its own drones specially designed to intercept their attacks—while its military learned how to use unfamiliar weapons supplied by Western allies. In the cat-and-mouse game of innovation, Ukraine simply proved nimbler. And so what Russia had imagined would be a quick and easy invasion has turned out to be anything but.

Ukraine’s success can be credited in part to the resolve of the Ukrainian people, the weakness of the Russian military, and the strength of Western support. But it also owes to the defining new force of international politics: innovation power. Innovation power is the ability to invent, adopt, and adapt new technologies. It contributes to both hard and soft power. High-tech weapons systems increase military might, new platforms and the standards that govern them provide economic leverage, and cutting-edge research and technologies enhance global appeal. There is a long tradition of states harnessing innovation to project power abroad, but what has changed is the self-perpetuating nature of scientific advances. Developments in artificial intelligence in particular not only unlock new areas of scientific discovery; they also speed up that very process. Artificial intelligence supercharges the ability of scientists and engineers to discover ever more powerful technologies, fostering advances in artificial intelligence itself as well as in other fields—and reshaping the world in the process…(More)”.