Deep Fake


By Emil Verhulst

/diːp feɪk/

An image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said (Merriam-Webster).

The term “deepfake” was first coined in late 2017 by a Reddit user who “shared pornographic videos that used open source face-swapping technology.” Since then, the term has expanded to cover any harmful alteration or manipulation of digital media, from audio to landscapes. For example, researchers have applied AI techniques to modify aerial imagery, which could lead governments astray or spread false information.

“Adversaries may use fake or manipulated information to impact our understanding of the world,” says a spokesperson for the National Geospatial-Intelligence Agency, the part of the Pentagon that oversees the collection, analysis, and distribution of geospatial information.

Audio can also be deepfaked. In 2019, a mysterious case emerged involving a UK-based energy company and its Germany-based parent company. The CEO of the UK energy firm received a call from someone he believed to be his boss. This “boss” instructed him to wire €220,000 (around $240,000) to a supplier in Hungary:

“The €220,000 was moved to Mexico and channeled to other accounts, and the energy firm—which was not identified—reported the incident to its insurance company, Euler Hermes Group SA. An official with Euler Hermes said the thieves used artificial intelligence to create a deepfake of the German executive’s voice.”

This incident, among others, points to a rise in crime associated with deepfakes. Deepfakes can alter our perception of reality, and they can prove especially dangerous in precarious social and political climates, in which false information can incite violence or hate speech online.

So what is the science behind deepfakes?

Deepfakes are usually created using Generative Adversarial Networks, or GANs, a technique from the subfield of AI known as machine learning (ML). Machine learning is the use of computer systems that learn from data rather than following explicit instructions, applying statistics and algorithms to find patterns. To create a deepfake, two ML models called neural networks work in opposition. One, the generator, produces fake data (video, images, audio, etc.) that mimics the original data (usually video or audio of a real person); the other, the discriminator, tries to identify the counterfeits. The two networks compete over many training iterations, refining the output until the discriminator can no longer tell the real data from the fake.
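To make the adversarial setup concrete, here is a minimal sketch of a GAN training loop, written in PyTorch. It is illustrative only: the toy generator learns to mimic a one-dimensional Gaussian distribution rather than faces or voices, and the network sizes, learning rates, and data are arbitrary assumptions.

```python
# Toy GAN: a generator learns to forge samples from a 1-D Gaussian
# while a discriminator learns to tell real samples from forgeries.
import torch
import torch.nn as nn

# Generator: maps random noise to fake "data" points.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a point is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: mean 4.0, std 1.5
    fake = G(torch.randn(64, 8))             # generator's current forgeries

    # Discriminator step: push outputs toward 1 on real data, 0 on fakes.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = loss_fn(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 4.0
```

Production deepfake systems follow the same generator-versus-discriminator logic, but with far larger image or audio models trained on far more data.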

Deepfakes will only become more prevalent in the coming years. Because they pose a threat to journalism, online speech, and internet safety, we must remain vigilant about the information we take in online.

Digital Twins


/ˈdɪʤɪtl twɪnz/

A digital representation of a physical asset which can be used to monitor, visualise, predict and make decisions about it (Open Data Institute).

Digital twin technologies are driven by sensors that collect data in real time, enabling a digital representation of a physical process or product. Digital twins can help businesses or decision-makers maintain, optimize, or monitor physical assets, providing specific insights into their health and performance. A traffic model, for example, can be used to monitor and manage real-time pedestrian and road traffic in a city. Energy companies use digital twins to monitor physical equipment: General Electric applies them to wind turbines, and Chevron to oil-field machinery (a minimal sketch of this monitoring pattern appears after the excerpt below). Digital twins can also help decision-makers at the state and local level better plan infrastructure or monitor city assets. In Sustainable Cities: Big Data, Artificial Intelligence and the Rise of Green, “Cy-phy” Cities, Claudio Scardovi describes how cities can create digital twins, leveraging data and AI, to test strategies for increasing sustainability, inclusivity, and resilience:

“Global cities are facing an almost unprecedented challenge of change. As they re-emerge from the Covid 19 pandemic and get ready to face climate change and other, potentially existential threats, they need to look for new ways to support wealth and wellbeing creation […] New digital technologies could be used to design digital and physical twins of cities that are able to feed into each other to optimize their working and ability to create new wealth and wellbeing.” 
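Beneath these city-scale visions, the core mechanism is simple: sensors stream readings into a virtual model that mirrors the asset's state and flags anomalies. The sketch below applies that pattern to a hypothetical wind turbine; the class name, field names, and thresholds are invented for illustration, not taken from any vendor's API.

```python
# Minimal digital-twin sketch: a virtual turbine that mirrors incoming
# sensor readings and flags values outside expected operating bounds.
from dataclasses import dataclass, field

@dataclass
class TurbineTwin:
    """Digital counterpart of one physical wind turbine (illustrative)."""
    turbine_id: str
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> list:
        """Mirror the latest reading into the twin and return any alerts."""
        self.history.append(reading)
        alerts = []
        if reading["vibration_mm_s"] > 7.1:   # illustrative vibration limit
            alerts.append("excessive vibration")
        if reading["bearing_temp_c"] > 95:    # illustrative temperature limit
            alerts.append("bearing overheating")
        return alerts

# The twin updates as readings arrive, so operators can query its state
# instead of inspecting the physical asset directly.
twin = TurbineTwin("WT-042")
print(twin.ingest({"vibration_mm_s": 9.3, "bearing_temp_c": 88}))
# ['excessive vibration']
```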

The UK National Infrastructure Commission created a framework to support the development of digital twins. Similarly, many European countries encourage urban digital twin initiatives:

“Urban digital twins are a virtual representation of a city’s physical assets, using data, data analytics and machine learning to help simulate models that can be updated and changed (real-time) as their physical equivalents change. […] In terms of rationale, they can bring cost efficiencies, operational efficiencies, better crisis management, more openness and better informed decision-making, more participatory governance or better urban planning.”

Sometimes, however, digital twins fail to accurately reflect real-world developments, leading users to make poor decisions. In “Make more digital twins,” researchers Fei Tao and Qinglin Qi describe data challenges facing digital twin technologies, such as inconsistent data types and scattered ownership:

“Missing or erroneous data can distort results and obscure faults. The wobbling of a wind turbine, say, would be missed if vibration sensors fail. Beijing-based power company BKC Technology struggled to work out that an oil leak was causing a steam turbine to overheat. It turned out that lubricant levels were missing from its digital twin.”
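The failure mode Tao and Qi describe is easy to reproduce in miniature: a twin that silently skips missing readings will report a wobbling turbine as healthy, while one that treats a gap as an unknown state surfaces the sensor outage instead. The field names and threshold below are again illustrative assumptions.

```python
# How missing data masks a fault: the vibration sensor drops out just
# as the turbine begins to wobble.
readings = [
    {"t": 0, "vibration_mm_s": 2.1},
    {"t": 1, "vibration_mm_s": None},  # sensor failed; wobble starts here
    {"t": 2, "vibration_mm_s": None},
]

def naive_health(readings, limit=7.1):
    """Skips gaps, so missing data is silently treated as 'no problem'."""
    vals = [r["vibration_mm_s"] for r in readings
            if r["vibration_mm_s"] is not None]
    return "healthy" if all(v <= limit for v in vals) else "fault"

def guarded_health(readings, limit=7.1):
    """Treats a gap as an unknown state needing attention, not as healthy."""
    for r in readings:
        v = r["vibration_mm_s"]
        if v is None:
            return f"unknown: sensor offline since t={r['t']}"
        if v > limit:
            return "fault"
    return "healthy"

print(naive_health(readings))    # 'healthy' -- the wobble is missed
print(guarded_health(readings))  # 'unknown: sensor offline since t=1'
```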

The uptake of digital twins requires both public and private sector collaboration and improved data infrastructures. As the Open Data Institute describes, digital twins depend on a culture of openness: “open data, open culture, open standards and collaborative models that build trust, reduce cost, and create more value.”

Digital Vigilantism


/ˈdɪʤɪtl ˈvɪʤɪləntɪz(ə)m/

A process where citizens are collectively offended by other citizen activity and respond through coordinated retaliation on digital media, including mobile devices and social media platforms (Daniel Trottier, 2017). 

Following the storming of the US Capitol on January 6, 2021, Washington’s Metropolitan Police Department (MPD) released an open call for help identifying rioters. The attack was heavily documented through live stream footage and photos posted to social media; thousands of citizens mobilized to parse this media and identify perpetrators so they could be prosecuted. For example, researchers at the University of Toronto’s Citizen Lab presented photo and video evidence of potential suspects to the FBI, without posting any names publicly.

This was not the first time citizens have organized to identify individuals involved in a harmful act. In 2013, after the Boston Marathon bombing, members of the public used Reddit and other platforms to conduct a parallel investigation, sharing and sifting information in an attempt to identify suspects. In both cases, these amateur investigations had mixed results, with many uninvolved people shamed and harassed.

Acts of digital vigilantism, also referred to as ‘e-vigilantism,’ ‘cyber vigilantism,’ or ‘digilantism’ (Wehmhoener, 2010), are not always directed at matters of national security. These crowdsourced investigations arise from general moral outrage among citizens who seek to mete out justice to groups or individuals they believe have committed an improper act. Often, the allegations rest on conspiracy theories, rumors, and a general miasma of distrust. After a video of a cyclist assaulting two children on a bike trail circulated online in 2020, digital vigilantes sought out and subsequently misidentified the perpetrator. Over the following weeks, the innocent man they named received threatening messages from angry internet sleuths, who circulated his personal information, including his home address, across social media.

Digital vigilantism occurs through the sharing of data or information on digital platforms, especially social media. Johnny Nhan, Laura Huey, and Ryan Broll, in “Digilantism: An Analysis of Crowdsourcing and the Boston Marathon Bombings,” describe the Reddit community that organized following the Boston Marathon bombing:

“Although some posters focused on technical aspects of the crime in order to identify the perpetrators and understand their motives, others sought a different route. These posters were more interested in discussing whether the attacks were linked to an organized violent extremist group or were instead the work of a so-called ‘lone wolf’ actor. Although different in content from other forms of speculation offered online, these posts similarly were phrased in ways that suggested the poster had some deeper knowledge and/or experience of the field of violent extremism.”

As described above, those partaking in these crowdsourced investigations have a range of motivations, some well-intentioned and others not. In addition, this crowdsourcing can slow official investigations by bombarding authorities with unhelpful and false information.

The fallout from digital vigilantism can also affect targets in a number of ways, from wrongful shaming and harassment online to death threats lasting several weeks. Daniel Trottier, in the paper “Denunciation and doxing: towards a conceptual model of digital vigilantism,” warns of the social harms it can cause:

“Denunciation may provoke other forms of mediated and embodied activities, including harassment and bullying, threats, and physical violence, often overlapping with gendered persecution and racism. As for longer-term outcomes, researchers can also consider how the reputation and broader social standing of the target and participants are understood and expressed both in news reports as well as accounts by participants […] They may consider references to detrimental life events for targets, for example, an inability to sustain employment, being excommunicated from their community, in addition to physical interventions.”

While many are wary of the illicit behavior that digital vigilantism sanctions, from online harassment to mob organizing that leads to physical violence, others acknowledge its collective intelligence practices and their profound impact on societal participation. James Walsh notes:

“[S]uch a transformation in societal participation led to a shift from a deputisation to an autonomization paradigm, referring to the voluntary, or self-appointed, involvement of citizens in the regulatory gatekeeping network. This refers to grassroot mobilisation, rather than governments mobilising the public, with groups of citizens spontaneously aligning themselves with authorities’ aims and objectives.”

Sources and Further Readings: