Cloud Empires: How Digital Platforms Are Overtaking the State and How We Can Regain Control


Book by Vili Lehdonvirta: “The early Internet was a lawless place, populated by scam artists who made buying or selling anything online risky business. Then Amazon, eBay, Upwork, and Apple established secure digital platforms for selling physical goods, crowdsourcing labor, and downloading apps. These tech giants have gone on to rule the Internet like autocrats. How did this happen? How did users and workers become the hapless subjects of online economic empires? The Internet was supposed to liberate us from powerful institutions. In Cloud Empires, digital economy expert Vili Lehdonvirta explores the rise of the platform economy into statelike dominance over our lives and proposes a new way forward.

Digital platforms create new marketplaces and prosperity on the Internet, Lehdonvirta explains, but they are ruled by Silicon Valley despots with little or no accountability. Neither workers nor users can “vote with their feet” and find another platform because in most cases there isn’t one. And yet using antitrust law and decentralization to rein in the big tech companies has proven difficult. Lehdonvirta tells the stories of pioneers who helped create—or resist—the new social order established by digital platform companies. The protagonists include the usual suspects—Amazon founder Jeff Bezos, Travis Kalanick of Uber, and Bitcoin’s inventor Satoshi Nakamoto—as well as Kristy Milland, labor organizer of Amazon’s Mechanical Turk, and GoFundMe, a crowdfunding platform that has emerged as an ersatz stand-in for the welfare state. Only if we understand digital platforms for what they are—institutions as powerful as the state—can we begin the work of democratizing them…(More)”.

Writing the Revolution


Book by Heather Ford: “A close reading of Wikipedia’s article on the Egyptian Revolution reveals the complexity inherent in establishing the facts of events as they occur and are relayed to audiences near and far.

Wikipedia bills itself as an encyclopedia built on neutrality, authority, and crowd-sourced consensus. Platforms like Google and digital assistants like Siri distribute Wikipedia’s facts widely, further burnishing its veneer of impartiality. But as Heather Ford demonstrates in Writing the Revolution, the facts that appear on Wikipedia are often the result of protracted power struggles over how data are created and used, how history is written and by whom, and the very definition of facts in a digital age.

In Writing the Revolution, Ford looks critically at how the Wikipedia article about the 2011 Egyptian Revolution evolved over the course of a decade, both shaping and being shaped by the Revolution as it happened. When data are published in real time, they are subject to an intense battle over their meaning across multiple fronts. Ford answers key questions about how Wikipedia’s so-called consensus is arrived at; who has the power to write dominant histories and which knowledges are actively rejected; how these battles play out across the chains of circulation in which data travel; and whether history is now written by algorithms…(More)”

The Data4COVID-19 Review: Assessing the Use of Non-Traditional Data During a Pandemic Crisis


Report by Hannah Chafetz, Andrew J. Zahuranec, Sara Marcucci, Behruz Davletov, and Stefaan Verhulst: “As the last two years of the COVID-19 pandemic demonstrate, pandemics pose major challenges on all levels–with cataclysmic effects on society. 

Decision-makers from around the world have sought to mitigate the consequences of COVID-19 through the use of data, including data from non-traditional sources such as social media, wastewater, and credit card and telecommunications companies. However, there has been little research into how non-traditional data initiatives were designed or what impacts they had on COVID-19 responses. 

Over the last eight months, The GovLab, with the support of The Knight Foundation, has sought to fill this gap by conducting a study about how non-traditional data (NTD) sources have been used during COVID-19. 

On October 31st, The GovLab published the report: “The COVID-19 Review: Assessing the Use of Non-Traditional Data During a Pandemic Crisis.” The report details how decision makers around the world have used non-traditional sources through a series of briefings intended for a generalist audience. 

The briefings describe and assess how non-traditional data initiatives were designed, planned, and implemented, as well as the project results. 

Findings

The briefings uncovered several findings about why, where, when, and how NTD was used during COVID-19, including that:

  • Officials increasingly called for the use of NTD to answer questions where and when traditional data such as surveys and case data were not sufficient or could not be leveraged. However, the collection and use of traditional data was often needed to validate insights.
  • NTD sources were primarily used to understand populations’ health, mobility (or physical movements), economic activity, and public sentiment about the pandemic. In comparison with previous dynamic crises, COVID-19 was a watershed moment in terms of access to and re-use of non-traditional data in those four areas.
  • The majority of NTD initiatives were fragmented and uncoordinated, reflecting the larger fragmented COVID-19 response. Many projects were focused on responding to COVID-19 after outbreaks occurred. This pattern reflected an overall lack of preparedness for the pandemic and need for the rapid development of initiatives to address its consequences.
  • NTD initiatives frequently took the form of cross-sectoral data partnerships or collaborations developed to respond to specific needs. Many institutions did not have the systems and infrastructure in place for these collaborations to be sustainable.
  • Many of the NTD initiatives involving granular, personal data were implemented without the necessary social license to do so–leading to public concerns about ethics and hindering public trust in non-traditional data. 

Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The GovLab, explains: “The use of NTD offers growing potential during crisis situations. When managed responsibly, NTD use can help us understand the current state of the crisis, forecast how it will progress, and respond to different aspects of it in real-time.”…(More)”.

Improving Access and Management of Public Transit ITS Data


Report by the National Academies: “With the proliferation of automated vehicle location (AVL), automated passenger counters (APCs), and automated fare collection (AFC), transit agencies are collecting increasingly granular data on service performance, ridership, customer behavior, and financial recovery. While granular intelligent transportation systems (ITS) data can meaningfully improve transit decision-making, transit agencies face many challenges in accessing, validating, storing, and analyzing these data sets. These challenges are made more difficult in that the tools for managing and analyzing transit ITS data generally cannot, at this point, be shared across transit agencies because of variation in data collection systems and data formats. Multiple vendors provide ITS hardware and software, and data formats vary by vendor. Moreover, agencies may employ a patchwork of ITS that has been acquired and modified over time, leading to further consistency challenges.
Standardization of data structures and tools can help address these challenges. Not only can standardization streamline data transfer, validation, and database structuring, it also encourages the development of analysis tools that can be used across transit agencies, as has been the case with route and schedule data, standardized in the General Transit Feed Specification (GTFS) format…(More)”.
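
To make the shared-tooling point concrete, here is a minimal sketch (an illustration, not part of the report) assuming Python with pandas and a GTFS feed unpacked into a local directory. Because the file names and column names are fixed by the GTFS specification, the same few lines work on a feed published by any agency:

```python
import pandas as pd
from pathlib import Path

def trips_per_route(feed_dir: str) -> pd.DataFrame:
    """Count scheduled trips per route in a GTFS feed.

    Works unchanged across agencies because routes.txt, trips.txt,
    and their column names are defined by the GTFS specification.
    """
    feed = Path(feed_dir)
    routes = pd.read_csv(feed / "routes.txt")  # one row per route
    trips = pd.read_csv(feed / "trips.txt")    # one row per scheduled trip
    counts = trips.groupby("route_id").size().rename("trip_count").reset_index()
    return routes.merge(counts, on="route_id", how="left")[
        ["route_id", "route_short_name", "trip_count"]
    ]

# Hypothetical usage; the path is a placeholder:
# print(trips_per_route("agency_feed/"))
```

The same kind of agency-agnostic analysis is what the report suggests standardization makes possible for ITS data more broadly.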

Democracy by Design: Perspectives for Digitally Assisted, Participatory Upgrades of Society


Paper by Dirk Helbing et al: “The technological revolution, particularly the availability of more data and more powerful computational tools, has led to the emergence of a new scientific area called Computational Diplomacy. Our work focuses on a popular subarea of it. In recent years, there has been a surge of interest in using digital technologies to promote more participatory forms of democracy. While there are numerous potential benefits to using digital tools to enhance democracy, significant challenges must be addressed. It is essential to ensure that digital technologies are used in an accessible, equitable, and fair manner rather than reinforcing existing power imbalances. This paper investigates how digital tools can be used to help design more democratic societies by investigating three key research areas: (1) the role of digital technologies in facilitating civic engagement in collective decision-making; (2) the use of digital tools to improve transparency and accountability in governance; and (3) the potential for digital technologies to enable the formation of more inclusive and representative democracies. We argue that more research on how digital technologies can be used to support democracy upgrade is needed, and we make some recommendations for future research in this direction…(More)”.

How Technology Companies Are Shaping the Ukraine Conflict


Article by Abishur Prakash: “Earlier this year, Meta, the company that owns Facebook and Instagram, announced that people could create posts calling for violence against Russia on its social media platforms. This was unprecedented. One of the world’s largest technology firms very publicly picked sides in a geopolitical conflict. Russia was now not just fighting a country but also multinational companies with financial stakes in the outcome. In response, Russia announced a ban on Instagram within its borders. The fallout was significant. The ban, which eventually included Facebook, cost Meta close to $2 billion.

Through the war in Ukraine, technology companies are showing how their decisions can affect geopolitics, which is a massive shift from the past. Technology companies have been either dragged into conflicts because of how customers were using their services (e.g., people putting their houses in the West Bank on Airbnb) or have followed the foreign policy of governments (e.g., SpaceX supplying Internet to Iran after the United States removed some sanctions)…(More)”.

Artificial intelligence in government: Concepts, standards, and a unified framework


Paper by Vincent J. Straub, Deborah Morgan, Jonathan Bright, Helen Margetts: “Recent advances in artificial intelligence (AI) and machine learning (ML) hold the promise of improving government. Given the advanced capabilities of AI applications, it is critical that these are embedded using standard operational procedures and clear epistemic criteria, and that they behave in alignment with the normative expectations of society. Scholars in multiple domains have subsequently begun to conceptualize the different forms that AI systems may take, highlighting both their potential benefits and pitfalls. However, the literature remains fragmented, with researchers in social science disciplines like public administration and political science, and the fast-moving fields of AI, ML, and robotics, all developing concepts in relative isolation. Although there are calls to formalize the emerging study of AI in government, a balanced account that captures the full breadth of theoretical perspectives needed to understand the consequences of embedding AI into a public sector context is lacking. Here, we unify efforts across social and technical disciplines by using concept mapping to identify 107 different terms used in the multidisciplinary study of AI. We inductively sort these into three distinct semantic groups, which we label the (a) operational, (b) epistemic, and (c) normative domains. We then build on the results of this mapping exercise by proposing three new multifaceted concepts to study AI-based systems for government (AI-GOV) in an integrated, forward-looking way, which we call (1) operational fitness, (2) epistemic completeness, and (3) normative salience. Finally, we put these concepts to work by using them as dimensions in a conceptual typology of AI-GOV and connecting each with emerging AI technical measurement standards to encourage operationalization, foster cross-disciplinary dialogue, and stimulate debate among those aiming to reshape public administration with AI…(More)”.

Computing the News: Data Journalism and the Search for Objectivity


Book by Sylvain Parasie: “…examines how data journalists and news organizations have navigated the tensions between traditional journalistic values and new technologies. Offering an in-depth analysis of how computing has become part of the daily practices of journalists, this book proposes ways for journalism to evolve in order to serve democratic societies…(More)”.

Democratised and declassified: the era of social media war is here


Essay by David V. Gioe & Ken Stolworthy: “In October 1962, Adlai Stevenson, US ambassador to the United Nations, grilled Soviet Ambassador Valerian Zorin about whether the Soviet Union had deployed nuclear-capable missiles to Cuba. While Zorin waffled (and didn’t know in any case), Stevenson went in for the kill: ‘I am prepared to wait for an answer until Hell freezes over… I am also prepared to present the evidence in this room.’ Stevenson then theatrically revealed several poster-sized photographs from a US U-2 spy plane, showing Soviet missile bases in Cuba, directly contradicting Soviet claims to the contrary. It was the first time that (formerly classified) imagery intelligence (IMINT) had been marshalled as evidence to publicly refute another state in high-stakes diplomacy, but it also revealed the capabilities of US intelligence collection to a stunned audience. 

During the Cuban missile crisis — and indeed until the end of the Cold War — such exquisite airborne and satellite collection was exclusively the purview of the US, UK and USSR. The world (and the world of intelligence) has come a long way in the past 60 years. By the time President Putin launched his ‘special military operation’ in Ukraine in late February 2022, IMINT and geospatial intelligence (GEOINT) was already highly democratised. Commercial satellite companies, such as Maxar or Google Earth, provide high resolution images free of charge. Thanks to such ubiquitous imagery online, anyone could see – in remarkable clarity – that the Russian military was massing on Ukraine’s border. Geolocation-stamped photos and user generated videos uploaded to social media platforms, such as Telegram or TikTok, enabled further refinement of – and confidence in – the view of Russian military activity. And continued citizen collection showed a change in Russian positions over time without waiting for another satellite to pass over the area. Of course, such a show of force was not guaranteed to presage an invasion, but there was no hiding the composition and scale of the build-up. 

Once the Russians actually invaded, there was another key development – the democratisation of near real-time battlefield awareness. In a digitally connected context, everyone can be a sensor or intelligence collector, wittingly or unwittingly. This dispersed and crowd-sourced collection against the Russian campaign was based on the huge number of people taking pictures of Russian military equipment and formations in Ukraine and posting them online. These average citizens likely had no idea what exactly they were snapping a picture of, but established military experts on the internet did. Sometimes within minutes, internet platforms such as Twitter carried thread after thread identifying what the pictures showed and what they revealed, providing what intelligence professionals call Russian ‘order of battle’…(More)”.

Could an algorithm predict the next pandemic?


Article by Simon Makin: “Leap is a machine-learning algorithm that uses sequence data to classify influenza viruses as either avian or human. The model had been trained on a huge number of influenza genomes — including examples of H5N8 — to learn the differences between those that infect people and those that infect birds. But the model had never seen an H5N8 virus categorized as human, and Carlson was curious to see what it made of this new subtype.

Somewhat surprisingly, the model identified it as human with 99.7% confidence. Rather than simply reiterating patterns in its training data, such as the fact that H5N8 viruses do not typically infect people, the model seemed to have inferred some biological signature of compatibility with humans. “It’s stunning that the model worked,” says Carlson. “But it’s one data point; it would be more stunning if I could do it a thousand more times.”
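
The article does not describe Leap’s internals, but the general approach of classifying a virus’s likely host from sequence data can be sketched with generic tools. Below is a toy illustration in Python (an assumption-laden sketch, not the Leap model): each genome fragment is represented by its k-mer composition and a simple classifier is fitted; the sequences and labels are placeholders.

```python
# Toy host-classification sketch: k-mer features + logistic regression.
# This is NOT the Leap model; sequences and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def kmers(seq: str, k: int = 4) -> str:
    """Turn a nucleotide sequence into space-separated k-mer 'words'."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

# Placeholder training data: genome fragments labelled by observed host.
train_seqs = [
    "ATGGAGAAAATAGTGCTTCTTCTTGCAATAGTCAGTCTTGTT",  # hypothetical avian-labelled fragment
    "ATGAAGGCAATACTAGTAGTTCTGCTATATACATTTGCAACC",  # hypothetical human-labelled fragment
]
train_labels = ["avian", "human"]

model = make_pipeline(
    CountVectorizer(),                  # counts each k-mer token
    LogisticRegression(max_iter=1000),  # simple linear classifier
)
model.fit([kmers(s) for s in train_seqs], train_labels)

# A probability near 1.0 for "human" would flag a new sequence as
# human-compatible, analogous to the 99.7% confidence described above.
print(model.predict_proba([kmers("ATGGAGAAAATAGTGCTTCTTTTGCAATAGTCAGTCTTGTT")]))
```

In practice, models of this kind are trained on very large collections of labelled genomes, and the interesting cases are exactly the ones, like H5N8, that sit outside the patterns seen in training.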

The zoonotic process of viruses jumping from wildlife to people causes most pandemics. As climate change and human encroachment on animal habitats increase the frequency of these events, understanding zoonoses is crucial to efforts to prevent pandemics, or at least to be better prepared.

Researchers estimate that around 1% of the mammalian viruses on the planet have been identified [1], so some scientists have attempted to expand our knowledge of this global virome by sampling wildlife. This is a huge task, but over the past decade or so, a new discipline has emerged — one in which researchers use statistical models and machine learning to predict aspects of disease emergence, such as global hotspots, likely animal hosts or the ability of a particular virus to infect humans. Advocates of such ‘zoonotic risk prediction’ technology argue that it will allow us to better target surveillance to the right areas and situations, and guide the development of vaccines and therapeutics that are most likely to be needed.

However, some researchers are sceptical of the ability of predictive technology to cope with the scale and ever-changing nature of the virome. Efforts to improve the models and the data they rely on are under way, but these tools will need to be a part of a broader effort if they are to mitigate future pandemics…(More)”.