Clément Mabi in Réseaux: “This paper posits that digital participatory democracy can be seen as a new anchor of participatory governmentality. Conveniently called “digital democracy”, its implementation contributes to the spread of a particular conception of government through participation, influenced by digital literacy and its principles of self-organization and interactivity. By studying the deployment and trajectory of the so-called “civic tech” movement in France, the aim is to show that the project of democratic openness embodied by the movement has gradually narrowed down to a logic of services, for the purposes of institutions. The “great national debate” triggered a shift in this trajectory. While part of the community complied with the government’s request to facilitate participation, the debate also gave unprecedented visibility to critics who contributed to the emergence of a different view of the role of digital technologies in democracy….(More)“.
Book edited by Fred S. Roberts and Igor A. Sheremet: “The growth of a global digital economy has enabled rapid communication, instantaneous movement of funds, and availability of vast amounts of information. With this come challenges such as the vulnerability of digitalized sociotechnological systems (STSs) to destructive events (earthquakes, disease events, terrorist attacks). Similar issues arise for disruptions to complex linked natural and social systems (from changing climates, evolving urban environments, etc.). This book explores new approaches to the resilience of sociotechnological and natural-social systems in a digital world of big data, extraordinary computing capacity, and rapidly developing methods of Artificial Intelligence….
The world-wide COVID-19 pandemic illustrates the vulnerability of our healthcare systems, supply chains, and social infrastructure, and confronts our notions of what makes a system resilient. We have found that use of AI tools can lead to problems when unexpected events occur. On the other hand, the vast amounts of data available from sensors, satellite images, social media, etc. can also be used to make modern systems more resilient.
Papers in the book explore disruptions of complex networks and algorithms that minimize departure from a previous state after a disruption; introduce a multigrammatical framework for the technological and resource bases of today’s large-scale industrial systems and the transformations resulting from disruptive events; and explain how robotics can enhance pre-emptive measures or post-disaster responses to increase resiliency. Other papers explore current directions in data processing and handling and principles of FAIRness in data; how the availability of large amounts of data can aid in the development of resilient STSs and challenges to overcome in doing so. The book also addresses interactions between humans and built environments, focusing on how AI can inform today’s smart and connected buildings and make them resilient, and how AI tools can increase resilience to misinformation and its dissemination….(More)”.
Cal Newport at The New Yorker: “In early 2017, a French labor law went into effect that attempted to preserve the so-called right to disconnect. Companies with fifty or more employees were required to negotiate specific policies about the use of e-mail after work hours, with the goal of reducing the time that workers spent in their in-boxes during the evening or over the weekend. Myriam El Khomri, the minister of labor at the time, justified the new law, in part, as a necessary step to reduce burnout. The law is unwieldy, but it points toward a universal problem, one that’s become harder to avoid during the recent shift toward a more frenetic and improvisational approach to work: e-mail is making us miserable.
To study the effects of e-mail, a team led by researchers from the University of California, Irvine, hooked up forty office workers to wireless heart-rate monitors for around twelve days. They recorded the subjects’ heart-rate variability, a common technique for measuring mental stress. They also monitored the employees’ computer use, which allowed them to correlate e-mail checks with stress levels. What they found would not surprise the French. “The longer one spends on email in [a given] hour the higher is one’s stress for that hour,” the authors noted. In another study, researchers placed thermal cameras below each subject’s computer monitor, allowing them to measure the tell-tale “heat blooms” on a person’s face that indicate psychological distress. They discovered that batching in-box checks—a commonly suggested “solution” for improving one’s experience with e-mail—is not necessarily a panacea. For those people who scored highly in the trait of neuroticism, batching e-mails actually made them more stressed, perhaps because of worry about all of the urgent messages they were ignoring. The researchers also found that people answered e-mails more quickly when under stress but with less care—a text-analysis program called Linguistic Inquiry and Word Count revealed that these anxious e-mails were more likely to contain words that expressed anger. “While email use certainly saves people time and effort in communicating, it also comes at a cost,” the authors of the two studies concluded. Their recommendation? That organizations “make a concerted effort to cut down on email traffic.”
Other researchers have found similar connections between e-mail and unhappiness. A study, published in 2019, looked at long-term trends in the health of a group of nearly five thousand Swedish workers. It found that repeated exposure to “high information and communication technology demands” (translation: a need to be constantly connected) was associated with “suboptimal” health outcomes. This trend persisted even after the researchers adjusted the statistics for potential complicating factors such as age, sex, socioeconomic status, health behavior, body-mass index, job strain, and social support. Of course, we don’t really need data to capture something that so many of us feel intuitively. I recently surveyed the readers of my blog about e-mail. “It’s slow and very frustrating. . . . I often feel like email is impersonal and a waste of time,” one respondent said. “I’m frazzled—just keeping up,” another admitted. Some went further. “I feel an almost uncontrollable need to stop what I’m doing to check email,” one person reported. “It makes me very depressed, anxious and frustrated.”…(More)”
European Parliament Think Tank: “Given the central role that online platforms (OPs) play in the digital economy, questions arise about their responsibility in relation to illegal/harmful content or products hosted in the frame of their operation. Against this background, this study reviews the main legal/regulatory challenges associated with OP operations and analyses the incentives for OPs, their users and third parties to detect and remove illegal/harmful and dangerous material, content and/or products. To create a functional classification which can be used for regulatory purposes, it discusses the notion of OPs and attempts to categorise them under multiple criteria. The study then maps and critically assesses the whole range of OP liabilities, taking hard and soft law, self-regulation and national legislation into consideration, whenever relevant. Finally, the study puts forward policy options for an efficient EU liability regime: (i) maintaining the status quo; (ii) awareness-raising and media literacy; (iii) promoting self-regulation; (iv) establishing co-regulation mechanisms and tools; (v) adopting statutory legislation; (vi) modifying OPs’ secondary liability by employing two different models – (a) by clarifying the conditions for liability exemptions provided by the e-Commerce Directive or (b) by establishing a harmonised regime of liability….(More)”.
Nesta Report by Sinead Mac Manus and Alice Clay: “The last decade has seen exponential growth in the amount of data generated, collected and analysed to provide insights across all aspects of industry. Healthcare is no exception. We are increasingly seeing the value of using health and care data to prevent ill health, improve health outcomes for people and provide new insights into disease and treatments.
Bringing together common themes across the existing research, this report sets out two interlinked challenges to building a data-driven health and care system. This is interspersed with best practice examples of the potential of data to improve health and care, as well as cautionary tales of what can happen when this is done badly.
The first challenge we explore is how to increase citizens’ trust and transparency in data sharing. The second challenge is how to unlock the value of health and care data.
We are excited about the role for participatory futures – a set of techniques that systematically engage people to imagine and create more sustainable, inclusive futures – in helping governments and other organisations work with citizens to engage them in debate about their health and care data to build a data-driven health and care system for the benefit of all….(More)”.
Essay by Yuval Noah Harari in the Financial Times: “…The Covid year has exposed an even more important limitation of our scientific and technological power. Science cannot replace politics. When we come to decide on policy, we have to take into account many interests and values, and since there is no scientific way to determine which interests and values are more important, there is no scientific way to decide what we should do.
For example, when deciding whether to impose a lockdown, it is not sufficient to ask: “How many people will fall sick with Covid-19 if we don’t impose the lockdown?”. We should also ask: “How many people will experience depression if we do impose a lockdown? How many people will suffer from bad nutrition? How many will miss school or lose their job? How many will be battered or murdered by their spouses?”
Even if all our data is accurate and reliable, we should always ask: “What do we count? Who decides what to count? How do we evaluate the numbers against each other?” This is a political rather than scientific task. It is politicians who should balance the medical, economic and social considerations and come up with a comprehensive policy.
Similarly, engineers are creating new digital platforms that help us function in lockdown, and new surveillance tools that help us break the chains of infection. But digitalisation and surveillance jeopardise our privacy and open the way for the emergence of unprecedented totalitarian regimes. In 2020, mass surveillance has become both more legitimate and more common. Fighting the epidemic is important, but is it worth destroying our freedom in the process? It is the job of politicians rather than engineers to find the right balance between useful surveillance and dystopian nightmares.
Three basic rules can go a long way in protecting us from digital dictatorships, even in a time of plague. First, whenever you collect data on people — especially on what is happening inside their own bodies — this data should be used to help these people rather than to manipulate, control or harm them. My personal physician knows many extremely private things about me. I am OK with it, because I trust my physician to use this data for my benefit. My physician shouldn’t sell this data to any corporation or political party. It should be the same with any kind of “pandemic surveillance authority” we might establish….(More)”.
Oliver Dowden at the Financial Times: “As you read this, thousands of people are receiving a message that will change their lives: a simple email or text, inviting them to book their Covid jab. But what has powered the UK’s remarkable vaccine rollout isn’t just our NHS, but the data that sits underneath it — from the genetic data used to develop the vaccine right through to the personal health data enabling that “ping” on their smartphone.
After years of seeing data solely through the lens of risk, Covid-19 has taught us just how much we have to lose when we don’t use it.
As I launch the competition to find the next Information Commissioner, I want to set out a bold new approach that capitalises on all we’ve learnt during the pandemic, which forced us to share data quickly, efficiently and responsibly for the public good. It is one that no longer sees data as a threat, but as the great opportunity of our time.
Until now, the conversation about data has revolved around privacy — and with good reason. A person’s digital footprint can tell you not just vital statistics like age and gender, but their personal habits.
Our first priority is securing this valuable personal information. The UK has a long and proud tradition of defending privacy, and a commitment to maintaining world-class data protection standards now that we’re outside the EU. That was recognised last week in the bloc’s draft decisions on the “adequacy” of our data protection rules — the agreement that data can keep flowing freely between the EU and UK.
We fully intend to maintain those world-class standards. But to do so, we do not need to copy and paste the EU’s rule book, the General Data Protection Regulation (GDPR), word-for-word. Countries as diverse as Israel and Uruguay have successfully secured adequacy with Brussels despite having their own data regimes. Not all of those were identical to GDPR, but equal doesn’t have to mean the same. The EU doesn’t hold the monopoly on data protection.
So, having come a long way in learning how to manage data’s risks, the UK is going to start making more of its opportunities….(More)”.
Essay by Sun-ha Hong: “In a society beset with black-boxed algorithms and vast surveillance systems, transparency is often hailed as liberal democracy’s superhero. It’s a familiar story: inject the public with information to digest, then await their rational deliberation and improved decision making. Whether in discussions of facial recognition software or platform moderation, we run into the argument that transparency will correct the harmful effects of algorithmic systems. The trouble is that in our movies and comic books, superheroes are themselves deus ex machina: black boxes designed to make complex problems disappear so that the good guys can win. Too often, transparency is asked to save the day on its own, under the assumption that disinformation or abuse of power can be shamed away with information.
Transparency without adequate support, however, can quickly become fuel for speculation and misunderstanding….
All this is part of a broader pattern in which the very groups who should be held accountable by the data tend to be its gatekeepers. Facebook is notorious for transparency-washing strategies, in which it dangles data access like a carrot but rarely follows through in actually delivering it. When researchers worked to create more independent means of holding Facebook accountable — as New York University’s Ad Observatory did last year, using volunteer researchers to build a public database of ads on the platform — Facebook threatened to sue them. Despite the lofty rhetoric around Facebook’s Oversight Board (often described as a “Supreme Court” for the platform), it falls into the same trap of transparency without power: the scope is limited to individual cases of content moderation, with no binding authority over the company’s business strategy, algorithmic design, or even similar moderation cases in the future.
Here, too, the real bottleneck is not information or technology, but power: the legal, political and economic pressure necessary to compel companies like Facebook to produce information and to act on it. We see this all too clearly when ordinary people do take up this labour of transparency, and attempt to hold technological systems accountable. In August 2020, Facebook users reported the Kenosha Guard group more than 400 times for incitement of violence. But Facebook declined to take any action until an armed shooter travelled to Kenosha, Wisconsin, and killed two protesters. When transparency is compromised by the concentration of power, it is often the vulnerable who are asked to make up the difference — and then to pay the price.
Transparency cannot solve our problems on its own. In his book The Rise of the Right to Know, journalism scholar Michael Schudson argues that transparency is better understood as a “secondary or procedural morality”: a tool that only becomes effective by other means. We must move beyond the pernicious myth of transparency as a universal solution, and address the distribution of economic and political power that is the root cause of technologically amplified irrationality and injustice….(More)”.
Andrew Zahuranec, Andrew Young and Stefaan G. Verhulst at the OECD Participo Blog Series:
“What does the public expect from data-driven responses to the COVID-19 pandemic? And under what conditions?” These are the motivating questions behind The Data Assembly, a recent initiative by The GovLab at New York University Tandon School of Engineering — an action research center that aims to help institutions work more openly, collaboratively, effectively, and legitimately.
Launched with support from The Henry Luce Foundation, The Data Assembly solicited diverse, actionable public input on data re-use for crisis response in the United States. In particular, we sought to engage the public on how to facilitate, if deemed acceptable, the re-use of data collected for one purpose to inform the COVID-19 response. One additional objective was to inform the broader emergence of data collaboration — through formal and ad hoc arrangements between the public sector, civil society, and those in the private sector — by evaluating public expectation and concern with current institutional, contractual, and technical structures and instruments that may underpin these partnerships.
The Data Assembly used a new methodology that re-imagines how organisations can engage with society to better understand local expectations regarding data re-use and related issues. This work goes beyond soliciting input from just the “usual suspects”. Instead, data assemblies provide a forum for a much more diverse set of participants to share their insights and voice their concerns.
This article is informed by our experience piloting The Data Assembly in New York City in summer 2020. It provides an overview of The Data Assembly’s methodology and outcomes and describes major elements of the effort to support organisations working on similar issues in other cities, regions, and countries….(More)”.
Book edited by Ann Blair, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton: “Thanks to modern technological advances, we now enjoy seemingly unlimited access to information. Yet how did information become so central to our everyday lives, and how did its processing and storage make our data-driven era possible? This volume is the first to consider these questions in comprehensive detail, tracing the global emergence of information practices, technologies, and more, from the premodern era to the present. With entries spanning archivists to algorithms and scribes to surveilling, this is the ultimate reference on how information has shaped and been shaped by societies.
Written by an international team of experts, the book’s inspired and original long- and short-form contributions reconstruct the rise of human approaches to creating, managing, and sharing facts and knowledge. Thirteen full-length chapters discuss the role of information in pivotal epochs and regions, with chief emphasis on Europe and North America, but also substantive treatment of other parts of the world as well as current global interconnections. More than 100 alphabetical entries follow, focusing on specific tools, methods, and concepts—from ancient coins to the office memo, and censorship to plagiarism. The result is a wide-ranging, deeply immersive collection that will appeal to anyone drawn to the story behind our modern mania for an informed existence….(More)”.