Shadowbanning Is Big Tech’s Big Problem


Essay by Gabriel Nicholas: “Sometimes, it feels like everyone on the internet thinks they’ve been shadowbanned. Republican politicians have been accusing Twitter of shadowbanning—that is, quietly suppressing their activity on the site—since at least 2018, when, for a brief period, the service stopped autofilling the usernames of Representatives Jim Jordan, Mark Meadows, and Matt Gaetz, as well as other prominent Republicans, in its search bar. Black Lives Matter activists have been accusing TikTok of shadowbanning since 2020, when, at the height of the George Floyd protests, it sharply reduced how frequently their videos appeared on users’ “For You” pages. …When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online-community moderators to deal with trolls, shitposters, spam bots, and anyone else they deemed harmful: by making their posts invisible to everyone but the posters themselves. But throughout the 2010s, as the social web grew into the world’s primary means of sharing information and as content moderation became infinitely more complicated, the word became more common, and much more muddled. Today, people use shadowban to refer to the wide range of ways platforms may remove or reduce the visibility of their content without telling them….

According to new research I conducted at the Center for Democracy and Technology (CDT), nearly one in 10 U.S. social-media users believes they have been shadowbanned, and most often they believe it is for their political beliefs or their views on social issues. In two dozen interviews I held with people who thought they had been shadowbanned or worked with people who thought they had, I repeatedly heard users say that shadowbanning made them feel not just isolated from online discourse, but targeted, by a sort of mysterious cabal, for breaking a rule they didn’t know existed. It’s not hard to imagine what happens when social-media users believe they are victims of conspiracy…(More)”.

Rethinking gamified democracy as frictional: a comparative examination of the Decide Madrid and vTaiwan platforms


Paper by Yu-Shan Tseng: “Gamification in digital design harnesses game-like elements to create rewarding and competitive systems that encourage desirable user behaviour by influencing users’ bodily actions and emotions. Recently, gamification has been integrated into platforms built to fix democratic problems such as boredom and disengagement in political participation. This paper draws on an ethnographic study of two such platforms – Decide Madrid and vTaiwan – to problematise the universal, techno-deterministic account of digital democracy. I argue that gamified democracy is frictional by nature, a concept borrowed from cultural and social geographies. Incorporating gamification into interface design does not inherently enhance the user’s enjoyment, motivation and engagement through controlling their behaviours. ‘Friction’ in the user experience includes various emotional predicaments and tactical exploitation by more advanced users. Frictional systems in the sphere of digital democracy are neither positive nor negative per se. While they may threaten systemic inclusivity or hinder users’ abilities to organise and implement policy changes, friction can also provide new impetus to advance democratic practices…(More)”.

Governance of the Inconceivable


Essay by Lisa Margonelli: “How do scientists and policymakers work together to design governance for technologies that come with evolving and unknown risks? In the Winter 1985 Issues, seven experts reflected on the possibility of a large nuclear conflict triggering a “nuclear winter.” These experts agreed that the consequences would be horrifying: even beyond radiation effects, for example, burning cities could put enough smoke in the atmosphere to block sunlight, lowering ground temperatures and threatening people, crops, and other living things. In the same issue, former astronaut and then senator John Glenn wrote about the prospects for several nuclear nonproliferation agreements he was involved in negotiating. This broad discussion of nuclear weapons governance in Issues—involving legislators Glenn and then senator Al Gore as well as scientists, Department of Defense officials, and weapons designers—reflected the discourse of the time. In the culture at large, fears of nuclear annihilation became ubiquitous, and today you can easily find danceable playlists containing “38 Essential ’80s Songs About Nuclear Anxiety.”

But with the end of the Cold War, the breakup of the Soviet Union, and the rapid growth of a globalized economy and culture, these conversations receded from public consciousness. Issues has not run an article on nuclear weapons since 2010, when an essay argued that exaggerated fear of nuclear weapons had led to poor policy decisions. “Albert Einstein memorably proclaimed that nuclear weapons ‘have changed everything except our way of thinking,’” wrote political scientist John Mueller. “But the weapons actually seem to have changed little except our way of thinking, as well as our ways of declaiming, gesticulating, deploying military forces, and spending lots of money.”

All these old conversations suddenly became relevant again as our editorial team worked on this issue. On February 27, when Vladimir Putin ordered Russia’s nuclear weapons put on “high alert” after invading Ukraine, United Nations Secretary-General Antonio Guterres declared that “the mere idea of a nuclear conflict is simply unconceivable.” But, in the space of a day, what had long seemed inconceivable was suddenly being very actively conceived….(More)”.

The challenges of protecting data and rights in the metaverse


Article by Urvashi Aneja: “Virtual reality systems work by capturing extensive biological data about a user’s body, including pupil dilation, eye movement, facial expressions, skin temperature, and emotional responses to stimuli. Spending just 20 minutes in a VR simulation leaves nearly 2 million unique recordings of body language.

Existing data protection frameworks are woefully inadequate for dealing with the privacy implications of these technologies. Data collection is involuntary and continuous, rendering the notion of consent almost impossible. Research also shows that a user can be correctly identified, with 95% accuracy, by a machine learning algorithm from just five minutes of VR data stripped of all personally identifiable information. This type of data isn’t covered by most biometrics laws.

But a lot more than individual privacy is at stake. Such data will enable what human rights lawyer Brittan Heller has called “biometric psychography,” referring to the gathering and use of biological data to reveal intimate details about a user’s likes, dislikes, preferences, and interests. In VR experiences, it is not only a user’s outward behavior that is captured, but also their emotional reactions to specific situations, through features such as pupil dilation or changes in facial expression….(More)”
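
To make the re-identification finding above concrete, here is a minimal sketch, assuming only that each VR session can be summarized as a vector of body-telemetry features (head height, arm span, gaze-shift rate, and so on). The feature set, user counts, and synthetic data below are illustrative assumptions, not the cited study’s method or dataset.

```python
# Illustrative sketch: matching anonymized VR sessions back to the users who
# produced them, using only behavioral telemetry. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: 50 users, 20 five-minute sessions each, 12 summary
# features per session (e.g., mean head height, arm span, gaze-shift rate).
n_users, sessions_per_user, n_features = 50, 20, 12
X = np.vstack([
    rng.normal(loc=u, scale=1.0, size=(sessions_per_user, n_features))
    for u in range(n_users)
])
y = np.repeat(np.arange(n_users), sessions_per_user)  # which user made each session

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# No names or account IDs are used: the classifier links sessions to users
# purely from behavioral telemetry, which is the privacy risk described above.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"Re-identification accuracy on held-out sessions: {clf.score(X_test, y_test):.2%}")
```

The article’s 95% figure comes from real motion traces; the toy example only shows why stripping explicit identifiers does not remove the behavioral fingerprint.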

Time to recognize authorship of open data


Nature Editorial: “At times, it seems there’s an unstoppable momentum towards the principle that data sets should be made widely available for research purposes (also called open data). Research funders all over the world are endorsing the open data-management standards known as the FAIR principles (which ensure data are findable, accessible, interoperable and reusable). Journals are increasingly asking authors to make the underlying data behind papers accessible to their peers. Data sets are accompanied by a digital object identifier (DOI) so they can be easily found. And this citability helps researchers to get credit for the data they generate.

But reality sometimes tells a different story. The world’s systems for evaluating science do not (yet) value openly shared data in the same way that they value outputs such as journal articles or books. Funders and research leaders who design these systems accept that there are many kinds of scientific output, but many reject the idea that there is a hierarchy among them.

In practice, those in powerful positions in science tend not to regard open data sets in the same way as publications when it comes to making hiring and promotion decisions or awarding memberships to important committees, or in national evaluation systems. The open-data revolution will stall unless this changes….

Universities, research groups, funding agencies and publishers should, together, start to consider how they could better recognize open data in their evaluation systems. They need to ask: how can those who have gone the extra mile on open data be credited appropriately?

There will always be instances in which researchers cannot be given access to human data. Data from infants, for example, are highly sensitive and need to pass stringent privacy and other tests. Moreover, making data sets accessible takes time and funding that researchers don’t always have. And researchers in low- and middle-income countries have concerns that their data could be used by researchers or businesses in high-income countries in ways that they have not consented to.

But crediting all those who contribute their knowledge to a research output is a cornerstone of science. The prevailing convention — whereby those who make their data open for researchers to use make do with acknowledgement and a citation — needs a rethink. As long as authorship on a paper is significantly more valued than data generation, this will disincentivize making data sets open. The sooner we change this, the better….(More)”.

Artificial intelligence is creating a new colonial world order


Series by Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Maori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.

How Democracies Spy on Their Citizens 


Ronan Farrow at the New Yorker: “…Commercial spyware has grown into an industry estimated to be worth twelve billion dollars. It is largely unregulated and increasingly controversial. In recent years, investigations by the Citizen Lab and Amnesty International have revealed the presence of Pegasus on the phones of politicians, activists, and dissidents under repressive regimes. An analysis by Forensic Architecture, a research group at the University of London, has linked Pegasus to three hundred acts of physical violence. It has been used to target members of Rwanda’s opposition party and journalists exposing corruption in El Salvador. In Mexico, it appeared on the phones of several people close to the reporter Javier Valdez Cárdenas, who was murdered after investigating drug cartels. Around the time that Prince Mohammed bin Salman of Saudi Arabia approved the murder of the journalist Jamal Khashoggi, a longtime critic, Pegasus was allegedly used to monitor phones belonging to Khashoggi’s associates, possibly facilitating the killing, in 2018. (Bin Salman has denied involvement, and NSO said, in a statement, “Our technology was not associated in any way with the heinous murder.”) Further reporting through a collaboration of news outlets known as the Pegasus Project has reinforced the links between NSO Group and anti-democratic states. But there is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”…(More)”.

Why AI Failed to Live Up to Its Potential During the Pandemic


Essay by Bhaskar Chakravorti: “The pandemic could have been the moment when AI made good on its promising potential. There was an unprecedented convergence of the need for fast, evidence-based decisions and large-scale problem-solving with datasets spilling out of every country in the world. Instead, AI failed in myriad specific ways that underscore where this technology is still weak: Bad datasets, embedded bias and discrimination, susceptibility to human error, and a complex, uneven global context all caused critical failures. But these failures also offer lessons on how we can make AI better: 1) we need to find new ways to assemble comprehensive datasets and merge data from multiple sources, 2) there needs to be more diversity in data sources, 3) incentives must be aligned to ensure greater cooperation across teams and systems, and 4) we need international rules for sharing data…(More)”.

Research Handbook of Policy Design


Handbook edited by B. G. Peters and Guillaume Fontaine: “…The difference between policy design and policy making lies in the degree of encompassing consciousness involved in designing, which includes policy formulation, implementation and evaluation. Consequently, there are differences in degrees of consciousness within the same kind of activity, from the simplest expression of “non-design”, which refers to the absence of clear intention or purpose, to “re-design”, which is the most common, incremental way to proceed, to “full design”, which suggests the attempt by government, or some other controlling actor, to control the whole process. There are also differences in kind, from program design (at the micro-level of intervention) to singular policy design, to meta-design when dealing with complex problems that require cross-sectoral coordination. Finally, there are different forms or expressions (technical, political, ideological) and different patterns (transfer, innovation, accident or experiment) of policy design.

Unlike other forms of design, such as engineering or architecture, policy design exhibits specific features because of the social nature of policy targeting and modulation, which involves humans as objects and subjects with their values, conflicts, and other characteristics (Peters, 2018, p. 5). Thus, policy design is the attempt to integrate different understandings of a policy problem with different conceptions of the policy instruments to be utilized, and the different values according to which a government assesses the outcomes pursued by this policy as expected, satisfactory, acceptable, and so forth. Those three components of design – causation, instruments and values – must then be combined to create a coherent plan for intervention. We will define this fourth component of design as “intervention”, meaning that there must be some strategic sense of how to make the newly designed policy work. This component requires not only an understanding of the specific policy being designed but also of how that policy will mesh with the array of policies already operating. Thus, there is a need to think about “meta-design” issues of coordination and coherence, as well as the usual challenges of implementation…(More)”.

Better data for better therapies: The case for building health data platforms


Paper by Matthias Evers, Lucy Pérez, Lucas Robke, and Katarzyna Smietana: “Despite expanding development pipelines, many pharmaceutical companies find themselves focusing on the same limited number of derisked areas and mechanisms of action in, for example, immuno-oncology. This “herding” reflects the challenges of advancing understanding of disease and hence of developing novel therapeutic approaches. The full promise of innovation from data, AI, and ML has not yet materialized.

It is increasingly evident that one of the main reasons for this is insufficient high-quality, interconnected human data that go beyond just genes and corresponding phenotypes—the data needed by scientists to form concepts and hypotheses and by computing systems to uncover patterns too complex for scientists to understand. Only such high-quality human data would allow deployment of AI and ML, combined with human ingenuity, to unravel disease biology and open up new frontiers to prevention and cure. Here, therefore, we suggest a way of overcoming the data impediment and moving toward a systematic, nonreductionist approach to disease understanding and drug development: the establishment of trusted, large-scale platforms that collect and store the health data of volunteering participants. Importantly, such platforms would allow participants to make informed decisions about who could access and use their information to improve the understanding of disease….(More)”.