Opening Up to Open Science


Essay by Chelle Gentemann, Christopher Erdmann and Caitlin Kroeger: “The modern Hippocratic Oath outlines ethical standards that physicians worldwide swear to uphold. “I will respect the hard-won scientific gains of those physicians in whose steps I walk,” one of its tenets reads, “and gladly share such knowledge as is mine with those who are to follow.”

But what form, exactly, should knowledge-sharing take? In the practice of modern science, knowledge in most scientific disciplines is generally shared through peer-reviewed publications at the end of a project. Although publication is both expected and incentivized—it plays a key role in career advancement, for example—many scientists do not take the extra step of sharing data, detailed methods, or code, making it more difficult for others to replicate, verify, and build on their results. Even beyond that, professional science today is full of personal and institutional incentives to hold information closely to retain a competitive advantage.

This way of sharing science has some benefits: peer review, for example, helps to ensure (even if it never guarantees) scientific integrity and prevent inadvertent misuse of data or code. But the status quo also comes with clear costs: it creates barriers (in the form of publication paywalls), slows the pace of innovation, and limits the impact of research. Fast science is increasingly necessary, and with good reason. Technology has not only improved the speed at which science is carried out, but many of the problems scientists study, from climate change to COVID-19, demand urgency. Whether modeling the behavior of wildfires or developing a vaccine, the need for scientists to work together and share knowledge has never been greater. In this environment, the rapid dissemination of knowledge is critical; closed, siloed knowledge slows progress to a degree society cannot afford. Imagine the consequences today if, as in the 2003 SARS disease outbreak, the task of sequencing genomes still took months and tools for labs to share the results openly online didn’t exist. Today’s challenges require scientists to adapt and better recognize, facilitate, and reward collaboration.

Open science is a path toward a collaborative culture that, enabled by a range of technologies, empowers the open sharing of data, information, and knowledge within the scientific community and the wider public to accelerate scientific research and understanding. Yet despite its benefits, open science has not been widely embraced…(More)”

Responsiveness of open innovation to COVID-19 pandemic: The case of data for good


Paper by Francesco Scotti, Francesco Pierri, Giovanni Bonaccorsi, and Andrea Flori: “Due to the COVID-19 pandemic, countries around the world are facing one of the most severe health and economic crises of recent history, and human society is called on to figure out effective responses. However, as current measures have not produced valuable solutions, a multidisciplinary and open approach, enabling collaborations across private and public organizations, is crucial to unleash successful contributions against the disease. Indeed, COVID-19 represents a Grand Challenge for which joining forces and extending disciplinary boundaries have been recognized as the main imperatives. As a consequence, Open Innovation represents a promising path to a fast recovery. In this paper we present a practical application of this approach, showing how knowledge sharing constitutes one of the main drivers in tackling pressing social needs. To demonstrate this, we propose a case study regarding a data sharing initiative promoted by Facebook, the Data For Good program. We leverage a large-scale dataset provided by Facebook to the research community to offer a representation of the evolution of Italian mobility during the lockdown. We show that this repository makes it possible to capture different patterns of movement across the territory at increasing levels of detail. We integrate this information with Open Data provided by the Lombardy region to illustrate how data sharing can also provide insights for private businesses and local authorities. Finally, we show how to interpret Data For Good initiatives in light of the Open Innovation framework and discuss the barriers to adoption faced by public administrations regarding these practices…(More)”.

Shadowbanning Is Big Tech’s Big Problem


Essay by Gabriel Nicholas: “Sometimes, it feels like everyone on the internet thinks they’ve been shadowbanned. Republican politicians have been accusing Twitter of shadowbanning—that is, quietly suppressing their activity on the site—since at least 2018, when, for a brief period, the service stopped autofilling the usernames of Representatives Jim Jordan, Mark Meadows, and Matt Gaetz, as well as other prominent Republicans, in its search bar. Black Lives Matter activists have been accusing TikTok of shadowbanning since 2020, when, at the height of the George Floyd protests, it sharply reduced how frequently their videos appeared on users’ “For You” pages. …When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online-community moderators to deal with trolls, shitposters, spam bots, and anyone else they deemed harmful: by making their posts invisible to everyone but the posters themselves. But throughout the 2010s, as the social web grew into the world’s primary means of sharing information and as content moderation became infinitely more complicated, the word became more common, and much more muddled. Today, people use shadowban to refer to the wide range of ways platforms may remove or reduce the visibility of their content without telling them….

According to new research I conducted at the Center for Democracy and Technology (CDT), nearly one in 10 U.S. social-media users believes they have been shadowbanned, and most often they believe it is for their political beliefs or their views on social issues. In two dozen interviews I held with people who thought they had been shadowbanned or worked with people who thought they had, I repeatedly heard users say that shadowbanning made them feel not just isolated from online discourse, but targeted, by a sort of mysterious cabal, for breaking a rule they didn’t know existed. It’s not hard to imagine what happens when social-media users believe they are victims of conspiracy…(More)”.

Rethinking gamified democracy as frictional: a comparative examination of the Decide Madrid and vTaiwan platforms


Paper by Yu-Shan Tseng: “Gamification in digital design harnesses game-like elements to create rewarding and competitive systems that encourage desirable user behaviour by influencing users’ bodily actions and emotions. Recently, gamification has been integrated into platforms built to fix democratic problems such as boredom and disengagement in political participation. This paper draws on an ethnographic study of two such platforms – Decide Madrid and vTaiwan – to problematise the universal, techno-deterministic account of digital democracy. I argue that gamified democracy is frictional by nature, a concept borrowed from cultural and social geographies. Incorporating gamification into interface design does not inherently enhance the user’s enjoyment, motivation and engagement through controlling their behaviours. ‘Friction’ in the user experience includes various emotional predicaments and tactical exploitation by more advanced users. Frictional systems in the sphere of digital democracy are neither positive nor negative per se. While they may threaten systemic inclusivity or hinder users’ abilities to organise and implement policy changes, friction can also provide new impetus to advance democratic practices…(More)”.

Governance of the Inconceivable


Essay by Lisa Margonelli: “How do scientists and policymakers work together to design governance for technologies that come with evolving and unknown risks? In the Winter 1985 Issues, seven experts reflected on the possibility of a large nuclear conflict triggering a “nuclear winter.” These experts agreed that the consequences would be horrifying: even beyond radiation effects, for example, burning cities could put enough smoke in the atmosphere to block sunlight, lowering ground temperatures and threatening people, crops, and other living things. In the same issue, former astronaut and then senator John Glenn wrote about the prospects for several nuclear nonproliferation agreements he was involved in negotiating. This broad discussion of nuclear weapons governance in Issues—involving legislators Glenn and then senator Al Gore as well as scientists, Department of Defense officials, and weapons designers—reflected the discourse of the time. In the culture at large, fears of nuclear annihilation became ubiquitous, and today you can easily find danceable playlists containing “38 Essential ’80s Songs About Nuclear Anxiety.”

But with the end of the Cold War, the breakup of the Soviet Union, and the rapid growth of a globalized economy and culture, these conversations receded from public consciousness. Issues has not run an article on nuclear weapons since 2010, when an essay argued that exaggerated fear of nuclear weapons had led to poor policy decisions. “Albert Einstein memorably proclaimed that nuclear weapons ‘have changed everything except our way of thinking,’” wrote political scientist John Mueller. “But the weapons actually seem to have changed little except our way of thinking, as well as our ways of declaiming, gesticulating, deploying military forces, and spending lots of money.”

All these old conversations suddenly became relevant again as our editorial team worked on this issue. On February 27, when Vladimir Putin ordered Russia’s nuclear weapons put on “high alert” after invading Ukraine, United Nations Secretary-General Antonio Guterres declared that “the mere idea of a nuclear conflict is simply inconceivable.” But, in the space of a day, what had long seemed inconceivable was suddenly being very actively conceived….(More)”.

The challenges of protecting data and rights in the metaverse


Article by Urvashi Aneja: “Virtual reality systems work by capturing extensive biological data about a user’s body, including pupil dilation, eye movement, facial expressions, skin temperature, and emotional responses to stimuli. Spending just 20 minutes in a VR simulation leaves nearly 2 million unique recordings of body language.

Existing data protection frameworks are woefully inadequate for dealing with the privacy implications of these technologies. Data collection is involuntary and continuous, rendering the notion of consent almost impossible. Research also shows that a user could be correctly identified from just five minutes of VR data, with all personally identifiable information stripped, by a machine learning algorithm with 95% accuracy. This type of data isn’t covered by most biometrics laws.

But a lot more than individual privacy is at stake. Such data will enable what human rights lawyer Brittan Heller has called “biometric psychography,” referring to the gathering and use of biological data to reveal intimate details about a user’s likes, dislikes, preferences, and interests. In VR experiences, it is not only a user’s outward behavior that is captured, but also their emotional reactions to specific situations, through features such as pupil dilation or changes in facial expression….(More)”

Time to recognize authorship of open data


Nature Editorial: “At times, it seems there’s an unstoppable momentum towards the principle that data sets should be made widely available for research purposes (also called open data). Research funders all over the world are endorsing the open data-management standards known as the FAIR principles (which ensure data are findable, accessible, interoperable and reusable). Journals are increasingly asking authors to make the underlying data behind papers accessible to their peers. Data sets are accompanied by a digital object identifier (DOI) so they can be easily found. And this citability helps researchers to get credit for the data they generate.

But reality sometimes tells a different story. The world’s systems for evaluating science do not (yet) value openly shared data in the same way that they value outputs such as journal articles or books. Funders and research leaders who design these systems accept that there are many kinds of scientific output, but many reject the idea that there is a hierarchy among them.

In practice, those in powerful positions in science tend not to regard open data sets in the same way as publications when it comes to making hiring and promotion decisions or awarding memberships to important committees, or in national evaluation systems. The open-data revolution will stall unless this changes….

Universities, research groups, funding agencies and publishers should, together, start to consider how they could better recognize open data in their evaluation systems. They need to ask: how can those who have gone the extra mile on open data be credited appropriately?

There will always be instances in which researchers cannot be given access to human data. Data from infants, for example, are highly sensitive and need to pass stringent privacy and other tests. Moreover, making data sets accessible takes time and funding that researchers don’t always have. And researchers in low- and middle-income countries have concerns that their data could be used by researchers or businesses in high-income countries in ways that they have not consented to.

But crediting all those who contribute their knowledge to a research output is a cornerstone of science. The prevailing convention — whereby those who make their data open for researchers to use make do with acknowledgement and a citation — needs a rethink. As long as authorship on a paper is significantly more valued than data generation, this will disincentivize making data sets open. The sooner we change this, the better….(More)”.

Artificial intelligence is creating a new colonial world order


Series by Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Maori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.

How Democracies Spy on Their Citizens 


Ronan Farrow at the New Yorker: “…Commercial spyware has grown into an industry estimated to be worth twelve billion dollars. It is largely unregulated and increasingly controversial. In recent years, investigations by the Citizen Lab and Amnesty International have revealed the presence of Pegasus on the phones of politicians, activists, and dissidents under repressive regimes. An analysis by Forensic Architecture, a research group at the University of London, has linked Pegasus to three hundred acts of physical violence. It has been used to target members of Rwanda’s opposition party and journalists exposing corruption in El Salvador. In Mexico, it appeared on the phones of several people close to the reporter Javier Valdez Cárdenas, who was murdered after investigating drug cartels. Around the time that Prince Mohammed bin Salman of Saudi Arabia approved the murder of the journalist Jamal Khashoggi, a longtime critic, Pegasus was allegedly used to monitor phones belonging to Khashoggi’s associates, possibly facilitating the killing, in 2018. (Bin Salman has denied involvement, and NSO said, in a statement, “Our technology was not associated in any way with the heinous murder.”) Further reporting through a collaboration of news outlets known as the Pegasus Project has reinforced the links between NSO Group and anti-democratic states. But there is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”…(More)”.

Why AI Failed to Live Up to Its Potential During the Pandemic


Essay by Bhaskar Chakravorti: “The pandemic could have been the moment when AI made good on its promising potential. There was an unprecedented convergence of the need for fast, evidence-based decisions and large-scale problem-solving with datasets spilling out of every country in the world. Instead, AI failed in myriad specific ways that underscore where this technology is still weak: bad datasets, embedded bias and discrimination, susceptibility to human error, and a complex, uneven global context all caused critical failures. But these failures also offer lessons on how we can make AI better: 1) we need to find new ways to assemble comprehensive datasets and merge data from multiple sources, 2) there needs to be more diversity in data sources, 3) incentives must be aligned to ensure greater cooperation across teams and systems, and 4) we need international rules for sharing data…(More)”.