Politics and Open Science: How the European Open Science Cloud Became Reality (the Untold Story)


Jean-Claude Burgelman at Data Intelligence: “This article will document how the European Open Science Cloud (EOSC) emerged as one of the key policy intentions to foster Open Science (OS) in Europe. It will describe some of the typical, non-rational roadblocks on the way to implementing EOSC. The article will also argue that the only way Europe can take care of its research data in a way that fully fits European specificities is by supporting EOSC.

It is fair to say—note the word FAIR here—that realizing the European Open Science Cloud (EOSC) is now part and parcel of European Data Science (DS) policy, particularly since, from 2021, EOSC will be in the hands of the independent EOSC Association and thus potentially well outside the so-called “Brussels Bubble”.

This article will document the whole story of how EOSC emerged in this “bubble” as one of the policy intentions to foster Open Science (OS) in Europe. In addition, it will describe some of the typical, non-rational roadblocks on the way to implementing EOSC. The article will also argue that the only way Europe can take care of its research data in a way that fully fits European specificities is by supporting EOSC….(More)”

How a Google Street View image of your house predicts your risk of a car accident


MIT Technology Review: “Google Street View has become a surprisingly useful way to learn about the world without stepping into it. People use it to plan journeys, to explore holiday destinations, and to virtually stalk friends and enemies alike.

But researchers have found more insidious uses. In 2017 a team of researchers used the images to study the distribution of car types in the US and then used that data to determine the demographic makeup of the country. It turns out that the car you drive is a surprisingly reliable proxy for your income level, your education, your occupation, and even the way you vote in elections.

Street view of houses in Poland

Now a different group has gone even further. Łukasz Kidziński at Stanford University in California and Kinga Kita-Wojciechowska at the University of Warsaw in Poland have used Street View images of people’s houses to determine how likely they are to be involved in a car accident. That’s valuable information that an insurance company could use to set premiums.

The result raises important questions about the way personal information can leak from seemingly innocent data sets and whether organizations should be able to use it for commercial purposes.

Insurance data

The researchers’ method is straightforward. They began with a data set of 20,000 records of people who had taken out car insurance in Poland between 2013 and 2015. These were randomly selected from the database of an undisclosed insurance company.

Each record included the address of the policyholder and the number of damage claims he or she made during the 2013–15 period. The insurer also shared its own prediction of future claims, calculated using its state-of-the-art risk model that takes into account the policyholder’s zip code and the driver’s age, sex, claim history, and so on.

The question that Kidziński and Kita-Wojciechowska investigated is whether they could make a more accurate prediction using a Google Street View image of the policyholder’s house….(More)”.
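The study’s setup can be sketched in miniature: the insurer’s baseline prediction (from zip code, age, sex, and claim history) is combined with features annotated from a Street View image of the policyholder’s house. The feature names, coefficients, and logistic form below are invented for illustration only; they are not the authors’ actual model.

```python
import math

def augmented_claim_risk(baseline_log_odds, house_features, weights):
    """Hypothetical sketch: add image-derived house features to an insurer's
    baseline score in a simple logistic model, returning a claim probability."""
    z = baseline_log_odds + sum(weights[f] * v for f, v in house_features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented example: a baseline of -2.0 log-odds is roughly a 12% claim probability.
baseline = -2.0  # hypothetical output of the insurer's own risk model
features = {"detached_house": 1.0, "poor_condition": 1.0}  # annotated from the image
weights = {"detached_house": 0.3, "poor_condition": 0.5}   # illustrative only

print(round(augmented_claim_risk(baseline, features, weights), 3))
```

The point of the sketch is only the architecture: image-derived variables enter alongside, not instead of, the insurer’s existing predictors, so any gain in accuracy is attributable to the house imagery.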

Mechanisms of power inscription into IT governance: Lessons from two national digital identity systems


Paper by Rony Medaglia, Ben Eaton, Jonas Hedman, and Edgar A. Whitley: “Establishing IT governance arrangements is a deeply political process, where relationships of power play a crucial role. While the importance of power relationships is widely acknowledged in IS literature, the specific mechanisms whereby the consequences of power relationships affect IT governance arrangements are still under-researched. This study investigates the way power relationships are inscribed in the governance of digital identity systems in Denmark and the United Kingdom, where public and private actors are involved. Drawing on the theoretical lens of circuits of power, we contribute to research on the role of power in IT governance by identifying two distinct mechanisms of power inscription into IT governance: power cultivation and power limitation….(More)”.

Monitoring the R-Citizen in the Time of Coronavirus


Paper by John Flood and Monique Lewis: “The COVID pandemic has overwhelmed many countries in their attempts at tracking and tracing people infected with the disease. Our paper examines how tracking and tracing is done, looking at both manual and technological means, and raises the surrounding issues of efficiency and privacy. The paper investigates more closely the approaches taken by two countries, namely Taiwan and the UK. It shows how tracking and tracing can be handled sensitively and openly, in contrast to the bungled attempts of the UK, which have led to the highest death toll in Europe. The key message is that all communications around tracking and tracing need to be open, clear, free of confusion, and delivered by those closest to the communities receiving them. This occurred in Taiwan, but in the UK the central government chose to shut out local government and other local resources. The government’s highly centralised, dirigiste approach alienated much of the population, who came to distrust it; as local government was later brought into the COVID fold, the messaging improved. Taiwan always remained open in its communications, even allowing citizens to participate in improving the technology around COVID. Taiwan learnt from its earlier experience with SARS, whereas the UK ignored its pandemic planning exercises from earlier years and even experimented with crude ideas of herd immunity by letting the disease rip through the population, an idea soon abandoned.

We also derive a new type of citizen from the pandemic, namely the R citizen. This unfortunate archetype is both a blessing and a curse: if the citizen’s score rises above 1 the disease accelerates and the R citizen is chastised, whereas if the score declines to zero the disease disappears, but the citizen receives no plaudits for their behaviour. The R citizen can neither exist nor die, rather like Schrödinger’s cat. R citizens are, of course, datafied individuals, assemblages of data treated as distinct from humans. We argue they cannot be so distinguished without rendering them inhuman. This is as much a moral category as it is a scientific one….(More)”.
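The arithmetic behind the archetype is simple: each “generation” of infection multiplies the current case count by R, so any R above 1 compounds upwards while any R below 1 decays towards zero. A toy illustration (numbers invented):

```python
def case_generations(initial_cases, r, n_generations):
    """Project case counts forward: each generation multiplies cases by R."""
    counts = [float(initial_cases)]
    for _ in range(n_generations):
        counts.append(counts[-1] * r)
    return counts

print(case_generations(100, 1.5, 3))  # R above 1: cases grow each generation
print(case_generations(100, 0.5, 3))  # R below 1: cases shrink towards zero
```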

Governance models for redistribution of data value


Essay by Maria Savona: “The growth of interest in personal data has been unprecedented. Issues of privacy violation, abuse of power, and the practices of electoral manipulation unveiled in the Cambridge Analytica scandal, together with a sense of imminent impingement on our democracies, are at the forefront of policy debates. Yet these concerns seem to overlook the concentration of equity value (stemming from data value, which I use interchangeably here) that underpins the current structure of big tech business models. Whilst these quasi-monopolies own the digital infrastructure, they do not own the personal data that provide the raw material for data analytics.

The European Commission has been at the forefront of global action to promote convergence of the governance of data (privacy), including, but not limited to, the General Data Protection Regulation (GDPR) (European Commission 2016), enforced in May 2018. Attempts to enforce similar regulations are emerging around the world, including the California Consumer Privacy Act, which came into effect on 1 January 2020. Notwithstanding greater awareness among citizens around the use of their data, companies find that complying with GDPR is, at best, a useless nuisance. 

Data have been seen as ‘innovation investment’ since the beginning of the 1990s. The first edition of the Oslo Manual, the OECD’s international guidelines for collecting and using data on innovation in firms, dates back to 1992 and included the collection of databases on employee best practices as innovation investments. Data are also measured as an ‘intangible asset’ (Corrado et al. 2009 was one of the pioneering studies). What has changed over the last decade? The scale of data generation today is such that its management and control might have already gone well beyond the capacity of the very tech giants we are all feeding. Concerns around data governance and data privacy might be too little and too late.

In this column, I argue that economists have failed twice: first, to predict the massive concentration of data value in the hands of large platforms; and second, to account for the complexity of the political economy aspects of data accumulation. Based on a pair of recent papers (Savona 2019a, 2019b), I systematise recent research and propose a novel data rights approach to redistribute data value whilst not undermining the range of ethical, legal, and governance challenges that this poses….(More)”.

Digital contention in a divided society


Book by Paul Reilly: “How are platforms such as Facebook and Twitter used by citizens to frame contentious parades and protests in ‘post-conflict’ Northern Ireland? What do these contentious episodes tell us about the potential of information and communication technologies to promote positive intergroup contact in this deeply divided society?

These issues are addressed in what is the first in-depth qualitative exploration of how social media were used during the union flag protests (December 2012–March 2013) and the Ardoyne parade disputes (July 2014 and 2015). The book focuses on the extent to which affective publics, mobilised and connected via expressions of solidarity on social media, appear to escalate or de-escalate sectarian tensions caused by these hybrid media events. It also explores whether citizen activity on these online platforms has the potential to contribute to peacebuilding in Northern Ireland….(More)”.

Scholarly publishing needs regulation


Essay by Jean-Claude Burgelman: “The world of scientific communication has changed significantly over the past 12 months. Understandably, the amazing mobilisation of research and scholarly publishing in an effort to mitigate the effects of Covid-19 and find a vaccine has overshadowed everything else. But two other less-noticed events could also have profound implications for the industry and the researchers who rely on it.

On 10 January 2020, Taylor and Francis announced its acquisition of one of the most innovative small open-access publishers, F1000 Research. A year later, on 5 January 2021, another of the big commercial scholarly publishers, Wiley, paid nearly $300 million for Hindawi, a significant open-access publisher in London.

These acquisitions come alongside rapid change in publishers’ functions and business models. Scientific publishing is no longer only about publishing articles. It’s a knowledge industry—and it’s increasingly clear it needs to be regulated like one.

The two giant incumbents, Springer Nature and Elsevier, are already a long way down the road to open access, and have built up impressive in-house capacity. But Wiley, and Taylor and Francis, had not. That’s why they decided to buy young open-access publishers. Buying up a smaller, innovative competitor is a well-established way for an incumbent in any industry to expand its reach, gain the ability to do new things and reinvent its business model—it’s why Facebook bought WhatsApp and Instagram, for example.

New regulatory approach

To understand why this dynamic demands a new regulatory approach in scientific publishing, we need to set such acquisitions alongside a broader perspective of the business’s transformation into a knowledge industry. 

Monopolies, cartels and oligopolies in any industry are a cause for concern. By reducing competition, they stifle innovation and push up prices. But for science, the implications of such a course are particularly worrying. 

Science is a common good. Its products—and especially its spillovers, the insights and applications that cannot be monopolised—are vital to our knowledge societies. This means that having four companies control the worldwide production of car tyres, as they do, has very different implications to an oligopoly in the distribution of scientific outputs. The latter situation would give the incumbents a tight grip on the supply of knowledge.

Scientific publishing is not yet a monopoly, but Europe at least is witnessing the emergence of an oligopoly, in the shape of Elsevier, Springer Nature, Wiley, and Taylor and Francis. The past year’s acquisitions have left only two significant independent players in open-access publishing—Frontiers and MDPI, both based in Switzerland….(More)”.

A recommendation and risk classification system for connecting rough sleepers to essential outreach services


Paper by Harrison Wilde et al: “Rough sleeping is a chronic experience faced by some of the most disadvantaged people in modern society. This paper describes work carried out in partnership with Homeless Link (HL), a UK-based charity, in developing a data-driven approach to better connect people sleeping rough on the streets with outreach service providers. HL’s platform has grown exponentially in recent years, leading to thousands of alerts per day during extreme weather events; this overwhelms the volunteer-based system they currently rely upon to process alerts. To solve this problem, we propose a human-centered machine learning system that augments the volunteers’ efforts by prioritizing alerts based on the likelihood of making a successful connection with a rough sleeper. This addresses capacity and resource limitations whilst allowing HL to quickly, effectively, and equitably process all of the alerts they receive. Initial evaluation using historical data shows that our approach increases the rate at which rough sleepers are found following a referral by at least 15% based on labeled data, implying a greater overall increase once alerts with unknown outcomes are considered, and suggesting the benefit of a trial over a longer period to assess the models in practice. The discussion and modeling process is conducted with careful consideration of ethics, transparency, and explainability, owing to the sensitive nature of the data involved and the vulnerability of the people affected….(More)”.
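The prioritisation step described in the abstract can be sketched as a simple ranking: score each incoming alert with the model’s estimated probability of a successful connection, then process alerts in descending order of that score under a capacity limit. The field names and probabilities below are invented placeholders, not HL’s actual data or the authors’ trained classifier.

```python
def prioritise_alerts(alerts, capacity):
    """Rank alerts by estimated probability of a successful connection and
    return the top ones that fit within the available outreach capacity."""
    ranked = sorted(alerts, key=lambda a: a["p_connect"], reverse=True)
    return ranked[:capacity]

# Invented example alerts; in practice p_connect would come from the classifier.
alerts = [
    {"id": "a1", "p_connect": 0.20},
    {"id": "a2", "p_connect": 0.85},
    {"id": "a3", "p_connect": 0.55},
]
print([a["id"] for a in prioritise_alerts(alerts, capacity=2)])
```

The design choice worth noting is that ranking degrades gracefully: when volume is low, every alert is still processed; when volume spikes during extreme weather, the scarce volunteer capacity is spent on the alerts most likely to result in a connection.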

The Rise of Urban Commons


Blogpost by Alessandra Quarta and Antonio Vercellone: “In the last ten years, the concept of the commons has become popular in social studies and political activism, and in some countries domestic lawyers have shared the interest in this notion. Even if a (existing or proposed) statutory definition of the commons is still very rare, lawyers become familiar with the concept through the filter of property law, where it has been quite discredited. In fact, when approaching property law, many students in different legal traditions learn an account of the origins of property rights that revolves around the “tragedy of the commons”, the “parable” made famous by Garrett Hardin in the late nineteen-sixties. According to this widespread narrative, the impossibility of avoiding the over-exploitation of resources managed under an open-access regime makes it necessary to allocate private property rights. In this classic argument, the commons appear in a negative light: they represent the impossibility for a community to manage shared resources without concentrating all decision-making power in the hands of a single owner or a central government. Moreover, they represent the wasteful inefficiency of the feudal world.

This vision dominated social and economic studies until 1990, when Elinor Ostrom published her famous book Governing the Commons, offering the results of her research on resources managed by communities in different parts of the world. Ostrom, awarded the Nobel Prize in 2009, demonstrated that the commons are not necessarily a tragedy or a lawless space. In fact, local communities generally define principles for their governance and sharing in a resilient way, preventing the tragedy from occurring. Moreover, Ostrom defined a set of principles for checking whether the commons are managed efficiently and can compete with both private and public arrangements of resource management.

Later on, from an institutional perspective, the commons became a tool for contesting mainstream political and economic dogmas, including the supposedly unquestionable efficiency of both the market and private property in allocating resources. The search for new tools for managing resources has been carried out in several experiments, generally at the local and urban level: scholars and practitioners describe these experiences as ‘urban commons’….(More)”.

From Journalistic Ethics To Fact-Checking Practices: Defining The Standards Of Content Governance In The Fight Against Disinformation


Paper by Paolo Cavaliere: “This article claims that the practices undertaken by digital platforms to counter disinformation, under the EU Action Plan against Disinformation and the Code of Practice, mark a shift in the governance of news media content. While professional journalism standards have long been used, both within and outside the industry, to assess the accuracy of news content and adjudicate on media conduct, the platforms are now resorting to different fact-checking routines to moderate and curate their content.
The article will demonstrate how fact-checking organisations have different working methods from news operators and ultimately understand and assess ‘accuracy’ in different ways. As a result, this new and enhanced role for platforms and fact-checkers as curators of content affects how content is distributed to the audience and, thus, media freedom. Depending on how fact-checking standards and working routines consolidate in the near future, however, this trend offers a real opportunity to improve the quality of news and the right to receive information…(More)”.