Global Fishing Watch: Pooling Data and Expertise to Combat Illegal Fishing


Data Collaborative Case Study by Michelle Winowatan, Andrew Young, and Stefaan Verhulst: “Global Fishing Watch, originally set up through a collaboration between Oceana, SkyTruth and Google, is an independent nonprofit organization dedicated to advancing responsible stewardship of our oceans through increased transparency in fishing activity and scientific research. Using big data processing and machine learning, Global Fishing Watch visualizes, tracks, and shares data about global fishing activity in near-real time and for free via its public map. To date, the platform tracks approximately 65,000 commercial fishing vessels globally. These insights have been used in a number of academic publications, ocean advocacy efforts, and law enforcement activities.
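
Global Fishing Watch’s production pipeline relies on machine-learning models trained on labeled vessel tracks; as a rough, purely hypothetical sketch of the kind of signal such a pipeline starts from, the snippet below flags AIS position reports whose speed over ground falls in a band often associated with active fishing. The record layout, names, and thresholds are illustrative assumptions, not the platform’s implementation:

```python
# Illustrative sketch only: the real system uses ML classifiers over full
# vessel tracks. This toy heuristic flags AIS position reports whose
# speed over ground sits in a band often associated with active fishing.

from dataclasses import dataclass

@dataclass
class AisReport:
    mmsi: str           # vessel identifier broadcast in AIS messages
    timestamp: str      # UTC time of the position report
    lat: float
    lon: float
    speed_knots: float  # speed over ground

# Assumed thresholds: trawling and longlining commonly happen at
# low-but-nonzero speeds; real classifiers use far richer features.
FISHING_SPEED_MIN = 0.5
FISHING_SPEED_MAX = 5.0

def likely_fishing(report: AisReport) -> bool:
    """Crude speed-band heuristic for 'possibly fishing' behavior."""
    return FISHING_SPEED_MIN <= report.speed_knots <= FISHING_SPEED_MAX

reports = [
    AisReport("368000001", "2019-06-01T04:10:00Z", -2.10, -81.30, 3.2),
    AisReport("368000002", "2019-06-01T04:10:00Z", -2.40, -81.90, 11.8),
]
flagged = [r.mmsi for r in reports if likely_fishing(r)]
print(flagged)  # ['368000001'] -- only the slow-moving vessel is flagged
```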

Data Collaborative Model: Based on the typology of data collaborative practice areas, Global Fishing Watch is an example of the data pooling model of data collaboration, specifically a public data pool. Public data pools co-mingle data assets from multiple data holders — including governments and companies — and make those shared assets available on the web. This approach enabled the data stewards and stakeholders involved in Global Fishing Watch to bring together multiple data streams from both public- and private-sector entities in a single location. This single point of access provides the public and relevant authorities with user-friendly access to actionable, previously fragmented data that can drive efforts to address compliance in fisheries and illegal fishing around the world.
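
To make the pooling model more tangible, here is a minimal, hypothetical sketch of what co-mingling records from several holders into a single, provenance-tagged asset can look like; the source names and record shapes are invented for illustration and do not reflect Global Fishing Watch’s architecture:

```python
# Schematic of a "public data pool": records from multiple holders are
# merged into one asset, with each record keeping its provenance so
# users can still see where the data came from. Illustrative only.

from typing import Iterable

def pool_sources(sources: dict[str, Iterable[dict]]) -> list[dict]:
    """Merge per-holder record streams into a single tagged pool."""
    pooled = []
    for holder, records in sources.items():
        for record in records:
            pooled.append({**record, "source": holder})
    return pooled

# Hypothetical holders: a government vessel-monitoring feed and a
# commercial AIS provider, unified behind one point of access.
pool = pool_sources({
    "government_vms": [{"vessel": "A", "lat": -2.1, "lon": -81.3}],
    "commercial_ais": [{"vessel": "B", "lat": -2.4, "lon": -81.9}],
})
print([r["source"] for r in pool])  # ['government_vms', 'commercial_ais']
```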

Data Stewardship Approach: Global Fishing Watch also provides a clear illustration of the importance of data stewards. For instance, representatives from Google Earth Outreach, one of the data holders, played an important stewardship role in seeking to connect and coordinate with SkyTruth and Oceana, two important nonprofit environmental actors who were working separately prior to this initiative. The brokering of this partnership helped to bring relevant data assets from the public and private sectors to bear in support of institutional efforts to address the stubborn challenge of illegal fishing.

Read the full case study here.”

Open Democracy and Digital Technologies


Paper by Hélène Landemore: “…looks at the connection between democratic theory and technological constraints, and argues for renovating our paradigm of democracy to make the most of the technological opportunities offered by the digital revolution. The most attractive normative theory of democracy currently available—Habermas’ model of a two-track deliberative sphere—is, for all its merits, a self-avowed rationalization of representative democracy, a system born in the 18th century under different epistemological, conceptual, and technological constraints. In this paper I show the limits of this model and defend instead an alternative paradigm of democracy I call “open democracy,” in which digital technologies are assumed to make it possible to transcend a number of dichotomies, including that between ordinary citizens and democratic representatives.

Rather than just imagining a digitized version or extension of existing institutions and practices—representative democracy as we know it—I thus take the opportunities offered by the digital revolution (its technological “affordances,” in the jargon) to envision new democratic institutions and means of democratic empowerment, some of which are illustrated in the vignette with which this paper started. In other words, rather than start from what is (our electoral democracies), I start from what democracy could mean, if we reinvented it more or less from scratch today with the help of digital technologies.

The first section lays out the problems with and limits of our current practice and theory of democracy.


The second section traces these problems to conceptual design flaws partially induced by 18th century conceptual, epistemological, and technological constraints.


Section three lays out an alternative theory of democracy I call “open democracy,” which avoids some of these design flaws, and introduces the institutional features of this new paradigm that are specifically enabled by digital technologies: deliberation and democratic representation….(More)”.

Taming the Beast: Harnessing Blockchains in Developing Country Governments


Paper by Raúl Zambrano: “Amid pressing demands to achieve critical sustainable development goals, governments in developing countries face the additional complex task of embracing new digital technologies such as blockchains. This paper develops a framework interlinking development, technology, and government institutions that policymakers and development practitioners could use to address such a conundrum. State capacity and democratic governance are introduced as drivers in the overall analysis. With this in hand, blockchain technology is revisited from the perspective of governments in the Global South, identifying in the process key traits and proposing a new typology. An overview of the status of blockchain deployments in the Global South follows, complemented by a closer look at country examples to distill trends, patterns and risks. The paper closes with a discussion of the findings, highlighting both challenges and opportunities for governments. It also provides basic guidance to development practitioners interested in enhancing current programming using blockchains as an enabler….(More)”

The Case for an Institutionally Owned Knowledge Infrastructure


Article by James W. Weis, Amy Brand and Joi Ito: “Science and technology are propelled forward by the sharing of knowledge. Yet despite their vital importance in today’s innovation-driven economy, our knowledge infrastructures have failed to scale with today’s rapid pace of research and discovery.

For example, academic journals, the dominant dissemination platforms of scientific knowledge, have not been able to take advantage of the linking, transparency, dynamic communication and decentralized authority and review that the internet enables. Many other knowledge-driven sectors, from journalism to law, suffer from a similar bottleneck — caused not by a lack of technological capacity, but rather by an inability to design and implement efficient, open and trustworthy mechanisms of information dissemination.

Fortunately, growing dissatisfaction with current knowledge-sharing infrastructures has led to a more nuanced understanding of the requisite features that such platforms must provide. With such an understanding, higher education institutions around the world can begin to recapture the control and increase the utility of the knowledge they produce.

When the World Wide Web emerged in the 1990s, an era of robust scholarship based on open sharing of scientific advancements appeared inevitable. The internet — initially a research network — promised a democratization of science, universal access to the academic literature and a new form of open publishing that supported the discovery and reuse of knowledge artifacts on a global scale. Unfortunately, however, that promise was never realized. Universities, researchers and funding agencies, for the most part, failed to organize and secure the investment needed to build scalable knowledge infrastructures, and publishing corporations moved in to solidify their position as the purveyors of knowledge.

In the subsequent decade, such publishers have consolidated their hold. By controlling the most prestigious journals, they have been able to charge for access — extracting billions of dollars in subscription fees while barring much of the world from the academic literature. Indeed, some of the world’s wealthiest academic institutions are no longer able or willing to pay the subscription costs required.

Further, by controlling many of the most prestigious journals, publishers have also been able to position themselves between the creation and consumption of research, and so wield enormous power over peer review and metrics of scientific impact. Thus, they are able to significantly influence academic reputation, hirings, promotions, career progressions and, ultimately, the direction of science itself.

But signs suggest that the bright future envisioned in the early days of the internet is still within reach. Increasing awareness of, and dissatisfaction with, the many bottlenecks that the commercial monopoly on research information has imposed are stimulating new strategies for developing the future’s knowledge infrastructures. One of the most promising is the shift toward infrastructures created and supported by academic institutions, the original creators of the information being shared, and nonprofit consortia like the Collaborative Knowledge Foundation and the Center for Open Science.

Those infrastructures should fully exploit the technological capabilities of the World Wide Web to accelerate discovery, encourage more research support and better structure and transmit knowledge. By aligning academic incentives with socially beneficial outcomes, such a system could enrich the public while also amplifying the technological and societal impact of investment in research and innovation.

We’ve outlined below the three areas in which a shift to academically owned platforms would yield the highest impact.

  • Truly Open Access
  • Meaningful Impact Metrics
  • Trustworthy Peer Review….(More)”.

Bridging the Elite-Grassroots Divide Among Anticorruption Activists


Abigail Bellows at the Carnegie Endowment for International Peace: “Corruption-fueled political change is occurring at a historic rate—but is not necessarily producing the desired systemic reforms. There are many reasons for this, but one is the dramatic dissipation of public momentum after a transition. In countries like Armenia, the surge in civic participation that generated 2018’s Velvet Revolution largely evaporated after the new government assumed power. That sort of civic demobilization makes it difficult for government reformers, facing stubbornly entrenched interests, to enact a transformative agenda.

The dynamics in Armenia reflect a trend across the anticorruption landscape, which is also echoed in other sectors. As the field has become more professionalized, anticorruption nongovernment organizations (NGOs) have developed the legal and technical expertise to serve as excellent counterparts/watchdogs for government. Yet this strength can also be a hurdle when it comes to building credibility with the everyday people they seek to represent. The result is a disconnect between elite and grassroots actors, which is problematic at multiple levels:

  • Technocratic NGOs lack the “people power” to advance their policy recommendations and are exposed to attack as illegitimate or foreign-sponsored.
  • Grassroots networks struggle to turn protest energy into targeted demands and lasting reform, which can leave citizens frustrated and disillusioned about democracy itself.
  • Government reformers lack the sustained popular mandate to deliver on the ambitious agenda they promised, leaving them politically vulnerable to the next convulsion of public anger at corruption.

Two strategies can help civil society address this challenge. First, organizations can seek to hybridize, with in-house capacities for both policy analysis and mass mobilization. Alternatively, organizations can build formal or informal coalitions between groups operating at the elite and grassroots levels, respectively. Both strategies pose challenges: learning new skills, weaving together distinct organizational cultures and methodologies, and defining demands that are both technically sound and publicly appealing. In many instances, coalition-building will be an easier road, given that it does not require altering internal organizational and personnel structures. Political windows of opportunity on anticorruption may lend urgency to this difficult task and help crystallize what both sides have to gain from increased partnership….(More)”.

Trusted smart statistics: Motivations and principles


Paper by Fabio Ricciato et al.: “In this contribution we outline the concept of Trusted Smart Statistics as the natural evolution of official statistics in the new datafied world. Traditional data sources, namely survey and administrative data, nowadays represent a valuable but small portion of the global data stock, much of which is held in the private sector. The availability of new data sources is only one aspect of the global change that concerns official statistics. Other aspects, more subtle but no less important, include changes in perceptions, expectations, behaviours and relations between the stakeholders. The environment around official statistics has changed: statistical offices are no longer data monopolists, but one prominent species among many others in a larger (and complex) ecosystem. What was established in the traditional world of legacy data sources (in terms of regulations, technologies, practices, etc.) is no longer guaranteed to be sufficient with new data sources.

Trusted Smart Statistics is not about replacing existing sources and processes, but augmenting them with new ones. Such augmentation, however, will not be merely incremental: the path towards Trusted Smart Statistics is not about tweaking some components of the legacy system but about building an entirely new system that will coexist with the legacy one. In this position paper we outline some key design principles for the new Trusted Smart Statistics system. Taken collectively, they picture a system where the smart and trust aspects enable and reinforce each other. A system that is more extroverted towards external stakeholders (citizens, private companies, public authorities) with whom Statistical Offices will be sharing computation, control, code, logs and, of course, final statistics, without necessarily sharing the raw input data….(More)”.
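
The closing idea of sharing computation, code and logs without sharing raw input data can be made concrete with a minimal sketch, under our own assumptions rather than the authors’ design: each data holder runs an agreed aggregation locally, and only the aggregates reach the statistical office:

```python
# Minimal sketch of "sharing computation, not raw data" (one possible
# realization, not the paper's specification): each private data holder
# executes an agreed, auditable aggregation inside its own
# infrastructure; only pre-aggregated counts cross organizational
# boundaries, and the statistical office combines them.

def local_aggregate(raw_records: list[dict]) -> dict[str, int]:
    """Runs at the data holder; raw records never leave its premises."""
    counts: dict[str, int] = {}
    for record in raw_records:
        region = record["region"]
        counts[region] = counts.get(region, 0) + 1
    return counts

def combine(aggregates: list[dict[str, int]]) -> dict[str, int]:
    """Runs at the statistical office; sees only aggregated counts."""
    total: dict[str, int] = {}
    for agg in aggregates:
        for region, n in agg.items():
            total[region] = total.get(region, 0) + n
    return total

# Two hypothetical operators each count subscribers per region locally.
operator_a = local_aggregate([{"region": "north"}, {"region": "south"}])
operator_b = local_aggregate([{"region": "north"}])
print(combine([operator_a, operator_b]))  # {'north': 2, 'south': 1}
```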

Towards adaptive governance in big data health research: implementing regulatory principles


Chapter by Alessandro Blasimme and Effy Vayena: “While data-enabled health care systems are in their infancy, biomedical research is rapidly adopting the big data paradigm. Digital epidemiology, for example, already employs data generated outside the public health care system – that is, data generated without the intent of using them for epidemiological research – to understand and prevent patterns of diseases in populations (Salathé 2018). Precision medicine – pooling together genomic, environmental and lifestyle data – also represents a prominent example of how data integration can drive both fundamental and translational research in important medical domains such as oncology (D. C. Collins et al. 2017). All of this requires the collection, storage, analysis and distribution of massive amounts of personal information as well as the use of state-of-the-art data analytics tools to uncover health- and disease-related patterns.


The realization of the potential of big data in health evokes a necessary commitment to a sense of “continuity” articulated in three distinct ways: a) from data generation to use (as in the data-enabled learning health care system); b) from research to clinical practice, e.g. discovery of new mutations in the context of diagnostics; c) from health data strictly speaking (Vayena and Gasser 2016), e.g. clinical records, to less typical sources, e.g. tweets used in digital epidemiology. These continuities face the challenge of regulatory and governance approaches that were designed for clear data taxonomies, for a less blurred boundary between research and clinical practice, and for rules that focused mostly on data generation and less on their eventual and multiple uses.

The result is significant uncertainty about how responsible use of such large amounts of sensitive personal data could be fostered. In this chapter we focus on the uncertainties surrounding the use of biomedical big data in the context of health research. Are new criteria needed to review biomedical big data research projects? Do current mechanisms, such as informed consent, offer sufficient protection to research participants’ autonomy and privacy in this new context? Do existing oversight mechanisms ensure transparency and accountability in data access and sharing? What monitoring tools are available to assess how personal data are used over time? Is the equitable distribution of benefits accruing from such data uses considered, or can it be ensured? How is the public being involved – if at all – in decisions about creating and using large data repositories for research purposes? What role do IT (information technology) players, especially big ones, acquire in research? And what regulatory instruments do we have to ensure that such players do not undermine the independence of research?…(More)”.

Responsible data sharing in a big data-driven translational research platform: lessons learned


Paper by S. Kalkman et al.: “The sharing of clinical research data is increasingly viewed as a moral duty [1]. Particularly in the context of making clinical trial data widely available, editors of international medical journals have labeled data sharing a highly efficient way to advance scientific knowledge [2,3,4]. The combination of even larger datasets into so-called “Big Data” is considered to offer even greater benefits for science, medicine and society [5]. Several international consortia have now promised to build grand-scale, Big Data-driven translational research platforms to generate better scientific evidence regarding disease etiology, diagnosis, treatment and prognosis across various disease areas [6,7,8].

Despite anticipated benefits, large-scale sharing of health data is charged with ethical questions. Stakeholders have been urged to consider how to manage privacy and confidentiality issues, ensure valid informed consent, and determine who gets to decide about data access [9]. More fundamentally, new data sharing activities prompt questions about social justice and public trust [10]. To balance potential benefits and ethical considerations, data sharing platforms require guidance for the processes of interaction and decision-making. In the European Union (EU), legal norms specified for the sharing of personal data for health research, most notably those set out in the General Data Protection Regulation (GDPR) (EU 2016/679), remain open to interpretation and offer limited practical guidance to researchers [11,12,13]. Striking in this regard is that the GDPR itself stresses the importance of adherence to ethical standards when broad consent is put forward as a legal basis for the processing of personal data. For example, Recital 33 of the GDPR states that data subjects should be allowed to give “consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research” [14]. In fact, the GDPR actually encourages data controllers to establish self-regulating mechanisms, such as a code of conduct. To foster responsible and sustainable data sharing in translational research platforms, ethical guidance and governance are therefore necessary. Here, we define governance as ‘the processes of interaction and decision-making among the different stakeholders that are involved in a collective problem that lead to the creation, reinforcement, or reproduction of social norms and institutions’…(More)”.

Peopling Europe through Data Practices


Introduction to Special Issue of Science, Technology & Human Values by Baki Cakici, Evelyn Ruppert and Stephan Scheel: “Politically, Europe has been unable to address itself to a constituted polity and people as more than an agglomeration of nation-states. From the resurgence of nationalisms to the crisis of the single currency and the unprecedented decision of a member state to leave the European Union (EU), core questions about the future of Europe have been rearticulated: Who are the people of Europe? Is there a European identity? What does it mean to say, “I am European”? Where does Europe begin and end? And who can legitimately claim to be a part of a “European” people?

The special issue (SI) seeks to contest dominant framings of the question “Who are the people of Europe?” as only a matter of government policies, electoral campaigns, or parliamentary debates. Instead, the contributions start from the assumption that answers to this question exist in data practices where people are addressed, framed, known, and governed as European. The central argument of this SI is that it is through data practices that the EU seeks to simultaneously constitute its population as a knowable, governable entity, and as a distinct form of peoplehood where common personhood is more important than differences….(More)”.

Dissent in Consensusland: An Agonistic Problematization of Multi-stakeholder Governance


Martin Fougère and Nikodemus Solitander in the Journal of Business Ethics: “Multi-stakeholder initiatives involve actors from several spheres of society (market, civil society and state) in collaborative arrangements to reach objectives typically related to sustainable development. In political CSR literature, these arrangements have been framed as improvements to transnational governance and as being somehow democratic.

We draw on Mouffe’s works on agonistic pluralism to problematize the notion that consensus-led multi-stakeholder initiatives bring more democratic control on corporate power. We examine two initiatives which address two very different issue areas: the Roundtable on Sustainable Palm Oil (RSPO) and the Bangladesh Accord on Fire and Building Safety (The Accord).

We map the different kinds of adversarial relations involved in connection with the issues meant to be governed by the two initiatives, and find those adversarial relations to take six main shapes, affecting the initiatives in different ways: (1) competing regulatory initiatives; (2) pressure-response relations within multi-stakeholder initiatives; (3) pressure-response relations between NGOs and states through multi-stakeholder initiatives; (4) collaboration and competition between multi-stakeholder initiatives and states; (5) pressure-response relations between civil society actors and multi-stakeholder initiatives; and (6) counter-hegemonic movements against multi-stakeholder initiatives as hegemonic projects.

We conclude that multi-stakeholder initiatives cannot be democratic by themselves, and we argue that business and society researchers should not look at democracy or politics only internally to these initiatives, but rather study how issue areas are regulated through interactions between a variety of actors—both within and without the multi-stakeholder initiatives—who get to have a legitimate voice in this regulation….(More)”.