Global Fishing Watch: Pooling Data and Expertise to Combat Illegal Fishing


Data Collaborative Case Study by Michelle Winowatan, Andrew Young, and Stefaan Verhulst: “Global Fishing Watch, originally set up through a collaboration between Oceana, SkyTruth and Google, is an independent nonprofit organization dedicated to advancing responsible stewardship of our oceans through increased transparency in fishing activity and scientific research. Using big data processing and machine learning, Global Fishing Watch visualizes, tracks, and shares data about global fishing activity in near-real time and for free via its public map. To date, the platform tracks approximately 65,000 commercial fishing vessels globally. These insights have been used in a number of academic publications, ocean advocacy efforts, and law enforcement activities.
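To give a flavor of what "machine learning on vessel activity" can mean in practice, here is a deliberately simplified sketch of classifying fishing behavior from AIS (Automatic Identification System) broadcasts. It is not Global Fishing Watch's actual model, which the case study does not describe; the feature choices and thresholds are illustrative assumptions.

```python
# Toy illustration of flagging likely fishing activity from AIS tracks.
# NOT Global Fishing Watch's pipeline; thresholds are assumptions.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class AisPoint:
    timestamp: float    # seconds since epoch
    speed_knots: float  # speed over ground reported by the vessel
    course_deg: float   # course over ground, 0-360 degrees

def is_likely_fishing(track: list[AisPoint]) -> bool:
    """Heuristic stand-in for a learned classifier: gear types such as
    trawling tend to show low, variable speeds and frequent course
    changes, whereas transit is fast and straight."""
    speeds = [p.speed_knots for p in track]
    turns = []
    for a, b in zip(track, track[1:]):
        d = abs(b.course_deg - a.course_deg) % 360
        turns.append(min(d, 360 - d))  # smallest angle between headings
    slow = mean(speeds) < 5.0          # vessels in transit are usually faster
    erratic = bool(turns) and mean(turns) > 20.0
    variable = len(speeds) >= 2 and stdev(speeds) > 1.0
    return slow and (erratic or variable)

# A slow, zig-zagging track is flagged; a fast, straight one is not.
zigzag = [AisPoint(t * 60, 3.0 + (t % 2), (40 * t) % 360) for t in range(10)]
transit = [AisPoint(t * 60, 12.0, 90.0) for t in range(10)]
print(is_likely_fishing(zigzag), is_likely_fishing(transit))
```

A production system would replace the hand-set thresholds with a model trained on labeled vessel tracks, but the inputs (position, speed, heading over time) are the same.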

Data Collaborative Model: Based on the typology of data collaborative practice areas, Global Fishing Watch is an example of the data pooling model of data collaboration, specifically a public data pool. Public data pools co-mingle data assets from multiple data holders — including governments and companies — and make those shared assets available on the web. This approach enabled the data stewards and stakeholders involved in Global Fishing Watch to bring together multiple data streams from both public- and private-sector entities in a single location. This single point of access provides the public and relevant authorities with user-friendly access to actionable, previously fragmented data that can drive efforts to improve fisheries compliance and combat illegal fishing around the world.

Data Stewardship Approach: Global Fishing Watch also provides a clear illustration of the importance of data stewards. For instance, representatives from Google Earth Outreach, one of the data holders, played an important stewardship role by connecting and coordinating with SkyTruth and Oceana, two nonprofit environmental actors that had been working separately prior to this initiative. The brokering of this partnership helped to bring relevant data assets from the public and private sectors to bear in support of institutional efforts to address the stubborn challenge of illegal fishing.

Read the full case study here.”

Trusted smart statistics: Motivations and principles


Paper by Fabio Ricciato et al.: “In this contribution we outline the concept of Trusted Smart Statistics as the natural evolution of official statistics in the new datafied world. Traditional data sources, namely survey and administrative data, nowadays represent a valuable but small portion of the global data stock, much of which is held in the private sector. The availability of new data sources is only one aspect of the global change that concerns official statistics. Other aspects, more subtle but no less important, include changes in perceptions, expectations, behaviours and relations between the stakeholders. The environment around official statistics has changed: statistical offices are no longer data monopolists, but one prominent species among many others in a larger (and complex) ecosystem. What was established in the traditional world of legacy data sources (in terms of regulations, technologies, practices, etc.) is not guaranteed to be sufficient any longer with new data sources.

Trusted Smart Statistics is not about replacing existing sources and processes, but about augmenting them with new ones. Such augmentation, however, will not be merely incremental: the path towards Trusted Smart Statistics is not about tweaking some components of the legacy system but about building an entirely new system that will coexist with the legacy one. In this position paper we outline some key design principles for the new Trusted Smart Statistics system. Taken collectively, they depict a system where the smart and trust aspects enable and reinforce each other: a system that is more extroverted towards external stakeholders (citizens, private companies, public authorities) with whom Statistical Offices will be sharing computation, control, code, logs and of course final statistics, without necessarily sharing the raw input data….(More)”.
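To make "sharing computation, control, code, logs" concrete, the sketch below illustrates one common reading of the idea: the statistical office ships vetted code to the data holder, and only the aggregate result and an audit-log entry leave the premises. The class and method names are illustrative assumptions, not the paper's specification or any statistical office's real system.

```python
# Minimal sketch of "share the computation, not the raw data".
# Names and the small-cell threshold are illustrative assumptions.
import json
from typing import Callable, Optional

class DataHolderNode:
    """Runs vetted statistical code next to the raw data and releases
    only the aggregate result plus an auditable log entry."""

    def __init__(self, records: list[dict]):
        self._records = records        # raw data never leaves this object
        self.audit_log: list[dict] = []

    def run(self, code_id: str,
            aggregate: Callable[[list[dict]], float],
            min_count: int = 10) -> Optional[float]:
        # Suppress small cells so individuals cannot be singled out.
        result = (aggregate(self._records)
                  if len(self._records) >= min_count else None)
        self.audit_log.append({        # the log is shared with stakeholders
            "code_id": code_id,
            "n_records": len(self._records),
            "released": result is not None,
        })
        return result

# The office receives only the statistic and the log, never the records.
node = DataHolderNode([{"commute_km": k} for k in (3, 12, 7, 25, 9) * 3])
avg = node.run("avg_commute_v1",
               lambda rs: sum(r["commute_km"] for r in rs) / len(rs))
print(avg, json.dumps(node.audit_log[-1]))
```

The design choice mirrors the paper's framing: trust is produced not by withholding everything, but by making the computation itself inspectable (shared code and logs) while the input data stay put.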

Towards adaptive governance in big data health research: implementing regulatory principles


Chapter by Alessandro Blasimme and Effy Vayena: “While data-enabled health care systems are in their infancy, biomedical research is rapidly adopting the big data paradigm. Digital epidemiology, for example, already employs data generated outside the public health care system – that is, data generated without the intent of using them for epidemiological research – to understand and prevent patterns of diseases in populations (Salathé 2018). Precision medicine – pooling together genomic, environmental and lifestyle data – also represents a prominent example of how data integration can drive both fundamental and translational research in important medical domains such as oncology (D. C. Collins et al. 2017). All of this requires the collection, storage, analysis and distribution of massive amounts of personal information as well as the use of state-of-the-art data analytics tools to uncover health- and disease-related patterns.


The realization of the potential of big data in health evokes a necessary commitment to a sense of “continuity” articulated in three distinct ways: a) from data generation to use (as in the data-enabled learning health care system); b) from research to clinical practice, e.g. the discovery of new mutations in the context of diagnostics; c) from health data strictly speaking (Vayena and Gasser 2016), e.g. clinical records, to data that are less obviously health-related, e.g. tweets used in digital epidemiology. These continuities face the challenge of regulatory and governance approaches that were designed for clear data taxonomies, for a less blurred boundary between research and clinical practice, and for rules that focused mostly on data generation and less on their eventual and multiple uses.

The result is significant uncertainty about how responsible use of such large amounts of sensitive personal data could be fostered. In this chapter we focus on the uncertainties surrounding the use of biomedical big data in the context of health research. Are new criteria needed to review biomedical big data research projects? Do current mechanisms, such as informed consent, offer sufficient protection to research participants’ autonomy and privacy in this new context? Do existing oversight mechanisms ensure transparency and accountability in data access and sharing? What monitoring tools are available to assess how personal data are used over time? Is the equitable distribution of benefits accruing from such data uses considered, or can it be ensured? How is the public being involved – if at all – with decisions about creating and using large data repositories for research purposes? What role do IT (information technology) players, and especially big ones, acquire in research? And what regulatory instruments do we have to ensure that such players do not undermine the independence of research?…(More)”.

Official Statistics 4.0: Verified Facts for People in the 21st Century


Book by Walter J. Radermacher: “This book explores official statistics and their social function in modern societies. Digitisation and globalisation are creating completely new opportunities and risks, a context in which facts (can) play an enormously important part if they are produced with a quality that makes them credible and purpose-specific. In order for this to actually happen, official statistics must continue to actively pursue the modernisation of their working methods.

This book is not about the technical and methodological challenges associated with digitisation and globalisation; rather, it focuses on statistical sociology, which scientifically deals with the peculiarities and pitfalls of governing-by-numbers, and assigns statistics a suitable position in the future informational ecosystem. Further, the book provides a comprehensive overview of modern issues in official statistics, embedded in a historical and conceptual framework that offers distinct and innovative perspectives. Central to this work is the quality of the statistical information provided by official statistics. The implementation of the UN Sustainable Development Goals in the form of indicators is another driving force in the search for answers, and is addressed here….(More)”

Accelerating Medicines Partnership (AMP): Improving Drug Research Efficiency through Biomarker Data Sharing


Data Collaborative Case Study by Michelle Winowatan, Andrew Young, and Stefaan Verhulst: “Accelerating Medicines Partnership (AMP) is a cross-sector data-sharing partnership in the United States between the National Institutes of Health (NIH), the Food and Drug Administration (FDA), multiple biopharmaceutical and life science companies, and non-profit organizations. It seeks to improve the efficiency of developing new diagnostics and treatments for several types of disease. To achieve this goal, the partnership created a pre-competitive collaborative ecosystem where the biomedical community can pool data and resources relevant to the prioritized disease areas. A key component of the partnership is making biomarker data available to the medical research community through online portals.

Data Collaboratives Model: Based on our typology of data collaborative models, AMP is an example of the data pooling model of data collaboration, specifically a public data pool. Public data pools co-mingle data assets from multiple data holders — in this case pharmaceutical companies — and make those shared assets available on the web. Pools often limit contributions to approved partners (as public data pools are not crowdsourcing efforts), but access to the shared assets is open, enabling independent re-uses.

Data Stewardship Approach: Data stewardship is built into the partnership through the establishment of an executive committee, which governs the entire partnership, and a steering committee for each disease area, which governs each of the sub-projects within AMP. These committees consist of representatives from the institutional partners involved in AMP and perform data stewardship functions, including enabling inter-institutional engagement and intra-institutional coordination, auditing data and assessing value and risk, communicating findings, and nurturing the collaboration toward sustainability….(Full Case Study)”.

Why the Global South should nationalise its data


Ulises Ali Mejias at AlJazeera: “The recent coup in Bolivia reminds us that poor countries rich in resources continue to be plagued by the legacy of colonialism. Anything that stands in the way of a foreign corporation’s ability to extract cheap resources must be removed.

Today, apart from minerals and fossil fuels, corporations are after another precious resource: personal data. As with natural resources, data too has become the target of extractive corporate practices.

As sociologist Nick Couldry and I argue in our book, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism, there is a new form of colonialism emerging in the world: data colonialism. By this, we mean a new resource-grab whereby human life itself has become a direct input into economic production in the form of extracted data.

We acknowledge that this term is controversial, given the extreme physical violence and structures of racism that historical colonialism employed. However, our point is not to say that data colonialism is the same as historical colonialism, but rather to suggest that it shares the same core function: extraction, exploitation, and dispossession.

Like classical colonialism, data colonialism violently reconfigures human relations to economic production. Things like land, water, and other natural resources were valued by native people in the precolonial era, but not in the same way that colonisers (and later, capitalists) came to value them: as private property. Likewise, we are experiencing a situation in which things that were once primarily outside the economic realm – things like our most intimate social interactions with friends and family, or our medical records – have now been commodified and made part of an economic cycle of data extraction that benefits a few corporations.

So what could countries in the Global South do to avoid the dangers of data colonialism?…(More)”.

Industry and Public Sector Leaders Partner to Launch the Mobility Data Collaborative


Press Release: “The Mobility Data Collaborative (the Collaborative), a multi-sector forum with the goal of creating a framework to improve mobility through data, launches today…

New mobility services, such as shared cars, bikes, and scooters, are emerging and integrating into the urban transportation landscape across the globe. Data generated by these new mobility services offers an exciting opportunity to inform local policies and infrastructure planning. The Collaborative brings together key members from the public and private sectors to develop best practices to harness the potential of this valuable data to support safe, equitable, and livable streets.

The Collaborative will leverage the knowledge of its current and future members to solve the complex challenges facing shared mobility operators and the public agencies that manage access to the infrastructure these new services require. A critical component of this collaboration is providing an open and impartial forum for sharing information and developing best practices.

Membership is open to public agencies, nonprofits, academic institutions and private companies….(More)”.

Platform Urbanism: Negotiating Platform Ecosystems in Connected Cities


Book by Sarah Barns: “This book reflects on what it means to live as urban citizens in a world increasingly shaped by the business and organisational logics of digital platforms. Where smart city strategies promote the roll-out of internet of things (IoT) technologies and big data analytics by city governments worldwide, platform urbanism responds to the deep and pervasive entanglements that exist between urban citizens, city services and platform ecosystems today.    

Recent years have witnessed a backlash against major global platforms, evidenced by burgeoning literatures on platform capitalism, the platform society, platform surveillance and platform governance, as well as regulatory attention towards the market power of platforms in their dominance of global data infrastructure.  

This book responds to these developments and asks: How do platform ecosystems reshape connected cities? How do urban researchers and policy makers respond to the logics of platform ecosystems and platform intermediation? What sorts of multisensory urban engagements are rendered through platform interfaces and modalities? And what sorts of governance challenges and responses are needed to cultivate and champion the digital public spaces of our connected lives?…(More)”.

What are hidden data treasuries and how can they help development outcomes?


Blogpost by Damien Jacques et al.: “Cashew nuts in Burkina Faso can be seen growing from space. Such is the power of satellite technology that it is now possible to observe the changing colors of fields as crops slowly ripen.

This matters because it can be used as an early warning of crop failure and food crisis – giving governments and aid agencies more time to organize a response.

Our team built an exhaustive crop type and yield estimation map in Burkina Faso, using artificial intelligence and satellite images from the European Space Agency. 

But building the map would not have been possible without a data set that GIZ, the German government’s international development agency, had collected for one purpose on the ground some years before – and never looked at again.

At Dalberg, we call this a “hidden data treasury” and it has huge potential to be used for good. 

Unlocking data potential

In the records of the GIZ Data Lab, the GPS coordinates and crop yield measurements of just a few hundred cashew fields were sitting dormant.

They’d been collected in 2015 to assess the impact of a program to train farmers. But through the power of machine learning, that data set has been given a new purpose.

Using Dalberg Data Insights’ AIDA platform, our team trained algorithms to analyze satellite images for cashew crops, track the crops’ color as they ripen, and from there, estimate yields for the area covered by the data.

From this, it’s now possible to predict crop failures for thousands of fields.
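In code terms, this "recycling" step is a standard supervised-learning setup: the dormant survey supplies labels (measured yields at known GPS points) and the satellite supplies features (per-field greenness over the season). The sketch below runs on synthetic data; the feature construction and model choice are illustrative assumptions, not Dalberg Data Insights' actual AIDA pipeline.

```python
# Illustrative sketch of training a yield model from a small ground-truth
# survey plus satellite-derived features. Synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Per-field NDVI (greenness) time series: 10 dates across the season,
# extracted from satellite imagery at the surveyed GPS coordinates.
n_fields = 200
ndvi = rng.uniform(0.2, 0.9, size=(n_fields, 10))

# Ground truth from the old survey: measured yield (kg/ha), synthesized
# here as a noisy function of peak-season greenness.
yield_kg_ha = 400 * ndvi.max(axis=1) + rng.normal(0, 20, n_fields)

# Train on the few hundred surveyed fields...
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(ndvi[:150], yield_kg_ha[:150])

# ...then predict for fields the satellite sees but no surveyor visited,
# flagging likely crop failures below an illustrative threshold.
predicted = model.predict(ndvi[150:])
at_risk = int((predicted < 250).sum())
print(f"fields flagged as at risk: {at_risk}")
```

The same trained model can then be applied wall-to-wall across the satellite mosaic, which is what turns a few hundred dormant measurements into predictions for thousands of fields.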

We believe this “recycling” of old data, when paired with artificial intelligence, can help to bridge the data gaps in low-income countries and meet the UN’s Sustainable Development Goals….(More)”.

Responsible Operations: Data Science, Machine Learning, and AI in Libraries


OCLC Research Position Paper by Thomas Padilla: “Despite greater awareness, significant gaps persist between concept and operationalization in libraries at the level of workflows (managing bias in probabilistic description), policies (community engagement vis-à-vis the development of machine-actionable collections), positions (developing staff who can utilize, develop, critique, and/or promote services influenced by data science, machine learning, and AI), collections (development of “gold standard” training data), and infrastructure (development of systems that make use of these technologies and methods). Shifting from awareness to operationalization will require holistic organizational commitment to responsible operations. The viability of responsible operations depends on organizational incentives and protections that promote constructive dissent…(More)”.