Data governance: Enhancing access to and sharing of data


OECD Recommendation: “Access to and sharing of data are increasingly critical for fostering data-driven scientific discovery and innovations across the private and public sectors globally and will play a role in solving societal challenges, including fighting COVID-19 and achieving the Sustainable Development Goals (SDGs). But restrictions on data access, sometimes compounded by a reluctance to share, and a growing awareness of the risks that come with data access and sharing, mean economies and societies are not harnessing the full potential of data.


Adopted in October 2021, the OECD Recommendation on Enhancing Access to and Sharing of Data (EASD) is the first internationally agreed upon set of principles and policy guidance on how governments can maximise the cross-sectoral benefits of all types of data – personal, non-personal, open, proprietary, public and private – while protecting the rights of individuals and organisations.


The Recommendation intends to help governments develop coherent data governance policies and frameworks to unlock the potential benefits of data across and within sectors, countries, organisations, and communities. It aims to reinforce trust across the data ecosystem, stimulate investment in data and incentivise data access and sharing, and foster effective and responsible data access, sharing, and use across sectors and jurisdictions.


The Recommendation is a key deliverable of phase 3 of the OECD’s Going Digital project, focused on data governance for growth and well-being. It was developed by three OECD Committees (Digital Economy Policy, Scientific and Technological Policy, and Public Governance) and acts as a common reference for existing and new OECD legal instruments related to data in areas such as research, health and digital government. It will provide a foundation stone for ongoing OECD work to help countries unlock the potential of data in the digital era….(More)”.

A History of the Data-Tracked User


Essay by Tanya Kant: “Who among us hasn’t blindly accepted a cookie notice or an inscrutable privacy policy, or been stalked by a creepy “personalized” ad? Tracking and profiling are now commonplace fixtures of the digital everyday. This stands even if you use tracker blockers, which have been likened to “using an umbrella in a hurricane.”

In most instances, data tracking is conducted in the name of offering “personalized experiences” to web users: individually targeted marketing, tailored newsfeeds, recommended products and content. To offer such experiences, platforms such as Facebook and Google use a dizzyingly extensive list of categories to track and profile people: gender, age, ethnicity, lifestyle and consumption preferences, language, voice recordings, facial recognition, location, political leanings, music and film taste, income, credit status, employment status, home ownership status, marital status — the list goes on….
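To make the mechanics concrete, here is a minimal, purely hypothetical sketch in Python of what a profile-and-match step could look like. Every field name and the matching rule are illustrative assumptions, not any platform's actual schema or algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """A purely hypothetical profile record; no platform's real schema."""
    user_id: str
    age: int
    location: str
    interests: set = field(default_factory=set)
    income_bracket: str = "unknown"

def matches_audience(profile: UserProfile, audience: dict) -> bool:
    """Check a profile against an advertiser's targeting constraints."""
    if not audience["min_age"] <= profile.age <= audience["max_age"]:
        return False
    if audience.get("locations") and profile.location not in audience["locations"]:
        return False
    # Any overlap between profiled and targeted interests counts as a match.
    return bool(profile.interests & set(audience.get("interests", [])))

# A hypothetical "gardeners in Brighton or Leeds, aged 25-55" audience.
profile = UserProfile("u123", 34, "Brighton", {"gardening", "indie film"})
audience = {"min_age": 25, "max_age": 55,
            "locations": ["Brighton", "Leeds"],
            "interests": ["gardening"]}
print(matches_audience(profile, audience))  # True
```

Even this toy version makes clear that a handful of profiled attributes is enough to include someone in, or exclude them from, an audience, which is exactly the inclusion and exclusion power the next paragraph turns to.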

As I explore in this case study, and as part of my work on algorithmic identity, data tracking does not just match the “right” people with the “right” products and services — it can dis­criminate, govern, and regulate web users in ways that demand close attention to the social and ethical implications of targeting.

It is not an overstatement to propose that data tracking underpins the online economy as we know it.

Commercial platform providers frame data tracking as inevitable: Data in exchange for a (personalized) service is presented as the best, and often the only, option for platform users. Yet this has not always been the case: In the mid-to-late 1990s, when the web was still in its infancy, “cyberspace” was largely celebrated as a public, non-tracked space that afforded users the freedom of anonymity. How, then, did the individual tracking of users come to dominate the web as a market practice?

The following timeline outlines a partial history of the data-tracked user. It centers largely on developments that have affected European (and to a lesser extent U.S.) web users. This timeline includes developments in commercial targeting in the EU and U.S. rather than global developments in algorithmic policing, spatial infrastructures, medicine, and education, all of which are related but deserve their own timelines. This brief history fits into ongoing conversations around algorithmic targeting by reminding us that being tracked and targeted emerges from a historically specific set of developments. Increased legal scrutiny of targeting means that individual targeting as we know it may soon change dramatically — though as long as profiling web users is assumed to equate to more profit, it is more than likely that data tracking will persist in some form.


1940s. “Identity scoring” emerges: the categorization of individuals to calculate the benefits or risks of lending credit to certain groups of people….(More)”.

Mobile Big Data in the fight against COVID-19


Editorial to Special Collection of Data & Policy by Richard Benjamins, Jeanine Vos, and Stefaan Verhulst: “Almost two years into the COVID-19 pandemic, parts of the world feel like they may slowly be getting back to (a new) normal. Nevertheless, we know that the damage is still unfolding, and that much of the developing world — Southeast Asia and Africa in particular — remains in a state of crisis. Given the global nature of this disease and the potential for mutant versions to develop and spread, a crisis anywhere is cause for concern everywhere. The world remains very much in the grip of this public health crisis.

From the beginning, there has been hope that data and technology could offer solutions to help inform governments’ response strategy and decision-making. Many of the expectations have been focused on mobile data analytics, and in particular the possibility of mobile network operators creating mobility insights and decision-making tools generated from anonymized and aggregated telco data. This hoped-for capability results from a growing group of mobile network operators investing in systems and capabilities to develop such decision-support products and services for public and private sector customers. The value of having such tools has been demonstrated in addressing different global challenges, ranging from the possibilities offered by models to better understand the spread of Zika in Brazil to interactive dashboards that aided emergency services during earthquakes and floods in Japan. Yet despite these experiences, many governments across the world still have limited awareness, capabilities, budgets and resources to leverage such tools in their efforts to limit the spread of COVID-19 using non-pharmaceutical interventions (NPI).
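To make the kind of “anonymized and aggregated” mobility insight mentioned above concrete, the sketch below reduces individual network events to per-area subscriber counts and suppresses small cells. It is a minimal illustration of the general count-and-suppress idea, not any operator's actual pipeline; the threshold of 10 and the field names are assumed, illustrative values:

```python
# Reduce (pseudonymous subscriber id, coarse area code) events to
# per-area counts, then suppress small cells so that no released
# figure can be traced back to a handful of people.
K_THRESHOLD = 10  # assumed minimum group size before release

def aggregate_presence(events, threshold=K_THRESHOLD):
    """Count distinct subscribers per area, dropping small cells."""
    subscribers_per_area = {}
    for subscriber_id, area_code in events:
        subscribers_per_area.setdefault(area_code, set()).add(subscriber_id)
    return {
        area: len(subs)
        for area, subs in subscribers_per_area.items()
        if len(subs) >= threshold  # suppress re-identifiable small cells
    }

# Toy input data.
events = [("a", "zone-1"), ("b", "zone-1"), ("c", "zone-2")]
events += [(f"s{i}", "zone-1") for i in range(12)]
print(aggregate_presence(events))  # {'zone-1': 14}; zone-2 is suppressed
```

Real deployments layer further safeguards on top (pseudonymization, temporal aggregation and, in some cases, differential privacy), but count-and-suppress conveys the basic shape of the decision-support products described above.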

This special collection of papers, launched in Data & Policy, examines both the potential of mobile data and the challenges faced in delivering these tools to inform government decision-making.

To date, the collection consists of 11 papers from 71 researchers and experts from academia, industry, and government. The articles cover a wide range of geographies, including Argentina, Austria, Belgium, Brazil, Ecuador, Estonia, Europe (as a whole), France, Gambia, Germany, Ghana, Italy, Malawi, Nigeria, the Nordics, and Spain. Responding to our call for case studies illustrating the opportunities (and challenges) offered by mobile big data in the fight against COVID-19, the authors of these papers describe a number of examples of how mobile and mobile-related data have been used to address the medical, economic, socio-cultural and political aspects of the pandemic….(More)”.

Using location data responsibly in cities and local government


Article by Ben Hawes: “City and local governments increasingly recognise the power of location data to help them deliver more and better services, exactly where and when they are needed. The use of this data is going to grow, with more pressure to manage resources and emerging challenges including responding to extreme weather events and other climate impacts.

But using location data to target and manage local services comes with risks to the equitable delivery of services, privacy and accountability. To make the best use of these growing data resources, city leaders and their teams need to understand those risks and address them, and to be able to explain their uses of data to citizens.

The Locus Charter, launched earlier this year, is a set of common principles to promote responsible practice when using location data. The Charter could be very valuable to local governments, to help them navigate the challenges and realise the rewards offered by data about the places they manage….

Compared to private companies, local public bodies already have special responsibilities to ensure transparency and fairness. New sources of data can help, but can also generate new questions. Local governments have generally been able to improve services as they learned more about the people they served. Now they must manage the risks of knowing too much about people, and acting intrusively. They can also risk distorting service provision because their data about people in places is uneven or unrepresentative.

Many city and local governments fully recognise that data-driven delivery comes with risks, and are developing specific local data ethics frameworks to guide their work. Some of these, like Kansas City’s, are specifically aimed at managing data privacy. Others cover broader uses of data, like Greater Manchester’s Declaration for Intelligent and Responsible Data Practice. A related effort, DTPR (Digital Trust for Places and Routines), is an open source communication standard that helps people understand how data is being used in public places.

London is engaging citizens on an Emerging Technology Charter, to explore new and ethically charged questions around data. The GovLab supports an AI Localism repository of actions taken by local decision-makers to address the use of AI within a city or community. The EU SHERPA programme (Shaping the Ethical Dimensions of Smart Information Systems) includes a smart cities strand, and has published a case study on the Ethics of Using Smart City AI and Big Data.

Smart city applications make it possible to collect data in many ways and for many purposes, but the technologies themselves cannot answer questions about what is appropriate. In The Smart Enough City: Putting Technology in its Place to Reclaim Our Urban Future (2019), author Ben Green describes examples of cities that have failed, and others that have succeeded, in judging which smart applications should be used.

Attention to what constitutes ethical practice with location data can give additional help to leaders making that kind of judgement….(More)”

The Case For Exploratory Social Sciences


Discussion Paper by Geoff Mulgan: “…Here I make the case for a new way of organising social science, both within universities and beyond, by creating sub-disciplines of ‘exploratory social science’ that would help to fill this gap. In the paper I show:
• how in the 18th and 19th centuries social sciences attempted to fuse interpretation and change
• how a series of trends – including quantification and abstraction – delivered advances but also squeezed out this capacity for radical design
• how these also encouraged some blind alleys for social science, including what I call ‘unrealistic realism’ and the futile search for eternal laws

I show some of the more useful counter-trends, including evolutionary thinking, systems models and complexity, which create a more valid space for conscious design. I argue that now, at a time when we badly need better designs and strategies for the future, we face a paradoxical situation where the people with the deepest knowledge of fields are discouraged from systematic and creative exploration of the future, while those with the appetite and freedom to explore often lack the necessary knowledge…(More)”.

Empowering Local Communities Using Artificial Intelligence


Paper by Yen-Chia Hsu et al: “Many powerful Artificial Intelligence (AI) techniques have been engineered with the goals of high performance and accuracy. Recently, AI algorithms have been integrated into diverse and real-world applications, and exploring the impact of AI on society from a people-centered perspective has become an important topic. Previous works in citizen science have identified methods of using AI to engage the public in research, such as sustaining participation, verifying data quality, classifying and labeling objects, predicting user interests, and explaining data patterns. These works investigated the challenges of designing AI systems that let citizens participate in research projects at a large geographic scale and in a generalizable way, such as applications that citizens around the world can use to complete tasks. In contrast, we are interested in another area that receives significantly less attention: how scientists co-design AI systems “with” local communities to influence a particular geographical region, as in community-based participatory projects. Specifically, this article discusses the challenges of applying AI in Community Citizen Science, a framework for creating social impact through community empowerment at an intensely place-based local scale. We provide insights into this under-explored area to connect scientific research closely to social issues and citizen needs…(More)”.

Listening to Users and Other Ideas for Building Trust in Digital Trade


Paper by Susan Ariel Aaronson: “This paper argues that if trade policymakers truly want to achieve data free flow with trust, they must address user concerns beyond privacy. Survey data reveals that users are also anxious about online harassment, malware, censorship and disinformation. The paper focuses on three such problems, specifically, internet shutdowns, censorship and ransomware (a form of malware), each of which can distort trade and make users feel less secure online. Finally, the author concludes that trade policymakers will need to rethink how they involve the broad public in digital trade policymaking if they want digital trade agreements to facilitate trust….(More)”.

A Vulnerable System: The History of Information Security in the Computer Age


Book by Andrew Stewart: “As threats to the security of information pervade the fabric of everyday life, A Vulnerable System describes how, even as the demand for information security increases, the needs of society are not being met. The result is that the confidentiality of our personal data, the integrity of our elections, and the stability of foreign relations between countries are increasingly at risk.

Andrew J. Stewart convincingly shows that emergency software patches and new security products cannot provide the solution to threats such as computer hacking, viruses, software vulnerabilities, and electronic spying. Profound underlying structural problems must first be understood, confronted, and then addressed.

A Vulnerable System delivers a long view of the history of information security, beginning with the creation of the first digital computers during the Cold War. From the key institutions of the so-called military-industrial complex in the 1950s to Silicon Valley start-ups in the 2020s, the relentless pursuit of new technologies has come at great cost. Ignorance of the history of information security has caused the lessons of the past to be forsaken for the novelty of the present, leaving us collectively unable to meet the needs of the current day. From the very beginning of the information age, claims of secure systems have been crushed by practical reality.

The myriad risks to technology, Stewart reveals, cannot be addressed without first understanding how we arrived at this moment. A Vulnerable System is an enlightening and sobering history of a topic that affects crucial aspects of our lives….(More)”.

Keeping labour data flowing during the COVID-19 pandemic


Blog by ILO: “The availability of data tends to be taken for granted by the vast majority of people. The COVID-19 pandemic illustrates this vividly: estimates of case numbers and deaths have been widely quoted throughout and assumed by most to be available on demand.

However, those responsible for compiling official statistics know all too well that, even at the best of times, providing high-quality data to meet even just a small part of user needs is incredibly challenging and, on the whole, very resource-intensive. That said, the world has, in general, been steadily moving in the right direction, with more and better data being produced over time.

At the end of 2019, most users and producers of statistics would have predicted, with good reason, that the trend of increasing data availability would continue in the new decade, not least in the field of labour statistics. What no one could foresee then is that one of the cornerstones of data collection for surveys, namely the ability to visit and interview respondents, could be undermined so rapidly and drastically as was the case in 2020 owing to the COVID-19 pandemic.

Various organizations and specialized agencies in the United Nations system, including the ILO and collectively through the Intersecretariat Working Group on Household Surveys, have sought to track the impact of COVID-19 on data collection. In March 2021, the ILO launched a global survey to understand better the extent to which the crisis had affected the compilation of official labour market statistics. Information was received from 110 countries, of which 97 had planned to complete a labour force survey (LFS) in 2020. The findings point to both the tremendous challenges faced and the remarkable efforts undertaken to provide information on the world of work during the pandemic.

Nearly half of countries had to suspend interviewing at some point in 2020

Close to half (46.4 per cent) of the countries with plans to conduct an LFS in 2020 had to suspend interviews at some point in the year. The highest levels of suspensions were reported by countries in Africa and the Arab States (70.6 per cent) and in the Americas (66.7 per cent). While some countries were able to recover those interviews later on, the majority were not, which means data that had been expected to be available were lost entirely, creating a risk of gaps in data series for key labour market indicators, among others…(More)”
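As a quick back-of-the-envelope check of the figures quoted above (assuming the 46.4 per cent is taken relative to the 97 countries that had planned an LFS for 2020), the number of affected countries works out to roughly 45:

```python
# Implied count of countries that suspended LFS interviewing in 2020,
# assuming the 46.4 per cent share is relative to the 97 countries
# that had planned a labour force survey.
planned_lfs = 97
share_suspended = 0.464

print(round(planned_lfs * share_suspended))  # -> 45 countries
```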

Licensure as Data Governance


Essay by Frank Pasquale: “…A licensure regime for data and the AI it powers would enable citizens to democratically shape data’s scope and proper use, rather than resigning ourselves to being increasingly influenced and shaped by forces beyond our control. To ground the case for more ex ante regulation, Part I describes the expanding scope of data collection, analysis, and use, and the threats that scope poses to data subjects. Part II critiques consent-based models of data protection, while Part III examines the substantive foundation of licensure models. Part IV addresses a key challenge to my approach: the free expression concerns raised by the licensure of large-scale personal data collection, analysis, and use. Part V concludes with reflections on the opportunities created by data licensure frameworks and potential limitations upon them….(More)”.