The new ecosystem of trust: How data trusts, collaboratives and coops can help govern data for the maximum public benefit


Paper by Geoff Mulgan and Vincent Straub: The world is struggling to govern data. The challenge is to reduce abuses of all kinds, enhance accountability and improve ethical standards, while also ensuring that the maximum public and private value can be derived from data.

Despite many predictions to the contrary, the world of commercial data is dominated by powerful organisations. By contrast, there are few institutions to protect the public interest and those that do exist remain relatively weak. This paper argues that new institutions—an ecosystem of trust—are needed to ensure that uses of data are trusted and trustworthy. It advocates the creation of different kinds of data trust to fill this gap. It argues:

  • That we need, but currently lack, institutions that are good at thinking through, discussing, and explaining the often complex trade-offs that need to be made about data.
  • That the task of creating trust is different in different fields. Overly generic solutions will be likely to fail.
  • That trusts need to be accountable—in some cases to individual members where there is a direct relationship with individuals giving consent, in other cases to the broader public.
  • That we should expect a variety of types of data trust to form—some sharing data; some managing synthetic data; some providing a research capability; some using commercial data and so on. The best analogy is finance which over time has developed a very wide range of types of institution and governance.

This paper builds on a series of Nesta think pieces on data and knowledge commons published over the last decade and current practical projects that explore how data can be mobilised to improve healthcare, policing, the jobs market and education. It aims to provide a framework for designing a new family of institutions under the umbrella title of data trusts, tailored to different conditions of consent, and different patterns of private and public value. It draws on the work of many others (including the work of GovLab and the Open Data Institute).

Introduction

The governance of personal data of all kinds has recently moved from being a very marginal specialist issue to one of general concern. Too much data has been misused, lost, shared, sold or combined with little involvement of the people most affected, and little ethical awareness on the part of the organisations in charge.

The most visible responses have been general ones—like the EU’s GDPR. But these now need to be complemented by new institutions that can be generically described as ‘data trusts’.

In current practice the term ‘trust’ is used to describe a very wide range of institutions. These include private trusts, a type of legal structure that holds and makes decisions about assets, such as property or investments, and involves trustors, trustees, and beneficiaries. There are also public trusts in fields like education with a duty to provide a public benefit. Examples include the Nesta Trust and the National Trust. There are trusts in business (e.g. to manage pension funds). And there are trusts in the public sector, such as the BBC Trust and NHS Foundation Trusts with remits to protect the public interest, at arm’s length from political decisions.

It’s now over a decade since the first data trusts were set up as private initiatives in response to anxieties about abuse. These were important pioneers though none achieved much scale or traction.

Now a great deal of work is underway around the world to consider what other types of trust might be relevant to data, so as to fill the governance vacuum—handling everything from transport data to personalised health, the internet of things to school records, and recognising the very different uses of data—by the state for taxation or criminal justice etc.; by academia for research; by business for use and resale; and to guide individual choices. This paper aims to feed into that debate.

1. The twin problems: trust and value

Two main clusters of problems are coming to prominence. The first cluster of problems involves misuse and overuse of data; the second set of problems involves underuse of data.

1.1 Lack of control fuels distrust

The first problem is a lack of control and agency—individuals feel unable to control data about their own lives (from Facebook links and Google searches to retail behaviour and health) and communities are unable to control their own public data (as in Sidewalk Labs and other smart city projects that attempted to privatise public data). Lack of control leads to the risk of abuses of privacy, and a wider problem of decreasing trust—which survey evidence from the Open Data Institute (ODI) shows is key in determining the likelihood consumers will share their personal data (although this varies across countries). The lack of transparency regarding how personal data is then used to train algorithms making decisions only adds to the mistrust.

1.2 Lack of trust leads to a deficit of public value

The second, mirror cluster of problems concern value. Flows of data promise a lot: better ways to assess problems, understand options, and make decisions. But current arrangements make it hard for individuals to realise the greatest value from their own data, and they make it even harder for communities to safely and effectively aggregate, analyse and link data to solve pressing problems, from health and crime to mobility. This is despite the fact that many consumers are prepared to make trade-offs: to share data if it benefits themselves and others—a 2018 Nesta poll found, for example, that 73 per cent of people said they would share their personal data in an effort to improve public services if there was a simple and secure way of doing it. A key reason for the failure to maximise public value is the lack of institutions that are sufficiently trusted to make judgements in the public interest.

Attempts to answer these problems sometimes point in opposite directions—the one towards less free flow, less linking of data, the other towards more linking and combination. But any credible policy responses have to address both simultaneously.

2. The current landscape

The governance field was largely empty earlier this decade. It is now full of activity, albeit at an early stage. Some is legislative—like GDPR and equivalents being considered around the world. Some is about standards—like Verify, IHAN and other standards intended to handle secure identity. Some is more entrepreneurial—like the many Personal Data Stores launched over the last decade, from Mydex to SOLID, Citizen-me to digi.me. Some are experiments like the newly launched Amsterdam Data Exchange (Amdex) and the UK government’s recently announced efforts to fund data trust pilots to tackle wildlife conservation, working with the ODI. Finally, we are now beginning to see new institutions within government to guide and shape activity, notably the new Centre for Data Ethics and Innovation.

Many organisations have done pioneering work, including the ODI in the UK and NYU GovLab with its work on data collaboratives. At Nesta, as part of the Europe-wide DECODE consortium, we are helping to develop new tools to give people control of their personal data while the Next Generation Internet (NGI) initiative is focused on creating a more inclusive, human-centric and resilient internet—with transparency and privacy as two of the guiding pillars.

The task of governing data better brings together many elements, from law and regulation to ethics and standards. We are just beginning to see more serious discussion about tax and data—from the proposals to tax digital platforms’ turnover to more targeted taxes on data harvesting in public places or infrastructures—and more serious debate around regulation. This paper deals with just one part of this broader picture: the role of institutions dedicated to curating data in the public interest….(More)”.

Whose Rules? The Quest for Digital Standards


Stephanie Segal at CSIS: “Prime Minister Shinzo Abe of Japan made news at the World Economic Forum in Davos last month when he announced Japan’s aspiration to make the G20 summit in Osaka a launch pad for “world-wide data governance.” This is not the first time in recent memory that Japan has taken a leadership role on an issue of keen economic importance. Most notably, the Trans-Pacific Partnership (TPP) lives on as the Comprehensive and Progressive Agreement on Trans-Pacific Partnership (CPTPP), thanks in large part to Japan’s efforts to keep the trading bloc together after President Trump announced U.S. withdrawal from the TPP. But it’s in the area of data and digital governance that Japan’s efforts will perhaps be most consequential for future economic growth.

Data has famously been called “the new oil” in the global economy. A 2016 report by the McKinsey Global Institute estimated that global data flows contributed $2.8 trillion in value to the global economy back in 2014, while cross-border data flows and digital trade continue to be key drivers of global trade and economic growth. Japan’s focus on data and digital governance is therefore consistent with its recent efforts to support global growth, deepen global trade linkages, and advance regional and global standards.

Data governance refers to the rules directing the collection, processing, storage, and use of data. The proliferation of smart devices and the emergence of a data-driven Internet of Things portend an exponential growth in digital data. At the same time, recent reporting on overly aggressive commercial practices of personal data collection, as well as the separate topic of illegal data breaches, has elevated public awareness and interest in the laws and policies that govern the treatment of data, and personal data in particular. Finally, a growing appreciation of data’s central role in driving innovation and future technological and economic leadership is generating concern in many capitals that different data and digital governance standards and regimes will convey a competitive (dis)advantage to certain countries.

Bringing these various threads together—the inevitable explosion of digital data; the need to protect an individual’s right to privacy; and the appreciation that data has economic value and conveys economic advantage—is precisely why Japan’s initiative is both timely and likely to face significant challenges….(More)”.

State Capability, Policymaking and the Fourth Industrial Revolution


Demos Helsinki: “The world as we know it is built on the structures of the industrial era – and these structures are falling apart. Yet the vision of a new, sustainable and fair post-industrial society remains unclear. This discussion paper is the result of a collaboration between a group of organisations interested in the implications of the rapid technological development to policymaking processes and knowledge systems that inform policy decisions.

In the discussion paper, we set out to explore the main opportunities and concerns that the Fourth Industrial Revolution brings for policymaking and knowledge systems, particularly in middle-income countries. Overall, middle-income countries are home to five billion of the world’s seven billion people and 73 per cent of the world’s poor people; they represent about one-third of the global Gross Domestic Product (GDP) and are major engines of global growth (World Bank 2018).

The paper is co-produced with Capability (Finland), Demos Helsinki (Finland), HELVETAS Swiss Intercooperation (Switzerland), Politics & Ideas (global), Southern Voice (global), UNESCO Montevideo (Uruguay) and Using Evidence (Canada).

The guiding questions for this paper are:

– What are the critical elements of the Fourth Industrial Revolution?

– What does the literature say about the impact of this revolution on societies and economies, and in particular on middle-income countries?

– What are the implications of the Fourth Industrial Revolution for the achievement of the Sustainable Development Goals (SDGs) in middle-income countries?

– What does the literature say about the challenges for governance and the ways knowledge can inform policy during the Fourth Industrial Revolution?…(More)”.

Full discussion paper: “State Capability, Policymaking and the Fourth Industrial Revolution: Do Knowledge Systems Matter?”

The privacy threat posed by detailed census data


Gillian Tett at the Financial Times: “Wilbur Ross suffered the political equivalent of a small(ish) black eye last month: a federal judge blocked the US commerce secretary’s attempts to insert a question about citizenship into the 2020 census and accused him of committing “egregious” legal violations.

The Supreme Court has agreed to hear the administration’s appeal in April. But while this high-profile fight unfolds, there is a second, less noticed, census issue about data privacy emerging that could have big implications for businesses (and citizens). Last weekend John Abowd, the Census Bureau’s chief scientist, told an academic gathering that statisticians had uncovered shortcomings in the protection of personal data in past censuses. There is no public evidence that anyone has actually used these weaknesses to hack records, and Mr Abowd insisted that the bureau is using cutting-edge tools to fight back. But, if nothing else, this revelation shows the mounting problem around data privacy. Or, as Mr Abowd noted: “These developments are sobering to everyone.” These flaws are “not just a challenge for statistical agencies or internet giants,” he added, but affect any institution engaged in internet commerce and “bioinformatics”, as well as commercial lenders and non-profit survey groups. Bluntly, this includes most companies and banks.

The crucial problem revolves around what is known as “re-identification” risk. When companies and government institutions amass sensitive information about individuals, they typically protect privacy in two ways: they hide the full data set from outside eyes or they release it in an “anonymous” manner, stripped of identifying details. The census bureau does both: it is required by law to publish detailed data and protect confidentiality. Since 1990, it has tried to resolve these contradictory mandates by using “household-level swapping” — moving some households from one geographic location to another to generate enough uncertainty to prevent re-identification. This used to work. But today there are so many commercially available data sets and computers are so powerful that it is possible to re-identify “anonymous” data by combining data sets. …
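A toy sketch makes this linkage attack concrete. Everything below is invented for illustration — the names, records, and the choice of quasi-identifiers (ZIP code, birth year, sex) are assumptions, not drawn from any real census release — but the mechanism is the one described above: join an “anonymised” data set to a public one on shared attributes.

```python
# Hypothetical illustration of a re-identification (linkage) attack.
# All names, records, and field choices below are invented.

# "Anonymised" release: direct identifiers stripped, sensitive attribute kept.
anonymised = [
    {"zip": "02139", "birth_year": 1954, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_year": 1987, "sex": "M", "diagnosis": "asthma"},
]

# Public data set (e.g. a voter roll) with names and the same quasi-identifiers.
public = [
    {"name": "J. Smith", "zip": "02139", "birth_year": 1954, "sex": "F"},
    {"name": "A. Jones", "zip": "02144", "birth_year": 1987, "sex": "M"},
]

def reidentify(anon_rows, public_rows):
    """Join the two data sets on the shared quasi-identifiers."""
    keys = ("zip", "birth_year", "sex")
    matches = []
    for a in anon_rows:
        for p in public_rows:
            if all(a[k] == p[k] for k in keys):
                matches.append({"name": p["name"], "diagnosis": a["diagnosis"]})
    return matches

print(reidentify(anonymised, public))
# A unique match links a name to a diagnosis despite the "anonymisation".
```

With enough overlapping public data sets, most “anonymous” records become uniquely matchable in exactly this way — which is why swapping alone no longer suffices.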

Thankfully, statisticians think there is a solution. The Census Bureau now plans to use a technique known as “differential privacy” which would introduce “noise” into the public statistics, using complex algorithms. This technique is expected to create just enough statistical fog to protect personal confidentiality in published data — while also preserving information in an encrypted form that statisticians can later unscramble, as needed. Companies such as Google, Microsoft and Apple have already used variants of this technique for several years, seemingly successfully. However, nobody has employed this system on the scale that the Census Bureau needs — or in relation to such a high stakes event. And the idea has sparked some controversy because some statisticians fear that even “differential privacy” tools can be hacked — and others fret it makes data too “noisy” to be useful….(More)”.
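The core of the Laplace mechanism that underlies differential privacy can be sketched in a few lines. This is a generic textbook illustration, not the Census Bureau’s actual algorithm; the epsilon value, the counting query, and the fixed random seed are assumptions made for the example.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough to satisfy the epsilon-DP guarantee.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
true_count = 120  # e.g. households in a block with some attribute (invented)
noisy = dp_count(true_count, epsilon=0.5, rng=rng)
print(round(noisy, 1))  # close to 120, but fogged by calibrated noise
```

The trade-off the article describes falls directly out of the parameter: a smaller epsilon means more “fog” (stronger privacy, noisier statistics), a larger epsilon means more accurate but less protected data.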

A Parent-To-Parent Campaign To Get Vaccine Rates Up


Alex Olgin at NPR: “In 2017, Kim Nelson had just moved her family back to her hometown in South Carolina. Boxes were still scattered around the apartment, and while her two young daughters played, Nelson scrolled through a newspaper article on her phone. It said religious exemptions for vaccines had jumped nearly 70 percent in recent years in the Greenville area — the part of the state she had just moved to.

She remembers yelling to her husband in the other room, “David, you have to get in here! I can’t believe this.”

Up until that point, Nelson hadn’t run into mom friends who didn’t vaccinate….

Nelson started her own group, South Carolina Parents for Vaccines. She began posting scientific articles online. She started responding to private messages from concerned parents with specific questions. She also found that positive reinforcement was important and would roam around the mom groups, sprinkling affirmations.

“If someone posts, ‘My child got their two-months shots today,’ ” Nelson says, she’d quickly post a follow-up comment: “Great job, mom!”

Nelson was inspired by peer-focused groups around the country doing similar work. Groups with national reach like Voices for Vaccines and regional groups like Vax Northwest in Washington state take a similar approach, encouraging parents to get educated and share facts about vaccines with other parents….

Public health specialists are raising concerns about the need to improve vaccination rates. But efforts to reach vaccine-hesitant parents often fail. When presented with facts about vaccine safety, parents often remained entrenched in a decision not to vaccinate.

Pediatricians could play a role — and many do — but they’re not compensated to have lengthy discussions with parents, and some of them find it a frustrating task. That has left an opening for alternative approaches, like Nelson’s.

Nelson thought it would be best to zero in on moms who were still on the fence about vaccines.

“It’s easier to pull a hesitant parent over than it is somebody who is firmly anti-vax,” Nelson says. She explains that parents who oppose vaccination often feel so strongly about it that they won’t engage in a discussion. “They feel validated by that choice — it’s part of community, it’s part of their identity.”…(More)”.

Open data governance and open governance: interplay or disconnect?


Blog Post by Ana Brandusescu, Carlos Iglesias, Danny Lämmerhirt, and Stefaan Verhulst (in alphabetical order): “The presence of open data often gets listed as an essential requirement toward “open governance”. For instance, an open data strategy is viewed as a key component of many action plans submitted to the Open Government Partnership. Yet little time is spent on assessing how open data itself is governed, or how it embraces open governance. For example, not much is known on whether the principles and practices that guide the opening up of government — such as transparency, accountability, user-centrism, ‘demand-driven’ design thinking — also guide decision-making on how to release open data.

At the same time, data governance has become more complex and open data decision-makers face heightened concerns with regards to privacy and data protection. The recent implementation of the EU’s General Data Protection Regulation (GDPR) has generated an increased awareness worldwide of the need to prevent and mitigate the risks of personal data disclosures, and that has also affected the open data community. Before opening up data, concerns of data breaches, the abuse of personal information, and the potential of malicious inference from publicly available data may have to be taken into account. In turn, questions of how to sustain existing open data programs, user-centrism, and publishing with purpose gain prominence.

To better understand the practices and challenges of open data governance, we have outlined a research agenda in an earlier blog post. Since then, and perhaps as a result, governance has emerged as an important topic for the open data community. The audience attending the 5th International Open Data Conference (IODC) in Buenos Aires deemed governance of open data to be the most important discussion topic. For instance, discussions around the Open Data Charter principles during and prior to the IODC acknowledged the role of an integrated governance approach to data handling, sharing, and publication. Some conclude that the open data movement has brought about better governance, skills, and technologies for public information management, which are of enormous long-term value for government. But what does open data governance look like?

Understanding open data governance

To expand our earlier exploration and broaden the community that considers open data governance, we convened a workshop at the Open Data Research Symposium 2018. Bringing together open data professionals, civil servants, and researchers, we focused on:

  • What is open data governance?
  • When can we speak of “good” open data governance, and
  • How can the research community help open data decision-makers toward “good” open data governance?

In this session, open data governance was defined as the interplay of rules, standards, tools, principles, processes and decisions that influence what government data is opened up, how and by whom. We then explored multiple layers that can influence open data governance.

In the following, we illustrate possible questions to start mapping the layers of open data governance. As they reflect the experiences of session participants, we see them as starting points for fresh ethnographic and descriptive research on the daily practices of open data governance in governments….(More)”.

Using digital technologies to improve the design and enforcement of public policies


OECD Digital Economy Paper: “Digitalisation is having a profound impact on social and economic activity. While often benefiting from a very long history of public investment in R&D, digitalisation has been largely driven by the private sector. However, the combined adoption of new digital technologies, increased reliance upon new data sources, and use of advanced analytic methods hold significant potential to: i) improve the effectiveness and enforcement of public policies; ii) enable innovative policy design and impact evaluation; and iii) expand citizen and stakeholder engagement in policy making and implementation. These benefits are likely to be greatest in policy domains where outcomes are only observable at significant cost and/or where there is significant heterogeneity in responses across different agents. In this paper we provide a review of initiatives across a number of fields including: competition, education, environment, innovation, and taxation….(More)”.

Can transparency make extractive industries more accountable?


Blog by John Gaventa at IDS: “Over the last two decades great strides have been made in terms of holding extractive industries accountable.  As demonstrated at the Global Assembly of Publish What You Pay (PWYP), which I attended recently in Dakar, Senegal, more information than ever about revenue flows to governments from the oil gas and mining industries is now publicly available.  But new research suggests that such information disclosure, while important, is by itself not enough to hold companies to account, and address corruption.

… a recent study in Mozambique by researchers Nicholas Aworti and Adriano Nuvunga questions this assumption.  Supported by the Action for Empowerment and Accountability (A4EA) Research Programme, the research explored why greater transparency of information has not necessarily led to greater social and political action for accountability.

Like many countries in Africa, Mozambique is experiencing massive outside investments in recently discovered natural resources, including rich deposits of natural gas and oil, as well as coal and other minerals.  Over the last decade, NGOs like the Centre for Public Integrity, who helped facilitate the study, have done brave and often pioneering work to elicit information on the extractive industry, and to publish it in hard-hitting reports, widely reported in the press, and discussed at high-level stakeholder meetings.

Yet, as Aworti and Nuvunga summarise in a policy brief based on their research, ‘neither these numerous investigative reports nor the EITI validation reports have inspired social and political action such as public protest or state prosecution.’   Corruption continues, and despite the newfound mineral wealth, the country remains one of the poorest in Africa.

The authors ask, ‘If information disclosure has not been enough to galvanise citizen and institutional action, what could be the reason?’ The research found 18 other factors that affect whether information leads to action, including the quality of the information and how it is disseminated, the degree of citizen empowerment, the nature of the political regime, and the role of external donors in insisting on accountability….

The research and the challenges highlighted by the Mozambique case point to the need for new approaches.   At the Global Assembly in Dakar several hundred of PWYP’s more than 700 members from 45 countries gathered to discuss and to approve the organisation’s next strategic plan. Among other points, the plan calls for going beyond transparency –  to more intentionally use information to foster and promote citizen action,  strengthen  grassroots participation and voice on mining issues, and  improve links with other related civil society movements working on gender, climate and tax justice in the extractives field.

Coming at a time where increasing push back and repression threaten the space for citizens to speak truth to power, this is a bold call.  I chaired two sessions with PWYP activists who had been beaten, jailed, threatened or exiled for challenging mining companies, and 70 per cent of the delegates at the conference said their work had been affected by this more repressive environment….(More)”.

Tomorrow’s Data Heroes


Article by Florian Gröne, Pierre Péladeau, and Rawia Abdel Samad: “Telecom companies are struggling to find a profitable identity in today’s digital sphere. What about helping customers control their information?…

By 2025, Alex had had enough. There no longer seemed to be any distinction between her analog and digital lives. Everywhere she went, every purchase she completed, and just about every move she made, from exercising at the gym to idly surfing the Web, triggered a vast flow of data. That in turn meant she was bombarded with personalized advertising messages, targeted more and more eerily to her. As she walked down the street, messages appeared on her phone about the stores she was passing. Ads popped up on her all-purpose tablet–computer–phone pushing drugs for minor health problems she didn’t know she had — until the symptoms appeared the next day. Worse, she had recently learned that she was being reassigned at work. An AI machine had mastered her current job by analyzing her use of the firm’s productivity software.

It was as if the algorithms of global companies knew more about her than she knew herself — and they probably did. How was it that her every action and conversation, even her thoughts, added to the store of data held about her? After all, it was her data: her preferences, dislikes, interests, friendships, consumer choices, activities, and whereabouts — her very identity — that was being collected, analyzed, profited from, and even used to manage her. All these companies seemed to be making money buying and selling this information. Why shouldn’t she gain some control over the data she generated, and maybe earn some cash by selling it to the companies that had long collected it free of charge?

So Alex signed up for the “personal data manager,” a new service that promised to give her control over her privacy and identity. It was offered by her U.S.-based connectivity company (in this article, we’ll call it DigiLife, but it could be one of many former telephone companies providing Internet services in 2025). During the previous few years, DigiLife had transformed itself into a connectivity hub: a platform that made it easier for customers to join, manage, and track interactions with media and software entities across the online world. Thanks to recently passed laws regarding digital identity and data management, including the “right to be forgotten,” the DigiLife data manager was more than window dressing. It laid out easy-to-follow choices that all Web-based service providers were required by law to honor….

Today, in 2019, personal data management applications like the one Alex used exist only in nascent form, and consumers have yet to demonstrate that they trust these services. Nor can they yet profit by selling their data. But the need is great, and so is the opportunity for companies that fulfill it. By 2025, the total value of the data economy as currently structured will rise to more than US$400 billion, and by monetizing the vast amounts of data they produce, consumers can potentially recapture as much as a quarter of that total.

Given the critical role of telecom operating companies within the digital economy — the central position of their data networks, their networking capabilities, their customer relationships, and their experience in government affairs — they are in a good position to seize this business opportunity. They might not do it alone; they are likely to form consortia with software companies or other digital partners. Nonetheless, for legacy connectivity companies, providing this type of service may be the most sustainable business option. It may also be the best option for the rest of us, as we try to maintain control in a digital world flooded with our personal data….(More)”.

Legitimate Change & The Critical Role of Cities


Blog by Indy Johar: “We are living in the midst of rapid change and mounting evidence of the fragility of public trust in societal institutions. Increasingly our means of change are restricted not by capital or capacity (though we often like to point at these shortfalls), but rather by our means to create legitimacy, or shared coherence as to the proposed direction of travel, even as the climate threats to our civilisation become increasingly paramount.

How do we address the growing fragility of legitimacy in our increasingly complex contexts? There are multiple forces, trends and drivers in play — including major demographic shifts, climate destabilisation, nutrient system hazards, and industrial revolution 4.0 consequences — which are creating feedback loops with second and third order spillovers and unintended or unimagined effects.

Cities are the sites where these complex systems knot together — including property rights, food systems, logistics, financial systems, water systems, human development institutions, schools, universities, etc. Transforming these underlying systems in an integrated manner is required in order to address the challenges we face and open up opportunities to create the full decarbonisation of our society, unlock inclusive innovation capacity of our economy, and build climate stabilisation resilience. This requires system innovation at the city scale.

It is this complexity, this knot of systems of systems and the need for socially legitimate solutions, which is forcing a new architecture of legitimacy and the growing global calls for the strategic devolution of nation states — and the rise of the city. But this transition is about more than just nation states handing over power to cities (which to date has been much of the call — understandably). If cities are to be genuine “engines” of Human Development 2.0, where we can address and transcend our societal challenges to create a regenerative industrial revolution 4.0, they will need to transform the lock-in of systems and unleash the economies of scope, context and systems change to create a legitimate landscape for solutions in a complex world. It is this latter work that needs to be developed and reimagined.

Remaking legitimacy involves remaking the deliberative and participatory infrastructure of civic debate and civic policy making. This needs to go beyond just new tools of opinion harvesting (whilst they do have a space and a need). We increasingly recognise addressing complex challenge requires deliberative processes if we are to avoid meaningless simplicity or meaningless solutions — either addressing averages that don’t exist, or wishing away reality as we are increasingly witnessing with the political denials of climate destabilisation….(More)”.