The Big Nine: How The Tech Titans and Their Thinking Machines Could Warp Humanity


Book by Amy Webb: “…A call-to-arms about the broken nature of artificial intelligence, and the powerful corporations that are turning the human-machine relationship on its head. We like to think that we are in control of the future of “artificial” intelligence. The reality, though, is that we–the everyday people whose data powers AI–aren’t actually in control of anything. When, for example, we speak with Alexa, we contribute that data to a system we can’t see and have no input into–one largely free from regulation or oversight. The big nine corporations–Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM and Apple–are the new gods of AI and are short-changing our futures to reap immediate financial gain.

In this book, Amy Webb reveals the pervasive, invisible ways in which the foundations of AI–the people working on the system, their motivations, the technology itself–are broken. Within our lifetimes, AI will, by design, begin to behave unpredictably, thinking and acting in ways that defy human logic. The big nine corporations may be inadvertently building and enabling vast arrays of intelligent systems that don’t share our motivations, desires, or hopes for the future of humanity.

Much more than a passionate, human-centered call-to-arms, this book delivers a strategy for changing course, and provides a path for liberating us from algorithmic decision-makers and powerful corporations….(More)”

The Stanford Open Policing Project


About: “On a typical day in the United States, police officers make more than 50,000 traffic stops. Our team is gathering, analyzing, and releasing records from millions of traffic stops by law enforcement agencies across the country. Our goal is to help researchers, journalists, and policymakers investigate and improve interactions between police and the public.

Currently, a comprehensive, national repository detailing interactions between police and the public doesn’t exist. That’s why the Stanford Open Policing Project is collecting and standardizing data on vehicle and pedestrian stops from law enforcement departments across the country — and we’re making that information freely available. We’ve already gathered 130 million records from 31 state police agencies and have begun collecting data on stops from law enforcement agencies in major cities, as well.

We, the Stanford Open Policing Project, are an interdisciplinary team of researchers and journalists at Stanford University. We are committed to combining the academic rigor of statistical analysis with the explanatory power of data journalism….(More)”.

Opening the Government of Canada: The Federal Bureaucracy in the Digital Age


Book by Amanda Clarke: “In the digital age, governments face growing calls to become more open, collaborative, and networked. But can bureaucracies abandon their closed-by-design mindsets and operations and, more importantly, should they?

Opening the Government of Canada presents a compelling case for the importance of a more open model of governance in the digital age – but a model that continues to uphold traditional democratic principles at the heart of the Westminster system. Drawing on interviews with public officials and extensive analysis of government documents and social media accounts, Clarke details the untold story of the Canadian federal bureaucracy’s efforts to adapt to new digital pressures from the mid-2000s onward. This book argues that the bureaucracy’s tradition of closed government, fuelled by today’s antagonistic political communications culture, is at odds with evolving citizen expectations and new digital policy tools, including social media, crowdsourcing, and open data. Amanda Clarke also cautions that traditional democratic principles and practices essential to resilient governance must not be abandoned in the digital age, which may justify a more restrained opening of our governing institutions than is currently proposed by many academics and governments alike.

Striking a balance between reform and tradition, Opening the Government of Canada concludes with a series of pragmatic recommendations that lay out a roadmap for building a democratically robust, digital-era federal government….(More)”.

The new ecosystem of trust: How data trusts, collaboratives and co-ops can help govern data for the maximum public benefit


Paper by Geoff Mulgan and Vincent Straub: “The world is struggling to govern data. The challenge is to reduce abuses of all kinds, enhance accountability and improve ethical standards, while also ensuring that the maximum public and private value can be derived from data.

Despite many predictions to the contrary, the world of commercial data is dominated by powerful organisations. By contrast, there are few institutions to protect the public interest, and those that do exist remain relatively weak. This paper argues that new institutions—an ecosystem of trust—are needed to ensure that uses of data are trusted and trustworthy. It advocates the creation of different kinds of data trust to fill this gap. It argues:

  • That we need, but currently lack, institutions that are good at thinking through, discussing, and explaining the often complex trade-offs that need to be made about data.
  • That the task of creating trust is different in different fields. Overly generic solutions are likely to fail.
  • That trusts need to be accountable—in some cases to individual members where there is a direct relationship with individuals giving consent, in other cases to the broader public.
  • That we should expect a variety of types of data trust to form—some sharing data; some managing synthetic data; some providing a research capability; some using commercial data and so on. The best analogy is finance, which over time has developed a very wide range of types of institution and governance.

This paper builds on a series of Nesta think pieces on data and knowledge commons published over the last decade and current practical projects that explore how data can be mobilised to improve healthcare, policing, the jobs market and education. It aims to provide a framework for designing a new family of institutions under the umbrella title of data trusts, tailored to different conditions of consent, and different patterns of private and public value. It draws on the work of many others (including the work of GovLab and the Open Data Institute).

Introduction

The governance of personal data of all kinds has recently moved from being a very marginal specialist issue to one of general concern. Too much data has been misused, lost, shared, sold or combined with little involvement of the people most affected, and little ethical awareness on the part of the organisations in charge.

The most visible responses have been general ones—like the EU’s GDPR. But these now need to be complemented by new institutions that can be generically described as ‘data trusts’.

In current practice the term ‘trust’ is used to describe a very wide range of institutions. These include private trusts, a type of legal structure that holds and makes decisions about assets, such as property or investments, and involves trustors, trustees, and beneficiaries. There are also public trusts in fields like education with a duty to provide a public benefit. Examples include the Nesta Trust and the National Trust. There are trusts in business (e.g. to manage pension funds). And there are trusts in the public sector, such as the BBC Trust and NHS Foundation Trusts with remits to protect the public interest, at arm’s length from political decisions.

It’s now over a decade since the first data trusts were set up as private initiatives in response to anxieties about abuse. These were important pioneers, though none achieved much scale or traction.

Now a great deal of work is underway around the world to consider what other types of trust might be relevant to data, so as to fill the governance vacuum—handling everything from transport data to personalised health, the internet of things to school records, and recognising the very different uses of data—by the state for taxation or criminal justice etc.; by academia for research; by business for use and resale; and to guide individual choices. This paper aims to feed into that debate.

1. The twin problems: trust and value

Two main clusters of problems are coming to prominence. The first cluster involves the misuse and overuse of data; the second involves the underuse of data.

1.1. Lack of control fuels distrust

The first problem is a lack of control and agency—individuals feel unable to control data about their own lives (from Facebook links and Google searches to retail behaviour and health) and communities are unable to control their own public data (as in Sidewalk Labs and other smart city projects that attempted to privatise public data). Lack of control leads to the risk of abuses of privacy, and a wider problem of decreasing trust—which survey evidence from the Open Data Institute (ODI) shows is key in determining the likelihood consumers will share their personal data (although this varies across countries). The lack of transparency regarding how personal data is then used to train algorithms making decisions only adds to the mistrust.

1.2 Lack of trust leads to a deficit of public value

The second, mirror cluster of problems concerns value. Flows of data promise a lot: better ways to assess problems, understand options, and make decisions. But current arrangements make it hard for individuals to realise the greatest value from their own data, and they make it even harder for communities to safely and effectively aggregate, analyse and link data to solve pressing problems, from health and crime to mobility. This is despite the fact that many consumers are prepared to make trade-offs: to share data if it benefits themselves and others—a 2018 Nesta poll found, for example, that 73 per cent of people said they would share their personal data in an effort to improve public services if there was a simple and secure way of doing it. A key reason for the failure to maximise public value is the lack of institutions that are sufficiently trusted to make judgements in the public interest.

Attempts to answer these problems sometimes point in opposite directions—one towards less free flow and less linking of data, the other towards more linking and combination. But any credible policy response has to address both simultaneously.

2. The current landscape

The governance field was largely empty earlier this decade. It is now full of activity, albeit at an early stage. Some is legislative—like GDPR and equivalents being considered around the world. Some is about standards—like Verify, IHAN and other standards intended to handle secure identity. Some is more entrepreneurial—like the many Personal Data Stores launched over the last decade, from Mydex to SOLID, Citizen-me to digi.me. Some are experiments—like the newly launched Amsterdam Data Exchange (Amdex) and the UK government’s recently announced efforts to fund data trust pilots to tackle wildlife conservation, working with the ODI. Finally, we are now beginning to see new institutions within government to guide and shape activity, notably the new Centre for Data Ethics and Innovation.

Many organisations have done pioneering work, including the ODI in the UK and NYU GovLab with its work on data collaboratives. At Nesta, as part of the Europe-wide DECODE consortium, we are helping to develop new tools to give people control of their personal data while the Next Generation Internet (NGI) initiative is focused on creating a more inclusive, human-centric and resilient internet—with transparency and privacy as two of the guiding pillars.

The task of governing data better brings together many elements, from law and regulation to ethics and standards. We are just beginning to see more serious discussion about tax and data—from proposals to tax digital platforms’ turnover to more targeted taxes on data harvesting in public places or infrastructures—and more serious debate around regulation. This paper deals with just one part of this broader picture: the role of institutions dedicated to curating data in the public interest….(More)”.

State Capability, Policymaking and the Fourth Industrial Revolution


Demos Helsinki: “The world as we know it is built on the structures of the industrial era – and these structures are falling apart. Yet the vision of a new, sustainable and fair post-industrial society remains unclear. This discussion paper is the result of a collaboration between a group of organisations interested in the implications of the rapid technological development to policymaking processes and knowledge systems that inform policy decisions.

In the discussion paper, we set out to explore the main opportunities and concerns that the Fourth Industrial Revolution brings for policymaking and knowledge systems, particularly in middle-income countries. Overall, middle-income countries are home to five billion of the world’s seven billion people and 73 per cent of the world’s poor people; they represent about one-third of the global Gross Domestic Product (GDP) and are major engines of global growth (World Bank 2018).

The paper is co-produced with Capability (Finland), Demos Helsinki (Finland), HELVETAS Swiss Intercooperation (Switzerland), Politics & Ideas (global), Southern Voice (global), UNESCO Montevideo (Uruguay) and Using Evidence (Canada).

The guiding questions for this paper are:

– What are the critical elements of the Fourth Industrial Revolution?

– What does the literature say about the impact of this revolution on societies and economies, and in particular on middle-income countries?

– What are the implications of the Fourth Industrial Revolution for the achievement of the Sustainable Development Goals (SDGs) in middle-income countries?

– What does the literature say about the challenges for governance and the ways knowledge can inform policy during the Fourth Industrial Revolution?…(More)”.

Full discussion paper: “State Capability, Policymaking and the Fourth Industrial Revolution: Do Knowledge Systems Matter?”

The privacy threat posed by detailed census data


Gillian Tett at the Financial Times: “Wilbur Ross suffered the political equivalent of a small(ish) black eye last month: a federal judge blocked the US commerce secretary’s attempts to insert a question about citizenship into the 2020 census and accused him of committing “egregious” legal violations.

The Supreme Court has agreed to hear the administration’s appeal in April. But while this high-profile fight unfolds, there is a second, less noticed, census issue about data privacy emerging that could have big implications for businesses (and citizens). Last weekend John Abowd, the Census Bureau’s chief scientist, told an academic gathering that statisticians had uncovered shortcomings in the protection of personal data in past censuses. There is no public evidence that anyone has actually used these weaknesses to hack records, and Mr Abowd insisted that the bureau is using cutting-edge tools to fight back. But, if nothing else, this revelation shows the mounting problem around data privacy. Or, as Mr Abowd noted: “These developments are sobering to everyone.” These flaws are “not just a challenge for statistical agencies or internet giants,” he added, but affect any institution engaged in internet commerce and “bioinformatics”, as well as commercial lenders and non-profit survey groups. Bluntly, this includes most companies and banks.

The crucial problem revolves around what is known as “re-identification” risk. When companies and government institutions amass sensitive information about individuals, they typically protect privacy in two ways: they hide the full data set from outside eyes or they release it in an “anonymous” manner, stripped of identifying details. The census bureau does both: it is required by law to publish detailed data and protect confidentiality. Since 1990, it has tried to resolve these contradictory mandates by using “household-level swapping” — moving some households from one geographic location to another to generate enough uncertainty to prevent re-identification. This used to work. But today there are so many commercially-available data sets and computers are so powerful that it is possible to re-identify “anonymous” data by combining data sets. …
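The linkage attack the article describes can be sketched in a few lines of Python. Everything below — the records, the names, the quasi-identifiers — is invented for illustration; the point is only that an "anonymous" release joined to a public auxiliary dataset on shared attributes can recover identities.

```python
# Hypothetical illustration of re-identification risk: an "anonymised"
# release (names stripped) is linked to a public auxiliary dataset
# (e.g. a voter roll) on quasi-identifiers. All records are invented.

anonymised_release = [
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1972, "sex": "M", "diagnosis": "diabetes"},
]

public_records = [
    {"name": "Jane Doe", "zip": "02138", "birth_year": 1965, "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth_year": 1972, "sex": "M"},
]

def reidentify(release, auxiliary):
    """Link records that agree on every quasi-identifier."""
    key = lambda r: (r["zip"], r["birth_year"], r["sex"])
    lookup = {key(r): r["name"] for r in auxiliary}
    return [
        {"name": lookup[key(r)], "diagnosis": r["diagnosis"]}
        for r in release
        if key(r) in lookup
    ]

# Every "anonymous" record links back to a named individual.
matches = reidentify(anonymised_release, public_records)
```

Swapping some households between locations, as the bureau has done since 1990, works by making such joins uncertain — but only so long as attackers lack enough auxiliary datasets to resolve the ambiguity.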

Thankfully, statisticians think there is a solution. The Census Bureau now plans to use a technique known as “differential privacy” which would introduce “noise” into the public statistics, using complex algorithms. This technique is expected to create just enough statistical fog to protect personal confidentiality in published data — while also preserving information in an encrypted form that statisticians can later unscramble, as needed. Companies such as Google, Microsoft and Apple have already used variants of this technique for several years, seemingly successfully. However, nobody has employed this system on the scale that the Census Bureau needs — or in relation to such a high stakes event. And the idea has sparked some controversy because some statisticians fear that even “differential privacy” tools can be hacked — and others fret it makes data too “noisy” to be useful….(More)”.
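The "noise" idea at the heart of differential privacy can be illustrated with a minimal sketch: add Laplace-distributed noise, scaled to the query's sensitivity, to a count before release. The function names, epsilon value, and counts below are illustrative assumptions, not the Census Bureau's actual implementation.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# Illustrative only: real deployments (e.g. the 2020 Census) allocate
# a privacy budget across many queries and post-process the results.
import random

def laplace_sample(scale: float) -> float:
    # The difference of two exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1 (one person changes the count
    # by at most 1), so noise of scale 1/epsilon gives
    # epsilon-differential privacy for this single release.
    return true_count + laplace_sample(1.0 / epsilon)

random.seed(0)  # fixed seed so the sketch is reproducible
published = noisy_count(1234, epsilon=0.5)
# The published figure stays close to the true count, but no individual's
# presence or absence can be confidently inferred from it.
```

The tension the article notes falls directly out of the `epsilon` parameter: a smaller epsilon means more noise and stronger privacy, but statistics that are "foggier" and less useful.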

A Parent-To-Parent Campaign To Get Vaccine Rates Up


Alex Olgin at NPR: “In 2017, Kim Nelson had just moved her family back to her hometown in South Carolina. Boxes were still scattered around the apartment, and while her two young daughters played, Nelson scrolled through a newspaper article on her phone. It said religious exemptions for vaccines had jumped nearly 70 percent in recent years in the Greenville area — the part of the state she had just moved to.

She remembers yelling to her husband in the other room, “David, you have to get in here! I can’t believe this.”

Up until that point, Nelson hadn’t run into mom friends who didn’t vaccinate….

Nelson started her own group, South Carolina Parents for Vaccines. She began posting scientific articles online. She started responding to private messages from concerned parents with specific questions. She also found that positive reinforcement was important and would roam around the mom groups, sprinkling affirmations.

“If someone posts, ‘My child got their two-months shots today,’ ” Nelson says, she’d quickly post a follow-up comment: “Great job, mom!”

Nelson was inspired by peer-focused groups around the country doing similar work. Groups with national reach like Voices for Vaccines and regional groups like Vax Northwest in Washington state take a similar approach, encouraging parents to get educated and share facts about vaccines with other parents….

Public health specialists are raising concerns about the need to improve vaccination rates. But efforts to reach vaccine-hesitant parents often fail. When presented with facts about vaccine safety, parents often remain entrenched in a decision not to vaccinate.

Pediatricians could play a role — and many do — but they’re not compensated to have lengthy discussions with parents, and some of them find it a frustrating task. That has left an opening for alternative approaches, like Nelson’s.

Nelson thought it would be best to zero in on moms who were still on the fence about vaccines.

“It’s easier to pull a hesitant parent over than it is somebody who is firmly anti-vax,” Nelson says. She explains that parents who oppose vaccination often feel so strongly about it that they won’t engage in a discussion. “They feel validated by that choice — it’s part of community, it’s part of their identity.”…(More)”.

Open data governance and open governance: interplay or disconnect?


Blog Post by Ana Brandusescu, Carlos Iglesias, Danny Lämmerhirt, and Stefaan Verhulst (in alphabetical order): “The presence of open data often gets listed as an essential requirement toward “open governance”. For instance, an open data strategy is viewed as a key component of many action plans submitted to the Open Government Partnership. Yet little time is spent on assessing how open data itself is governed, or how it embraces open governance. For example, not much is known about whether the principles and practices that guide the opening up of government — such as transparency, accountability, user-centrism, ‘demand-driven’ design thinking — also guide decision-making on how to release open data.

At the same time, data governance has become more complex and open data decision-makers face heightened concerns with regards to privacy and data protection. The recent implementation of the EU’s General Data Protection Regulation (GDPR) has generated an increased awareness worldwide of the need to prevent and mitigate the risks of personal data disclosures, and that has also affected the open data community. Before opening up data, concerns of data breaches, the abuse of personal information, and the potential of malicious inference from publicly available data may have to be taken into account. In turn, questions of how to sustain existing open data programs, user-centrism, and publishing with purpose gain prominence.

To better understand the practices and challenges of open data governance, we have outlined a research agenda in an earlier blog post. Since then, and perhaps as a result, governance has emerged as an important topic for the open data community. The audience attending the 5th International Open Data Conference (IODC) in Buenos Aires deemed governance of open data to be the most important discussion topic. For instance, discussions around the Open Data Charter principles during and prior to the IODC acknowledged the role of an integrated governance approach to data handling, sharing, and publication. Some conclude that the open data movement has brought about better governance, skills, and technologies of public information management, which are of enormous long-term value for government. But what does open data governance look like?

Understanding open data governance

To expand our earlier exploration and broaden the community that considers open data governance, we convened a workshop at the Open Data Research Symposium 2018. Bringing together open data professionals, civil servants, and researchers, we focused on:

  • What is open data governance?
  • When can we speak of “good” open data governance, and
  • How can the research community help open data decision-makers toward “good” open data governance?

In this session, open data governance was defined as the interplay of rules, standards, tools, principles, processes and decisions that influence what government data is opened up, how and by whom. We then explored multiple layers that can influence open data governance.

In the following, we illustrate possible questions to start mapping the layers of open data governance. As they reflect the experiences of session participants, we see them as starting points for fresh ethnographic and descriptive research on the daily practices of open data governance in governments….(More)”.

Can transparency make extractive industries more accountable?


Blog by John Gaventa at IDS: “Over the last two decades great strides have been made in terms of holding extractive industries accountable. As demonstrated at the Global Assembly of Publish What You Pay (PWYP), which I attended recently in Dakar, Senegal, more information than ever about revenue flows to governments from the oil, gas and mining industries is now publicly available. But new research suggests that such information disclosure, while important, is by itself not enough to hold companies to account and address corruption.

… a recent study in Mozambique by researchers Nicholas Aworti and Adriano Nuvunga questions this assumption. Supported by the Action for Empowerment and Accountability (A4EA) Research Programme, the research explored why greater transparency of information has not necessarily led to greater social and political action for accountability.

Like many countries in Africa, Mozambique is experiencing massive outside investments in recently discovered natural resources, including rich deposits of natural gas and oil, as well as coal and other minerals.  Over the last decade, NGOs like the Centre for Public Integrity, who helped facilitate the study, have done brave and often pioneering work to elicit information on the extractive industry, and to publish it in hard-hitting reports, widely reported in the press, and discussed at high-level stakeholder meetings.

Yet, as Aworti and Nuvunga summarise in a policy brief based on their research, ‘neither these numerous investigative reports nor the EITI validation reports have inspired social and political action such as public protest or state prosecution.’   Corruption continues, and despite the newfound mineral wealth, the country remains one of the poorest in Africa.

The authors ask, ‘If information disclosure has not been enough to galvanise citizen and institutional action, what could be the reason?’ The research found 18 other factors that affect whether information leads to action, including the quality of the information and how it is disseminated, the degree of citizen empowerment, the nature of the political regime, and the role of external donors in insisting on accountability….

The research and the challenges highlighted by the Mozambique case point to the need for new approaches. At the Global Assembly in Dakar several hundred of PWYP’s more than 700 members from 45 countries gathered to discuss and to approve the organisation’s next strategic plan. Among other points, the plan calls for going beyond transparency – to more intentionally use information to foster and promote citizen action, strengthen grassroots participation and voice on mining issues, and improve links with other related civil society movements working on gender, climate and tax justice in the extractives field.

Coming at a time where increasing push back and repression threaten the space for citizens to speak truth to power, this is a bold call.  I chaired two sessions with PWYP activists who had been beaten, jailed, threatened or exiled for challenging mining companies, and 70 per cent of the delegates at the conference said their work had been affected by this more repressive environment….(More)”.

Governance of artificial intelligence and personal health information


Jenifer Sunrise Winter in Digital Policy, Regulation and Governance: “This paper aims to assess the increasing challenges to governing the personal health information (PHI) essential for advancing artificial intelligence (AI) machine learning innovations in health care. Risks to privacy and justice/equity are discussed, along with potential solutions….

This paper argues that these characteristics of machine learning will overwhelm existing data governance approaches such as privacy regulation and informed consent. Enhanced governance techniques and tools will be required to help preserve the autonomy and rights of individuals to control their PHI. Debate among all stakeholders and informed critique of how, and for whom, PHI-fueled health AI are developed and deployed are needed to channel these innovations in societally beneficial directions.

Health data may be used to address pressing societal concerns, such as operational and system-level improvement, and innovations such as personalized medicine. This paper informs work seeking to harness these resources for societal good amidst many competing value claims and substantial risks for privacy and security….(More)”.