From Gutenberg to Google: The History of Our Future


Book by Tom Wheeler: “Network revolutions of the past have shaped the present and set the stage for the revolution we are experiencing today

In an era of seemingly instant change, it’s easy to think that today’s revolutions—in communications, business, and many areas of daily life—are unprecedented. Today’s changes may be new and may be happening faster than ever before. But our ancestors at times were just as bewildered by rapid upheavals in what we now call “networks”—the physical links that bind any society together.

In this fascinating book, former FCC chairman Tom Wheeler brings to life the two great network revolutions of the past and uses them to help put in perspective the confusion, uncertainty, and even excitement most people face today. The first big network revolution was the invention of movable-type printing in the fifteenth century. This book, its millions of predecessors, and even such broad trends as the Reformation, the Renaissance, and the multiple scientific revolutions of the past 500 years would not have been possible without that one invention. The second revolution came with the invention of the telegraph early in the nineteenth century. Never before had people been able to communicate over long distances faster than a horse could travel. Along with the development of the world’s first high-speed network—the railroad—the telegraph upended centuries of stability and literally redrew the map of the world.

Wheeler puts these past revolutions into the perspective of today, when rapid-fire changes in networking are upending the nature of work, personal privacy, education, the media, and nearly every other aspect of modern life. But he doesn’t leave it there. Outlining “What’s Next,” he describes how artificial intelligence, virtual reality, blockchain, and the need for cybersecurity are laying the foundation for a third network revolution….(More)”.

Consumers kinda, sorta care about their data


Kim Hart at Axios: “A full 81% of consumers say that in the past year they’ve become more concerned with how companies are using their data, and 87% say they’ve come to believe companies that manage personal data should be more regulated, according to a survey out Monday by IBM’s Institute for Business Value.

Yes, but: They aren’t totally convinced they should care about how their data is being used, and many aren’t taking meaningful action after privacy breaches, according to the survey. Despite increasing data risks, 71% say it’s worth sacrificing privacy given the benefits of technology.

By the numbers:

  • 89% say technology companies need to be more transparent about their products
  • 75% say that in the past year they’ve become less likely to trust companies with their personal data
  • 88% say the emergence of technologies like AI increases the need for clear policies about the use of personal data.

The other side: Despite increasing awareness of privacy and security breaches, most consumers aren’t taking consequential action to protect their personal data.

  • Fewer than half (45%) report that they’ve updated privacy settings, and only 16% stopped doing business with an entity due to data misuse….(More)”.

You Do Not Need Blockchain: Eight Popular Use Cases And Why They Do Not Work


Blog Post by Ivan Ivanitskiy: “People are resorting to blockchain for all kinds of reasons these days. Ever since I started doing smart contract security audits in mid-2017, I’ve seen it all. A special category of cases is ‘blockchain use’ that seems logical and beneficial but actually contains a problem, which then spreads from one startup to another. I am going to give some examples of such problems and ineffective solutions so that you (developer/customer/investor) know what to do when somebody proposes using blockchain this way.

1. Supply chain management

Let’s say you ordered some goods, and a carrier guarantees to maintain certain transportation conditions, such as keeping your goods cold. A proposed solution is to install a sensor in a truck that will monitor fridge temperature and regularly transmit the data to the blockchain. This way, you can make sure that the promised conditions are met along the entire route.

The problem here is not blockchain-related but sensor-related. Being part of the physical world, the sensor is easy to fool. For example, a malicious carrier might only cool down a small fridge inside the truck in which they put the sensor, while leaving the goods in the non-refrigerated section of the truck to save costs.

I would describe this problem as:

Blockchain is not Internet of Things (IoT).

We will return to this statement a few more times. Even though blockchain does not allow data to be modified, it cannot ensure that data is correct. The only exception is purely on-chain transactions, where the system does not need the real world: all the necessary information is already within the blockchain, allowing the system to verify data (e.g. that an address has enough funds to proceed with a transaction).

Applications that submit information to a blockchain from the outside are called “oracles” (see article ‘Oracles, or Why Smart Contracts Haven’t Changed the World Yet?’ by Alexander Drygin). Until a solution to the problem with oracles is found, any attempt at blockchain-based supply chain management, like the case above, is as pointless as trying to design a plane without first developing a reliable engine.
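The distinction above can be sketched in a few lines of code. The toy ledger below is purely illustrative (class and method names are invented, and this is in no way a real blockchain): a transfer can be rejected because all the facts needed to check it live on-chain, while an oracle's sensor reading can only be recorded tamper-evidently, never verified.

```python
# Toy illustration (not a real blockchain): a ledger can *verify* on-chain
# facts such as balances, but can only *record* oracle-supplied sensor data.
class ToyLedger:
    def __init__(self):
        self.balances = {}
        self.records = []  # append-only log; immutable, but not verifiable

    def transfer(self, sender, recipient, amount):
        # On-chain verification: every fact needed is inside the ledger.
        if self.balances.get(sender, 0) < amount:
            return False  # rejected: provably invalid
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

    def submit_oracle_reading(self, sensor_id, temperature_c):
        # The ledger cannot check whether the fridge was really this cold;
        # it can only make the *claim* tamper-evident after the fact.
        self.records.append({"sensor": sensor_id, "temp_c": temperature_c})
        return True  # always accepted: trust rests entirely on the oracle

ledger = ToyLedger()
ledger.balances["alice"] = 10
assert ledger.transfer("alice", "bob", 15) is False  # verifiable rejection
assert ledger.transfer("alice", "bob", 5) is True
# A dishonest carrier's sensor in a small cooled box still "passes":
ledger.submit_oracle_reading("truck-7", 3.5)
```

Note that the fraudulent reading is accepted just as readily as an honest one; the immutability guarantee starts only after the data crosses the oracle boundary.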

I borrowed the fridge case from the article ‘Do you Need Blockchain’ by Karl Wüst and Arthur Gervais. I highly recommend reading this article and paying particular attention to the following diagram:

2. Object authenticity guarantee

Even though this case is similar to the previous one, I would like to single it out as it is presented in a different wrapper.

Say we make unique and expensive goods, such as watches, wines, or cars. We want our customers to be absolutely sure they are buying something made by us, so we link our wine bottle to a token supported by blockchain and put a QR code on it. Now, every step of the way (from manufacturer, to carrier, to store, to customer) is confirmed by a separate blockchain transaction and the customer can track their bottle online.

However, this system is vulnerable to a very simple threat: a dishonest seller can make a copy of a real bottle with a token, fill it with wine of lower quality, and either steal your expensive wine or sell it to someone who does not care about tokens. Why is it so easy? That’s right! Because…(More)”
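The vulnerability can be made concrete with a minimal sketch (the registry and function names here are hypothetical, not from the article): a token registry faithfully proves the token's provenance, but a photocopied QR code carries a perfectly valid token ID, so the counterfeit bottle passes every on-chain check.

```python
# Hypothetical sketch: a provenance registry attests to the *token*,
# not to the physical bottle it is supposedly attached to.
provenance = {}  # token_id -> list of custody transfers

def record_transfer(token_id, holder):
    # Each step of the supply chain appends a custody record.
    provenance.setdefault(token_id, []).append(holder)

def verify(token_id):
    # All the chain can attest: this token ID has an unbroken history.
    return provenance.get(token_id, [])

record_transfer("QR-123", "manufacturer")
record_transfer("QR-123", "carrier")
record_transfer("QR-123", "store")

genuine_bottle_qr = "QR-123"
counterfeit_bottle_qr = "QR-123"  # the seller simply copied the label
# The chain cannot tell the two bottles apart:
assert verify(genuine_bottle_qr) == verify(counterfeit_bottle_qr)
```

The check succeeds for both bottles because the binding between token and physical object is exactly the part the blockchain cannot see.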

Collective Emotions and Protest Vote


Paper by Carlo Altomonte, Gloria Gennaro and Francesco Passarelli: “We leverage important findings in social psychology to build a behavioral theory of the protest vote. An individual develops a feeling of resentment if she loses income over time while richer people do not, or if she does not gain as others do, i.e. when her relative deprivation increases. In line with Intergroup Emotions Theory, this feeling is amplified if the individual identifies with a community experiencing the same feeling. Such a negative collective emotion, which we define as aggrievement, fuels the desire to take revenge against traditional parties and the richer elite, a common trait of populist rhetoric.

The theory predicts higher support for the protest party when individuals identify more strongly with their local community and when a higher share of community members is aggrieved. We test this theory using longitudinal data on British households, exploiting the emergence of the UK Independence Party (UKIP) in Great Britain in the 2010 and 2015 national elections. Empirical findings robustly support the theoretical predictions. The psychological mechanism postulated by our theory survives controls for alternative non-behavioral mechanisms (e.g. information sharing or political activism in local communities)….(More)”.

The Big Nine: How The Tech Titans and Their Thinking Machines Could Warp Humanity


Book by Amy Webb: “…A call-to-arms about the broken nature of artificial intelligence, and the powerful corporations that are turning the human-machine relationship on its head. We like to think that we are in control of the future of “artificial” intelligence. The reality, though, is that we–the everyday people whose data powers AI–aren’t actually in control of anything. When, for example, we speak with Alexa, we contribute that data to a system we can’t see and have no input into–one largely free from regulation or oversight. The big nine corporations–Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM and Apple–are the new gods of AI and are short-changing our futures to reap immediate financial gain.

In this book, Amy Webb reveals the pervasive, invisible ways in which the foundations of AI–the people working on the system, their motivations, the technology itself–are broken. Within our lifetimes, AI will, by design, begin to behave unpredictably, thinking and acting in ways which defy human logic. The big nine corporations may be inadvertently building and enabling vast arrays of intelligent systems that don’t share our motivations, desires, or hopes for the future of humanity.

Much more than a passionate, human-centered call-to-arms, this book delivers a strategy for changing course, and provides a path for liberating us from algorithmic decision-makers and powerful corporations….(More)”

The Stanford Open Policing Project


About: “On a typical day in the United States, police officers make more than 50,000 traffic stops. Our team is gathering, analyzing, and releasing records from millions of traffic stops by law enforcement agencies across the country. Our goal is to help researchers, journalists, and policymakers investigate and improve interactions between police and the public.

Currently, a comprehensive, national repository detailing interactions between police and the public doesn’t exist. That’s why the Stanford Open Policing Project is collecting and standardizing data on vehicle and pedestrian stops from law enforcement departments across the country — and we’re making that information freely available. We’ve already gathered 130 million records from 31 state police agencies and have begun collecting data on stops from law enforcement agencies in major cities, as well.

We, the Stanford Open Policing Project, are an interdisciplinary team of researchers and journalists at Stanford University. We are committed to combining the academic rigor of statistical analysis with the explanatory power of data journalism….(More)”.

Opening the Government of Canada: The Federal Bureaucracy in the Digital Age


Book by Amanda Clarke: “In the digital age, governments face growing calls to become more open, collaborative, and networked. But can bureaucracies abandon their closed-by-design mindsets and operations and, more importantly, should they?

Opening the Government of Canada presents a compelling case for the importance of a more open model of governance in the digital age – but a model that continues to uphold traditional democratic principles at the heart of the Westminster system. Drawing on interviews with public officials and extensive analysis of government documents and social media accounts, Clarke details the untold story of the Canadian federal bureaucracy’s efforts to adapt to new digital pressures from the mid-2000s onward. This book argues that the bureaucracy’s tradition of closed government, fuelled by today’s antagonistic political communications culture, is at odds with evolving citizen expectations and new digital policy tools, including social media, crowdsourcing, and open data. Amanda Clarke also cautions that traditional democratic principles and practices essential to resilient governance must not be abandoned in the digital age, which may justify a more restrained opening of our governing institutions than is currently proposed by many academics and governments alike.

Striking a balance between reform and tradition, Opening the Government of Canada concludes with a series of pragmatic recommendations that lay out a roadmap for building a democratically robust, digital-era federal government….(More)”.

The new ecosystem of trust: How data trusts, collaboratives and coops can help govern data for the maximum public benefit


Paper by Geoff Mulgan and Vincent Straub: “The world is struggling to govern data. The challenge is to reduce abuses of all kinds, enhance accountability and improve ethical standards, while also ensuring that the maximum public and private value can be derived from data.

Despite many predictions to the contrary the world of commercial data is dominated by powerful organisations. By contrast, there are few institutions to protect the public interest and those that do exist remain relatively weak. This paper argues that new institutions—an ecosystem of trust—are needed to ensure that uses of data are trusted and trustworthy. It advocates the creation of different kinds of data trust to fill this gap. It argues:

  • That we need, but currently lack, institutions that are good at thinking through, discussing, and explaining the often complex trade-offs that need to be made about data.
  • That the task of creating trust is different in different fields. Overly generic solutions are likely to fail.
  • That trusts need to be accountable—in some cases to individual members where there is a direct relationship with individuals giving consent, in other cases to the broader public.
  • That we should expect a variety of types of data trust to form—some sharing data; some managing synthetic data; some providing a research capability; some using commercial data and so on. The best analogy is finance, which over time has developed a very wide range of types of institution and governance.

This paper builds on a series of Nesta think pieces on data and knowledge commons published over the last decade and current practical projects that explore how data can be mobilised to improve healthcare, policing, the jobs market and education. It aims to provide a framework for designing a new family of institutions under the umbrella title of data trusts, tailored to different conditions of consent, and different patterns of private and public value. It draws on the work of many others (including the work of GovLab and the Open Data Institute).

Introduction

The governance of personal data of all kinds has recently moved from being a very marginal specialist issue to one of general concern. Too much data has been misused, lost, shared, sold or combined with little involvement of the people most affected, and little ethical awareness on the part of the organisations in charge.

The most visible responses have been general ones—like the EU’s GDPR. But these now need to be complemented by new institutions that can be generically described as ‘data trusts’.

In current practice the term ‘trust’ is used to describe a very wide range of institutions. These include private trusts, a type of legal structure that holds and makes decisions about assets, such as property or investments, and involves trustors, trustees, and beneficiaries. There are also public trusts in fields like education with a duty to provide a public benefit. Examples include the Nesta Trust and the National Trust. There are trusts in business (e.g. to manage pension funds). And there are trusts in the public sector, such as the BBC Trust and NHS Foundation Trusts with remits to protect the public interest, at arm’s length from political decisions.

It’s now over a decade since the first data trusts were set up as private initiatives in response to anxieties about abuse. These were important pioneers though none achieved much scale or traction.

Now a great deal of work is underway around the world to consider what other types of trust might be relevant to data, so as to fill the governance vacuum—handling everything from transport data to personalised health, the internet of things to school records, and recognising the very different uses of data—by the state for taxation or criminal justice etc.; by academia for research; by business for use and resale; and to guide individual choices. This paper aims to feed into that debate.

1. The twin problems: trust and value

Two main clusters of problems are coming to prominence. The first cluster involves misuse and overuse of data; the second involves underuse of data.

1.1. Lack of control fuels distrust

The first problem is a lack of control and agency—individuals feel unable to control data about their own lives (from Facebook links and Google searches to retail behaviour and health) and communities are unable to control their own public data (as in Sidewalk Labs and other smart city projects that attempted to privatise public data). Lack of control leads to the risk of privacy abuses, and to a wider problem of decreasing trust—which survey evidence from the Open Data Institute (ODI) shows is key in determining how likely consumers are to share their personal data (although this varies across countries). The lack of transparency regarding how personal data is then used to train decision-making algorithms only adds to the mistrust.

1.2 Lack of trust leads to a deficit of public value

The second, mirror cluster of problems concerns value. Flows of data promise a lot: better ways to assess problems, understand options, and make decisions. But current arrangements make it hard for individuals to realise the greatest value from their own data, and they make it even harder for communities to safely and effectively aggregate, analyse and link data to solve pressing problems, from health and crime to mobility. This is despite the fact that many consumers are prepared to make trade-offs: to share data if it benefits themselves and others—a 2018 Nesta poll found, for example, that 73 per cent of people said they would share their personal data in an effort to improve public services if there was a simple and secure way of doing it. A key reason for the failure to maximise public value is the lack of institutions that are sufficiently trusted to make judgements in the public interest.

Attempts to answer these problems sometimes point in opposite directions—the one towards less free flow, less linking of data, the other towards more linking and combination. But any credible policy responses have to address both simultaneously.

2. The current landscape

The governance field was largely empty earlier this decade. It is now full of activity, albeit at an early stage. Some is legislative—like GDPR and the equivalents being considered around the world. Some is about standards—like Verify, IHAN and other standards intended to handle secure identity. Some is more entrepreneurial—like the many Personal Data Stores launched over the last decade, from Mydex to SOLID, Citizen-me to digi.me. Some takes the form of experiments—like the newly launched Amsterdam Data Exchange (Amdex) and the UK government’s recently announced efforts to fund data trust pilots to tackle wildlife conservation, working with the ODI. Finally, we are now beginning to see new institutions within government to guide and shape activity, notably the new Centre for Data Ethics and Innovation.

Many organisations have done pioneering work, including the ODI in the UK and NYU GovLab with its work on data collaboratives. At Nesta, as part of the Europe-wide DECODE consortium, we are helping to develop new tools to give people control of their personal data while the Next Generation Internet (NGI) initiative is focused on creating a more inclusive, human-centric and resilient internet—with transparency and privacy as two of the guiding pillars.

The task of governing data better brings together many elements, from law and regulation to ethics and standards. We are just beginning to see more serious discussion about tax and data—from proposals to tax digital platforms’ turnover to more targeted taxes on data harvesting in public places or infrastructures—and more serious debate around regulation. This paper deals with just one part of this broader picture: the role of institutions dedicated to curating data in the public interest….(More)”.

State Capability, Policymaking and the Fourth Industrial Revolution


Demos Helsinki: “The world as we know it is built on the structures of the industrial era – and these structures are falling apart. Yet the vision of a new, sustainable and fair post-industrial society remains unclear. This discussion paper is the result of a collaboration between a group of organisations interested in the implications of rapid technological development for the policymaking processes and knowledge systems that inform policy decisions.

In the discussion paper, we set out to explore the main opportunities and concerns that the Fourth Industrial Revolution brings for policymaking and knowledge systems, particularly in middle-income countries. Overall, middle-income countries are home to five billion of the world’s seven billion people and 73 per cent of the world’s poor; they represent about one-third of global Gross Domestic Product (GDP) and are major engines of global growth (World Bank 2018).

The paper is co-produced with Capability (Finland), Demos Helsinki (Finland), HELVETAS Swiss Intercooperation (Switzerland), Politics & Ideas (global), Southern Voice (global), UNESCO Montevideo (Uruguay) and Using Evidence (Canada).

The guiding questions for this paper are:

– What are the critical elements of the Fourth Industrial Revolution?

– What does the literature say about the impact of this revolution on societies and economies, and in particular on middle-income countries?

– What are the implications of the Fourth Industrial Revolution for the achievement of the Sustainable Development Goals (SDGs) in middle-income countries?

– What does the literature say about the challenges for governance and the ways knowledge can inform policy during the Fourth Industrial Revolution?…(More)”.

Full discussion paper: “State Capability, Policymaking and the Fourth Industrial Revolution: Do Knowledge Systems Matter?”

The privacy threat posed by detailed census data


Gillian Tett at the Financial Times: “Wilbur Ross suffered the political equivalent of a small(ish) black eye last month: a federal judge blocked the US commerce secretary’s attempts to insert a question about citizenship into the 2020 census and accused him of committing “egregious” legal violations.

The Supreme Court has agreed to hear the administration’s appeal in April. But while this high-profile fight unfolds, there is a second, less noticed, census issue about data privacy emerging that could have big implications for businesses (and citizens). Last weekend John Abowd, the Census Bureau’s chief scientist, told an academic gathering that statisticians had uncovered shortcomings in the protection of personal data in past censuses. There is no public evidence that anyone has actually used these weaknesses to hack records, and Mr Abowd insisted that the bureau is using cutting-edge tools to fight back. But, if nothing else, this revelation shows the mounting problem around data privacy. Or, as Mr Abowd noted: “These developments are sobering to everyone.” These flaws are “not just a challenge for statistical agencies or internet giants,” he added, but affect any institution engaged in internet commerce and “bioinformatics”, as well as commercial lenders and non-profit survey groups. Bluntly, this includes most companies and banks.

The crucial problem revolves around what is known as “re-identification” risk. When companies and government institutions amass sensitive information about individuals, they typically protect privacy in two ways: they hide the full data set from outside eyes or they release it in an “anonymous” manner, stripped of identifying details. The census bureau does both: it is required by law to publish detailed data and protect confidentiality. Since 1990, it has tried to resolve these contradictory mandates by using “household-level swapping” — moving some households from one geographic location to another to generate enough uncertainty to prevent re-identification. This used to work. But today there are so many commercially-available data sets and computers are so powerful that it is possible to re-identify “anonymous” data by combining data sets. …
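The linkage attack described above is simple enough to sketch in a few lines. The records, names and quasi-identifier fields below are entirely invented for illustration: an "anonymized" release (names stripped) is joined to a public auxiliary dataset on shared attributes, and the identities come straight back.

```python
# Hypothetical illustration of re-identification risk: joining an
# "anonymized" data release to a public auxiliary dataset on
# quasi-identifiers (ZIP code, birth year, sex) re-attaches names.
anonymized_release = [
    {"zip": "02138", "birth_year": 1945, "sex": "F", "diagnosis": "flu"},
    {"zip": "60601", "birth_year": 1980, "sex": "M", "diagnosis": "asthma"},
]
public_records = [  # e.g. a voter roll, which includes names
    {"name": "J. Smith", "zip": "02138", "birth_year": 1945, "sex": "F"},
    {"name": "A. Jones", "zip": "60601", "birth_year": 1980, "sex": "M"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(release, auxiliary):
    # Index the auxiliary dataset by its quasi-identifier combination,
    # then look each "anonymous" row up in that index.
    index = {tuple(r[k] for k in QUASI_IDS): r["name"] for r in auxiliary}
    return [
        {**row, "name": index.get(tuple(row[k] for k in QUASI_IDS))}
        for row in release
    ]

linked = reidentify(anonymized_release, public_records)
assert linked[0]["name"] == "J. Smith"  # sensitive row tied back to a person
```

With only two records the join looks trivial, but the mechanism is the same at scale: the more commercially available datasets exist, the more quasi-identifier combinations become unique, and household swapping alone no longer generates enough uncertainty.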

Thankfully, statisticians think there is a solution. The Census Bureau now plans to use a technique known as “differential privacy” which would introduce “noise” into the public statistics, using complex algorithms. This technique is expected to create just enough statistical fog to protect personal confidentiality in published data — while also preserving information in an encrypted form that statisticians can later unscramble, as needed. Companies such as Google, Microsoft and Apple have already used variants of this technique for several years, seemingly successfully. However, nobody has employed this system on the scale that the Census Bureau needs — or in relation to such a high stakes event. And the idea has sparked some controversy because some statisticians fear that even “differential privacy” tools can be hacked — and others fret it makes data too “noisy” to be useful….(More)”.
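The core of the "noise" idea the article describes can be sketched briefly. This is a minimal illustration of the Laplace mechanism, not the Census Bureau's actual system: the function names and the epsilon value are invented, and real deployments involve far more engineering (budget accounting across many statistics, post-processing for consistency, and careful random-number generation).

```python
# Sketch of the basic differential-privacy idea: publish a count plus
# calibrated Laplace noise, so any one person's presence changes the
# published figure's distribution only slightly.
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of the Laplace distribution.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon):
    # A counting query has sensitivity 1 (adding or removing one person
    # changes it by at most 1), so noise with scale 1/epsilon gives
    # epsilon-differential privacy for this single query.
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)  # fixed seed for a reproducible illustration only
noisy = private_count(12345, epsilon=0.1)
# Smaller epsilon -> more "fog": the published figure is close to,
# but deliberately not exactly, the true count.
```

The trade-off the article's critics worry about is visible in the `epsilon` parameter: lowering it widens the noise and strengthens the privacy guarantee, but eventually makes small-area statistics too foggy to use.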