Article by Martin Tisne: “…The proliferation of data in recent decades has led some reformers to a rallying cry: “You own your data!” Eric Posner of the University of Chicago, Eric Weyl of Microsoft Research, and virtual-reality guru Jaron Lanier, among others, argue that data should be treated as a possession. Mark Zuckerberg, the founder and head of Facebook, says so as well. Facebook now says that you “own all of the content and information you post on Facebook” and “can control how it is shared.” The Financial Times argues that “a key part of the answer lies in giving consumers ownership of their own personal data.” In a recent speech, Tim Cook, Apple’s CEO, agreed, saying, “Companies should recognize that data belongs to users.”
This essay argues that “data ownership” is a flawed, counterproductive way of thinking about data. Not only does it fail to fix existing problems; it creates new ones. Instead, we need a framework that gives people rights to stipulate how their data is used without requiring them to take ownership of it themselves….
The notion of “ownership” is appealing because it suggests giving you power and control over your data. But owning and “renting” out data is a bad analogy. Control over how particular bits of data are used is only one problem among many. The real questions concern how data shapes society and individuals. Rachel’s story will show us why data rights are important and how they might work to protect not just Rachel as an individual, but society as a whole.
Tomorrow never knows
To see why data ownership is a flawed concept, first think about this article you’re reading. The very act of opening it on an electronic device created data—an entry in your browser’s history, cookies the website sent to your browser, an entry in the website’s server log to record a visit from your IP address. It’s virtually impossible to do anything online—reading, shopping, or even just going somewhere with an internet-connected phone in your pocket—without leaving a “digital shadow” behind. These shadows cannot be owned—the way you own, say, a bicycle—any more than can the ephemeral patches of shade that follow you around on sunny days.
Your data on its own is not very useful to a marketer or an insurer. Analyzed in conjunction with similar data from thousands of other people, however, it feeds algorithms that sort you into buckets (e.g., “heavy smoker with a drink habit” or “healthy runner, always on time”). If an algorithm is unfair—if, for example, it wrongly classifies you as a health risk because it was trained on a skewed data set or simply because you’re an outlier—then letting you “own” your data won’t make it fair. The only way to avoid being affected by the algorithm would be to never, ever give anyone access to your data. But even if you tried to hoard data that pertains to you, corporations and governments with access to large amounts of data about other people could use that data to make inferences about you. Data is not a neutral impression of reality. The creation and consumption of data reflects how power is distributed in society. …(More)”.
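To make the inference point concrete, here is a minimal sketch (entirely hypothetical data and feature names, not drawn from the article) of how a model trained only on other people's records can still score someone who never shared theirs:

```python
# A minimal sketch of third-party inference: a model is trained on data
# volunteered by others, then applied to you. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Behavioural features from 10,000 *other* people, e.g.
# [late-night purchases per week, gym check-ins per week]
X_others = rng.normal(loc=[3.0, 1.0], scale=1.0, size=(10_000, 2))
# A label an insurer cares about, correlated with those features
y_others = (X_others[:, 0] - X_others[:, 1] + rng.normal(size=10_000) > 2).astype(int)

model = LogisticRegression().fit(X_others, y_others)

# You never shared or "sold" your data, but observable traces (a loyalty
# card here, a check-in there) still yield a feature vector for you.
you = np.array([[5.0, 0.2]])
print("inferred risk score:", model.predict_proba(you)[0, 1])
```

The point is structural: withholding your own records does not remove you from the model's reach, because the correlations were learned from everyone else.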
Stefaan Verhulst at Apolitical: “2018 will probably be remembered as the bust of the blockchain hype. Yet even as cryptocurrencies continue to sink in value and popular interest, the potential of using blockchain technologies to achieve social ends remains important to consider but poorly understood.
In 2019, business will continue to explore blockchain for sectors as disparate as finance, agriculture, logistics and healthcare. Policymakers and social innovators should also leverage 2019 to become more sophisticated about blockchain’s real promise, limitations and current practice.
In a recent report I prepared with Andrew Young, with the support of the Rockefeller Foundation, we looked at the potential risks and challenges of using blockchain for social change — or “Blockchan.ge.” A number of implementations and platforms are already demonstrating potential social impact.
In an illustration of the breadth of current experimentation, Stanford’s Center for Social Innovation recently analysed and mapped nearly 200 organisations and projects trying to create positive social change using blockchain. Likewise, the GovLab is developing a mapping of blockchange implementations across regions and topic areas; it currently contains 60 entries.
All these examples provide impressive — and hopeful — proof of concept. Yet despite the very clear potential of blockchain, there has been little systematic analysis. For what types of social impact is it best suited? Under what conditions is it most likely to lead to real social change? What challenges does blockchain face, what risks does it pose and how should these be confronted and mitigated?
These are just some of the questions our report, which builds its analysis on 10 case studies assembled through original research, seeks to address.
While the report is focused on identity management, it contains a number of lessons and insights that are applicable more generally to the subject of blockchange.
In particular, it contains seven design principles that can guide individuals or organisations considering the use of blockchain for social impact. We call these the Genesis principles, and they are outlined at the end of this article…(More)”.
Data-scores.org: “Data scores that combine data from a variety of both online and offline activities are becoming a way to categorize citizens, allocate services, and predict future behavior. Yet little is known about the implementation of data-driven systems and algorithmic processes in public services and how citizens are increasingly ‘scored’ based on the collection and combination of data.
As part of our project ‘Data Scores as Governance’ we have developed a tool to map and investigate the uses of data analytics and algorithms in public services in the UK. This tool is designed to facilitate further research and investigation into this topic and to advance public knowledge and understanding.
The tool is made up of a collection of documents from different sources that can be searched and mapped according to different categories. The database consists of more than 5300 unverified documents that have been scraped based on a number of search terms relating to data systems in government. This is an incomplete and ongoing dataset. You can read more in our Methodology section….(More)”.
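To illustrate the sort of interface such a collection can expose, here is a toy sketch (the schema and example entries are hypothetical, not the project's actual code):

```python
# A toy sketch of a searchable collection of scraped documents, each
# tagged with the search term that surfaced it. Schema is hypothetical.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    source: str        # e.g. a local-authority website
    search_term: str   # the term that retrieved the document
    text: str

docs = [
    Document("Council risk-scoring pilot", "council.example.gov.uk",
             "citizen scoring", "…"),
    Document("Data analytics in children's services", "gov.example",
             "predictive analytics", "…"),
]

def search(collection, keyword):
    """Return documents mentioning the keyword in title, tag, or body."""
    kw = keyword.lower()
    return [d for d in collection
            if kw in (d.title + " " + d.search_term + " " + d.text).lower()]

for d in search(docs, "scoring"):
    print(f"{d.title} ({d.source})")
```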
Danny Lämmerhirt at Open Knowledge Foundation: “Citizen-generated data (CGD) expands what gets measured, how, and for what purpose. As the collection and engagement with CGD increases in relevance and visibility, public institutions can learn from existing initiatives about what CGD initiatives do, how they enable different forms of sense-making and how this may further progress around the Sustainable Development Goals.
Our report, as well as a guide for governments (find the designed version here, as well as a living document here), should help start conversations around the different approaches to doing and organising CGD. When CGD becomes ‘good enough’ depends on the purpose it is used for, but also on how CGD is situated in relation to other data.
Since our work aims to be illustrative rather than comprehensive, we started with a list of over 230 projects that were associated with the term “citizen-generated data” on Google Search, using an approach known as “search as research” (Rogers, 2013). Starting from this list, we developed case studies on a range of prominent CGD examples.
The report identifies several benefits CGD can bring for implementing and monitoring the SDGs, underlining the importance for public institutions to further support these initiatives.
Figure 1: Illustration of tasks underpinning CGD initiatives and their workflows
Key findings:
Dealing with data is usually much more than ‘just producing’ data. CGD initiatives open up new types of relationships between individuals, civil society and public institutions. This includes local development and educational programmes, community outreach, and collaborative strategies for monitoring, auditing, planning and decision-making.
Generating data takes many shapes, from collecting new data in the field, to compiling, annotating, and structuring existing data to enable new ways of seeing things through data. Accessing and working with existing (government) data is often an important enabling condition for CGD initiatives to start in the first place.
CGD initiatives can help gather data in regions that are otherwise hard to reach. Some CGD approaches may provide updated and detailed data at lower costs and faster than official data collections.
Beyond filling data gaps, official measurements can be expanded, complemented, or cross-verified. This includes pattern and trend identification and the creation of baseline indicators for further research. CGD can help governments detect anomalies, test the accuracy of existing monitoring processes, understand the context around phenomena, and initiate their own follow-up data collections.
CGD can inform several actions to achieve the SDGs. Beyond education, community engagement and community-based problem solving, this includes baseline research, planning and strategy development, allocation and coordination of public and private programs, as well as improvement to public services.
CGD must be ‘good enough’ for different (and varying) purposes. Governments already develop pragmatic ways to negotiate and assess the usefulness of data for a specific task. CGD may be particularly useful when agencies have a clear remit or responsibility to manage a problem.
Data quality can be comparable to official data collections, provided tasks are sufficiently easy to conduct, tool quality is high enough, and sufficient training, resources and quality assurance are provided….(More)”.
“Polls suggest that governments across the world face high levels of citizen dissatisfaction, and low levels of citizen trust. The 2017 Edelman Trust Barometer found, for instance, that only 43% of those surveyed trust Canada’s government. Only 15% of those surveyed trust government in South Africa, and levels are low in other countries too—including Brazil (at 24%), South Korea (28%), the United Kingdom (36%), Australia, Japan, and Malaysia (37%), Germany (38%), Russia (45%), and the United States (47%). Similar surveys find trust in government averaging only 40-45% across member countries of the Organization for Economic Cooperation and Development (OECD), and suggest that as few as 31% and 32% of Nigerians and Liberians trust government.
There are many reasons why trust in government is deficient in so many countries, and these reasons differ from place to place. One common factor across many contexts, however, is a lack of confidence that governments can or will address key policy challenges faced by citizens.
Studies show that this confidence deficiency stems from citizen observations or experiences with past public policy failures, which promote jaundiced views of their public officials’ capabilities to deliver. Put simply, citizens lose faith in government when they observe government failing to deliver on policy promises, or to ‘get things done’. Incidentally, studies show that public officials also often lose faith in their own capabilities (and those of their organizations) when they observe, experience or participate in repeated policy implementation failures. Put simply, again, these public officials lose confidence in themselves when they repeatedly fail to ‘get things done’.
I call this the ‘public policy futility’ trap—where past public policy failure leads to a lack of confidence in the potential of future policy success, which feeds actual public policy failure, which generates more questions of confidence, in a vicious, self-fulfilling prophecy. I believe that many governments—and public policy practitioners working within governments—are caught in this trap, and just don’t believe that they can muster the kind of public policy responses needed by their citizens.
Along with my colleagues at the Building State Capability (BSC) program, I believe that many policy communities are caught in this trap, to some degree or another. Policymakers in these communities keep coming up with ideas, and political leaders keep making policy promises, but no one really believes the ideas will solve the problems that need solving or produce the outcomes and impacts that citizens need. Policy promises under such circumstances center on doing what policymakers are confident they can actually implement: producing research and position papers and plans, or allocating inputs toward the problem (in a budget, for instance), or sponsoring visible activities (holding meetings or engaging high-profile ‘experts’ for advice), or producing technical outputs (like new organizations, or laws). But they hold back from promising real solutions to real problems, as they know they cannot really implement them (given past political opposition, perhaps, or the experience of seemingly intractable coordination challenges, or cultural pushback, and more)….(More)”.
Cesar Hidalgo at Scientific American: “Nearly 30 years ago, Paul Romer published a paper exploring the economic value of knowledge. In that paper, he argued that, unlike the classical factors of production (capital and labor), knowledge was a “non-rival good.” This meant that it could be shared infinitely, and thus, it was the only thing that could grow in per-capita terms.
Romer’s work was recently recognized with the Nobel Prize, even though it was just the beginning of a longer story. Knowledge could be infinitely shared, but did that mean it could go everywhere? Soon after Romer’s seminal paper, Adam Jaffe, Manuel Trajtenberg and Rebecca Henderson published a paper on the geographic diffusion of knowledge. Using a statistical technique called matching, they identified a “twin” for each patent (that is, a patent filed at the same time and making similar technological claims).
Then they compared the citations received by each patent and its twin. Compared with their twins, patents received almost four more citations from other patents originating in the same city than from those originating elsewhere. Romer was right that knowledge could be infinitely shared, but knowledge also had difficulty travelling far….
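A rough sketch of that twin comparison, with synthetic numbers (the column names and values are illustrative, not the study's data):

```python
# Matching in miniature: each patent is paired with a "twin" filed at the
# same time with similar claims, and their same-city citations compared.
import pandas as pd

patents = pd.DataFrame({
    "patent_id":       [1, 2, 3],
    "twin_id":         [101, 102, 103],
    "same_city_cites": [9, 7, 11],   # citations from the patent's own city
    "twin_city_cites": [3, 2, 4],    # same-city citations of the twin
})

# Excess localisation: how many more same-city citations a patent
# receives than its otherwise-similar twin.
patents["excess_local"] = patents["same_city_cites"] - patents["twin_city_cites"]
print("mean excess same-city citations:", patents["excess_local"].mean())
```

Because the twin controls for timing and technology, any remaining same-city excess is evidence that knowledge diffuses more easily nearby.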
What will the study of knowledge bring us next? Will we get to a point at which we will measure Gross Domestic Knowledge as accurately as we measure Gross Domestic Product? Will we learn how to engineer knowledge diffusion? Will knowledge continue to concentrate in cities? Or will it finally break the shackles of society and spread to every corner of the world? The only thing we know for sure is that the study of knowledge is an exciting journey. The lowest hanging fruit may have already been picked, but the tree is still filled with fruits and flavors. Let’s climb it and explore….(More)”
Prediction by Geoff Mulgan, Eva Grobbink and Vincent Straub: “The USSR’s launch of the Sputnik 1 satellite in 1957 was a major psychological blow to the United States. The US had believed it was technologically far ahead of its rival, but was confronted with proof that the USSR was pulling ahead in some fields. After a bout of soul-searching the country responded with extraordinary vigour, massively increasing investment in space technologies and promising to put a man on the Moon by the end of the 1960s.
In 2019, China’s success in smart cities could prompt a similar “Sputnik Moment” for the rest of the world. It may not be as dramatic as that of 1957. But unlike beeping satellites and Moon landings, it could be coming to a town near you….
The concept of a “smart city” has been around for several decades, often associated with hype, grandiose failures, and an overemphasis on hardware rather than people (Nesta has previously written on how we can rethink smart cities and ensure digital innovation realises the potential of technology and people). But various technologies are now coming of age which bring the vision of a smart city closer to fruition. China is in the forefront, investing heavily in sensors and infrastructures, and its ET City Brain project shows just how far the country’s thinking has progressed.
First launched in September 2016, ET City Brain is a collaboration between Chinese technology giant Alibaba and several cities. It was first trialled in Hangzhou, the hometown of Alibaba’s executive chairman, Jack Ma, but has since expanded to other Chinese cities. Earlier this year, Kuala Lumpur became the first city outside of China to import the ET City Brain model.
The ET City Brain system gathers large amounts of data (including logs, videos, and data streams) from sensors. These are then processed by algorithms in supercomputers and fed back into control centres around the city for administrators to act on—in some cases, automation means the system works without any human intervention at all.
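A highly simplified sketch of that sense-process-act loop (all names and thresholds are hypothetical; the real system runs on supercomputers, not a script):

```python
# Toy version of the feedback loop: sensor readings flow in, an algorithm
# scores them, and a control decision flows back out.
import queue

sensor_feed = queue.Queue()

def is_congested(reading):
    """Toy model: flag a junction once vehicle counts pass a threshold."""
    return reading["vehicles_per_min"] > 40

def control_loop(feed):
    # In some deployments, as noted above, no human reviews the decision.
    while not feed.empty():
        reading = feed.get()
        if is_congested(reading):
            print(f"junction {reading['junction']}: extend green phase")

sensor_feed.put({"junction": "J-104", "vehicles_per_min": 55})
sensor_feed.put({"junction": "J-017", "vehicles_per_min": 12})
control_loop(sensor_feed)
```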
So far, the project has been used to monitor congestion in Hangzhou, improve the response of emergency services in Guangzhou, and detect traffic accidents in Suzhou. In Hangzhou, Alibaba was given control of 104 traffic light junctions in the city’s Xiaoshan district and tasked with managing traffic flows. By combining mass video surveillance with live data from public transportation systems, ET City Brain was able to autonomously change traffic lights so that emergency vehicles could travel to accident scenes without interruption. As a result, arrival times for ambulances improved by 49 percent….(More)”.
Peter Andras et al in IEEE Technology and Society Magazine: “Intelligent machines have reached capabilities that go beyond a level that a human being can fully comprehend without sufficiently detailed understanding of the underlying mechanisms. The choice of moves in the game Go (generated by DeepMind’s AlphaGo Zero) is an impressive example of an artificial intelligence system calculating results that even a human expert for the game can hardly retrace. But this is, quite literally, a toy example. In reality, intelligent algorithms are encroaching more and more into our everyday lives, be it through algorithms that recommend products for us to buy, or whole systems such as driverless vehicles. We are delegating ever more aspects of our daily routines to machines, and this trend looks set to continue in the future. Indeed, continued economic growth is set to depend on it. The nature of human-computer interaction in the world that the digital transformation is creating will require (mutual) trust between humans and intelligent, or seemingly intelligent, machines. But what does it mean to trust an intelligent machine? How can trust be established between human societies and intelligent machines?…(More)”.
Julia Hobsbawm at Strategy + Business: “Picture the scene. The eyes of the world are on the Tham Luang cave system in Thailand, near the border with Myanmar. Trapped on a rock ledge deep inside is the Wild Boars soccer team of 12 boys and their coach, who had ventured into the caves about two weeks earlier. It is monsoon season. Water is rising and oxygen levels are falling. Not all of the boys can even swim. Time is running out.
Elon Musk proposes building a “kid-sized submarine” to assist the rescue effort. Musk’s solution is politely declined by Thai authorities as “not practical.” In fact, by the time Musk’s sub arrives, most of the boys are already out, alive. One of the most audacious, moving, complex, and successful rescue operations in history relied not on a single technology or hero but on the collaboration of many people, working together in a spontaneous network.
This web of connections came together organically and quickly, unassisted by algorithms, in a unique collaboration led by humans. It was a stunning example of what physicist Albert-László Barabási calls “scale-free networks”: networks that reproduce exponentially by their very nature. The exact same network effects that can be lethal in spreading a virus can be productive — beautiful, even — in creating a web of diverse human skills quickly. Networks, as Barabási puts it, “are everywhere. You just have to look for them.”…
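Barabási's preferential-attachment model makes that mechanism easy to see; the sketch below (a standard textbook construction, not code from the article) grows a network in which well-connected nodes keep attracting new links:

```python
# Barabási–Albert preferential attachment: each new node links to m
# existing nodes with probability proportional to their current degree,
# so early, well-connected nodes snowball into hubs.
import random
from collections import Counter

def barabasi_albert(n, m=2, seed=42):
    random.seed(seed)
    edges = [(0, 1)]   # start from one connected pair
    pool = [0, 1]      # each node appears once per edge endpoint,
                       # so sampling the pool is degree-biased
    for new_node in range(2, n):
        targets = set()
        while len(targets) < min(m, new_node):
            targets.add(random.choice(pool))
        for t in targets:
            edges.append((new_node, t))
            pool += [new_node, t]
    return edges

degree = Counter(v for edge in barabasi_albert(1000) for v in edge)
print("max degree:", max(degree.values()))              # a few hubs dominate
print("median degree:", sorted(degree.values())[500])   # most nodes stay small
```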
Networks that come together like this and use technology, community, and communications in a timely manner are an example of what the U.N. calls its “leave no one behind” strategy for achieving sustainable development goals. I consider it an example of social health in action: They are the kinds of collaborations that help us live full and productive lives. And in business, there is an exciting opportunity to harness social health and the power of networks to help solve problems.
This kind of social health network, perhaps unsurprisingly, is very visible in innovations in the healthcare sector. A digital health community called The Mighty, for example, is a forum to find information about rare illnesses and connect people facing similar challenges, so that they might learn from the experiences of others. It now has 90 million engagements on its website per month and a new member joins every 20 seconds….(More)”.
Yves-Alexandre de Montjoye et al in Nature: “The breadcrumbs we leave behind when using our mobile phones—who somebody calls, for how long, and from where—contain unprecedented insights about us and our societies. Researchers have compared the recent availability of large-scale behavioral datasets, such as the ones generated by mobile phones, to the invention of the microscope, giving rise to the new field of computational social science.
With mobile phone penetration rates reaching 90% and under-resourced national statistical agencies, the data generated by our phones—traditional Call Detail Records (CDR) but also high-frequency x-Detail Record (xDR)—have the potential to become a primary data source to tackle crucial humanitarian questions in low- and middle-income countries. For instance, they have already been used to monitor population displacement after disasters, to provide real-time traffic information, and to improve our understanding of the dynamics of infectious diseases. These data are also used by governmental and industry practitioners in high-income countries.
While there is little doubt about the potential of mobile phone data for good, these data contain intimate details of our lives: rich information about our whereabouts, social life, preferences, and potentially even finances. A BCG study showed, for example, that 60% of Americans consider location data and phone number history—both available in mobile phone data—to be “private”.
Historically and legally, the balance between the societal value of statistical data (in aggregate) and the protection of privacy of individuals has been achieved through data anonymization. While hundreds of different anonymization algorithms exist, most of them are variations and improvements of the seminal k-anonymity algorithm introduced in 1998. Recent studies have, however, shown that pseudonymization and standard de-identification are not sufficient to prevent users from being re-identified in mobile phone data. Four data points—approximate places and times where an individual was present—have been shown to be enough to uniquely re-identify them 95% of the time in a mobile phone dataset of 1.5 million people. Furthermore, re-identification estimations using unicity—a metric to evaluate the risk of re-identification in large-scale datasets—and attempts at k-anonymizing mobile phone data ruled out de-identification as sufficient to truly anonymize the data. This was echoed in the recent report of the [US] President’s Council of Advisors on Science and Technology on Big Data Privacy, which considers de-identification to be useful as an “added safeguard, but [emphasized that] it is not robust against near-term future re-identification methods”.
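The unicity result is easy to simulate in miniature (the traces below are synthetic and far coarser than real CDR data, which is what makes the real-world numbers so striking):

```python
# Toy unicity estimate: given k (place, hour) points from one person's
# trace, how often do they match exactly one user in the whole dataset?
import random

random.seed(0)
N_USERS, N_POINTS = 1000, 50
# Each user's trace is a set of (antenna_id, hour-of-week) tuples,
# loosely mimicking Call Detail Records.
traces = [{(random.randrange(200), random.randrange(168))
           for _ in range(N_POINTS)} for _ in range(N_USERS)]

def unicity(traces, k, trials=500):
    """Fraction of sampled users uniquely pinned down by k of their points."""
    unique = 0
    for _ in range(trials):
        target = random.choice(traces)
        known_points = set(random.sample(sorted(target), k))
        matches = sum(1 for t in traces if known_points <= t)
        unique += (matches == 1)
    return unique / trials

print("unicity with 4 points:", unicity(traces, k=4))
```

Even this crude simulation shows how quickly a handful of spatio-temporal points isolates a single trace among thousands.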
The limits of the historical de-identification framework to adequately balance risks and benefits in the use of mobile phone data are a major hindrance to their use by researchers, development practitioners, humanitarian workers, and companies. This became particularly clear at the height of the Ebola crisis, when qualified researchers (including some of us) were prevented from accessing relevant mobile phone data on time despite efforts by mobile phone operators, the GSMA, and UN agencies, with privacy being cited as one of the main concerns.
These privacy concerns are, in our opinion, due to the failures of the traditional de-identification model and the lack of a modern and agreed-upon framework for the privacy-conscientious use of mobile phone data by third parties, especially in the context of the EU General Data Protection Regulation (GDPR). Such frameworks have been developed for the anonymous use of other sensitive data such as census, household survey, and tax data. The positive societal impact of making these data accessible and the technical means available to protect people’s identity have been considered, and a trade-off, albeit far from perfect, has been agreed on and implemented. This has allowed the data to be used in aggregate for the benefit of society. Such thinking and an agreed-upon set of models have been missing so far for mobile phone data. This has left data protection authorities, mobile phone operators, and data users with little guidance on technically sound yet reasonable models for the privacy-conscientious use of mobile phone data. This has often resulted in suboptimal trade-offs, if any.
In this paper, we propose four models for the privacy-conscientious use of mobile phone data (Fig. 1). All of these models 1) focus on a use of mobile phone data in which only statistical, aggregate information is ultimately needed by a third-party and, while this needs to be confirmed on a per-country basis, 2) are designed to fall under the legal umbrella of “anonymous use of the data”. Examples of cases in which only statistical aggregated information is ultimately needed by the third-party are discussed below. They would include, e.g., disaster management, mobility analysis, or the training of AI algorithms in which only aggregate information on people’s mobility is ultimately needed by agencies, and exclude cases in which individual-level identifiable information is needed such as targeted advertising or loans based on behavioral data.
Figure 1: Matrix of the four models for the privacy-conscientious use of mobile phone data.
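As a loose illustration of the aggregate-only premise shared by all four models (a hypothetical sketch, not code from the paper), a query service might answer only questions about sufficiently large groups:

```python
# Aggregate-only access in miniature: third parties ask questions, and
# only statistics over groups above a minimum size are ever released.
MIN_GROUP_SIZE = 50

def users_seen_in_area(records, area_id):
    """Answer "how many users were seen in this area?", or refuse."""
    users = {r["user"] for r in records if r["area"] == area_id}
    if len(users) < MIN_GROUP_SIZE:
        return None   # refuse: the group is too small to release safely
    return len(users)

records = [{"user": i, "area": i % 3} for i in range(200)]
print(users_seen_in_area(records, area_id=1))   # -> 67
```

Real deployments would need more than a size threshold (noise addition, query auditing, and so on), but the principle of releasing only aggregates is the common thread.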
First, it is important to insist that none of these models is a silver bullet…(More)”.