A Framework for Understanding Data Risk


Sarah Telford and Stefaan G. Verhulst at the Understanding Risk Forum: “…In creating the policy, OCHA partnered with the NYU Governance Lab (GovLab) and Leiden University to understand the policy and privacy landscape, best practices of partner organizations, and how to assess the data it manages in terms of potential harm to people.

We seek to share our findings with the UR community to get feedback and start a conversation around the risks of using certain types of data in humanitarian and development efforts, and how those risks can be understood.

What is High-Risk Data?

High-risk data is generally understood as data that includes attributes about individuals. This is commonly referred to as PII, or personally identifiable information. Data can also create risk when it identifies communities or demographics within a group and ties them to a place (e.g., women of a certain age group in a specific location). The risk arises when this type of data is collected and shared without proper authorization from the individual or from the organization acting as the data steward, or when the data is used for purposes other than those initially stated during collection.
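
The group-level risk described above can be made concrete with a simple pre-release check. The sketch below is a minimal illustration (not part of the OCHA policy): it flags any group in an attributes-by-location tabulation whose count falls below a chosen threshold, a basic k-anonymity-style safeguard before aggregated data is shared. The field names, records and threshold are hypothetical.

```python
from collections import Counter

# Minimal sketch: flag small groups in an attribute-by-location tabulation
# before release. Field names and the threshold k are illustrative only.
K_THRESHOLD = 10  # groups smaller than this are treated as re-identifiable

records = [
    {"location": "District A", "sex": "F", "age_band": "18-25"},
    {"location": "District A", "sex": "F", "age_band": "18-25"},
    {"location": "District B", "sex": "M", "age_band": "26-35"},
    # ... more survey records ...
]

def risky_groups(rows, keys, k=K_THRESHOLD):
    """Return (group, count) pairs whose size is below k."""
    counts = Counter(tuple(r[key] for key in keys) for r in rows)
    return [(group, n) for group, n in counts.items() if n < k]

for group, n in risky_groups(records, ("location", "sex", "age_band")):
    print(f"Suppress or merge before sharing: {group} has only {n} record(s)")
```

In practice a data steward would pair such a check with suppression or coarsening of the small cells before any release.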

The potential harms of inappropriately collecting, storing or sharing personal data can affect individuals and communities that may feel exploited or vulnerable as a result of how data is used. This became apparent during the Ebola outbreak of 2014, when a number of data projects were implemented without appropriate risk management measures. One notable example was the collection and use of aggregated call data records (CDRs) to monitor the spread of Ebola, which not only had limited success in controlling the virus, but also compromised the personal information of those in Ebola-affected countries. (See Ebola: A Big Data Disaster).

A Data-Risk Framework

Regardless of an organization’s data requirements, it is useful to think through the potential risks and harms of its collection, storage and use. Together with the Harvard Humanitarian Initiative, we have set up a four-step data-risk process covering assessment, data inventory, risks and harms, and counter-measures; a minimal checklist sketch of this process follows the list below.

  1. Assessment – The first step is to understand the context within which the data is being generated and shared. Key questions to ask include: What is the anticipated benefit of using the data? Who has access to the data? What constitutes actionable information for a potential perpetrator? What could trigger the threat of the data being used inappropriately?
  2. Data Inventory – The second step is to take inventory of the data and how it is being stored. Key questions include: Where is the data – is it stored locally or hosted by a third party? Where could the data be housed later? Who might gain access to the data in the future? How will we know – is data access being monitored?
  3. Risks and Harms – The third step is to identify potential ways in which risk might materialize. Thinking through various risk-producing scenarios will help prepare staff for incidents. Examples of risks include: your organization’s data being correlated with other data sources to expose individuals; your organization’s raw data being publicly released; and/or your organization’s data system being maliciously breached.
  4. Counter-Measures – The final step is to determine what measures would prevent risk from materializing. Methods and tools include developing data-handling policies, implementing access controls to the data, and training staff on how to use data responsibly….(More)
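
As a rough illustration of how such a framework might be operationalised – this is our own hedged sketch, not a tool from OCHA or the Harvard Humanitarian Initiative – the four steps can be captured as a structured, per-dataset checklist; all field names below are hypothetical.

```python
from dataclasses import dataclass, field

# Hedged sketch of the four-step data-risk process as a per-dataset checklist.
# Step names follow the framework above; the fields are illustrative only.

@dataclass
class DataRiskReview:
    dataset: str
    # 1. Assessment: context, anticipated benefit, who has access
    anticipated_benefit: str = ""
    who_has_access: list = field(default_factory=list)
    # 2. Data inventory: where the data lives and whether access is monitored
    storage_location: str = ""          # e.g. "local server" or "third-party host"
    access_monitored: bool = False
    # 3. Risks and harms: scenarios in which risk might materialise
    risk_scenarios: list = field(default_factory=list)
    # 4. Counter-measures: policies, access controls, training
    counter_measures: list = field(default_factory=list)

    def open_issues(self):
        """List steps still missing before the dataset should be shared."""
        issues = []
        if not self.anticipated_benefit:
            issues.append("Assessment: anticipated benefit not recorded")
        if not self.storage_location:
            issues.append("Inventory: storage location unknown")
        if not self.risk_scenarios:
            issues.append("Risks and harms: no scenarios identified")
        if not self.counter_measures:
            issues.append("Counter-measures: none defined")
        return issues

review = DataRiskReview(dataset="household_survey_2016")
print(review.open_issues())
```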

EU e-Government Action Plan 2016-2020. Accelerating the digital transformation of government


Q and A: “The e-Government Action Plan includes 20 initiatives to be launched in 2016 and 2017 (full list). Several of them aim to accelerate the implementation of existing legislation and related take-up of online public services. The Commission will notably support the transition of Member States towards full e-procurement, use of contract registers and interoperable e-signatures.

Another part of this set of initiatives focuses on cross-border digital public services. For example, the Commission will submit a proposal to create a Single Digital Gateway as a one-stop entry point for businesses and people to all Digital Single Market related information, assistance, advice and problem-solving services, and to make sure that the most frequently used procedures for doing business across borders can be completed fully online. EESSI (Electronic Exchange of Social Security Information) will help national administrations to electronically share social security information between Member States, thereby making it easier for people to live and work across borders.

Finally, the action plan aims to ensure that high-quality digital public services are designed for users and encourage their participation.

The plan will be regularly reviewed and, if needed, complemented. An online platform for users will ensure that ideas and feedback are collected.

What is the “once-only” principle?

The “once-only” principle means that citizens and businesses should supply the same information only once to a public administration. Public administrations then share this data internally, so that no additional burden falls on citizens and businesses. It calls for a reorganisation of public sector internal processes, rather than forcing businesses and citizens to fit around these processes.
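
To make the principle concrete, here is a minimal sketch of the idea: an administration first checks a shared internal registry for data the citizen has already supplied, and only asks again if nothing is on file. The registry, identifiers and field names are hypothetical and illustrate the principle only, not any planned EU system.

```python
# Hedged sketch of the "once-only" principle: an administration consults a
# shared registry before asking the citizen again. All names are hypothetical.

shared_registry = {
    # data already supplied once by citizens, keyed by citizen id and field
    ("citizen-123", "registered_address"): "Examplestraat 1, 1000 Brussels",
}

def request_from_citizen(citizen_id, field_name):
    """Fallback: ask the citizen directly (stub for illustration)."""
    value = input(f"Please provide your {field_name}: ")
    shared_registry[(citizen_id, field_name)] = value  # store for later reuse
    return value

def get_citizen_data(citizen_id, field_name):
    """Return data already on file if possible; only ask the citizen once."""
    key = (citizen_id, field_name)
    if key in shared_registry:
        return shared_registry[key]   # reuse, no new burden on the citizen
    return request_from_citizen(citizen_id, field_name)

# The address was supplied once before, so it is reused without a new request.
print(get_citizen_data("citizen-123", "registered_address"))
```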

The Commission will launch a pilot project with Member States to apply the “once-only” principle across borders, with €8 million in funding from Horizon 2020. This pilot will test a technical once-only solution for businesses working in different EU Member States. Another activity will explore the once-only concept for citizens, and support networking and discussions on how this could be implemented, with due respect for the legal framework on personal data protection and privacy.

What is the digitisation of company law?

A number of EU company rules were conceived in a pre-digital era, when every form had to be completed on paper. As a result, many companies cannot fully benefit from digital tools when it comes to fulfilling company-law requirements or interacting with business registers, because many of the rules and processes are still paper-based.

The Commission will work on ways to achieve simpler and less burdensome solutions for companies, by facilitating the use of digital solutions throughout a company’s lifecycle in the interaction between companies and business registers, including in cross-border situations.

For instance, to set up a company in a Member State, it is necessary to register that company in a business register. The Commission will look at how online registration procedures could be made available in order to reduce the administrative burden and costs of founding a new company. Also, under EU law, companies are obliged to file a number of documents and information in business registers. Cost and time savings for companies could be generated through better use of digital tools when a company needs to submit and disclose new documents, or update them throughout its lifecycle, for instance when the company name changes.

How will the Single Digital Gateway help European businesses and citizens?

The Single Digital Gateway will link up (not replace) relevant EU and national websites, portals, assistance services and procedures in a seamless and user-friendly way. Over time it will offer users a streamlined, comprehensive portal to find information, initiate and complete transactions with Member States’ administrations across the EU. The most frequently used administrative procedures will be identified and be brought fully online, so that no offline steps like printing and sending documents on paper will be needed.

This will save time and thereby costs for businesses and citizens when they want to engage in cross-border activities like setting up a business, exporting, moving or studying in another EU Member State.

How will interconnecting business registers and insolvency registers, and making the e-Justice Portal a one-stop shop for justice, help businesses?

These initiatives will help businesses trade within the EU with much more confidence. Through the interconnection of registers, they will be able to find relevant information not only on other businesses but also on their possible insolvency. This will increase transparency and enhance confidence in the Digital Single Market.

Interconnecting business registers will also ensure that registers can communicate with each other electronically in a safe and secure way, and that information is up to date without any additional red tape for companies.

The European e-Justice Portal provides a lot of additional information in case of problems, including tools to find a lawyer or notary, and tools for the exercise of their rights. It gives businesses easy access to information needed before entering into a business arrangement, as well as the confidence that if things go wrong, a solution is near at hand…. (More)”

See also  Communication on an EU e-Government Action Plan 2016-2020. Accelerating the digital transformation of government

Data protection laws around the world


Fifth-edition handbook by DLA Piper’s Data Protection and Privacy practice: “More than ever it is crucial that organisations manage and safeguard personal information and address their risks and legal responsibilities in relation to processing personal data, given the growing thicket of applicable data protection legislation.

A well‑constructed and comprehensive compliance program can help reconcile these competing interests and is an important risk‑management tool.

This handbook sets out an overview of the key privacy and data protection laws and regulations across nearly 100 different jurisdictions and offers a primer to businesses as they consider this complex and increasingly important area of compliance….(More)”

“Big data” and “open data”: What kind of access should researchers enjoy?


Paper by Gilles Chatellier, Vincent Varlet, and Corinne Blachier-Poisson in Thérapie: “The healthcare sector is currently facing a new paradigm, the explosion of “big data”. Coupled with advances in computer technology, the field of “big data” appears promising, allowing us to better understand the natural history of diseases, to follow up on the implementation of new technologies (devices, drugs), and to participate in precision medicine, etc. Data sources are multiple (medical and administrative data, electronic medical records, data from rapidly developing technologies such as DNA sequencing, connected devices, etc.) and heterogeneous, while their use requires complex methods for accurate analysis. Moreover, faced with this new paradigm, we must determine who could (or should) have access to which data, how to combine collective interest and protection of personal data, and how to finance, in the long term, both operating costs and database interrogation. This article analyses the opportunities and challenges related to the use of open and/or “big data”, … (More)”

Privacy as a Public Good


Joshua A.T. Fairfield & Christoph Engel in Duke Law Journal: “Privacy is commonly studied as a private good: my personal data is mine to protect and control, and yours is yours. This conception of privacy misses an important component of the policy problem. An individual who is careless with data exposes not only extensive information about herself, but about others as well. The negative externalities imposed on nonconsenting outsiders by such carelessness can be productively studied in terms of welfare economics. If all relevant individuals maximize private benefit, and expect all other relevant individuals to do the same, neoclassical economic theory predicts that society will achieve a suboptimal level of privacy. This prediction holds even if all individuals cherish privacy with the same intensity. As the theoretical literature would have it, the struggle for privacy is destined to become a tragedy.

But according to the experimental public-goods literature, there is hope. As in real life, people in experiments cooperate in groups at rates well above those predicted by neoclassical theory. Groups can be aided in their struggle to produce public goods by institutions such as communication, framing, or sanction. With these institutions, communities can manage public goods without heavy-handed government intervention. Legal scholarship has not fully engaged this problem in these terms. In this Article, we explain why privacy has aspects of a public good, and we draw lessons from both the theoretical and the empirical literature on public goods to inform the policy discourse on privacy…(More)”
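
The economic intuition in the abstract can be illustrated with a toy public-goods computation. This is our own minimal sketch, not the authors' model, and the payoff numbers are arbitrary: each person either protects linked data (at a private cost) or is careless, protection benefits everyone a little, and purely self-interested choices leave the group with less privacy than mutual protection would.

```python
# Toy illustration of privacy as a public good (a sketch, not the paper's model).
# Each player either "protects" shared data (contributing to collective privacy)
# or is careless. Protection costs the individual 1 but yields a benefit of
# MPCR to every player. With 1/N < MPCR < 1, self-interest says "be careless",
# yet everyone protecting is better for all -- the public-goods dilemma.

N = 10          # players whose data is interlinked
MPCR = 0.4      # marginal per-capita return of one act of protection (assumed)

def payoff(my_protects, total_protects):
    """Keep what you did not spend on protection, plus the shared benefit."""
    return (1 - my_protects) + MPCR * total_protects

careless_each = payoff(0, 0)        # everyone careless: 1.0 each
cooperative_each = payoff(1, N)     # everyone protects: 0 + 0.4 * 10 = 4.0 each
free_rider = payoff(0, N - 1)       # free-rider among 9 protectors: 1 + 3.6 = 4.6

print(f"Everyone careless:   {careless_each:.1f} each")
print(f"Everyone protects:   {cooperative_each:.1f} each")
print(f"Free-rider's payoff: {free_rider:.1f} (> {cooperative_each:.1f})")
```

Because the free-rider always does better than a protector regardless of what others do, the careless outcome is the predicted equilibrium, which is exactly the "tragedy" the article describes.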

See also:

Privacy, Public Goods, and the Tragedy of the Trust Commons: A Response to Professors Fairfield and Engel, Dennis D. Hirsch

Response to Privacy as a Public Good, Priscilla M. Regan

Privacy, security and data protection in smart cities: a critical EU law perspective


CREATe Working Paper by Lilian Edwards: “Smart cities” are a buzzword of the moment. Although legal interest is growing, most academic responses, at least in the EU, are still from the technological, urban studies, environmental and sociological sectors rather than the legal sector, and have primarily laid emphasis on the social, urban, policing and environmental benefits of smart cities rather than their challenges, often in a rather uncritical fashion. However, a growing backlash from the privacy and surveillance sectors warns of the potential threat to personal privacy posed by smart cities. A key issue is the lack of opportunity in an ambient or smart city environment for the giving of meaningful consent to processing of personal data; other crucial issues include the degree to which smart cities collect private data from inevitable public interactions, the “privatisation” of ownership of both infrastructure and data, the repurposing of “big data” drawn from the IoT in smart cities, and the storage of that data in the Cloud.

This paper, drawing on the author’s engagement with smart city development in Glasgow as well as the results of an international conference in the area curated by the author, argues that smart cities combine the three greatest current threats to personal privacy, with which regulation has so far failed to deal effectively: the Internet of Things (IoT) or “ubiquitous computing”; “Big Data”; and the Cloud. While these three phenomena have been examined extensively in much privacy literature (particularly the last two), both in the US and the EU, the combination is under-explored. Furthermore, US legal literature and solutions (if any) are not simply transferable to the EU because of the US’s lack of an omnibus data protection (DP) law. I will discuss whether and how EU DP law controls possible threats to personal privacy from smart cities and suggest further research on two possible solutions: one, a mandatory holistic privacy impact assessment (PIA) exercise for smart cities; two, code solutions for flagging the need for, and consequences of, giving consent to collection of data in ambient environments….(More)

Privacy in Public Spaces: What Expectations of Privacy Do We Have in Social Media Intelligence?


Paper by Lilian Edwards and Lachlan Urquhart: “In this paper we give a basic introduction to the transition in contemporary surveillance from top-down traditional police surveillance to profiling and “pre-crime” methods. We then review in more detail the rise of open source (OSINT) and social media (SOCMINT) intelligence and its use by law enforcement and security authorities. Following this we consider what, if any, privacy protection is currently given in UK law to SOCMINT. Given the largely negative response to the above question, we analyse what reasonable expectations of privacy there may be for users of public social media, with reference to existing case law on art 8 of the ECHR. Two factors in particular are argued to be supportive of a reasonable expectation of privacy in open public social media communications: first, the failure of many social network users to perceive the environment where they communicate as “public”; and secondly, the impact of search engines (and other automated analytics) on traditional conceptions of structured dossiers as most problematic for state surveillance. Lastly, we conclude that existing law does not provide adequate protection for open SOCMINT and that this will be increasingly significant as more and more personal data is disclosed and collected in public without well-defined expectations of privacy….(More)”

Public Sector Data Management Project


Australian government: “Earlier in 2015, Michael Thawley, Secretary of the Department of the Prime Minister and Cabinet (PM&C), commissioned an in-house study into how public sector data can be better used to achieve efficiencies for government, enable better service delivery and properly be used by the private sector to stimulate economic activity…..

There are four commonly used classifications of data: personal data, research data, open data and security data. Each type of data is used for different purposes and requires a different set of considerations, as the graphic below illustrates. The project focused on how the Australian Public Service manages its research data and open data, while ensuring personal data was kept appropriately secured. Security data was beyond the scope of this project.

[Figure: the four types of data – personal, research, open and security – and their different purposes]

The project found that there are pockets of excellence across the Australian Public Service, with some agencies actively working on projects that focus on a richer analysis of linked data. However, this approach is fragmented and is subject to a number of barriers, both perceived and real. These include cultural and legislative barriers, and a data analytics skills and capability shortage across the Australian Public Service.

To overcome these barriers, the project established a roadmap to make better use of public data, comprising an initial period to build confidence and momentum across the APS, and a longer term set of initiatives to systematise the use, publishing and sharing of public data.

The report is available from the link below: Public Sector Data Management Project

Meeting the Challenges of Big Data


Opinion by the European Data Protection Supervisor: “Big data, if done responsibly, can deliver significant benefits and efficiencies for society and individuals not only in health, scientific research, the environment and other specific areas. But there are serious concerns with the actual and potential impact of processing of huge amounts of data on the rights and freedoms of individuals, including their right to privacy. The challenges and risks of big data therefore call for more effective data protection.

Technology should not dictate our values and rights, but neither should promoting innovation and preserving fundamental rights be perceived as incompatible. New business models exploiting new capabilities for the massive collection, instantaneous transmission, combination and reuse of personal information for unforeseen purposes have placed the principles of data protection under new strains, which calls for thorough consideration of how they are applied.

European data protection law has been developed to protect our fundamental rights and values, including our right to privacy. The question is not whether to apply data protection law to big data, but rather how to apply it innovatively in new environments. Our current data protection principles, including transparency, proportionality and purpose limitation, provide the baseline we will need to protect more dynamically our fundamental rights in the world of big data. They must, however, be complemented by ‘new’ principles which have developed over the years, such as accountability and privacy by design and by default. The EU data protection reform package is expected to strengthen and modernise the regulatory framework.
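
As one small, hedged illustration of how a principle such as purpose limitation can be applied "by design" in a big-data pipeline – our sketch, not an EDPS specification – each processing request can be checked against the purposes declared at collection before any data is returned. The purpose labels and dataset name below are hypothetical.

```python
# Hedged sketch of purpose limitation enforced in code: data collected for a
# declared purpose may only be processed for compatible purposes. The purpose
# labels and dataset are illustrative, not drawn from the EDPS opinion.

DECLARED_PURPOSES = {
    "patient_records": {"treatment", "billing"},
}

class PurposeLimitationError(Exception):
    pass

def process(dataset, purpose):
    allowed = DECLARED_PURPOSES.get(dataset, set())
    if purpose not in allowed:
        # Purpose not declared at collection time: refuse, rather than
        # silently repurposing the data.
        raise PurposeLimitationError(
            f"'{purpose}' was not declared for '{dataset}' at collection"
        )
    return f"processing {dataset} for {purpose}"

print(process("patient_records", "treatment"))          # allowed
try:
    process("patient_records", "targeted_advertising")  # refused
except PurposeLimitationError as err:
    print("Blocked:", err)
```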

The EU intends to maximise growth and competitiveness by exploiting big data. But the Digital Single Market cannot uncritically import the data-driven technologies and business models which have become economic mainstream in other areas of the world. Instead it needs to show leadership in developing accountable personal data processing. The internet has evolved in such a way that surveillance – tracking people’s behaviour – is considered the indispensable revenue model for some of the most successful companies. This development calls for critical assessment and a search for other options.

In any event, and irrespective of the business models chosen, organisations that process large volumes of personal information must comply with applicable data protection law. The European Data Protection Supervisor (EDPS) believes that responsible and sustainable development of big data must rely on four essential elements:

  • organisations must be much more transparent about how they process personal data;
  • afford users a higher degree of control over how their data is used;
  • design user-friendly data protection into their products and services; and
  • become more accountable for what they do….(More)

Build digital democracy


Dirk Helbing & Evangelos Pournaras in Nature: “Fridges, coffee machines, toothbrushes, phones and smart devices are all now equipped with communicating sensors. In ten years, 150 billion ‘things’ will connect with each other and with billions of people. The ‘Internet of Things’ will generate data volumes that double every 12 hours rather than every 12 months, as is the case now.

Blinded by information, we need ‘digital sunglasses’. Whoever builds the filters to monetize this information determines what we see — Google and Facebook, for example. Many choices that people consider their own are already determined by algorithms. Such remote control weakens responsible, self-determined decision-making and thus society too.

The European Court of Justice’s ruling on 6 October that countries and companies must comply with European data-protection laws when transferring data outside the European Union demonstrates that a new digital paradigm is overdue. To ensure that no government, company or person with sole control of digital filters can manipulate our decisions, we need information systems that are transparent, trustworthy and user-controlled. Each of us must be able to choose, modify and build our own tools for winnowing information.

With this in mind, our research team at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), alongside international partners, has started to create a distributed, privacy-preserving ‘digital nervous system’ called Nervousnet. Nervousnet uses the sensor networks that make up the Internet of Things, including those in smartphones, to measure the world around us and to build a collective ‘data commons’. The many challenges ahead will be best solved using an open, participatory platform, an approach that has proved successful for projects such as Wikipedia and the open-source operating system Linux.
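
The article does not spell out Nervousnet's internals, but a common pattern for a privacy-preserving "data commons" of the kind described is for each device to perturb its reading locally before sharing, so that only noisy individual values leave the phone while the aggregate over many devices stays useful. The sketch below illustrates that generic pattern (local Laplace noise, in the spirit of differential privacy); it is an assumption-laden illustration, not Nervousnet's actual design, and the epsilon and sensitivity values are arbitrary.

```python
import math
import random

# Generic sketch of a privacy-preserving sensor "data commons": each device
# adds Laplace noise to its own reading before sharing, so the platform only
# sees perturbed values, yet the average over many devices stays close to the
# true average. This shows a common pattern, not Nervousnet's actual design.

EPSILON = 0.5        # per-reading privacy budget (assumed)
SENSITIVITY = 1.0    # assumed maximum influence of one reading

def noisy_reading(true_value, epsilon=EPSILON, sensitivity=SENSITIVITY):
    """Perturb a reading locally with Laplace noise before it leaves the device."""
    scale = sensitivity / epsilon
    u = random.uniform(-0.5, 0.5)                      # inverse-transform sampling
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

true_values = [random.gauss(20.0, 2.0) for _ in range(10000)]   # e.g. temperatures
shared_values = [noisy_reading(v) for v in true_values]

print("true mean:  ", sum(true_values) / len(true_values))
print("shared mean:", sum(shared_values) / len(shared_values))
```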

A wise king?

The science of human decision-making is far from understood. Yet our habits, routines and social interactions are surprisingly predictable. Our behaviour is increasingly steered by personalized advertisements and search results, recommendation systems and emotion-tracking technologies. Thousands of pieces of metadata have been collected about every one of us (see go.nature.com/stoqsu). Companies and governments can increasingly manipulate our decisions, behaviour and feelings1.

Many policymakers believe that personal data may be used to ‘nudge’ people to make healthier and environmentally friendly decisions. Yet the same technology may also promote nationalism, fuel hate against minorities or skew election outcomes2 if ethical scrutiny, transparency and democratic control are lacking — as they are in most private companies and institutions that use ‘big data’. The combination of nudging with big data about everyone’s behaviour, feelings and interests (‘big nudging’, if you will) could eventually create close to totalitarian power.

Countries have long experimented with using data to run their societies. In the 1970s, Chilean President Salvador Allende created computer networks to optimize industrial productivity3. Today, Singapore considers itself a data-driven ‘social laboratory’4 and other countries seem keen to copy this model.

The Chinese government has begun rating the behaviour of its citizens5. Loans, jobs and travel visas will depend on an individual’s ‘citizen score’, their web history and political opinion. Meanwhile, Baidu — the Chinese equivalent of Google — is joining forces with the military for the ‘China brain project’, using ‘deep learning’ artificial-intelligence algorithms to predict the behaviour of people on the basis of their Internet activity6.

The intentions may be good: it is hoped that big data can improve governance by overcoming irrationality and partisan interests. But the situation also evokes the warning of the eighteenth-century philosopher Immanuel Kant, that the “sovereign acting … to make the people happy according to his notions … becomes a despot”. It is for this reason that the US Declaration of Independence emphasizes the pursuit of happiness of individuals.

Ruling like a ‘benevolent dictator’ or ‘wise king’ cannot work because there is no way to determine a single metric or goal that a leader should maximize. Should it be gross domestic product per capita or sustainability, power or peace, average life span or happiness, or something else?

Better is pluralism. It hedges risks, promotes innovation, collective intelligence and well-being. Approaching complex problems from varied perspectives also helps people to cope with rare and extreme events that are costly for society — such as natural disasters, blackouts or financial meltdowns.

Centralized, top-down control of data has various flaws. First, it will inevitably become corrupted or hacked by extremists or criminals. Second, owing to limitations in data-transmission rates and processing power, top-down solutions often fail to address local needs. Third, manipulating the search for information and intervening in individual choices undermines ‘collective intelligence’7. Fourth, personalized information creates ‘filter bubbles’8. People are exposed less to other opinions, which can increase polarization and conflict9.

Fifth, reducing pluralism is as bad as losing biodiversity, because our economies and societies are like ecosystems with millions of interdependencies. Historically, a reduction in diversity has often led to political instability, collapse or war. Finally, by altering the cultural cues that guide peoples’ decisions, everyday decision-making is disrupted, which undermines rather than bolsters social stability and order.

Big data should be used to solve the world’s problems, not for illegitimate manipulation. But the assumption that ‘more data equals more knowledge, power and success’ does not hold. Although we have never had so much information, we face ever more global threats, including climate change, unstable peace and socio-economic fragility, and political satisfaction is low worldwide. About 50% of today’s jobs will be lost in the next two decades as computers and robots take over tasks. But will we see the macroeconomic benefits that would justify such large-scale ‘creative destruction’? And how can we reinvent half of our economy?

The digital revolution will mainly benefit countries that achieve a ‘win–win–win’ situation for business, politics and citizens alike10. To mobilize the ideas, skills and resources of all, we must build information systems capable of bringing diverse knowledge and ideas together. Online deliberation platforms and reconfigurable networks of smart human minds and artificially intelligent systems can now be used to produce collective intelligence that can cope with the diverse and complex challenges surrounding us….(More)” See Nervousnet project