Synthetic data: innovation for public good


Blog Post by Catrin Cheung: “What is synthetic data, and how can it be used for public good? ….Synthetic data are artificially generated data that have the look and structure of real data, but do not contain any information on individuals. They do, however, retain the more general characteristics of the original data that are used to find patterns.

They are modelled on real data, but designed in a way that safeguards the legal, ethical and confidentiality requirements of the original data. Given their resemblance to the original data, synthetic data are useful in a range of situations, for example when data are sensitive or missing. They are used widely as teaching materials, to test code or mathematical models, or as training data for machine learning models….

There’s currently a wealth of research emerging from the health sector, as the nature of data published is often sensitive. Public Health England have synthesised cancer data that can be freely accessed online. NHS Scotland are making advances with cutting-edge machine learning methods such as variational autoencoders (VAEs) and generative adversarial networks (GANs).

There is growing interest in this area of research, and its influence extends beyond the statistical community. While the Data Science Campus have also used GANs to generate synthetic data in their latest research, the technique’s power is not limited to data generation. GANs can be trained to construct features almost identical to our own across imagery, music, speech and text. In fact, a GAN was used to create the Portrait of Edmond de Belamy, a painting which sold for $432,500 at auction in 2018!
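To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop, written in Python with PyTorch and fitting a toy one-dimensional distribution; the network sizes, learning rates and step count are arbitrary illustrative choices, not anyone’s actual setup.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 1-D Gaussian the generator must learn to mimic.
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train D to label real samples 1 and generated samples 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()   # detach: D's update must not touch G
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train G to make D label its output as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The generator improves only by fooling the discriminator, and nothing in the loop is specific to numbers: swap the toy sampler and the two small networks for convolutional or sequence models and the same recipe produces imagery, audio or text.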

Within the ONS, a pilot to create synthetic versions of securely held Labour Force Survey data has been carried out using a package in R called “synthpop”. This synthetic dataset can be shared with approved researchers to debug code prior to analysis of the data held in the Secure Research Service….
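synthpop itself is an R package, but the sequential-modelling idea it implements (fit a model for each variable given the variables already synthesised, CART-based by default, then sample from it) can be sketched loosely in Python. The sketch below is an illustrative analogue, not synthpop’s API, and the noise-based numeric sampling is a crude stand-in for proper leaf resampling.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

def synthesise(df, seed=0):
    """Sequential synthesis: column j is modelled from columns 0..j-1 of the
    real data, then sampled, so the synthetic table mimics the joint structure."""
    rng = np.random.default_rng(seed)
    synth = pd.DataFrame(index=df.index)
    for j, col in enumerate(df.columns):
        if j == 0:
            # First column: bootstrap from its observed marginal distribution.
            synth[col] = rng.choice(df[col].to_numpy(), size=len(df))
            continue
        X_obs = pd.get_dummies(df.iloc[:, :j])
        X_syn = pd.get_dummies(synth).reindex(columns=X_obs.columns, fill_value=0)
        if df[col].dtype == object:
            # Categorical: draw each synthetic value from predicted class probabilities.
            tree = DecisionTreeClassifier(min_samples_leaf=5).fit(X_obs, df[col])
            synth[col] = [rng.choice(tree.classes_, p=p) for p in tree.predict_proba(X_syn)]
        else:
            # Numeric: prediction plus residual-scale noise (a crude stand-in).
            tree = DecisionTreeRegressor(min_samples_leaf=5).fit(X_obs, df[col])
            resid_sd = (df[col] - tree.predict(X_obs)).std()
            synth[col] = tree.predict(X_syn) + rng.normal(0, resid_sd, len(df))
    return synth
```

Before sharing output from anything like this, the marginal distributions and key cross-tabulations of the synthetic table would need to be checked against the real one, which is exactly the accuracy challenge discussed below.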

Although much progress has been made in this field, one challenge that persists is guaranteeing the accuracy of synthetic data. We must ensure that the statistical properties of the synthetic data match those of the original data.

Additional features, such as the presence of non-numerical data, add to this difficult task. For example, if a variable is listed as “animal” and can take the possible values “dog”, “cat” or “elephant”, it is difficult to convert this information into a format suitable for precise calculations. Furthermore, given that datasets have different characteristics, there is no straightforward solution that can be applied to all types of data….particular focus was also placed on the use of synthetic data in the field of privacy, following from the challenges and opportunities identified by the National Statistician’s Quality Review of privacy and data confidentiality methods published in December 2018….(More)”.
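The standard workaround for the “animal” example above is one-hot encoding, which expands the category into binary indicator columns. A minimal pandas illustration:

```python
import pandas as pd

df = pd.DataFrame({"animal": ["dog", "cat", "elephant", "dog"]})

# Each category becomes its own 0/1 indicator column, so downstream models
# and distance calculations can treat the variable numerically.
encoded = pd.get_dummies(df, columns=["animal"], dtype=int)
print(encoded)
#    animal_cat  animal_dog  animal_elephant
# 0           0           1               0
# 1           1           0               0
# 2           0           0               1
# 3           0           1               0
```

Even then, the encoding multiplies columns and erases any notion of similarity between categories, which is part of why no single synthesis recipe fits every dataset.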

Tracking Phones, Google Is a Dragnet for the Police


Jennifer Valentino-DeVries at the New York Times: “….The warrants, which draw on an enormous Google database employees call Sensorvault, turn the business of tracking cellphone users’ locations into a digital dragnet for law enforcement. In an era of ubiquitous data gathering by tech companies, it is just the latest example of how personal information — where you go, who your friends are, what you read, eat and watch, and when you do it — is being used for purposes many people never expected. As privacy concerns have mounted among consumers, policymakers and regulators, tech companies have come under intensifying scrutiny over their data collection practices.

The Arizona case demonstrates the promise and perils of the new investigative technique, whose use has risen sharply in the past six months, according to Google employees familiar with the requests. It can help solve crimes. But it can also snare innocent people.

Technology companies have for years responded to court orders for specific users’ information. The new warrants go further, suggesting possible suspects and witnesses in the absence of other clues. Often, Google employees said, the company responds to a single warrant with location information on dozens or hundreds of devices.
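Conceptually, a geofence request of this kind reduces to a spatio-temporal filter over a store of location records. The sketch below is purely illustrative of that shape in Python; it is not Google’s system or API, and the record format is invented for the example.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def geofence_hits(records, lat, lon, radius_m, start, end):
    """Records (assumed shape: {"device": ..., "lat": ..., "lon": ..., "time": ...})
    whose stored position fell inside the fence during the time window."""
    return [r for r in records
            if start <= r["time"] <= end
            and haversine_m(r["lat"], r["lon"], lat, lon) <= radius_m]
```

The civil-liberties concern follows directly from the shape of the query: it returns everyone who passed through the fence during the window, suspects and bystanders alike.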

Law enforcement officials described the method as exciting, but cautioned that it was just one tool….

The technique illustrates a phenomenon privacy advocates have long referred to as the “if you build it, they will come” principle — anytime a technology company creates a system that could be used in surveillance, law enforcement inevitably comes knocking. Sensorvault, according to Google employees, includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade….(More)”.

The Privacy Project


The New York Times: “Companies and governments are gaining new powers to follow people across the internet and around the world, and even to peer into their genomes. The benefits of such advances have been apparent for years; the costs — in anonymity, even autonomy — are now becoming clearer. The boundaries of privacy are in dispute, and its future is in doubt. Citizens, politicians and business leaders are asking if societies are making the wisest tradeoffs. The Times is embarking on this months-long project to explore the technology and where it’s taking us, and to convene debate about how it can best help realize human potential….(More)”

The project is organized around four framing questions: Does Privacy Matter? What Do They Know, and How Do They Know It? What Should Be Done About This? What Can I Do?


The Market for Data Privacy


Paper by Tarun Ramadorai, Antoine Uettwiller and Ansgar Walther: “We scrape a comprehensive set of US firms’ privacy policies to facilitate research on the supply of data privacy. We analyze these data with the help of expert legal evaluations, and also acquire data on firms’ web tracking activities. We find considerable and systematic variation in privacy policies along multiple dimensions including ease of access, length, readability, and quality, both within and between industries. Motivated by a simple theory of big data acquisition and usage, we analyze the relationship between firm size, knowledge capital intensity, and privacy supply. We find that large firms with intermediate data intensity have longer, legally watertight policies, but are more likely to share user data with third parties….(More)”.
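One of the dimensions evaluated, readability, is commonly quantified with formulas such as Flesch Reading Ease; the paper itself relies on expert legal evaluation, so the self-contained Python sketch below (with a deliberately crude syllable heuristic) is only a rough illustration of how such a score is computed.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher is easier; long, legalistic privacy policies tend to score very low."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word):
        # Crude heuristic: count contiguous vowel groups.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syll = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)
```

Scores below roughly 50 are conventionally read as “difficult”, so running a scraped policy through a metric like this gives one quick, if blunt, signal of the readability variation the authors document.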

Platform Surveillance


Editorial by David Murakami Wood and Torin Monahan of Special Issue of Surveillance and Society: “This editorial introduces this special responsive issue on “platform surveillance.” We develop the term platform surveillance to account for the manifold and often insidious ways that digital platforms fundamentally transform social practices and relations, recasting them as surveillant exchanges whose coordination must be technologically mediated and therefore made exploitable as data. In the process, digital platforms become dominant social structures in their own right, subordinating other institutions, conjuring or sedimenting social divisions and inequalities, and setting the terms upon which individuals, organizations, and governments interact.

Emergent forms of platform capitalism portend new governmentalities, as they gradually draw existing institutions into alignment or harmonization with the logics of platform surveillance while also engendering subjectivities (e.g., the gig-economy worker) that support those logics. Because surveillance is essential to the operations of digital platforms, and because it structures the forms of governance and capital that emerge, the field of surveillance studies is uniquely positioned to investigate and theorize these phenomena….(More)”.

Responsible Data Governance of Neuroscience Big Data


Paper by B. Tyr Fothergill et al: “Current discussions of the ethical aspects of big data are shaped by concerns regarding the social consequences of both the widespread adoption of machine learning and the ways in which biases in data can be replicated and perpetuated. We instead focus here on the ethical issues arising from the use of big data in international neuroscience collaborations.

Neuroscience innovation relies upon neuroinformatics, large-scale data collection and analysis enabled by novel and emergent technologies. Each step of this work involves aspects of ethics, ranging from concerns for adherence to informed consent or animal protection principles and issues of data re-use at the stage of data collection, to data protection and privacy during data processing and analysis, and issues of attribution and intellectual property at the data-sharing and publication stages.

Significant dilemmas and challenges with far-reaching implications are also inherent, including reconciling the ethical imperative for openness and validation with data protection compliance, and considering future innovation trajectories or the potential for misuse of research results. Furthermore, these issues are subject to local interpretations within different ethical cultures applying diverse legal systems emphasising different aspects. Neuroscience big data require a concerted approach to research across boundaries, wherein ethical aspects are integrated within a transparent, dialogical data governance process. We address this by developing the concept of ‘responsible data governance’, applying the principles of Responsible Research and Innovation (RRI) to the challenges presented by governance of neuroscience big data in the Human Brain Project (HBP)….(More)”.

Responsible data sharing in international health research: a systematic review of principles and norms


Paper by Shona Kalkman, Menno Mostert, Christoph Gerlinger, Johannes J. M. van Delden and Ghislaine J. M. W. van Thiel: “Large-scale linkage of international clinical datasets could lead to unique insights into disease aetiology and facilitate treatment evaluation and drug development. To this end, multi-stakeholder consortia are currently designing several disease-specific translational research platforms to enable international health data sharing. Despite the recent adoption of the EU General Data Protection Regulation (GDPR), the procedures for how to govern responsible data sharing in such projects are not at all spelled out yet. In search of a first, basic outline of an ethical governance framework, we set out to explore relevant ethical principles and norms…

We observed an abundance of principles and norms with considerable convergence at the aggregate level of four overarching themes: societal benefits and value; distribution of risks, benefits and burdens; respect for individuals and groups; and public trust and engagement. However, at the level of principles and norms we identified substantial variation in the phrasing and level of detail, the number and content of norms considered necessary to protect a principle, and the contextual approaches in which principles and norms are used....

While providing some helpful leads for further work on a coherent governance framework for data sharing, the current collection of principles and norms prompts important questions about how to streamline terminology regarding de-identification and how to harmonise the identified principles and norms into a coherent governance framework that promotes data sharing while securing public trust….(More)”

Trustworthy Privacy Indicators: Grades, Labels, Certifications and Dashboards


Paper by Joel R. Reidenberg et al: “Despite numerous groups’ efforts to score, grade, label, and rate the privacy of websites, apps, and network-connected devices, these attempts at privacy indicators have, thus far, not been widely adopted. Privacy policies, however, remain long, complex, and impractical for consumers. Communicating synthesized privacy content in some shorthand form is now crucial to empower internet users and provide them with more meaningful notice, as well as to nudge consumers and data processors toward more meaningful privacy. Indeed, on the basis of these needs, the National Institute of Standards and Technology and the Federal Trade Commission in the United States, as well as lawmakers and policymakers in the European Union, have advocated for the development of privacy indicator systems.

Efforts to develop privacy grades, scores, labels, icons, certifications, seals, and dashboards have wrestled with various deficiencies and obstacles to wide-scale deployment as meaningful and trustworthy privacy indicators. This paper seeks to identify and explain the deficiencies and obstacles that have hampered past and current attempts. With these lessons, the article then offers criteria that will need to be established in law and policy for trustworthy indicators to be successfully deployed and adopted through technological tools. The lack of standardization prevents user recognizability and dependability in the online marketplace, diminishes the ability to create automated tools for privacy, and reduces incentives for consumers and industry to invest in privacy indicators. Flawed methods for selecting and weighting privacy evaluation criteria, and difficulty interpreting language that is often ambiguous and vague, jeopardize success and reliability when baked into an indicator of privacy protectiveness or invasiveness. Likewise, indicators fall short when the organizations rating or certifying privacy practices are not objective, trustworthy, and sustainable.
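To see why the selection and weighting of criteria matters so much, consider a toy weighted indicator; the criteria, weights and ratings below are invented for illustration and are not drawn from the paper.

```python
# Hypothetical criteria and weights; a rating of 1.0 means fully protective
# on that criterion, 0.0 means not protective at all.
WEIGHTS = {"data_minimisation": 0.4, "third_party_sharing": 0.3, "user_controls": 0.3}

def privacy_score(ratings):
    """Weighted sum of an evaluator's 0..1 ratings, one per criterion."""
    return sum(w * ratings.get(c, 0.0) for c, w in WEIGHTS.items())

firm_a = {"data_minimisation": 0.9, "third_party_sharing": 0.2, "user_controls": 0.5}
firm_b = {"data_minimisation": 0.4, "third_party_sharing": 0.8, "user_controls": 0.7}
print(privacy_score(firm_a), privacy_score(firm_b))  # 0.57 vs 0.61: firm B "wins"
```

Shift the weights toward data minimisation (say 0.6/0.2/0.2) and firm A comes out ahead instead (0.68 vs 0.54), which is precisely the methodological fragility the paper identifies.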

Nonetheless, trustworthy privacy rating systems that are meaningful, accurate, and adoptable can be developed to assure effective and enduring empowerment of consumers. This paper proposes a framework using examples from prior and current attempts to create privacy indicator systems in order to provide a valuable resource for present-day, real world policymaking….(More)”.

Protection of health-related data: new guidelines


Press Release: “The Council of Europe has issued a set of guidelines to its 47 member states urging them to ensure, in law and practice, that the processing of health-related data is done in full respect of human rights, notably the right to privacy and data protection.

With the development of new technological tools in the health sector, the volume of health-related data being processed has grown exponentially, underscoring the need for guidance for health administrations and professionals.

In a Recommendation applicable to both the public and private sectors, the Council of Europe’s Committee of Ministers calls on governments to transmit these guidelines to health-care systems and to actors processing health-related data, in particular health-care professionals and data protection officers.

The recommendation contains a set of principles to protect health-related data incorporating the novelties introduced in the updated Council of Europe data protection convention, known as “Convention 108+”, opened for signature in October 2018.

The Committee of Ministers underlines that health-related data should be protected by appropriate security measures taking into account the latest technological developments, their sensitive nature and the assessment of potential risks. Protection measures should be incorporated by design to any information system which processes health-related data.

The recommendation contains guidance on various issues, including the legitimate basis for processing health-related data (notably consent by the data subject), data concerning unborn children, health-related genetic data, the sharing of health-related data by professionals, and the storage of data.

The guidelines list a number of rights of data subjects, crucially the transparency of data processing. They also contain a number of principles that should be respected when data are processed for scientific research, when they are collected by mobile devices or when they are transferred across borders….(More)”.

What you don’t know about your health data will make you sick


Jeanette Beebe at Fast Company: “Every time you shuffle through a line at the pharmacy, every time you try to get comfortable in those awkward doctor’s office chairs, every time you scroll through the web while you’re put on hold with a question about your medical bill, take a second to think about the person ahead of you and behind you.

Chances are, at least one of you is being monitored by a third party like data analytics giant Optum, which is owned by UnitedHealth Group, Inc. Since 1993, it’s captured medical data—lab results, diagnoses, prescriptions, and more—from 150 million Americans. That’s almost half of the U.S. population.

“They’re the ones that are tapping the data. They’re in there. I can’t remove them from my own health insurance contracts. So I’m stuck. It’s just part of the system,” says Joel Winston, an attorney who specializes in privacy and data protection law.

Healthcare providers can legally sell their data to a now-dizzyingly vast spread of companies, who can use it to make decisions, from designing new drugs to pricing your insurance rates to developing highly targeted advertising.

It’s written in the fine print: You don’t own your medical records. Well, except if you live in New Hampshire. It’s the only state that mandates its residents own their medical data. In 21 states, the law explicitly says that healthcare providers own these records, not patients. In the rest of the country, it’s up in the air.

Every time you visit a doctor or a pharmacy, your record grows. The details can be colorful: Using sources like Milliman’s IntelliScript and ExamOne’s ScriptCheck, a fuller picture of you emerges. Your interactions with the health care system, your medical payments, your prescription drug purchase history. And the market for the data is surging.

Its buyers and sharers—pharma giants, insurers, credit reporting agencies, and other data-hungry companies or “fourth parties” (like Facebook)—say that these massive health data sets can improve healthcare delivery and fuel advances in so-called “precision medicine.”

Still, this glut of health data has raised alarms among privacy advocates, who say many consumers are in the dark about how much of their health-related info is being gathered and mined….

Gardner argued that traditional health data systems—electronic health records and electronic medical records—are less than ideal, given the “rigidity of the vendors and the products” and the way our data is owned and secured. Don’t count on them being around much longer, she predicted, “beyond the next few years.”

The future, Gardner suggested, is a system that runs on blockchain, which she defined for the committee as “basically a secure, visible, irrefutable ledger of transactions and ownership.” Still, a recent analysis of over 150 white papers revealed most healthcare blockchain projects “fall somewhere between half-baked and overly optimistic.”
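Stripped of the hype, the “irrefutable ledger” Gardner describes rests on hash chaining: each record commits to the hash of the record before it. A minimal, purely illustrative Python sketch follows (not any vendor’s design; a real system would add digital signatures, consensus and access control):

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """A block commits to its payload and to the previous block's hash, so
    altering any earlier record invalidates every hash that follows it."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"patient": "p1", "event": "lab result recorded"}, chain[-1]["hash"]))

def verify(chain):
    """Recompute every hash and link; any retroactive edit makes this return False."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: v for k, v in cur.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if cur["prev"] != prev["hash"] or cur["hash"] != recomputed:
            return False
    return True
```

Note that tamper-evidence, not confidentiality, is what the chain provides: the records themselves would still need encryption and access controls.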

As larger companies like IBM sign on, the technology may be edging closer to reality. Last year, Proof Work outlined a HIPAA-compliant system that manages patients’ medical histories over time, from acute care in the hospital to preventative checkups. The goal is to give these records to patients on their phones, and to create a “democratized ecosystem” to solve interoperability between patients, healthcare providers, insurance companies, and researchers. Similar proposals from blockchain-focused startups like Health Bank and Humanity.co would help patients store and share their health information securely—and sell it to researchers, too….(More)”.