Open data could save the NHS hundreds of millions, says top UK scientist


The Guardian: “The UK government must open up and highlight the power of more basic data sets to improve patient care in the NHS and save hundreds of millions of pounds a year, Nigel Shadbolt, chairman of the Open Data Institute (ODI), has urged.

The UK government topped the first league table for open data (paywall) produced by the ODI last year, but Shadbolt warns that ministers’ open data responsibilities have not yet been satisfied.

Basic data on prescription administration is now published on a monthly basis, but Shadbolt said medical practitioners must be educated about the power of this data to change prescribing habits across the country.

Other data sets – such as trusts’ opening times, consultant lists and details of services – that promise to make the NHS more accessible are not currently available in machine-readable form.

“These basic sets of information about the processes, the people and places in the health system are all fragmented and fractured and many of them are not available as registers that you can go to,” Shadbolt said.

“Whenever you talk about health data people think you must be talking about personal data and patient data and there are issues, obviously, of absolutely protecting privacy there. But there’s lots of data in the health service that is not about personal patient data at all that would be hugely useful to just have available as machine-readable data for apps to use.”

The UK government has led the way in recent years in encouraging transparency and accountability within the NHS by publishing league tables. The publication of league tables on MRSA, for example, was followed by a 76-79% drop in infections.

Shadbolt said: “Those hospitals that were worst in their league table don’t like to be there and there was a very rapid diffusion of understanding of best practice across them that you can quantify. It’s many millions of pounds being saved.”

The artificial intelligence and open data expert said the next big area for open data improvement in the NHS is around prescriptions.

Shadbolt pointed to the publication of data about the prescription of statins, which has helped identify savings worth hundreds of millions of pounds: “There is little doubt that this pattern is likely to exist across the whole of the prescribing space.”…(More)”

Protecting Privacy in Data Release


Book by Giovanni Livraga: “This book presents a comprehensive approach to protecting sensitive information when large data collections are released by their owners. It addresses three key requirements of data privacy: the protection of data explicitly released, the protection of information not explicitly released but potentially vulnerable due to a release of other data, and the enforcement of owner-defined access restrictions to the released data. It is also the first book with a complete examination of how to enforce dynamic read and write access authorizations on released data, applicable to the emerging data outsourcing and cloud computing situations. Private companies, public organizations and final users are releasing, sharing, and disseminating their data to take reciprocal advantage of the great benefits of making their data available to others. This book weighs these benefits against the potential privacy risks. A detailed analysis of recent techniques for privacy protection in data release and case studies illustrate crucial scenarios. Protecting Privacy in Data Release targets researchers, professionals and government employees working in security and privacy. Advanced-level students in computer science and electrical engineering will also find this book useful as a secondary text or reference….(More)”

Big Data. Big Obstacles.


Dalton Conley et al. in the Chronicle of Higher Education: “After decades of fretting over declining response rates to traditional surveys (the mainstay of 20th-century social research), an exciting new era would appear to be dawning thanks to the rise of big data. Social contagion can be studied by scraping Twitter feeds; peer effects are tested on Facebook; long-term trends in inequality and mobility can be assessed by linking tax records across years and generations; social-psychology experiments can be run on Amazon’s Mechanical Turk service; and cultural change can be mapped by studying the rise and fall of specific Google search terms. In many ways there has been no better time to be a scholar in sociology, political science, economics, or related fields.

However, what should be an opportunity for social science is now threatened by a three-headed monster of privatization, amateurization, and Balkanization. A coordinated public effort is needed to overcome all of these obstacles.

While the availability of social-media data may obviate the problem of declining response rates, it introduces all sorts of problems with the level of access that researchers enjoy. Although some data can be culled from the web—Twitter feeds and Google searches—other data sit behind proprietary firewalls. And as individual users tune up their privacy settings, the typical university or independent researcher is increasingly locked out. Unlike federally funded studies, there is no mandate for Yahoo or Alibaba to make its data publicly available. The result, we fear, is a two-tiered system of research. Scientists working for or with big Internet companies will feast on humongous data sets—and even conduct experiments—and scholars who do not work in Silicon Valley (or Alley) will be left with proverbial scraps….

To address this triple threat of privatization, amateurization, and Balkanization, public social science needs to be bolstered for the 21st century. In the current political and economic climate, social scientists are not waiting for huge government investment like we saw during the Cold War. Instead, researchers have started to knit together disparate data sources by scraping, harmonizing, and geo­coding any and all information they can get their hands on.

Currently, many firms employ some well-trained social and behavioral scientists free to pursue their own research; likewise, some companies have programs by which scholars can apply to be in residence or work with their data extramurally. However, as Facebook states, its program is “by invitation only and requires an internal Facebook champion.” And while Google provides services like Ngram to the public, such limited efforts at data sharing are not enough for truly transparent and replicable science….(More)”

 

Selected Readings on Data Governance


Jos Berens (Centre for Innovation, Leiden University) and Stefaan G. Verhulst (GovLab)

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data governance was originally published in 2015.

Context
The field of Data Collaboratives is premised on the idea that sharing and opening up private sector datasets has great – and yet untapped – potential for promoting social good. At the same time, the potential of data collaboratives depends on the level of societal trust in the exchange, analysis and use of the data exchanged. Strong data governance frameworks are essential to ensure responsible data use. Without such governance regimes, the emergent data ecosystem will be hampered and the (perceived) risks will dominate the (perceived) benefits. Further, without adopting a human-centered approach to the design of data governance frameworks, including iterative prototyping and careful consideration of the user experience, the responses may fail to be flexible and targeted to real needs.

Annotated Selected Readings List (in alphabetical order)

Better Place Lab, “Privacy, Transparency and Trust.” Mozilla, 2015. Available from: http://www.betterplace-lab.org/privacy-report.

  • This report looks specifically at the risks involved in the social sector having access to datasets, and the main risks development organizations should focus on to develop a responsible data use practice.
  • Focusing on five specific countries (Brazil, China, Germany, India and Indonesia), the report displays specific country profiles, followed by a comparative analysis centered on the topics of privacy, transparency, online behavior and trust.
  • Some of the key findings mentioned are:
    • A general concern about the importance of privacy, with cultural differences influencing conceptions of what privacy is.
    • Cultural differences determining how transparency is perceived, and how much value is attached to achieving it.
    • To build trust, individuals need to feel a personal connection or get a personal recommendation – it is hard to build trust regarding automated processes.

de Montjoye, Yves-Alexandre; Kendall, Jake; and Kerry, Cameron F. “Enabling Humanitarian Use of Mobile Phone Data.” The Brookings Institution, 2014. Available from: http://www.brookings.edu/research/papers/2014/11/12-enabling-humanitarian-use-mobile-phone-data.

  • Focusing in particular on mobile phone data, this paper explores ways of mitigating privacy harms involved in using call detail records for social good.
  • Key takeaways are the following recommendations for using data for social good:
    • Engaging companies, NGOs, researchers, privacy experts, and governments to agree on a set of best practices for new privacy-conscientious metadata sharing models.
    • Accepting that no framework for maximizing data for the public good will offer perfect protection for privacy, but there must be a balanced application of privacy concerns against the potential for social good.
    • Establishing systems and processes for recognizing trusted third-parties and systems to manage datasets, enable detailed audits, and control the use of data so as to combat the potential for data abuse and re-identification of anonymous data.
    • Simplifying the processes by which governments in developing countries approve the collection and use of mobile phone metadata for research and public good purposes.

Centre for Democracy and Technology, “Health Big Data in the Commercial Context.” Centre for Democracy and Technology, 2015. Available from: https://cdt.org/insight/health-big-data-in-the-commercial-context/.

  • Focusing particularly on the privacy issues related to using data generated by individuals, this paper explores the overlap in privacy questions this field has with other data uses.
  • The authors note that although the Health Insurance Portability and Accountability Act (HIPAA) has proven a successful approach in ensuring accountability for health data, most of these standards do not apply to developers of the new technologies used to collect these new data sets.
  • For customer-facing technologies not covered by HIPAA, the paper proposes an alternative framework for considering privacy issues, based on the Fair Information Practice Principles and three rounds of stakeholder consultations.

Centre for Information Policy Leadership, “A Risk-based Approach to Privacy: Improving Effectiveness in Practice.” Centre for Information Policy Leadership, Hunton & Williams LLP, 2015. Available from: https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/white_paper_1-a_risk_based_approach_to_privacy_improving_effectiveness_in_practice.pdf.

  • This white paper is part of a project aiming to explain what is often referred to as a new, risk-based approach to privacy, and the development of a privacy risk framework and methodology.
  • With the pace of technological progress often outstripping the capabilities of privacy officers to keep up, this method aims to offer the ability to approach privacy matters in a structured way, assessing privacy implications from the perspective of possible negative impact on individuals.
  • With the intended outcomes of the project being “materials to help policy-makers and legislators to identify desired outcomes and shape rules for the future which are more effective and less burdensome”, insights from this paper might also feed into the development of innovative governance mechanisms aimed specifically at preventing individual harm.

Centre for Information Policy Leadership, “Data Governance for the Evolving Digital Marketplace.” Centre for Information Policy Leadership, Hunton & Williams LLP, 2011. Available from: http://www.huntonfiles.com/files/webupload/CIPL_Centre_Accountability_Data_Governance_Paper_2011.pdf.

  • This paper argues that as a result of the proliferation of large scale data analytics, new models governing data inferred from society will shift responsibility to the side of organizations deriving and creating value from that data.
  • It notes that, given the challenge corporations face in enabling agile and innovative data use, “In exchange for increased corporate responsibility, accountability [and the governance models it mandates, ed.] allows for more flexible use of data.”
  • Proposed as a means to shift responsibility to the side of data users, the accountability principle has been researched by a worldwide group of policymakers. Tracing the history of the accountability principle, the paper argues that it “(…) requires that companies implement programs that foster compliance with data protection principles, and be able to describe how those programs provide the required protections for individuals.”
  • The following essential elements of accountability are listed:
    • Organisation commitment to accountability and adoption of internal policies consistent with external criteria
    • Mechanisms to put privacy policies into effect, including tools, training and education
    • Systems for internal, ongoing oversight and assurance reviews and external verification
    • Transparency and mechanisms for individual participation
    • Means of remediation and external enforcement

Crawford, Kate; Schultz, Jason. “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms.” NYU School of Law, 2014. Available from: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2325784&download=yes.

  • Considering the privacy implications of large-scale analysis of numerous data sources, this paper proposes the implementation of a ‘procedural data due process’ mechanism to arm data subjects against potential privacy intrusions.
  • The authors acknowledge that some existing privacy protection structures already include similar mechanisms. However, due to the “inherent analytical assumptions and methodological biases” of big data systems, the authors argue for a more rigorous framework.

Letouzé, Emmanuel; and Vinck, Patrick. “The Ethics and Politics of Call Data Analytics.” Data-Pop Alliance, 2015. Available from: http://static1.squarespace.com/static/531a2b4be4b009ca7e474c05/t/54b97f82e4b0ff9569874fe9/1421442946517/WhitePaperCDRsEthicFrameworkDec10-2014Draft-2.pdf.

  • Focusing on the use of Call Detail Records (CDRs) for social good in development contexts, this whitepaper explores both the potential of these datasets – in part by detailing recent successful efforts in the space – and the political and ethical constraints on their use.
  • Drawing from the Menlo Report Ethical Principles Guiding ICT Research, the paper explores how these principles might be unpacked to inform an ethics framework for the analysis of CDRs.

Data for Development External Ethics Panel, “Report of the External Ethics Review Panel.” Orange, 2015. Available from: http://www.d4d.orange.com/fr/content/download/43823/426571/version/2/file/D4D_Challenge_DEEP_Report_IBE.pdf.

  • This report presents the findings of the external expert panel overseeing the Orange Data for Development Challenge.
  • Several types of issues faced by the panel are described, along with the various ways in which the panel dealt with those issues.

Federal Trade Commission Staff Report, “Mobile Privacy Disclosures: Building Trust Through Transparency.” Federal Trade Commission, 2013. Available from: www.ftc.gov/os/2013/02/130201mobileprivacyreport.pdf.

  • This report looks at ways to address privacy concerns regarding mobile phone data use. Specific advice is provided for the following actors:
    • Platforms, or operating systems providers
    • App developers
    • Advertising networks and other third parties
    • App developer trade associations, along with academics, usability experts and privacy researchers

Mirani, Leo. “How to use mobile phone data for good without invading anyone’s privacy.” Quartz, 2015. Available from: http://qz.com/398257/how-to-use-mobile-phone-data-for-good-without-invading-anyones-privacy/.

  • This paper considers the privacy implications of using call detail records for social good, and ways to mitigate risks of privacy intrusion.
  • Taking the example of the Orange D4D challenge and the anonymization strategy employed there, the paper describes how classic ‘anonymization’ is often not enough. The paper then lists further measures that can be taken to ensure adequate privacy protection.

Bernholz, Lucy. “Several Examples of Digital Ethics and Proposed Practices.” Stanford Ethics of Data Conference, 2014. Available from: http://www.scribd.com/doc/237527226/Several-Examples-of-Digital-Ethics-and-Proposed-Practices.

  • This reading list, prepared for Stanford’s Ethics of Data conference, collects some of the leading available literature regarding ethical data use.

Abrams, Martin. “A Unified Ethical Frame for Big Data Analysis.” The Information Accountability Foundation, 2014. Available from: http://www.privacyconference2014.org/media/17388/Plenary5-Martin-Abrams-Ethics-Fundamental-Rights-and-BigData.pdf.

  • Going beyond privacy, this paper discusses the following elements as central to developing a broad framework for data analysis:
    • Beneficial
    • Progressive
    • Sustainable
    • Respectful
    • Fair

Lane, Julia; Stodden, Victoria; Bender, Stefan; and Nissenbaum, Helen. “Privacy, Big Data and the Public Good.” Cambridge University Press, 2014. Available from: http://www.dataprivacybook.org.

  • This book addresses the privacy issues surrounding the use of big data for promoting the public good.
  • The questions being asked include the following:
    • What are the ethical and legal requirements for scientists and government officials seeking to serve the public good without harming individual citizens?
    • What are the rules of engagement?
    • What are the best ways to provide access while protecting confidentiality?
    • Are there reasonable mechanisms to compensate citizens for privacy loss?

Richards, Neil M.; and King, Jonathan H. “Big Data Ethics.” Wake Forest Law Review, 2014. Available from: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2384174.

  • This paper describes the growing impact of big data analytics on society, and argues that because of this impact, a set of ethical principles to guide data use is called for.
  • The four proposed themes are: privacy, confidentiality, transparency and identity.
  • Finally, the paper discusses how big data can be integrated into society, going into multiple facets of this integration, including the law, roles of institutions and ethical principles.

OECD, “OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data”. Available from: http://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm.

  • A globally used set of principles to inform thought about handling personal data, the OECD privacy guidelines serve as one of the leading standards for informing privacy policies and data governance structures.
  • The basic principles of national application are the following:
    • Collection Limitation Principle
    • Data Quality Principle
    • Purpose Specification Principle
    • Use Limitation Principle
    • Security Safeguards Principle
    • Openness Principle
    • Individual Participation Principle
    • Accountability Principle

The White House Big Data and Privacy Working Group, “Big Data: Seizing Opportunities, Preserving Values.” White House, 2014. Available from: https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_5.1.14_final_print.pdf.

  • Documenting the findings of the White House big data and privacy working group, this report lists, among others, the following key recommendations regarding data governance:
    • Bringing greater transparency to the data services industry
    • Stimulating international conversation on big data, with multiple stakeholders
    • With regard to educational data: ensuring data is used for the purpose it is collected for
    • Paying attention to the potential for big data to facilitate discrimination, and expanding technical understanding to stop discrimination

Hoffman, William. “Pathways for Progress.” World Economic Forum, 2015. Available from: http://www3.weforum.org/docs/WEFUSA_DataDrivenDevelopment_Report2015.pdf.

  • This paper identifies, among other issues, the lack of well-defined and balanced governance mechanisms as one of the key obstacles preventing corporate sector data in particular from being shared in a controlled space.
  • An approach that balances the benefits against the risks of large scale data usage in a development context, building trust among all stakeholders in the data ecosystem, is viewed as key.
  • Furthermore, this whitepaper notes that new governance models are required not just because of the growing amount of data, analytical capacity and more refined methods of analysis. The existing “super-structure” of information flows between institutions is also seen as a key reason to develop alternatives to today’s outdated approaches to data governance.

How to use mobile phone data for good without invading anyone’s privacy


Leo Mirani in Quartz: “In 2014, when the West African Ebola outbreak was at its peak, some academics argued that the epidemic could have been slowed by using mobile phone data.

Their premise was simple: call-data records show the true nature of social networks and human movement. Understanding social networks and how people really move—as seen from phone movements and calls—could give health officials the ability to predict how a disease will move and where a disease will strike next, and prepare accordingly.

The problem is that call-data records are very hard to get a hold of. The files themselves are huge, there are enormous privacy risks, and the process of making the records safe for distribution is long.

First, the technical basics

Every time you make a phone call from your mobile phone to another mobile phone, the network records the following information (note: this is not a complete list):

  • The number from which the call originated
  • The number at which the call terminated
  • Start time of the call
  • Duration of the call
  • The ID number of the phone making the call
  • The ID number of the SIM card used to make the call
  • The code for the antenna used to make the call

On their own, these records are not creepy. Indeed, without them, networks would be unable to connect calls or bill customers. But it is easy to see why operators aren’t rushing to share this information. Even though the data includes none of the actual content of a phone call, simply knowing which number is calling which, and from where and when, is usually more than enough to identify people.
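
To make the structure of these records concrete, here is a minimal sketch in Python of what a single call detail record might look like, along with a naive pseudonymization step that hashes the direct identifiers. The class name, field names and salting approach are illustrative assumptions rather than any operator’s actual schema, and, as noted above, hashing identifiers alone does not anonymize the data: location and timing patterns can still identify individuals.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime


def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a salted hash so records can still be linked
    without exposing the raw value. This is NOT full anonymization."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]


@dataclass
class CallDetailRecord:
    # Illustrative fields mirroring the list above; real schemas differ by operator.
    caller: str           # number from which the call originated
    callee: str           # number at which the call terminated
    start_time: datetime  # start time of the call
    duration_s: int       # duration of the call, in seconds
    imei: str             # ID number of the phone making the call
    imsi: str             # ID number of the SIM card used to make the call
    cell_id: str          # code for the antenna used to make the call

    def pseudonymized(self, salt: str) -> "CallDetailRecord":
        """Return a copy with the direct identifiers hashed."""
        return CallDetailRecord(
            caller=pseudonymize(self.caller, salt),
            callee=pseudonymize(self.callee, salt),
            start_time=self.start_time,
            duration_s=self.duration_s,
            imei=pseudonymize(self.imei, salt),
            imsi=pseudonymize(self.imsi, salt),
            cell_id=self.cell_id,  # kept in the clear: needed for mobility analysis
        )


# Example usage with made-up values
record = CallDetailRecord("447700900001", "447700900002",
                          datetime(2015, 5, 1, 14, 30), 125,
                          "359881030314356", "234150999999999", "CELL-0042")
print(record.pseudonymized(salt="d4d-challenge"))
```
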
So how can network operators use this valuable data for good while also protecting their own interests and those of their customers? A good example can be found in Africa, where Orange, a French mobile phone network with interests across several African countries, has for the second year run its “Data for Development” (D4D) program, which offers researchers a chance to mine call data for clues on development problems.

Steps to safe sharing

After a successful first year in Ivory Coast, Orange this year ran the D4D program in Senegal. The aim of the program is to give researchers and scientists at universities and other research labs access to data in order to find novel ways to aid development in health, agriculture, transport or urban planning, energy, and national statistics….(More)”

Privacy in the Modern Age: The Search for Solutions


New book edited by Marc Rotenberg, Julia Horwitz, and Jeramie Scott: “The threats to privacy are well known: the National Security Agency tracks our phone calls, Google records where we go online and how we set our thermostats, Facebook changes our privacy settings when it wishes, Target gets hacked and loses control of our credit card information, our medical records are available for sale to strangers, our children are fingerprinted and their every test score saved for posterity, and small robots patrol our schoolyards while drones may soon fill our skies.

The contributors to this anthology don’t simply describe these problems or warn about the loss of privacy – they propose solutions. They look closely at business practices, public policy, and technology design and ask, “Should this continue? Is there a better approach?” They take seriously the dictum of Thomas Edison: “What one creates with his hand, he should control with his head.” It’s a new approach to the privacy debate, one that assumes privacy is worth protecting, that there are solutions to be found, and that the future is not yet known. This volume will be an essential reference for policy makers and researchers, journalists and scholars, and others looking for answers to one of the biggest challenges of our modern day. The premise is clear: there’s a problem – let’s find a solution….(More)”

India asks its citizens: please digitise our files


Joshua Chambers in FutureGov: “India has asked its citizens to help digitise records so that it can move away from paper processes.

Using its crowdsourcing web site MyGov, the government wrote that “we cannot talk of Digital India and transforming India into a knowledge society if most of the transactions continue to be physical.”

It is “essential” that paper records are converted into machine readable digital versions, the government added, but “the cost of such digitisation is very large and existing budgetary constraints of government and many other organisations do not allow such lavish digitisation effort.”

Consequently, the government is asking citizens for advice on how to build a cheap content management system and tools that will allow it to crowdsource records transcriptions. Citizens would be rewarded for every word they transcribe through a points system, and points can then be redeemed for cash prizes.

“The proposed platform will create earning and income generation opportunities for our literate rural and urban citizens, develop digital literacy and IT skills and include them in the making of Digital India,” the government added.

The announcement also noted the importance of privacy, suggesting that documents be split so that no portion gives any clue regarding the overall content of the document.

Instead, two people will be given the same words to transcribe, and the software will compare their submissions to ensure accuracy. Only successful transcriptions will be rewarded with points….(More)”
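
As a rough illustration of the double-entry check described above, the short Python sketch below compares two independent transcriptions of the same snippet and awards points only when they agree word for word. The function names, the exact matching rule and the point values are assumptions for illustration; the announcement only says that the software compares the two submissions and rewards successful transcriptions.

```python
def normalize(text):
    """Lower-case and split a transcription into words for comparison."""
    return text.lower().split()


def score_transcriptions(first, second, points_per_word=1):
    """Award points only when two independent transcriptions of the same
    snippet agree word for word (a hypothetical double-entry check)."""
    words_a, words_b = normalize(first), normalize(second)
    if words_a != words_b:
        # Disagreement: the snippet would go back into the pool for re-transcription.
        return 0
    return points_per_word * len(words_a)


# Example usage with made-up snippets
print(score_transcriptions("Land records, district 14", "land records, district 14"))  # 4 points
print(score_transcriptions("Land records, district 14", "Land records, district 41"))  # 0 points
```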

Facebook’s Filter Study Raises Questions About Transparency


Will Knight in MIT Technology Review: “Facebook is an enormously valuable source of information about social interactions.

Facebook’s latest scientific research, about the way it shapes the political perspectives users are exposed to, has led some academics to call for the company to be more open about what it chooses to study and publish.

This week the company’s data science team published a paper in the prominent journal Science confirming what many had long suspected: that the network’s algorithms filter out some content that might challenge a person’s political leanings. However, the paper also suggested that the effect was fairly small, and less significant than a user’s own filtering behavior (see “Facebook Says You Filter News More Than Its Algorithm Does”).

Several academics have pointed to limitations of the study, such as the fact that the only people involved had indicated their political affiliation on their Facebook page. Critics point out that those users might behave in a different way from everyone else. But beyond that, a few academics have noted a potential tension between Facebook’s desire to explore the scientific value of its data and its own corporate interests….

In response to the controversy over that study, Facebook’s chief technology officer, Mike Schroepfer, wrote a Facebook post that acknowledged people’s concerns and described new guidelines for its scientific research. “We’ve created a panel including our most senior subject-area researchers, along with people from our engineering, research, legal, privacy and policy teams, that will review projects falling within these guidelines,” he wrote….(More)

Cops Increasingly Use Social Media to Connect, Crowdsource


Sara E. Wilson at GovTech: “Law enforcement has long used public tip lines and missing persons bulletins to recruit citizens to help solve crime and increase public safety. Though the need for police departments to connect with their communities is nothing new, the array of technologies available to do so is growing all the time — as are the ways in which departments use those technologies.

In fact, 81 percent of law enforcement professionals use sites such as Facebook and Twitter on the job. And 25 percent use them daily.

Much of law enforcement is crowdsourced — Amber alerts are pushed to smartphones, seeking response from citizens; officers push wanted information and crime tips to users on Facebook and Twitter in the hopes they can help; and some departments create apps to streamline the information sharing.

Take the Johns Creek, Ga., Police Department, which has deployed a tool that allows additional citizen engagement and crowdsourcing…. Using a mobile app — the SunGard Public Sector P2C Converge app, which is branded specifically for Johns Creek PD as JCPD4Me — the department can more smoothly manage public safety announcements and other social media interactions….

Another tool cops use for communicating with citizens is Nixle, which lets agencies publish alerts, advisories, community information and traffic news. Citizens register for free and receive credible, neighborhood-level public safety information via text message and email in real time.

The Oakland, Calif., Police Department (OPD) uses the platform to engage with citizens — an April 17, 2015 post on Oakland PD’s Nixle Community feed informs readers that the department’s Special Victims Section, which is working to put an end to sex trafficking in the city, arrested five individuals for solicitation of prostitution. Since Jan. 1, 2015, OPD has arrested 70 individuals from 27 cities across the state for solicitation of prostitution.

Nixle allows two-way communication as well — the Tip Watch function allows anonymous tipsters to send information to Oakland PD in three ways (text, phone, Web). Now OPD can issue a passcode to tipsters for two-way, anonymous communication to help gather more information.

On the East Coast, the Peabody, Mass., Police Department has used the My Police Department (MyPD) app by WiredBlue since the app’s creation; it lets citizens submit tips and feedback directly to the department….

During the high-profile manhunt for the Boston Marathon bombers in 2013, the FBI asked the public for eyewitness photo and video evidence. The response from the public was so overwhelming that the server infrastructure couldn’t handle the massive inflow of data.

This large-scale crowdsourcing and data dilemma inspired a new product: the Los Angeles Sheriff’s Department’s Large Emergency Event Digital Information Repository (LEEDIR). Developed by CitizenGlobal Inc. and Amazon Web Services, LEEDIR pairs an app with cloud storage to help police use citizens’ smartphones as tools to gather and investigate evidence. Since its creation, the repository was used in Santa Barbara, Calif., in 2014 to investigate riots in Isla Vista.

Proponents of LEEDIR say the crowdsourcing system gives authorities a secure, central repository for the countless electronic tips that can come in during a crisis. Critics, however, claim that privacy issues come into play with this kind of policing. …(More)”

Apple Has Plans for Your DNA


Antonio Regalado at MIT Technology Review: “…Apple is collaborating with U.S. researchers to launch apps that would offer some iPhone owners the chance to get their DNA tested, many of them for the first time, according to people familiar with the plans.

The apps are based on ResearchKit, a software platform Apple introduced in March that helps hospitals or scientists run medical studies on iPhones by collecting data from the devices’ sensors or through surveys.

The first five ResearchKit apps, including one called mPower that tracks symptoms of Parkinson’s disease, recruited thousands of participants within days, demonstrating the reach of Apple’s platform.

“Apple launched ResearchKit and got a fantastic response. The obvious next thing is to collect DNA,” says Gholson Lyon, a geneticist at Cold Spring Harbor Laboratory, who isn’t involved with the studies.

Nudging iPhone owners to submit DNA samples to researchers would thrust Apple’s devices into the center of a widening battle for genetic information. Universities, large technology companies like Google (see “Google Wants to Store Your Genome”), direct-to-consumer labs, and even the U.S. government (see “U.S. to Develop DNA Study of One Million People”) are all trying to amass mega-databases of gene information to uncover clues about the causes of disease (see “Internet of DNA”).

In two initial studies planned, Apple isn’t going to directly collect or test DNA itself. That will be done by academic partners. The data would be maintained by scientists in a computing cloud, but certain findings could appear directly on consumers’ iPhones as well. Eventually, it’s even possible consumers might swipe to share “my genes” as easily as they do their location….(More)”