Crowdsourced Health


Book by Elad Yom-Tov: “Most of us have gone online to search for information about health. What are the symptoms of a migraine? How effective is this drug? Where can I find more resources for cancer patients? Could I have an STD? Am I fat? A Pew survey reports more than 80 percent of American Internet users have logged on to ask questions like these. But what if the digital traces left by our searches could show doctors and medical researchers something new and interesting? What if the data generated by our searches could reveal information about health that would be difficult to gather in other ways? In this book, Elad Yom-Tov argues that Internet data could change the way medical research is done, supplementing traditional tools to provide insights not otherwise available. He describes how studies of Internet searches have, among other things, already helped researchers to track the side effects of prescription drugs, to understand the information needs of cancer patients and their families, and to recognize some of the causes of anorexia.

Yom-Tov shows that the information collected can benefit humanity without sacrificing individual privacy. He explains why people go to the Internet with health questions; for one thing, it seems to be a safe place to ask anonymously about such matters as obesity, sex, and pregnancy. He describes the detrimental effects of “pro-anorexia” online content; tells how computer scientists can scour search engine data to improve public health by, for example, identifying risk factors for disease and centers of contagion; and tells how analyses of how people deal with upsetting diagnoses help doctors to treat patients and patients to understand their conditions….(More)”

Access to Government Information in the United States: A Primer


Wendy Ginsberg and Michael Greene at Congressional Research Service: “No provision in the U.S. Constitution expressly establishes a procedure for public access to executive branch records or meetings. Congress, however, has legislated various public access laws. Among these laws are two records access statutes,

  • the Freedom of Information Act (FOIA; 5 U.S.C. §552), and
  • the Privacy Act (5 U.S.C. §552a),

and two meetings access statutes,

  • the Federal Advisory Committee Act (FACA; 5 U.S.C. App.), and
  • the Government in the Sunshine Act (5 U.S.C. §552b).

These four laws provide the foundation for access to executive branch information in the American federal government. The records-access statutes provide the public with a variety of methods to examine how executive branch departments and agencies execute their missions. The meeting-access statutes provide the public the opportunity to participate in and inform the policy process. These four laws are also among the most used and most litigated federal access laws.

While the four statutes provide the public with access to executive branch federal records and meetings, they do not apply to the legislative or judicial branches of the U.S. government. The American separation of powers model of government provides a collection of formal and informal methods that the branches can use to provide information to one another. Moreover, the separation of powers anticipates conflicts over the accessibility of information. These conflicts are neither unexpected nor necessarily destructive. Although there is considerable interbranch cooperation in the sharing of information and records, such conflicts over access may continue to arise on occasion.

This report offers an introduction to the four access laws and provides citations to additional resources related to these statutes. This report includes statistics on the use of FOIA and FACA and on litigation related to FOIA. The 114th Congress may have an interest in overseeing the implementation of these laws or may consider amending the laws. In addition, this report provides some examples of the methods Congress, the President, and the courts have employed to provide or require the provision of information to one another. This report is a primer on information access in the U.S. federal government and provides a list of resources related to transparency, secrecy, access, and nondisclosure….(More)”

Social Media for Government: Theory and Practice


Book edited by Staci M. Zavattaro and Thomas A. Bryer: “Social media is playing a growing role within public administration, and with it, there is an increasing need to understand the connection between social media research and what actually takes place in government agencies. Most of the existing books on the topic are scholarly in nature, often leaving out the vital theory-practice connection. This book joins theory with practice within the public sector, and explains how the effectiveness of social media can be maximized. The chapters are written by leading practitioners and span topics like how to manage employee use of social media sites, how emergency managers reach the public during a crisis situation, applying public record management methods to social media efforts, how to create a social media brand, how social media can help meet government objectives such as transparency while juggling privacy laws, and much more. For each topic, a collection of practitioner insights regarding the best practices and tools they have discovered are included. Social Media for Government responds to calls within the overall public administration discipline to enhance the theory-practice connection, giving practitioners space to tell academics what is happening in the field in order to encourage further meaningful research into social media use within government….(More)

Cities, Data, and Digital Innovation


Paper by Mark Kleinman: “Developments in digital innovation and the availability of large-scale data sets create opportunities for new economic activities and new ways of delivering city services while raising concerns about privacy. This paper defines the terms Big Data, Open Data, Open Government, and Smart Cities and uses two case studies – London (U.K.) and Toronto – to examine questions about using data to drive economic growth, improve the accountability of government to citizens, and offer more digitally enabled services. The paper notes that London has been one of a handful of cities at the forefront of the Open Data movement and has been successful in developing its high-tech sector, although it has so far been less innovative in the use of “smart city” technology to improve services and lower costs. Toronto has also made efforts to harness data, although it is behind London in promoting Open Data. Moreover, although Toronto has many assets that could contribute to innovation and economic growth, including a growing high-technology sector, world-class universities and research base, and its role as a leading financial centre, it lacks a clear narrative about how these assets could be used to promote the city. The paper draws some general conclusions about the links between data innovation and economic growth, and between open data and open government, as well as ways to use big data and technological innovation to ensure greater efficiency in the provision of city services…(More)

Responsible Data reflection stories


Responsible Data Forum: “Through the various Responsible Data Forum events over the past couple of years, we’ve heard many anecdotes of responsible data challenges faced by people or organizations. These include potentially harmful data management practices, situations where people have experienced gut feelings that there is potential for harm, or workarounds that people have created to avoid those situations.

But we feel that trading in these “war stories” isn’t the most useful way for us to learn from these experiences as a community. Instead, we have worked with our communities to build a set of Reflection Stories: a structured, well-researched knowledge base on the unforeseen challenges and (sometimes) negative consequences of using technology and data for social change.

We hope that this can offer opportunities for reflection and learning, as well as helping to develop innovative strategies for engaging with technology and data in new and responsible ways….

What we learned from the stories

New spaces, new challenges

Moving into new digital spaces is bringing new challenges, and social media is one such space where these challenges are proving very difficult to navigate. This seems to stem from a number of key points:

  • organisations with low levels of technical literacy and experience in tech- or data-driven projects, deciding to engage suddenly with a certain tool or technology without realising what this entails. For some, this seems to stem from funders being more willing to support ‘innovative’ tech projects.
  • organisations wishing to engage more with social media while not being aware of more nuanced understandings of public/private spaces online, and how different communities engage with social media. (see story #2)
  • unpredictability and different levels of visibility: due to how privacy settings on Twitter are currently set, visibility of users can be increased hugely by the actions of others – and once that happens, a user actually has very little agency to change or reverse that. Sadly, being more visible on, for example, Twitter disproportionately affects women and minority groups in a negative way – so while ‘signal boosting’ to raise someone’s profile might be well-meant, the consequences are hard to predict, and almost impossible to reverse manually. (see story #4)
  • consent: related to the above point, “giving consent” can mean many different things when it comes to digital spaces, especially if the person in question has little experience or understanding of using the technology in question (see stories #4 and #5).

Grey areas of responsible data

In almost all of the cases we looked at, very few decisions were concretely “right” or “wrong”. There are many, many grey areas here, which need to be addressed on a case by case basis. In some cases, people involved really did think through their actions, and approached their problems thoughtfully and responsibly – but consequences they had not imagined happened anyway (see story #8).

Additionally, given the quickly moving nature of the space, challenges can arise that simply would not have been possible at the start.

….Despite the widely varying settings of the stories collected, the shared mitigation strategies indicate that there are indeed a few key principles that can be kept in mind throughout the development of a new tech- or data-driven project.

The starkest of these – and one key aspect underlying many of these challenges – is a fundamental lack of technical literacy among advocacy organisations. This affects the way they interact with technical partners, the decisions they make around the project, the level to which they can have meaningful input, and more. Perhaps more crucially, it also affects the ability to know what to ask for help about – i.e., to ‘know the unknowns’.

Building an organisation’s technical literacy might not mean being able to answer all technical questions in-house, but rather knowing what to ask and what to expect in an answer, from others. For advocacy organisations who don’t (yet) have this, it becomes all too easy to outsource not just the actual technical work but the contextual decisions too, which should be a collaborative process, benefiting from both sets of expertise.

There seems to be a lot of scope to expand this set of stories, both by collecting more from other advocacy organisations and by extending into other sectors. Ultimately, we hope that sharing our collective intelligence around lessons learned from responsible data challenges faced in projects will contribute to a greater understanding for all of us….Read all the stories here

Do Universities, Research Institutions Hold the Key to Open Data’s Next Chapter?


Ben Miller at Government Technology: “Government produces a lot of data — reams of it, roomfuls of it, rivers of it. It comes in from citizen-submitted forms, fleet vehicles, roadway sensors and traffic lights. It comes from utilities, body cameras and smartphones. It fills up servers and spills into the cloud. It’s everywhere.

And often, all that data sits there not doing much. A governing entity might have robust data collection and it might have an open data policy, but that doesn’t mean it has the computing power, expertise or human capital to turn those efforts into value.

The amount of data available to government and the computing public promises to continue to multiply — the growing smart cities trend, for example, installs networks of sensors on everything from utility poles to garbage bins.

As all this happens, a movement — a new spin on an old concept — has begun to take root: partnerships between government and research institutes. Usually housed within universities and laboratories, these partnerships aim to match strength with strength. Where government has raw data, professors and researchers have expertise and analytics programs.

Several leaders in such partnerships, spanning some of the most tech-savvy cities in the country, see increasing momentum toward the concept. For instance, the John D. and Catherine T. MacArthur Foundation in September helped launch the MetroLab Network, an organization of more than 20 cities that have partnered with local universities and research institutes for smart-city-oriented projects….

Two recurring themes in projects that universities and research organizations take on in cooperation with government are project evaluation and impact analysis. That’s at least partially driven by the very nature of the open data movement: One reason to open data is to get a better idea of how well the government is operating….

Open data may have been part of the impetus for city-university partnerships, in that the availability of more data lured researchers wanting to work with it and extract value. But those partnerships have, in turn, led to government officials opening more data than ever before for useful applications.

Sort of.

“I think what you’re seeing is not just open data, but kind of shades of open — the desire to make the data open to university researchers, but not necessarily the broader public,” said Beth Noveck, co-founder of New York University’s GovLab.



GOVLAB: DOCKER FOR DATA 

Much of what GovLab does is about opening up access to data, and that is the whole point of Docker for Data. The project aims to simplify and speed up the process of extracting and loading large data sets so that they respond to Structured Query Language (SQL) commands, by moving the computing power for that process to the cloud. The Docker container can be installed with a single line of code, and the project’s website hosts already-extracted data sets. Since its inception, the website has grown to include more than 100 gigabytes of data from more than 8,000 data sets. From Baltimore, for example, one can easily find information on public health, water sampling, arrests, senior centers and more.
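
The core pattern – pull a raw data set, load it into a relational store, then query it with SQL – can be sketched in a few lines. The Python snippet below is only an illustration of that general idea, not Docker for Data’s actual tooling; the file name, table name, and columns are hypothetical.

```python
# Minimal sketch of the "extract, load, query with SQL" idea described above.
# Illustration only: Docker for Data does this at scale in the cloud; the CSV
# path and column names here are hypothetical.
import csv
import sqlite3


def load_csv(db_path: str, csv_path: str, table: str) -> None:
    """Load a small CSV file into a SQLite table so it can be queried with SQL."""
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    cols = ", ".join(f'"{c}" TEXT' for c in header)
    placeholders = ", ".join("?" for _ in header)
    with sqlite3.connect(db_path) as conn:
        conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
        conn.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', data)


def query(db_path: str, sql: str):
    """Run a SQL query against the loaded data and return the rows."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()


if __name__ == "__main__":
    # Hypothetical example data set, in the spirit of the Baltimore examples above.
    load_csv("open_data.db", "water_sampling.csv", "water_sampling")
    for row in query("open_data.db", 'SELECT * FROM "water_sampling" LIMIT 5'):
        print(row)
```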


That’s partially because researchers are a controlled group who can be forced to sign memorandums of understanding and trained to protect privacy and prevent security breaches when government hands over sensitive data. That’s a top concern of agencies that manage data, and it shows in the GovLab’s work.

It was something Noveck found to be very clear when she started working on a project she simply calls “Arnold” because of project support from the Laura and John Arnold Foundation. The project involves building a better understanding of how different criminal justice jurisdictions collect, store and share data. The motivation is to help bridge the gaps between people who manage the data and people who should have easy access to it. When Noveck’s center conducted a survey among criminal justice record-keepers, the researchers found big differences between participants.

“There’s an incredible disparity of practices that range from some jurisdictions that have a very well established, formalized [memorandum of understanding] process for getting access to data, to just — you send an email to a guy and you hope that he responds, and there’s no organized way to gain access to data, not just between [researchers] and government entities, but between government entities,” she said….(More)

Ebola: A Big Data Disaster


Study by Sean Martin McDonald: “…undertaken with support from the Open Society Foundation, Ford Foundation, and Media Democracy Fund, explores the use of Big Data in the form of Call Detail Record (CDR) data in humanitarian crisis.

It discusses the challenges of digital humanitarian coordination in health emergencies like the Ebola outbreak in West Africa, and the marked tension in the debate around experimentation with humanitarian technologies and the impact on privacy. McDonald’s research focuses on the two primary legal and human rights frameworks, privacy and property, to question the impact of unregulated use of CDRs on human rights. It also highlights how the diffusion of data science to the realm of international development constitutes a genuine opportunity to bring powerful new tools to fight crises and emergencies.

Analysing the risks of using CDRs to perform migration analysis and contact tracing without user consent, as well as the application of big data to disease surveillance, is an important entry point into the debate around the use of Big Data for development and humanitarian aid. The paper also raises crucial questions of legal significance about access to information, the limitation of data sharing, and the concept of proportionality in privacy invasion for the public good. These issues hold great relevance today, as big data and its emerging role in development, including its actual and potential uses as well as its harms, is under consideration across the world.

The paper highlights the absence of a dialogue around the significant legal risks posed by the collection, use, and international transfer of personally identifiable data and humanitarian information, and the grey areas around assumptions of public good. The paper calls for a critical discussion around the experimental nature of data modelling in emergency response, emphasizing that the mismanagement of information must be addressed in order to protect human rights….

See Sean Martin McDonald – “Ebola: A Big Data Disaster” (PDF).

 

Meet your Matchmaker: New crowdsourced sites for rare diseases


Carina Storrs at CNN: “Angela’s son Jacob was born with a number of concerning traits. He had an extra finger, and a foot and hip that were abnormally shaped. The doctors called in geneticists to try to diagnose his unusual condition. “That started our long, 12-year journey,” said Angela, who lives in the Baltimore area.

As geneticists do, they studied Jacob’s genes, looking for mutations in specific regions of the genome that could point to a problem. But there were no leads.

In the meantime, Jacob developed just about every kind of health problem there is. He has cognitive delays, digestive problems, muscle weakness, osteoporosis and other ailments.

“It was extremely frustrating, it was like being on a roller coaster. You wait six to eight weeks for the (gene) test and then it comes back as showing nothing,” recalled Angela, who asked that their last name not be used to protect her son’s privacy. “How do we go about treating until we get at what it is?”

Finally a test last year, which was able to take a broad look at all of Jacob’s genes, revealed a possible genetic culprit, but it still did not shed any light on his condition. “Nothing was known about the gene,” said Dr. Antonie Kline, director of pediatric genetics at the Greater Baltimore Medical Center, who had been following Jacob since birth.

Fortunately, Kline knew about an online program called GeneMatcher, which launched in December 2013. It would allow her to enter the new mystery gene into a database and search for other clinicians in the world who work with patients who have mutations in the same gene….

the search for “someone else on the planet” can be hard, Hamosh said. The diseases in GeneMatcher are rare, affecting fewer than 200,000 people in the United States, and it can be difficult for clinicians with similar patients to find each other just through word of mouth and professional connections. Au, the Canadian researcher with a patient similar to Jacob, is actually a friend of Kline’s, but the two had never realized their patients’ similarities.

It was not just Hamosh and her colleagues who were struck by the need for something like GeneMatcher. At the same time they were developing their program, researchers in Canada and the UK were creating PhenomeCentral and Decipher, respectively.

The three are collectively known as matchmaker programs. They connect patients with rare diseases that clinicians may never have seen before. In the case of PhenomeCentral, however, clinicians do not need to have identified a genetic culprit and can search based only on similar traits or symptoms.

In the summer of 2015, it got much easier for clinicians all over the world to use these programs, when a clearinghouse site called Matchmaker Exchange was launched. They can now enter the patient information one time and search all three databases….(More)
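
As a rough illustration of what these matchmaker databases do under the hood, here is a toy Python sketch that matches patient records either on a shared candidate gene or, failing that, on overlapping phenotype terms. It is purely illustrative: the record fields, scoring rule, and example data are hypothetical and do not reflect the actual GeneMatcher, PhenomeCentral, Decipher, or Matchmaker Exchange interfaces.

```python
# Illustrative sketch of the matchmaking idea described above -- NOT the real
# GeneMatcher / PhenomeCentral / Matchmaker Exchange API. Fields, names, and
# the scoring rule are hypothetical simplifications.
from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    patient_id: str
    candidate_genes: set = field(default_factory=set)  # genes suspected by the clinician
    phenotypes: set = field(default_factory=set)        # observed traits / symptoms


def match(query: PatientRecord, database: list) -> list:
    """Return (patient_id, score) pairs, best matches first.

    A shared candidate gene is treated as the strongest match; otherwise
    patients are ranked by the overlap of their phenotype terms, mirroring
    the idea that a phenotype-based matchmaker can work without a gene.
    """
    results = []
    for other in database:
        if other.patient_id == query.patient_id:
            continue
        if query.candidate_genes & other.candidate_genes:
            score = 1.0  # same suspected gene
        elif query.phenotypes or other.phenotypes:
            union = query.phenotypes | other.phenotypes
            score = len(query.phenotypes & other.phenotypes) / len(union)
        else:
            score = 0.0
        if score > 0:
            results.append((other.patient_id, score))
    return sorted(results, key=lambda pair: pair[1], reverse=True)


# Example: a query patient with no known gene but overlapping symptoms.
db = [
    PatientRecord("P1", {"GENE_X"}, {"polydactyly", "hip dysplasia", "osteoporosis"}),
    PatientRecord("P2", set(), {"migraine", "fatigue"}),
]
print(match(PatientRecord("Q", set(), {"polydactyly", "osteoporosis"}), db))
```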

New #ODimpact Release: How is Open Data Creating Economic Opportunities and Solving Public Problems?


Andrew Young at The GovLab: “Last month, the GovLab and Omidyar Network launched Open Data’s Impact (odimpact.org), a custom-built repository offering a range of in-depth case studies on global open data projects. The initial launch of the project featured the release of 13 open data impact case studies – ten undertaken by the GovLab, as well as three case studies from Becky Hogge (@barefoot_techie), an independent researcher collaborating with Omidyar Network. Today, we are releasing a second batch of 12 case studies – nine case studies from the GovLab and three from Hogge…

The batch of case studies being released today examines two additional dimensions of impact. The studies find that:

  • Open data is creating new opportunities for citizens and organizations, by fostering innovation and promoting economic growth and job creation.
  • Open data is playing a role in solving public problems, primarily by allowing citizens and policymakers access to new forms of data-driven assessment of the problems at hand. It also enables data-driven engagement, producing more targeted interventions and enhanced collaboration.

The specific impacts revealed by today’s release of case studies are wide-ranging, and include both positive and negative transformations. We have found that open data has enabled:

  • The creation of new industries built on open weather data released by the United States National Oceanic and Atmospheric Administration (NOAA).
  • The generation of billions of dollars of economic activity as a result of the Global Positioning System (GPS) being opened to the global public in the 1980s, and the United Kingdom’s Ordnance Survey geospatial offerings.
  • A more level playing field for small businesses in New York City seeking market research data.
  • The coordinated sharing of data among government and international actors during the response to the Ebola outbreak in Sierra Leone.
  • The identification of discriminatory water access decisions in the case Kennedy v the City of Zanesville, resulting in a $10.9 million settlement for the African-American plaintiffs.
  • Increased awareness among Singaporeans about the location of hotspots for dengue fever transmission.
  • Improved, data-driven emergency response following earthquakes in Christchurch, New Zealand.
  • Troubling privacy violations on Eightmaps related to Californians’ political donation activity….(More)”

All case studies available at odimpact.org.

 

Privacy as a Public Good


Joshua A.T. Fairfield & Christoph Engel in Duke Law Journal: “Privacy is commonly studied as a private good: my personal data is mine to protect and control, and yours is yours. This conception of privacy misses an important component of the policy problem. An individual who is careless with data exposes not only extensive information about herself, but about others as well. The negative externalities imposed on nonconsenting outsiders by such carelessness can be productively studied in terms of welfare economics. If all relevant individuals maximize private benefit, and expect all other relevant individuals to do the same, neoclassical economic theory predicts that society will achieve a suboptimal level of privacy. This prediction holds even if all individuals cherish privacy with the same intensity. As the theoretical literature would have it, the struggle for privacy is destined to become a tragedy.

But according to the experimental public-goods literature, there is hope. Like in real life, people in experiments cooperate in groups at rates well above those predicted by neoclassical theory. Groups can be aided in their struggle to produce public goods by institutions, such as communication, framing, or sanction. With these institutions, communities can manage public goods without heavy-handed government intervention. Legal scholarship has not fully engaged this problem in these terms. In this Article, we explain why privacy has aspects of a public good, and we draw lessons from both the theoretical and the empirical literature on public goods to inform the policy discourse on privacy…(More)”
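
The neoclassical prediction in the first paragraph can be made concrete with the standard linear public-goods game from experimental economics. The numbers below are a minimal sketch, not the authors’ model: with a marginal per-capita return between 1/N and 1, contributing nothing is individually rational no matter what the others do, yet universal contribution leaves everyone better off – the “suboptimal level of privacy” the theory predicts.

```python
# Minimal numerical sketch of the standard linear public-goods game, offered to
# make the free-rider logic concrete; the parameter values are illustrative.
N = 4            # number of players (e.g., people whose data is intertwined)
ENDOWMENT = 10   # units each player could "spend" on protecting shared privacy
MPCR = 0.5       # marginal per-capita return: 1/N < MPCR < 1


def payoff(own_contribution: float, others_contributions: list) -> float:
    """Private payoff: what you keep plus your share of the public good."""
    total = own_contribution + sum(others_contributions)
    return (ENDOWMENT - own_contribution) + MPCR * total


# Free riding is individually rational: contributing 0 beats contributing 10
# regardless of what the other players do ...
print(payoff(0, [10, 10, 10]), ">", payoff(10, [10, 10, 10]))   # 25.0 > 20.0
# ... yet everyone contributing everything beats everyone contributing nothing.
print(payoff(10, [10, 10, 10]), ">", payoff(0, [0, 0, 0]))      # 20.0 > 10.0
```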

See also:

Privacy, Public Goods, and the Tragedy of the Trust Commons: A Response to Professors Fairfield and Engel, Dennis D. Hirsch

Response to Privacy as a Public Good, Priscilla M. Regan