Making Open Innovation Ecosystems Work: Case Studies in Healthcare


New paper by Donald E. Wynn, Jr., Renee M. E. Pratt, and Randy V. Bradley for the Business of Government Center: “In the midst of tightening budgets, many government agencies are being asked to deliver innovative solutions to operational and strategic problems. One way to address this dilemma is to participate in open innovation. This report addresses two key components of open innovation:

  • Adopting external ideas from private firms, universities, and individuals into the agency’s innovation practices
  • Pushing innovations developed internally to the public by reaching out to external channels

To illustrate how open innovation can work, the authors employ the concept of the technological ecosystem to demonstrate that fostering innovation cannot be done alone.

Successful technological ecosystems create innovation through the combination of five key elements:

  1. Resources – the contributions made and exchanged among the participants of an ecosystem
  2. Participants – the characteristics of the participants
  3. Relationships – the relationships and interactions among the participants
  4. Organization – the organization of the ecosystem as a whole
  5. External environment – the environment in which the ecosystem operates

This report examines both strategies by studying two cases of government-sponsored participation in technological ecosystems in the health care industry:

  • The U.S. Department of Veterans Affairs (VA) built a new ecosystem around its VistA electronic health records software in order to better facilitate the flow of innovation practices and processes between the VA and external agencies and private firms.
  • The state of West Virginia selected a variant of the VistA software for deployment in its hospital system, saving a significant amount of money while introducing a number of new features and functionality for its seven medical facilities.

As a result of these studies, the authors have identified 10 best practices for agencies seeking to capitalize on open innovation. These best practices include encouraging openness and transparency, minimizing internal friction and bureaucracy, and continuously monitoring external conditions….(More)”

Democratic Rulemaking


New paper by John M. de Figueiredo and Edward Stiglitz for the Oxford Handbook of Law and Economics (forthcoming): “To what extent is agency rulemaking democratic? This paper examines the soundness and empirical support for the leading theories that purport to endow the administrative state with democratic legitimacy. We study the theories in light of two normative benchmarks: a “democratic” benchmark based on voter preferences, and a “republican” benchmark based on the preferences of elected representatives. We conclude that all of the proposed theories lack empirical support and many have substantial conceptual flaws; we point to directions for possible future research….(More)”

Community-based Participatory Science is Changing the Way Research Happens—and What Happens Next


Judy Robinson at The Equation: “…Whereas in the past the public seemed content to hear about scientific progress from lab-coat-clad researchers on private crusades to advance their field, now people want science to improve their lives directly. They want progress faster, and a more democratic, participatory role in deciding what needs to change and which research questions will fuel a movement for those changes….

Coming Clean is a network of community, state, national and technical organizations focused on environmental health and justice. Often we’ve been at the forefront of community-based participatory science efforts to support healthier environments, less toxic products, and a more just and equitable society: all issues that deeply matter to the non-expert public.

….For instance, with environmental justice advocacy organizations in the lead, residents of low-income, minority communities collected products at neighborhood dollar stores to see what unnecessary and dangerous chemical exposures could occur as a result of product purchases. In laboratory results we found over 80% of the products tested contained toxic chemicals at potentially hazardous levels (as documented in our report, “A Day Late and a Dollar Short”). That information, along with their organizing around it, has since attracted over 146,700 people to support the national Campaign for Healthier Solutions. That’s local science at work.

As documented in Coming Clean’s report, “Warning Signs: Toxic Pollution Identified at Oil and Gas Development Sites,” and in the peer-reviewed journal Environmental Health, 38% of the samples collected by community volunteers contained concentrations of volatile compounds exceeding federal standards for health risks, some at levels thousands of times higher than what federal health and environmental agencies consider to be “safe.” Seven air samples from Wyoming contained hydrogen sulfide at levels between two and 660 times the concentration that is immediately dangerous to human life. Beyond the astonishing numbers, the research helped educate and engage the public on the problem and the solutions communities seek, filled critical gaps in our understanding of the threat oil and gas development poses to public health, and was among the reasons cited in Governor Cuomo’s decision to ban fracking in New York State.

For Coming Clean and others across the country, this kind of community-based participatory science is changing the way science is conducted and, most importantly, what comes after the data collection and analysis is complete. In both the dollar store research and the oil and gas science, the effect of the science was to strengthen existing organizing campaigns for community-based solutions. The “good old days” when we waited for scientific proof to change the world are over, if they ever existed. Now science and citizen organizing together are changing the rules of the game, the outcome, and who gets to play….(More)”

Hacking the Obesity Epidemic


Press Release: “The de Beaumont Foundation, in collaboration with the Health Data Consortium and the Department of Health and Human Services (HHS), is pleased to announce the winners of the U.S. Obesity Data Challenge at NHS England’s Health and Care Innovation Expo 2015. The challenge is part of a joint U.S.-England initiative designed to harness the power of health data in tackling the epidemic of adult obesity in both countries….

The winning entries are:

  • Healthdata+Obesity (1st place) — This simple, curated dashboard helps health officials tell a powerful story about the root causes of obesity. The dashboard provides customizable data visualizations at the national, state, and local level as well as an interactive map, national benchmarks, and written content to contextualize the data. Developed by HealthData+, a partnership between the Public Health Institute and LiveStories.
  • The Neighborhood Map of U.S. Obesity (2nd Place) — This highly detailed, interactive map incorporates obesity data with a GIS database to provide a localized, high-resolution visualization of the prevalence of obesity. Additional data sources can also be added to the map to allow researchers and health officials greater flexibility in customizing the map to support analysis and decision-making on a community level. Developed by RTI International.
  • The Health Demographic Analysis Tool – Visualizing The Cross-Sector Relationship Between Obesity And Social Determinants (3rd Place) — This interactive database maps the relationship between the social determinants of health (factors like educational attainment, income, and lifestyle choices) and health outcomes in order to illustrate what plays a role in community health. The powerful images generated by this tool provide compelling material for new health interventions as well as a way to look retrospectively at the impact of existing public health campaigns. Developed by GeoHealth Innovations and Community Health Solutions….(More)”

The Merit Principle in Crisis


Commentary in Governance: “In the United States, the presidential race is heating up, and one result is an increasing number of assaults on century-old ideas about the merit-based civil service.  “The merit principle is under fierce attack,” says Donald Kettl, in a new commentary for Governance.  Kettl outlines five “tough questions” that are raised by attacks on the civil service system — and says that the US research community “has been largely asleep at the switch” on all of them.  Within major public policy schools, courses on the public service have been “pushed to the side.”  A century ago, American academics helped to build the American state.  Kettl warns that “scholarly neglect in the 2000s could undermine it.”  Read the commentary.

On the morals of network research and beyond


Conspicuous Chatter: “…Discussions on ethics have become very popular in computer science lately — and to some extent I am glad about this. However, I think we should dispel three key fallacies.

The first one is that things we do not like (some may brand them “immoral”) happen because others do not think of the moral implications of their actions. In fact it is entirely possible that they do, and decide to act in a manner we do not like nonetheless. This could be out of conviction: those who build the surveillance equipment, who argue against strong encryption, and who do the torturing and the killing (harm), may have entirely self-righteous ways of justifying their actions to themselves and others. Others may simply be making a good buck — and there are plenty of examples of this in the links above.

The second fallacy is that ethics, and research ethics more specifically, comes down to a “common sense” variant of “do no harm” — and that is that. In fact ethics, as a philosophical discipline, is extremely deep, and there are plenty of entirely legitimate ways to argue that doing harm is perfectly fine. If the authors of the paper were a bit more sophisticated in their philosophy they could, for example, have made reference to the “doctrine of double effect” or to the nature of the free will of those who will bring actual harm to users, and therefore their moral responsibility. It seems that a key immoral aspect of this work was that the authors forgot to write that confusing section.

Finally, we should dispel in conversations about research ethics, the myth that morality equals legality. The public review mentions “informed consent”, but in fact this is an extremely difficult notion — and legalistically it has been used to justify terrible things. The data protection variant of informed consent allows large internet companies, and telcos, to basically scoop most users’ data because of some small print in lengthy terms and conditions. In fact it should probably be our responsibility to highlight the immorality of this state of affairs, before writing public reviews about the immorality of a hypothetical censorship detection system.

Thus, I would argue, if one is to make an ethical point relating to the values and risks of technology, it has to be made in the larger context of how technology is fielded and used, the politics around it, who has power, who makes the money, who does the torturing and the killing, and why. Technology lives within a big moral picture that a research community has a responsibility to comment on. Focusing moral attention on the microcosm of a specific hypothetical use case — just because it is the closest to our research community — misses the point, perpetuating silently a terrible state of moral affairs….(More)”

The Fundamentals of Policy Crowdsourcing


Article by John Prpić, Araz Taeihagh, and James Melton at Policy and Internet: “What is the state of the research on crowdsourcing for policymaking? This article begins to answer this question by collecting, categorizing, and situating an extensive body of the extant research investigating policy crowdsourcing, within a new framework built on fundamental typologies from each field. We first define seven universal characteristics of the three general crowdsourcing techniques (virtual labor markets, tournament crowdsourcing, open collaboration), to examine the relative trade-offs of each modality. We then compare these three types of crowdsourcing to the different stages of the policy cycle, in order to situate the literature spanning both domains. We finally discuss research trends in crowdsourcing for public policy and highlight the research gaps and overlaps in the literature….(More)”

Index: Crime and Criminal Justice Data


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on crime and criminal justice data and was originally published in 2015.

This index provides information about the type of crime and criminal justice data collected, shared and used in the United States. Because it is well known that data related to the criminal justice system is oftentimes unreliable, or just plain missing, this index also highlights some of the issues that stand in the way of accessing useful and in-demand statistics.

Data Collections: National Crime Statistics

  • Number of incident-based crime datasets created by the Federal Bureau of Investigation (FBI): 2
  • Number of U.S. Statistical Agencies: 13
  • How many of those are focused on criminal justice: 1, the Bureau of Justice Statistics (BJS)
  • Number of data collections focused on criminal justice the BJS produces: 61
  • Number of federal-level APIs available for crime or criminal justice data: 1, the National Crime Victimization Survey (NCVS)
  • Frequency of the NCVS: annually
  • Number of Statistical Analysis Centers (SACs), organizations that are essentially clearinghouses for crime and criminal justice data for each state, the District of Columbia, Puerto Rico and the Northern Mariana Islands: 53

Open data, data use and the impact of those efforts

  • Number of datasets that are returned when “criminal justice” is searched for on Data.gov: 417, including federal-, state- and city-level datasets
  • Number of datasets that are returned when “crime” is searched for on Data.gov: 281
  • The percentage by which public complaints dropped after officers started wearing body cameras, according to a study done in Rialto, Calif.: 88
  • The percentage by which reported incidents of officer use of force fell after officers started wearing body cameras, according to the same Rialto, Calif. study: 5
  • The percentage by which crime decreased during an experiment in predictive policing in Shreveport, LA: 35
  • Number of crime data sets made available by the Seattle Police Department – generally seen as a leader in police data innovation – on the Seattle.gov website: 4
    • Major crime stats by category in aggregate
    • Crime trend reports
    • Precinct data by beat
    • State sex offender database
  • Number of datasets mapped by the Seattle Police Department: 2
    • 911 incidents
    • Police reports
  • Number of states where risk assessment tools must be used in pretrial proceedings to help determine whether an offender is released from jail before a trial: at least 11.

Police Data

  • Number of federally mandated databases that collect information about officer use of force or officer-involved shootings, nationwide: 0
  • The year a crime bill was passed that called for data on excessive force to be collected for research and statistical purposes (a provision that has never been funded): 1994
  • Number of police departments that committed to being a part of the White House’s Police Data Initiative: 21
  • Percentage of police departments surveyed in 2013 by the Office of Community Oriented Policing Services within the Department of Justice that were not using body cameras, and therefore not collecting body camera data: 75

The criminal justice system

  • Parts of the criminal justice system where data about an individual can be created or collected: at least 6
    • Entry into the system (arrest)
    • Prosecution and pretrial
    • Sentencing
    • Corrections
    • Probation/parole
    • Recidivism

Can big databases be kept both anonymous and useful?


The Economist: “….The anonymisation of a data record typically means the removal from it of personally identifiable information. Names, obviously. But also phone numbers, addresses and various intimate details like dates of birth. Such a record is then deemed safe for release to researchers, and even to the public, to make of it what they will. Many people volunteer information, for example to medical trials, on the understanding that this will happen.

But the ability to compare databases threatens to make a mockery of such protections. Participants in genomics projects, promised anonymity in exchange for their DNA, have been identified by simple comparison with electoral rolls and other publicly available information. The health records of a governor of Massachusetts were plucked from a database, again supposedly anonymous, of state-employee hospital visits using the same trick. Reporters sifting through a public database of web searches were able to correlate them in order to track down one, rather embarrassed, woman who had been idly searching for single men. And so on.
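
To make the linkage mechanics concrete, here is a minimal sketch (in Python, using pandas) of the kind of join such re-identifications rely on; every field, value, and name in it is hypothetical, purely for illustration:

```python
import pandas as pd

# Hypothetical "anonymised" hospital records: names stripped, but
# quasi-identifiers (date of birth, sex, ZIP code) left in place.
hospital = pd.DataFrame({
    "dob":       ["1945-07-31", "1962-01-15", "1978-03-02"],
    "sex":       ["M", "F", "M"],
    "zip":       ["02138", "02139", "02144"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# Hypothetical public voter roll: names published alongside the
# very same quasi-identifiers.
voters = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "dob":  ["1945-07-31", "1962-01-15"],
    "sex":  ["M", "F"],
    "zip":  ["02138", "02139"],
})

# A plain inner join on the shared fields re-attaches a name to every
# "anonymous" record that also appears in the public roll.
reidentified = hospital.merge(voters, on=["dob", "sex", "zip"], how="inner")
print(reidentified[["name", "diagnosis"]])
```

Nothing in the join is exotic; the weakness is simply that stripping names leaves behind combinations of ordinary attributes that are rare enough to single people out.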

Each of these headline-generating stories creates a demand for more controls. But that, in turn, deals a blow to the idea of open data—that the electronic “data exhaust” people exhale more or less every time they do anything in the modern world is actually useful stuff which, were it freely available for analysis, might make that world a better place.

Of cake, and eating it

Modern cars, for example, record in their computers much about how, when and where the vehicle has been used. Comparing the records of many vehicles, says Viktor Mayer-Schönberger of the Oxford Internet Institute, could provide a solid basis for, say, spotting dangerous stretches of road. Similarly, an opening of health records, particularly in a country like Britain, which has a national health service, and cross-fertilising them with other personal data, might help reveal the multifarious causes of diseases like Alzheimer’s.

This is a true dilemma. People want both perfect privacy and all the benefits of openness. But they cannot have both. The stripping of a few details as the only means of assuring anonymity, in a world choked with data exhaust, cannot work. Poorly anonymised data are only part of the problem. What may be worse is that there is no standard for anonymisation. Every American state, for example, has its own prescription for what constitutes an adequate standard.

Worse still, devising a comprehensive standard may be impossible. Paul Ohm of Georgetown University, in Washington, DC, thinks that this is partly because the availability of new data constantly shifts the goalposts. “If we could pick an industry standard today, it would be obsolete in short order,” he says. Some data, such as those about medical conditions, are more sensitive than others. Some data sets provide great precision in time or place, others merely a year or a postcode. Each set presents its own dangers and requirements.

Fortunately, there are a few easy fixes. Thanks in part to the headlines, many now agree that public release of anonymised data is a bad move. Data could instead be released piecemeal, or kept in-house and accessible by researchers through a question-and-answer mechanism. Or some users could be granted access to raw data, but only in strictly controlled conditions.
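
A minimal sketch of what such a question-and-answer mechanism might look like is below (plain Python; the threshold, field names, and suppression rule are assumptions for illustration, and real systems typically layer on further protections such as query auditing or noise):

```python
# Toy "question-and-answer" release model: researchers never see raw rows;
# they may only ask aggregate questions, and answers about groups smaller
# than a minimum size are withheld.
MIN_GROUP_SIZE = 10  # illustrative suppression threshold


def answer_count_query(records, predicate):
    """Count records matching the researcher's predicate, refusing to
    answer when the group is small enough to single people out."""
    count = sum(1 for row in records if predicate(row))
    if count < MIN_GROUP_SIZE:
        return None  # suppressed: too few people to release safely
    return count


# Hypothetical in-house dataset and two example queries.
records = [{"age": 20 + i % 60,
            "condition": "rare_x" if i % 100 == 0 else "asthma"}
           for i in range(500)]

print(answer_count_query(records, lambda r: r["condition"] == "asthma"))  # 495 -> released
print(answer_count_query(records, lambda r: r["condition"] == "rare_x"))  # 5 -> suppressed (None)
```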

All these approaches, though, are anathema to the open-data movement, because they limit the scope of studies. “If we’re making it so hard to share that only a few have access,” says Tim Althoff, a data scientist at Stanford University, “that has profound implications for science, for people being able to replicate and advance your work.”

Purely legal approaches might mitigate that. Data might come with what have been called “downstream contractual obligations”, outlining what can be done with a given data set and holding any onward recipients to the same standards. One perhaps draconian idea, suggested by Daniel Barth-Jones, an epidemiologist at Columbia University, in New York, is to make it illegal even to attempt re-identification….(More).”

One way traffic: The open data initiative project and the need for an effective demand side initiative in Ghana


Paper by Frank L. K. Ohemeng and Kwaku Ofosu-Adarkwa in the Government Information Quarterly: “In recent years the necessity for governments to develop new public values of openness and transparency, and thereby increase their citizenries’ sense of inclusiveness, and their trust in and confidence about their governments, has risen to the point of urgency. The decline of trust in governments, especially in developing countries, has been unprecedented and continuous. A new paradigm that signifies a shift to citizen-driven initiatives over and above state- and market-centric ones calls for innovative thinking that requires openness in government. The need for this new synergy notwithstanding, Open Government cannot be considered truly open unless it also enhances citizen participation and engagement. The Ghana Open Data Initiative (GODI) project strives to create an open data community that will enable government (supply side) and civil society in general (demand side) to exchange data and information. We argue that the GODI is too narrowly focused on the supply side of the project, and suggest that it should generate an even platform to improve interaction between government and citizens to ensure a balance in knowledge sharing with and among all constituencies….(More)”