Developing Public Policy To Advance The Use Of Big Data In Health Care


Paper by Axel Heitmueller et al. in Health Affairs: “The vast amount of health data generated and stored around the world each day offers significant opportunities for advances such as the real-time tracking of diseases, predicting disease outbreaks, and developing health care that is truly personalized. However, capturing, analyzing, and sharing health data is difficult, expensive, and controversial. This article explores four central questions that policy makers should consider when developing public policy for the use of “big data” in health care. We discuss what aspects of big data are most relevant for health care and present a taxonomy of data types and levels of access. We suggest that successful policies require clear objectives and provide examples, discuss barriers to achieving policy objectives based on a recent policy experiment in the United Kingdom, and propose levers that policy makers should consider using to advance data sharing. We argue that the case for data sharing can be won only by providing real-life examples of the ways in which it can improve health care.”

Business Models for Open Innovation: Matching Heterogeneous Open Innovation Strategies with Business Model Dimensions


New paper by Tina Saebi and Nicolai Foss, available at SSRN: “Research on open innovation suggests that companies benefit differentially from adopting open innovation strategies; however, it is unclear why this is so. One possible explanation is that companies’ business models are not attuned to open strategies. Accordingly, we propose a contingency model of open business models by systematically linking open innovation strategies to core business model dimensions, notably the content, structure, and governance of transactions. We further illustrate a continuum of open innovativeness, differentiating between four types of open business models. We contribute to the open innovation literature by specifying the conditions under which business models are conducive to the success of open innovation strategies.”

The Crypto-democracy and the Trustworthy


New paper by Sebastien Gambs, Samuel Ranellucci, and Alain Tapp: “In the current architecture of the Internet, there is a strong asymmetry in terms of power between the entities that gather and process personal data (e.g., major Internet companies, telecom operators, cloud providers, …) and the individuals from which this personal data is issued. In particular, individuals have no choice but to blindly trust that these entities will respect their privacy and protect their personal data. In this position paper, we address this issue by proposing a utopian crypto-democracy model based on existing scientific achievements from the field of cryptography. More precisely, our main objective is to show that cryptographic primitives, including in particular secure multiparty computation, offer a practical solution to protect privacy while minimizing the trust assumptions. In the crypto-democracy envisioned, individuals do not have to trust a single physical entity with their personal data but rather their data is distributed among several institutions. Together these institutions form a virtual entity called the Trustworthy that is responsible for the storage of this data but which can also compute on it (provided first that all the institutions agree on this). Finally, we also propose a realistic proof-of-concept of the Trustworthy, in which the roles of institutions are played by universities. This proof-of-concept would have an important impact in demonstrating the possibilities offered by the crypto-democracy paradigm.”
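The secure multiparty computation the authors invoke can be made concrete with a toy example. The sketch below shows additive secret sharing, one of the basic primitives in that field; it is only an illustration of the general idea, with hypothetical institution names and numbers, not the authors' construction of the Trustworthy.

```python
import secrets

# Minimal sketch of additive secret sharing, a basic building block of
# secure multiparty computation. Purely illustrative; not the authors'
# protocol, and the institution names below are hypothetical.

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split a private value into n random shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Only the combination of all shares reveals the original value."""
    return sum(shares) % PRIME

# An individual's sensitive attribute is never handed to a single party;
# each institution stores one share that looks like random noise.
income = 52_000
institutions = ["univ_a", "univ_b", "univ_c"]
my_shares = dict(zip(institutions, share(income, len(institutions))))

# Shares from two individuals can be added locally, so the "Trustworthy"
# can compute an aggregate (here, a sum) without seeing either input.
other_income = 48_000
other_shares = dict(zip(institutions, share(other_income, len(institutions))))
sum_shares = [(my_shares[k] + other_shares[k]) % PRIME for k in institutions]

assert reconstruct(sum_shares) == income + other_income
```

Any single institution's share is a uniformly random number, so compromising one server reveals nothing about the underlying value; only the full coalition can reconstruct an input or decode an aggregate.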

Bridging the Knowledge Gap: In Search of Expertise


New paper by Beth Simone Noveck, The GovLab, for Democracy: “In the early 2000s, the Air Force struggled with a problem: Pilots and civilians were dying because of unusual soil and dirt conditions in Afghanistan. The soil was getting into the rotors of the Sikorsky UH-60 helicopters and obscuring the view of its pilots—what the military calls a “brownout.” According to the Air Force’s senior design scientist, the manager tasked with solving the problem didn’t know where to turn quickly to get help. As it turns out, the man practically sitting across from him had nine years of experience flying these Black Hawk helicopters in the field, but the manager had no way of knowing that. Civil service titles such as director and assistant director reveal little about skills or experience.
In the fall of 2008, the Air Force sought to fill in these kinds of knowledge gaps. The Air Force Research Laboratory unveiled Aristotle, a searchable internal directory that integrated people’s credentials and experience from existing personnel systems, public databases, and users themselves, thus making it easy to discover quickly who knew and had done what. Near-term budgetary constraints killed Aristotle in 2013, but the project underscored a glaring need in the bureaucracy.
Aristotle was an attempt to solve a challenge faced by every agency and organization: quickly locating expertise to solve a problem. Prior to Aristotle, the DOD had no coordinated mechanism for identifying expertise across 200,000 of its employees. Dr. Alok Das, the senior scientist for design innovation tasked with implementing the system, explained, “We don’t know what we know.”
This is a common situation. The government currently has no systematic way of getting help from all those with relevant expertise, experience, and passion. For every success on Challenge.gov—the federal government’s platform where agencies post open calls to solve problems for a prize—there are a dozen open-call projects that never get seen by those who might have the insight or experience to help. This kind of crowdsourcing is still too ad hoc, infrequent, and unpredictable—in short, too unreliable—for the purposes of policy-making.
Which is why technologies like Aristotle are so exciting. Smart, searchable expert networks offer the potential to lower the costs and speed up the process of finding relevant expertise. Aristotle never reached this stage, but an ideal expert network is a directory capable of including not just experts within the government, but also outside citizens with specialized knowledge. This leads to a dual benefit: accelerating the path to innovative and effective solutions to hard problems while at the same time fostering greater citizen engagement.
Could such an expert-network platform revitalize the regulatory-review process? We might find out soon enough, thanks to the Food and Drug Administration…”

Bridging Distant Worlds: Innovation in the Civic Space


A digital white paper by Public Innovation: “In an increasingly complex world, today’s challenges are interconnected. Many have argued that our civic institutions are not equipped to respond with the same velocity at which technology is advancing other sectors of the economy. While this may, in fact, be a fair criticism of our electoral, fiscal, and policy structures, a new mindset is emerging at government’s service delivery layer.
Civic innovation offers a new approach to solving community problems that is emergent, generative, resilient, participatory, human-centered, and driven by a process of validated learning where core assumptions are tested quickly and iteratively – and lead to better solutions that are both impactful and durable. And perhaps most surprisingly, new markets are being created that enable creative problem solvers to sustain their social impact through activities that don’t rely on traditional models of grant funding.
While the Sacramento region is making significant progress in this space, our civic innovation and entrepreneurship ecosystem has yet to reach its full potential. The purpose of this white paper is to make the case for why now is the time for a Regional Civic Technology, Innovation and Entrepreneurship Agenda.
The paper concludes with a set of recommendations for collective action among the region’s public, private, and nonprofit organizations, and, of course, our fellow citizens. Appendix A articulates this agenda in the form of a resolution to be adopted by as many cities and counties in the region as possible.
A recurring theme in this paper is that technology is fundamentally changing the way humans interact with organizations and each other. If we as regional leaders and residents are honest with ourselves, we must consciously choose whether or not we are going to raise our expectations and co-create a new civic experience.
Because the future is now and the opportunities are infinite…”

Agency Liability Stemming from Citizen-Generated Data


Paper by Bailey Smith for The Wilson Center’s Science and Technology Innovation Program: “New ways to gather data are on the rise. One of these ways is through citizen science. According to a new paper by Bailey Smith, JD, federal agencies can feel confident about using citizen science for a few reasons. First, the legal system provides significant protection from liability through the Federal Tort Claims Act (FTCA) and the Administrative Procedure Act (APA). Second, training and technological innovation have made it easier for the non-scientist to collect high-quality data.”

When Big Data Maps Your Safest, Shortest Walk Home


Sarah Laskow at NextCity: “Boston University and University of Pittsburgh researchers are trying to do the same thing that got the creators of the app SketchFactor into so much trouble over the summer. They’re trying to show people how to avoid dangerous spots on city streets while walking from one place to another.
“What we are interested in is finding paths that offer trade-offs between safety and distance,” Esther Galbrun, a postdoc at Boston University, recently said in New York at the 3rd International Workshop on Urban Computing, held in conjunction with KDD 2014.
She was presenting “Safe Navigation in Urban Environments,” which describes a set of algorithms that would give a person walking through a city options for getting from one place to another — the shortest path, the safest path, and a number of alternatives that balance the two. The paper takes existing algorithms, well defined in theory — nothing new or fancy, Galbrun says — and applies them to a problem that people face every day.
Imagine, she suggests, that a person is standing at the Philadelphia Museum of Art and wants to walk home to his place on Wharton Street. (Galbrun and her colleagues looked at Philadelphia and Chicago because those cities have made their crime data openly available.) The walk is about three miles, and one option would be to take the shortest path back. But maybe he’s worried about safety. Maybe he’s willing to take a somewhat longer walk if it means he has to worry less about crime. What route should he take then?
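The trade-off Galbrun describes can be pictured as a single knob between distance and risk. The sketch below is a toy illustration under made-up street data, not the paper's actual algorithms: each segment carries a length and a risk score, and a parameter alpha slides the chosen route from shortest to safest (it assumes the networkx library).

```python
import networkx as nx

# Toy street graph: each edge has a length (meters) and a relative crime
# risk. The numbers are invented for illustration only.
G = nx.Graph()
segments = [
    ("museum", "a", 400, 0.9),
    ("museum", "b", 500, 0.2),
    ("a", "home", 600, 0.8),
    ("b", "home", 700, 0.1),
]
for u, v, length, risk in segments:
    G.add_edge(u, v, length=length, risk=risk)

def route(alpha):
    """alpha = 0 -> pure shortest path; alpha = 1 -> pure safest path."""
    def cost(u, v, data):
        return (1 - alpha) * data["length"] + alpha * 1000 * data["risk"]
    return nx.dijkstra_path(G, "museum", "home", weight=cost)

for alpha in (0.0, 0.5, 1.0):
    print(alpha, route(alpha))
```

Sweeping alpha from 0 to 1 recovers the shortest route, the safest route, and the in-between alternatives described above.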
Services like Google Maps have excelled at finding the shortest, most direct routes from Point A to Point B. But, increasingly, urban computing is looking to capture other aspects of moving about a place. “Fast is only one option,” says co-author Konstantinos Pelechrinis. “There are noble objectives beyond the surface path that you can put inside this navigation problem.” You might look for the path that will burn the most calories; a Yahoo! lab has considered how to send people along the most scenic route.
But working on routes that do more than give simple directions can have its pitfalls. The SketchFactor app relies both on crime data, when it’s available, and crowdsourced comments to reveal potential trouble spots to users. When it was released this summer, tech reporters and other critics immediately started talking about how it could easily become a conduit for racism. (“Sketchy” is, after all, a very subjective measure.)
So far, though, the problem with the SketchFactor app is less that it offers racially skewed perspectives than that the information it does offer is pretty useless — if entertaining. A pinpoint marked “very sketchy” is just as likely to flag an incident like a Jewish man eating pork products or hipster kids making too much noise as it is to flag a mugging.
Here, then, is a clear example of how Big Data has an advantage over Big Anecdata. The SafePath set-up measures risk more objectively and elegantly. It pulls in openly available crime data and considers simple data like time, location and types of crime. While a crime occurs at a discrete point, the researchers wanted to estimate the risk of a crime on every street, at every point. So they use a mathematical tool that smooths out the crime data over the space of the city and allows them to measure the relative risk of witnessing a crime on every street segment in a city….”
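The “mathematical tool that smooths out the crime data” is, in spirit, a kernel density estimate. Below is a minimal sketch of that idea with made-up coordinates and bandwidth; the paper's exact estimator may differ.

```python
import numpy as np

# Discrete crime incidents are turned into a continuous risk surface with
# a Gaussian kernel, so every point on a street segment gets a relative
# risk score. Coordinates and bandwidth are invented for illustration.
crime_points = np.array([[0.0, 0.0], [0.1, 0.2], [2.0, 2.1]])  # incident locations
bandwidth = 0.5  # how far each incident's influence spreads

def risk(point, incidents=crime_points, h=bandwidth):
    """Kernel-smoothed relative risk at a query point."""
    d2 = np.sum((incidents - point) ** 2, axis=1)
    return float(np.sum(np.exp(-d2 / (2 * h ** 2))))

def segment_risk(start, end, samples=20):
    """Average risk along a street segment, estimated by sampling points on it."""
    ts = np.linspace(0.0, 1.0, samples)[:, None]
    pts = np.array(start) + ts * (np.array(end) - np.array(start))
    return float(np.mean([risk(p) for p in pts]))

print(segment_risk([0, 0], [0, 1]))   # near the cluster of incidents: higher risk
print(segment_risk([2, 0], [2, 1]))   # farther away: lower risk
```

The bandwidth is the key design choice: larger values spread each incident's influence further and give a smoother, more conservative risk surface.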

The Decalogue of Policy Making 2.0: Results from Analysis of Case Studies on the Impact of ICT for Governance and Policy Modelling


Paper by Sotirios Koussouris, Fenareti Lampathaki, Gianluca Misuraca, Panagiotis Kokkinakos, and Dimitrios Askounis: “Despite the availability of a myriad of Information and Communication Technologies (ICT) based tools and methodologies for supporting governance and the formulation of policies, including modelling expected impacts, these have proved to be unable to cope with the dire challenges of the contemporary society. In this chapter we present the results of the analysis of a set of promising cases researched in order to understand the possible impact of what we define ‘Policy Making 2.0’, which refers to ‘a set of methodologies and technological solutions aimed at enabling better, timely and participative policy-making’. Based on the analysis of these cases we suggest a bouquet of (mostly ICT-related) practical and research recommendations that are relevant to researchers, practitioners and policy makers in order to guide the introduction and implementation of Policy Making 2.0 initiatives. We argue that this ‘decalogue’ of Policy Making 2.0 could be an operational checklist for future research and policy to further explore the potential of ICT tools for governance and policy modelling, so as to make next generation policy making more ‘intelligent’ and hopefully able to solve or anticipate the societal challenges we are (and will be) confronted with today and in the future.”

Using Crowds for Evaluation Tasks: Validity by Numbers vs. Validity by Expertise


Paper by Christoph Hienerth and Frederik Riar: “Developing and commercializing novel ideas is central to innovation processes. As the outcome of such ideas cannot fully be foreseen, the evaluation of them is crucial. With the rise of the internet and ICT, more and new kinds of evaluations are done by crowds. This raises the question whether individuals in crowds possess the necessary capabilities to evaluate and whether their outcomes are valid. As empirical insights are not yet available, this paper deals with the examination of evaluation processes and general evaluation components, the discussion of underlying characteristics and mechanisms of these components affecting evaluation outcomes (i.e. evaluation validity). We further investigate differences between firm- and crowd-based evaluation using different cases of applications, and develop a theoretical framework towards evaluation validity, i.e. validity by numbers vs. validity by expertise. The identified factors that influence the validity of evaluations are: (1) the number of evaluation tasks, (2) complexity, (3) expertise, (4) costs, and (5) time to outcome. For each of these factors, hypotheses are developed based on theoretical arguments. We conclude with implications, proposing a model of evaluation validity.”

A Few Useful Things to Know about Machine Learning


A new research paper by Pedro Domingos: “Machine learning algorithms can figure out how to perform important tasks by generalizing from examples. This is often feasible and cost-effective where manual programming is not. As more data becomes available, more ambitious problems can be tackled. As a result, machine learning is widely used in computer science and other fields. However, developing successful machine learning applications requires a substantial amount of “black art” that is hard to find in textbooks. This article summarizes twelve key lessons that machine learning researchers and practitioners have learned. These include pitfalls to avoid, important issues to focus on, and answers to common questions.”
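One of the paper's central lessons is that generalization, not fit to the training data, is what counts, which means scoring a model only on data it never saw. Below is a minimal sketch of that habit, using scikit-learn on a bundled toy dataset; the library and classifier choice are ours, not the paper's.

```python
# Hold data out and score the model on it: training accuracy alone is an
# optimistic estimate of how the learner will generalize.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # near-perfect fit
print("test accuracy: ", model.score(X_test, y_test))    # what generalization looks like
```

The gap between the two numbers is the overfitting the paper warns about: only the held-out score says anything about performance on new examples.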