Paper by Axel Heitmueller et al in Health Affairs: “The vast amount of health data generated and stored around the world each day offers significant opportunities for advances such as the real-time tracking of diseases, predicting disease outbreaks, and developing health care that is truly personalized. However, capturing, analyzing, and sharing health data is difficult, expensive, and controversial. This article explores four central questions that policy makers should consider when developing public policy for the use of “big data” in health care. We discuss what aspects of big data are most relevant for health care and present a taxonomy of data types and levels of access. We suggest that successful policies require clear objectives and provide examples, discuss barriers to achieving policy objectives based on a recent policy experiment in the United Kingdom, and propose levers that policy makers should consider using to advance data sharing. We argue that the case for data sharing can be won only by providing real-life examples of the ways in which it can improve health care.”
The Rise of Data Poverty in America
Report by Daniel Castro for the Center for Data Innovation: “Data-driven innovations offer enormous opportunities to advance important societal goals. However, to take advantage of these opportunities, individuals must have access to high-quality data about themselves and their communities. If certain groups routinely do not have data collected about them, their problems may be overlooked and their communities held back in spite of progress elsewhere. Given this risk, policymakers should begin a concerted effort to address the “data divide”—the social and economic inequalities that may result from a lack of collection or use of data about individuals or communities.”
Online Petitions Proposed to Offer New Yorkers a New Way to Speak Out
The New York Times: “Since introducing a petition site in 2011 and promising to respond to any request that received enough signatures, the White House has been compelled to release its beer recipe, inform Texas that it would not be permitted to secede and weigh the merits of a “Death Star” for national defense.
“The administration,” the response to that petition read, “does not support blowing up planets.”
So it is perhaps with some trepidation that New York City lawmakers consider a local model: an online petition system that would allow residents to ask anything they want of their public officials and, with sufficient support, receive a response.
“Not everyone can go to a public hearing,” said the bill’s sponsor, Councilman James Vacca, Democrat of the Bronx. “This would be a way for people to register their views collectively.”
The proposal to create something resembling a Reddit for the body politic was introduced on Wednesday by Mr. Vacca and referred to the City Council’s Committee on Technology, of which he is chairman. Spokesmen for Mayor Bill de Blasio and Melissa Mark-Viverito, the Council speaker, said their offices were reviewing the bill.
Mr. Vacca’s office said the petition system would be the first of its kind on the municipal level anywhere, a claim that could not be immediately confirmed. Under his bill, the city’s Department of Information Technology and Telecommunications would determine the threshold number of electronic signatures that would prompt a response. The department would also be asked to establish the website, creating a system that “allows city agencies or public authorities to post public responses” to the petitions….
Dick Dadey, the executive director of Citizens Union, a civic group, called the petition proposal “a novel idea” worthy of debate. But he sounded several notes of caution, wondering whether the setup might be subject to manipulation, favoring “a preordained outcome directed by public officials” on a given issue….”
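The core mechanism in the bill, an electronic-signature count that, once past an agency-set threshold, obligates a public response, is simple enough to sketch. The following Python fragment is only an illustration; the class names, the threshold value, and the signing logic are assumptions, not anything specified in the legislation.

```python
# Illustrative sketch of a signature-threshold petition check.
# All names and the threshold value are placeholders, not from the bill.
from dataclasses import dataclass, field


@dataclass
class Petition:
    title: str
    signatures: set = field(default_factory=set)  # one entry per unique signer

    def sign(self, signer_id: str) -> None:
        self.signatures.add(signer_id)  # a set silently ignores duplicate signatures


def needs_official_response(petition: Petition, threshold: int) -> bool:
    """True once the petition crosses the agency-set signature threshold."""
    return len(petition.signatures) >= threshold


# Example with a hypothetical threshold of 1,000 signatures
# (the bill leaves the actual number to the city's technology department).
p = Petition("Extend library hours in the Bronx")
for i in range(1200):
    p.sign(f"resident-{i}")
print(needs_official_response(p, threshold=1000))  # True
```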
Value Based Prioritisation of Open Government Data Investments
ePSI Platform: “This ePSI platform topic report explores how Governments are increasingly prioritising their investments in Open Government Data on the basis of the value that can be unlocked by opening up government datasets.
The report elaborates on a working definition for high value datasets from different dimensions, both from the perspective of the data publisher and data re-user. This working definition has been used to identify and prioritise datasets to be listed on the European Union Open Data Portal, allowing EU institutions to better determine which new datasets should be published with priority, or to identify which high value datasets already listed on the portal should be improved with priority.”
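To make value-based prioritisation concrete, a publisher could rank candidate datasets with a weighted score across value dimensions of the kind the report discusses. The Python sketch below is a toy example; the dimensions, weights, and scores are invented for illustration and are not taken from the report.

```python
# Toy ranking of candidate datasets by a weighted "value" score.
# Dimensions, weights, and scores are illustrative assumptions only.
WEIGHTS = {
    "reuse_demand": 0.4,        # how strongly re-users ask for the data
    "economic_potential": 0.3,  # value that could be unlocked downstream
    "publication_cost": -0.2,   # publisher-side effort counts against a dataset
    "data_quality": 0.1,        # readiness for publication
}

candidates = {
    "company register": {"reuse_demand": 9, "economic_potential": 8, "publication_cost": 6, "data_quality": 7},
    "meeting minutes": {"reuse_demand": 3, "economic_potential": 2, "publication_cost": 2, "data_quality": 8},
}


def value_score(scores):
    """Weighted sum over the value dimensions; higher publication cost lowers the score."""
    return sum(weight * scores[dim] for dim, weight in WEIGHTS.items())


for name in sorted(candidates, key=lambda n: value_score(candidates[n]), reverse=True):
    print(f"{name}: {value_score(candidates[name]):.1f}")
```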
Business Models for Open Innovation: Matching Heterogeneous Open Innovation Strategies with Business Model Dimensions
New paper by Tina Saebi and Nicolai Foss, available at SSRN: “Research on open innovation suggests that companies benefit differentially from adopting open innovation strategies; however, it is unclear why this is so. One possible explanation is that companies’ business models are not attuned to open strategies. Accordingly, we propose a contingency model of open business models by systematically linking open innovation strategies to core business model dimensions, notably the content, structure, and governance of transactions. We further illustrate a continuum of open innovativeness, differentiating between four types of open business models. We contribute to the open innovation literature by specifying the conditions under which business models are conducive to the success of open innovation strategies.”
Stacking Up the Benefits of Openness
Jeanne Holm at Digital Gov: “Open government, open source, openness. These words are often used in talking about open data, but we sometimes forget that the root of all of this is an open community. Individuals working together to release government data and put it to use to help their neighbors and reach new personal goals.
This sense of community in the open data field shows up in many places. I see it when people volunteer at the National Day of Civic Hacking, crowdsource data integrity with MapGive, or mentor with Girls Who Code. And each day I see it on Open Data Stack Exchange, where people ask questions about open data issues, searches, or challenges, and strangers half a world away answer the question within an hour.
We launched the Open Data Stack Exchange in 2013 as a way of helping to build community and open up the knowledge in our emergent field. What started slowly soon took off: today, 3,375 participants have provided 1,592 answers to 721 questions. Anyone can ask a question. These have ranged from data requests (looking for specific hard-to-find data) to technical questions on parsing or visualizing data. More importantly, anyone can answer a question, too. You’ll notice from the numbers that most questions have more than one answer, with the asker being able to choose the best answer and everyone being able to vote the questions and answers up and down. The forum is loosely moderated (I’ve served as one of the moderators since inception), but predominantly self-governed. Google trusts this method and forum so much that within a few minutes of answering a question, the answer will pop to the top of the Google search results for that topic.
What are people asking on the Open Data Stack Exchange? One question is seeking applications being developed with open data, one is looking for a database of open databases and another seeks data about the Ebola outbreak. Answers, edits, comments, suggestions…all are part of the conversation and documentation of our collective open data knowledge. This type of community-vetted, open forum helps to evolve and preserve our collective wisdom into the future. I encourage people who ask questions of Data.gov to do so on Stack Exchange so that everyone can see the answer, and flag those for easy reference (OpenFDA does the same)…”
How Open Data Is Transforming City Life
Joel Gurin, The GovLab, at Techonomy: “Start a business. Manage your power use. Find cheap rents, or avoid crime-ridden neighborhoods. Cities and their citizens worldwide are discovering the power of “open data”—public data and information available from government and other sources that can help solve civic problems and create new business opportunities. By opening up data about transportation, education, health care, and more, municipal governments are helping app developers, civil society organizations, and others to find innovative ways to tackle urban problems. For any city that wants to promote entrepreneurship and economic development, open data can be a valuable new resource.
The urban open data movement has been growing for several years, with American cities including New York, San Francisco, Chicago, and Washington in the forefront. Now an increasing number of government officials, entrepreneurs, and civic hackers are recognizing the potential of open data. The results have included applications that can be used across many cities as well as those tailored to an individual city’s needs.
At first, the open data movement was driven by a commitment to transparency and accountability. City, state, and local governments have all released data about their finances and operations in the interest of good government and citizen participation. Now some tech companies are providing platforms to make this kind of city data more accessible, useful, and comparable. Companies like OpenGov and Govini make it possible for city managers and residents to examine finances, assess police department overtime, and monitor other factors that let them compare their city’s performance to neighboring municipalities.
Other new businesses are tapping city data to provide residents with useful, practical information. One of the best examples is NextBus, which uses metropolitan transportation data to tell commuters when to expect a bus along their route. Commuter apps like this have become common in cities in the U.S. and around the world. Another website, SpotCrime, collects, analyzes, and maps crime statistics to tell city dwellers which areas are safest or most dangerous and to offer crime alerts. And the Chicago-based Purple Binder helps people in need find city healthcare services. Many companies in the Open Data 500, the study of open data companies that I direct at the GovLab at NYU, use data from cities as well as other sources….
Some of the most ambitious uses of city data—with some of the greatest potential—focus on improving education. In Washington, the nonprofit Learn DC has made data about public schools available through a portal that state agencies, community organizations, and civic hackers can all use. They’re using it for collaborative research and action that, they say, has “empowered every DC parent to participate in shaping the future of the public education system.”…”
The Crypto-democracy and the Trustworthy
New paper by Sebastien Gambs, Samuel Ranellucci, and Alain Tapp: “In the current architecture of the Internet, there is a strong asymmetry in terms of power between the entities that gather and process personal data (e.g., major Internet companies, telecom operators, cloud providers, …) and the individuals from which this personal data is issued. In particular, individuals have no choice but to blindly trust that these entities will respect their privacy and protect their personal data. In this position paper, we address this issue by proposing a utopian crypto-democracy model based on existing scientific achievements from the field of cryptography. More precisely, our main objective is to show that cryptographic primitives, including in particular secure multiparty computation, offer a practical solution to protect privacy while minimizing the trust assumptions. In the crypto-democracy envisioned, individuals do not have to trust a single physical entity with their personal data but rather their data is distributed among several institutions. Together these institutions form a virtual entity called the Trustworthy that is responsible for the storage of this data but which can also compute on it (provided first that all the institutions agree on this). Finally, we also propose a realistic proof-of-concept of the Trustworthy, in which the roles of institutions are played by universities. This proof-of-concept would have an important impact in demonstrating the possibilities offered by the crypto-democracy paradigm.”
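The intuition behind the Trustworthy can be illustrated with additive secret sharing, one of the building blocks of secure multiparty computation: each institution stores only a random-looking share of a value, yet the institutions can jointly compute on the hidden values and only reconstruct a result when all of them cooperate. The Python sketch below shows the general technique, not the paper's actual protocol.

```python
# Additive secret sharing: a minimal illustration of distributing data across
# institutions so that no single one learns the value (not the paper's protocol).
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime


def share(value: int, n_institutions: int) -> list:
    """Split a private value into shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_institutions - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list) -> int:
    """Recovering the value requires every institution's share."""
    return sum(shares) % PRIME


# Each institution adds its shares of two values locally, so the group can
# compute a sum without any institution ever seeing either input in the clear.
age, income = 42, 55_000
shares_a, shares_b = share(age, 3), share(income, 3)
sum_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(reconstruct(sum_shares))  # 55042
```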
Data + Design: A simple introduction to preparing and visualizing information
Open access book by Trina Chiasson, Dyanna Gregory, and all of these people: “(Foreword) Data are all around us and always have been. Everything throughout history has always had the potential to be quantified: theoretically, one could count every human who has ever lived, every heartbeat that has ever beaten, every step that was ever taken, every star that has ever shone, every word that has ever been uttered or written. Each of these collective things can be represented by a number. But only recently have we had the technology to efficiently surface these hidden numbers, leading to greater insight into our human condition.
But what does this mean, exactly? What are the cultural effects of having easy access to data? It means, for one thing, that we all need to be more data literate. It also means we have to be more design literate. As the old adage goes, statistics lie. Well, data visualizations lie, too. How can we learn how to first, effectively read data visualizations; and second, author them in such a way that is ethical and clearly communicates the data’s inherent story?
At the intersection of art and algorithm, data visualization schematically abstracts information to bring about a deeper understanding of the data, wrapping it in an element of awe.
My favorite description of data visualization comes from the prolific blogger, Maria Popova, who said that data visualization is “at the intersection of art and algorithm.” To learn about the history of data visualization is to become an armchair cartographer, explorer, and statistician….”
Early visual explorations of data focused mostly on small snippets of data gleaned to expand humanity’s understanding of the geographical world, mainly through maps. Starting with the first recognized world maps of the 13th century, scientists, mathematicians, philosophers, and sailors used math to visualize the invisible. Stars and suns were plotted, coastlines and shipping routes charted. Data visualization, in its native essence, drew the lines, points, and coordinates that gave form to the physical world and our place in it. It answered questions like “Where am I?”, “How do I get there?”, and “How far is it?”
Bridging the Knowledge Gap: In Search of Expertise
New paper by Beth Simone Noveck, The GovLab, for Democracy: “In the early 2000s, the Air Force struggled with a problem: Pilots and civilians were dying because of unusual soil and dirt conditions in Afghanistan. The soil was getting into the rotors of the Sikorsky UH-60 helicopters and obscuring the view of its pilots—what the military calls a “brownout.” According to the Air Force’s senior design scientist, the manager tasked with solving the problem didn’t know where to turn quickly to get help. As it turns out, the man practically sitting across from him had nine years of experience flying these Black Hawk helicopters in the field, but the manager had no way of knowing that. Civil service titles such as director and assistant director reveal little about skills or experience.
In the fall of 2008, the Air Force sought to fill in these kinds of knowledge gaps. The Air Force Research Laboratory unveiled Aristotle, a searchable internal directory that integrated people’s credentials and experience from existing personnel systems, public databases, and users themselves, thus making it easy to discover quickly who knew and had done what. Near-term budgetary constraints killed Aristotle in 2013, but the project underscored a glaring need in the bureaucracy.
Aristotle was an attempt to solve a challenge faced by every agency and organization: quickly locating expertise to solve a problem. Prior to Aristotle, the DOD had no coordinated mechanism for identifying expertise across 200,000 of its employees. Dr. Alok Das, the senior scientist for design innovation tasked with implementing the system, explained, “We don’t know what we know.”
This is a common situation. The government currently has no systematic way of getting help from all those with relevant expertise, experience, and passion. For every success on Challenge.gov—the federal government’s platform where agencies post open calls to solve problems for a prize—there are a dozen open-call projects that never get seen by those who might have the insight or experience to help. This kind of crowdsourcing is still too ad hoc, infrequent, and unpredictable—in short, too unreliable—for the purposes of policy-making.
Which is why technologies like Aristotle are so exciting. Smart, searchable expert networks offer the potential to lower the costs and speed up the process of finding relevant expertise. Aristotle never reached this stage, but an ideal expert network is a directory capable of including not just experts within the government, but also outside citizens with specialized knowledge. This leads to a dual benefit: accelerating the path to innovative and effective solutions to hard problems while at the same time fostering greater citizen engagement.
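The kind of lookup an expert network like Aristotle enables, profiles aggregated from several sources and indexed so that a skill query returns people rather than job titles, can be sketched in a few lines. The Python below is purely illustrative; the profile data and field names are invented.

```python
# Illustrative expert lookup: aggregate profiles, index them by skill, query on demand.
# Profile data and field names are invented for illustration.
from collections import defaultdict

profiles = [
    {"name": "A. Pilot", "skills": {"uh-60", "rotor maintenance"}, "source": "personnel system"},
    {"name": "B. Analyst", "skills": {"soil science"}, "source": "self-reported"},
    {"name": "C. Engineer", "skills": {"uh-60", "avionics"}, "source": "public database"},
]

# Inverted index: skill -> everyone whose aggregated profile lists it.
index = defaultdict(list)
for person in profiles:
    for skill in person["skills"]:
        index[skill].append(person["name"])


def find_experts(skill):
    """Return everyone whose profile mentions the skill, regardless of job title."""
    return index.get(skill.lower(), [])


print(find_experts("UH-60"))  # ['A. Pilot', 'C. Engineer']
```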
Could such an expert-network platform revitalize the regulatory-review process? We might find out soon enough, thanks to the Food and Drug Administration…”