Andrew R Schrock in New Media and Society: “The civic hacker tends to be described as anachronistic, an ineffective “white hat” compared to more overtly activist cousins. By contrast, I argue that civic hackers’ politics emerged from a distinct historical milieu and include potentially powerful modes of political participation. The progressive roots of civic data hacking can be found in early 20th-century notions of “publicity” and the right to information movement. Successive waves of activists saw the Internet as a tool for transparency. The framing of openness shifted in meaning from information to data, weakening mechanisms for accountability even as it opened up new forms of political participation. Drawing on a year of interviews and participant observation, I suggest civic data hacking can be framed as a form of data activism and advocacy: requesting, digesting, contributing to, modeling, and contesting data. I conclude civic hackers are utopian realists involved in the crafting of algorithmic power and discussing ethics of technology design. They may be misunderstood because open data remediates previous forms of openness. In the process, civic hackers transgress established boundaries of political participation….(More)”
Linked Open Economy: Take Full Advantage of Economic Data
Paper by Michalis N. Vafopoulos et al: “For decades, information related to public finances was out of reach for most people. Gradually, public budgets and tenders are becoming openly available, and global initiatives promote fiscal transparency and open product and price data. But the poor quality of economic open data undermines its potential to answer interesting questions (e.g. the efficiency of public funds and market processes). Linked Open Economy (LOE) has been developed as a top-level conceptualization that interlinks publicly available economic open data by modelling the flows incorporated in public procurement together with the market process, in order to address complex policy issues. The LOE approach is extensively used to enrich open economic data ranging from budgets and spending to prices. Developers, professionals, public administrations and any other interested party use and customize the LOE model to develop new systems, to enable information exchange between systems, to integrate data from heterogeneous sources and to publish open data related to economic activities….(More)”
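A minimal sketch of the kind of interlinking LOE describes, using Python’s rdflib: a tender is linked to its buyer, the product procured, and an open market price for the same product, so that value-for-money questions can be asked across datasets. The ex: vocabulary and all values are invented for illustration; the real LOE model defines its own terms.

```python
# Toy sketch of linking procurement and market data as RDF, in the
# spirit of the LOE model. The "ex:" vocabulary is invented here.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/loe/")
g = Graph()
g.bind("ex", EX)

tender = EX["tender/2016-001"]
buyer = EX["org/city-hall"]
product = EX["product/asphalt"]

g.add((tender, RDF.type, EX.Tender))
g.add((tender, EX.buyer, buyer))
g.add((tender, EX.procures, product))
g.add((tender, EX.awardedAmount, Literal("120000.00", datatype=XSD.decimal)))
# Linking the same product to an open market price enables
# "did the public body overpay?"-style queries across datasets.
g.add((product, EX.marketPrice, Literal("95000.00", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```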
Open data and (15 million!) new measures of democracy
Joshua Tucker in the Washington Post: “Last month the University of Gothenburg’s V-Dem Institute released a new “Varieties of Democracy” dataset. It provides about 15 million data points on democracy, including 39 democracy-related indices. It can be accessed at v-dem.net along with supporting documentation. I asked Staffan I. Lindberg, Director of the V-Dem Institute and one of the directors of the project, a few questions about the new data. What follows is a lightly edited version of his answers.
Women’s Political Empowerment Index for Southeast Asia (Data: V-Dem data version 5; Figure: V-Dem Institute, University of Gothenburg, Sweden)
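For readers who want to produce a figure like the one above, a hedged sketch using pandas against the public V-Dem country-year CSV. The file name and column names are assumptions to check against the V-Dem codebook (v2x_gender is assumed here to be the women’s political empowerment index):

```python
# Hedged sketch: plot a V-Dem index for Southeast Asian countries.
# File and column names are assumptions -- verify against the codebook.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("V-Dem-DS-CY-v5.csv")  # country-year dataset (assumed name)
countries = ["Indonesia", "Philippines", "Thailand", "Vietnam", "Myanmar"]
subset = df[df["country_name"].isin(countries)]

for name, grp in subset.groupby("country_name"):
    plt.plot(grp["year"], grp["v2x_gender"], label=name)  # assumed index column

plt.xlabel("Year")
plt.ylabel("Women's Political Empowerment Index")
plt.legend()
plt.show()
```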
Joshua Tucker: What is democracy, and is it even really possible to have quantitative measures of democracy?
Staffan Lindberg: There is no consensus on the definition of democracy and how to measure it. The understanding of what a democracy really is varies across countries and regions. This motivates the V-Dem approach: not to offer one standard definition of the concept but instead to distinguish among five different principles of democracy: Electoral, Liberal, Participatory, Deliberative, and Egalitarian democracy. All of these principles have played prominent roles in current and historical discussions about democracy. Our measurement of these principles is based on two types of data, factual data collected by research assistants and survey responses by country experts, which are combined using a rather complex measurement model (a “custom-designed Bayesian ordinal item response theory model”; for details see the V-Dem Methodology document)….(More)”
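To make that measurement model slightly less abstract, here is a schematic of a generic Bayesian ordinal item response theory setup, not V-Dem’s exact specification: each expert’s ordinal rating is treated as a noisy, thresholded observation of a latent country-year trait.

```latex
% Schematic ordinal IRT (not the exact V-Dem specification).
% Expert r rates country-year ct on an ordinal scale k = 1, ..., K:
y^{*}_{ctr} = z_{ct} + \varepsilon_{ctr},
  \qquad \varepsilon_{ctr} \sim \mathcal{N}(0, \sigma_r^{2})
y_{ctr} = k
  \quad\Longleftrightarrow\quad
  \tau_{r,k-1} < y^{*}_{ctr} \le \tau_{r,k}
```

Expert-specific variances and thresholds let the model adjust for raters who are noisier or stricter than others, and Bayesian estimation yields point estimates with credible intervals for the latent trait z.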
Open government data and why it matters
Open data dusts off the art world
Suzette Lohmeyer at GCN: “Open data is not just for spreadsheets. Museums are finding ways to convert even the provenance of artwork into open data, offering an out-of-the-box lesson in accessibility to public sector agencies. The specific use case could be of interest to government as well — many cities and states have sizeable art collections, and the General Services Administration owns more than 26,000 pieces.
Most art pieces have a few skeletons in their closet, or at least a backstory worthy of The History Channel. That provenance, or ownership information, has traditionally been stored in manila folders, only occasionally dusted off by art historians for academic papers or auction houses to verify authenticity. Many museums have some provenance data in collection management systems, but the narratives that tell the history of the work are often stored as semi-structured data, formatted according to the needs of individual institutions, making the information both hard to search and share across systems.
Enter Art Tracks from Pittsburgh’s Carnegie Museum of Art (CMOA) — a new open source, open data initiative that aims to turn provenance into structured data by building a suite of open source software tools so an artwork’s past can be available to museum goers, curators, researchers and software developers.
….The Art Tracks software is all open source. The code libraries and the user-facing provenance entry tool called Elysa (E-lie-za) are all “available on GitHub for use, modification and tinkering,” Berg-Fulton explained. “That’s a newer way of working for our museum, but that openness gives others a chance to lean on our technical expertise and improve their own records and hopefully contribute back to the software to improve that as well.”
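As a hedged illustration of what “provenance as structured data” can mean in practice, each clause of a traditional narrative becomes a machine-readable ownership event. The field names and the naive parsing below are invented for this example; CMOA’s actual schema and tools live in the Art Tracks repositories on GitHub:

```python
# Toy sketch: one ownership event per clause of a provenance narrative.
# Field names are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProvenanceEvent:
    owner: str
    location: Optional[str]  # where the work was held
    date: Optional[str]      # dates are often approximate ("by 1888")

narrative = "M. Durand-Ruel, Paris, by 1888; sold to John Doe, New York, 1905"
events = []
for clause in narrative.split(";"):
    parts = [p.strip() for p in clause.split(",")]
    # Naive parse: real provenance text needs far more careful handling.
    events.append(ProvenanceEvent(
        owner=parts[0],
        location=parts[1] if len(parts) > 1 else None,
        date=parts[-1] if len(parts) > 2 else None,
    ))

for e in events:
    print(e)
```

Once events are structured this way, they can be searched, shared across collection systems, and checked for gaps, which is precisely what narrative text in a manila folder cannot do.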
Using an open data format, Berg-Fulton said, also creates opportunities for ongoing partnerships with other experts across the museum community so that provenance becomes a constant conversation.
This is a move Berg-Fulton said CMOA has been “dying to make,” because the more people that have access to data, the more ways it can be interpreted. “When you give people data, they do cool things with it, like help you make your own records better, or interpret it in a way you’ve never thought of,” she said. “It feels like the right thing to do in light of our duty to public trust.”….(More)”
The Promise and Perils of Open Medical Data
Read more: http://www.thehastingscenter.org/Publications/HCR/Detail.aspx?id=7731
Moving from Open Data to Open Knowledge: Announcing the Commerce Data Usability Project
Jeffrey Chen, Tyrone Grandison, and Kristen Honey at the US Department of Commerce: “…in 2016, the DOC is committed to building on this momentum with new and expanded efforts to transform open data into knowledge into action.

DOC has been in the business of open data for a long time. DOC’s National Oceanic and Atmospheric Administration (NOAA) alone collects and disseminates huge amounts of data that fuel the global weather economy—and this information represents just a fraction of the tens of thousands of datasets that DOC collects and manages, on topics ranging from satellite imagery to material standards to demographic surveys.
Unfortunately, far too many DOC datasets are hard to find, difficult to use, or not yet publicly available on Data.gov, the home of the U.S. government’s open data. This challenge is not exclusive to DOC; indeed, under Project Open Data, Federal agencies are working hard on various efforts to make taxpayer-funded data more easily discoverable.
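Discoverability is itself scriptable: Data.gov runs on CKAN, so its catalog can be queried through the standard CKAN action API. A minimal sketch (the search terms are arbitrary; verify endpoint details against the current Data.gov API documentation):

```python
# Search the Data.gov catalog (a CKAN instance) for datasets.
# Uses the standard CKAN package_search action.
import requests

resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "NOAA hail", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
for pkg in resp.json()["result"]["results"]:
    print(pkg["title"])
```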
One of these efforts is DOC’s Commerce Data Usability Project (CDUP). To unlock the power of data, just making data open isn’t enough. It’s critical to make data easier to find and use—to provide information and tools that make data accessible and actionable for all users. That’s why DOC formed a public-private partnership to create CDUP, a collection of online data tutorials that provide students, developers, and entrepreneurs with the context and code they need to start quickly extracting value from various datasets. Tutorials exist on topics such as:
- NOAA’s Severe Weather Data Inventory (SWDI), demonstrating how to use hail data to save life and property. The tutorial helps users see that hail events often occur in the summer (late night to early morning), and in midwestern and southern states (see the sketch after this list).
- Security vulnerability data from the National Institute of Standards and Technology (NIST). The tutorial helps users see that spikes and dips in security incidents consistently occur in the same set of weeks each year.
- Visible Infrared Imaging Radiometer Suite (VIIRS) data from the National Oceanic and Atmospheric Administration (NOAA). The tutorial helps users understand how to use satellite imagery to estimate populations.
- American Community Survey (ACS) data from the U.S. Census Bureau. The tutorial helps users understand how nonprofits can identify communities that they want to serve based on demographic traits.
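In the spirit of the SWDI hail tutorial above, a hedged sketch of tallying hail reports by month and by hour from a downloaded CSV extract. The file name and the “ZTIME” timestamp column (UTC, yyyymmddHHMMSS) are assumptions to check against the SWDI documentation:

```python
# Tally hail reports by month and hour from a SWDI CSV extract.
# File and column names are assumptions, not confirmed SWDI fields.
import pandas as pd

hail = pd.read_csv("swdi_hail_2015.csv")
ts = pd.to_datetime(hail["ZTIME"].astype(str), format="%Y%m%d%H%M%S")

print(ts.dt.month.value_counts().sort_index())  # expect a summer peak
print(ts.dt.hour.value_counts().sort_index())   # and overnight hours
```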
In the coming months, CDUP will continue to expand with a rich, diverse set of additional tutorials….(More)”
Designing a toolkit for policy makers
Laurence Grinyer at UK’s Open Policy Making Blog: “At the end of the last parliament, the Cabinet Office Open Policy Making team launched the Open Policy Making toolkit. This was about giving policy makers the actual tools that will enable them to develop policy that is well informed, creative, tested, and works. The starting point was addressing their needs and giving them what they had told us they needed to develop policy in an ever changing, fast paced and digital world. In a way, it was the culmination of the open policy journey we have been on with departments for the past 2 years. In the first couple of months we saw thousands of unique visits….
Our first version of the toolkit has been used by 20,000 policy makers. This gave us a huge audience to talk to, to make sure that we continue to meet the needs of policy makers and keep the toolkit relevant and useful. Although people have really enjoyed using the toolkit, user testing quickly showed us a few problems…
We knew what we needed to do. Help people understand what Open Policy Making was, how it impacted their policy making, and then to make it as simple as possible for them to know exactly what to do next.
So we came up with some quick ideas with pen and paper and tested them with people. We quickly discovered what not to do. People didn’t want a philosophy—they wanted to know exactly what to do, practical answers, and when to do it. They wanted a sort of design manual for policy….
How do we make user-centered design and open policy making as understood as agile?
We decided to organise the tools around the journey of a policy maker. What might a policy maker need to understand their users? How could they co-design ideas? How could they test policy? We looked at what tools and techniques they could use at the beginning, middle and end of a project, and organised tools accordingly.
We also added sections to remove confusion and hesitation. Our opening section ‘Getting started with Open Policy Making’ provides people with a clear understanding of what open policy making might mean to them, but also some practical considerations. Sections for limited timeframes and budgets help people realise that open policy can be done in almost any situation.
And finally we’ve created a much cleaner and simpler design that lets people show as much or as little of the information as they need….
So go and check out the new toolkit and make more open policy yourselves….(More)”
The Open (Data) Market
Sean McDonald at Medium: “Open licensing privatizes technology and data usability. How does that affect equality and accessibility?…The open licensing movement (open data, open source software, etc.) predicates its value on increasing accessibility and transparency by removing legal and ownership restrictions on use. The groups that advocate for open data and open source code, especially in government and publicly subsidized industries, often come from transparency, accountability, and freedom of information backgrounds. These efforts, however, significantly underestimate the costs of refining, maintaining, targeting, defining a value proposition, marketing, and presenting both data and products in ways that are effective and useful for the average person. Recent research suggests the primary beneficiaries of civic technologies — those specifically built on government data or services — are privileged populations. The World Bank’s recent World Development Report goes further to point out that public digitization can be a driver of inequality.
The dynamic of self-replicating privilege in both technology and open markets is not a new phenomenon. Social science research refers to it as the Matthew Effect, which says that in open or unregulated spaces, the privileged tend to become more privileged, while the poor become poorer. While there’s no question the advent of technology brings massive potential, it is already creating significant access and achievement divides. According to the Federal Communications Commission’s annual Broadband Progress report in 2015, 42% of students in the U.S. struggle to do their homework because of web access — and 39% of rural communities don’t even have a broadband option. Internet access skews toward urban, wealthy communities, with income, geography, and demographics all playing a role in adoption. Even further, research suggests that the rich and poor use technology differently. This runs counter to the narrative of Internet eventualism, which insists that it’s simply a (small) matter of time before these access (and skills) gaps close. Evidence suggests that for upper and middle income groups access is almost universal, but the gaps for low income groups are growing…(More)”
Open Data Is Changing the World in Four Ways…
Stefaan Verhulst at The GovLab Blog: “New repository of case studies documents the impact of open data globally: odimpact.org.
Despite global commitments to and increasing enthusiasm for open data, little is actually known about its use and impact. What kinds of social and economic transformation has open data brought about, and what is its future potential? How—and under what circumstances—has it been most effective? How have open data practitioners mitigated risks and maximized social good?
Even as proponents of open data extol its virtues, the field continues to suffer from a paucity of empirical evidence. This limits our understanding of open data and its impact.
Over the last few months, The GovLab (@thegovlab), in collaboration with Omidyar Network (@OmidyarNetwork), has worked to address these shortcomings by developing 19 detailed open data case studies from around the world. The case studies have been selected for their sectoral and geographic representativeness. They are built in part from secondary sources (“desk research”), and also from more than 60 first-hand interviews with important players and key stakeholders. In a related collaboration with Omidyar Network, Becky Hogge (@barefoot_techie), an independent researcher, has developed an additional six open data case studies, all focused on the United Kingdom. Together, these case studies seek to provide a more nuanced understanding of the various processes and factors underlying the demand, supply, release, use and impact of open data.
Today, after receiving and integrating comments from dozens of peer reviewers through a unique open process, we are delighted to share an initial batch of 10 case studies, as well as three of Hogge’s UK-based stories. These are being made available at a new custom-built repository, Open Data’s Impact (http://odimpact.org), which will eventually house all the case studies, key findings across the studies, and additional resources related to the impact of open data. All this information will be stored in machine-readable HTML and PDF format, and will be searchable by area of impact, sector and region….(More)”