Introducing Socrata’s Open Data Magazine: Open Innovation


“Socrata is dedicated to telling the story of open data as it evolves, which is why we have launched a quarterly magazine, ‘Open Innovation.’
As innovators push the open data movement forward, they are transforming government and public engagement at every level. With thousands of innovators all over the world – each with their own successes, advice, and ideas – there are a tremendous number of stories for us to tell.
The new magazine features articles, advice, infographics, and more dedicated exclusively to the open data movement. The first issue, Fall 2013, will cover topics such as:

  • What is a Chief Data Officer?
  • Who should be on your open data team?
  • How do you publish your first open data set?

It will also include four Socrata case studies and opinion pieces from some of the industry’s leading innovators…
The magazine is currently free to download or read online through the Socrata website. It is optimized for viewing on tablets and smartphones, with plans in the works to make the magazine available through the Kindle Fire and iTunes magazine stores.
Check out the first issue of Open Innovation at www.socrata.com/magazine.”

GovLab Seeks Open Data Success Stories


Wyatt Kash in InformationWeek: “A team of open government advocates, led by former White House aide Beth Noveck, has launched a campaign to identify 500 examples of how freely available government data is being put to profitable use in the private sector. Open Data 500 is part of a broader effort by New York University’s Governance Lab (GovLab) to conduct the “first real, comprehensive study of the use of open government data in the private sector,” said Joel Gurin, founder of OpenDataNow.com and senior adviser at GovLab.
Noveck, who served in the White House as the first U.S. deputy CTO and led the White House Open Government Initiative from 2009 to 2011, founded GovLab while also teaching at the MIT Media Lab and NYU’s Robert F. Wagner Graduate School of Public Service.
In an interview with InformationWeek Government, Gurin explained that the goal of GovLab, and the Open Data 500 project, is to show how technology and new uses of data can make government more effective, and create more of a partnership between government and the public. “We’re also trying to draw on more public expertise to solve government problems,” he said….
Gurin said Open Data 500 will primarily look at U.S.-based, revenue-producing companies or organizations where government data is a key resource for their business. While the GovLab will focus initially on the use of federal data, it will also look at cases where entrepreneurs are making use of state or local data, but in scalable fashion.
“This goes one step further than the datapaloozas” championed by U.S. CTO Todd Park to showcase tools developed by the private sector using government data. “We’re trying to show how we can make data sets even more impactful and useful.”
Gurin said the GovLab team hopes to complete the study by the end of this year. The team has already identified 150 companies as candidates. To submit your company for consideration, visit thegovlab.org/submit-your-company; to submit another company, visit thegovlab.org/open500.

Here’s how the Recovery Act became a test case for open data


Andrea Peterson in the Washington Post: “Making sure that government money is spent efficiently and without fraud can be difficult. You need to collect the right data, get the information to the right people, and deal with the sheer volume of projects that need tracking. Open data makes it easier to draw comparisons across programs and agencies. And when data are released to the public, everyone can act as a government watchdog.
When President Obama was first elected in 2008, he promised transparency. Almost immediately after he was sworn into office, he had an opportunity to test that promise with the implementation of the Recovery Act. And it worked….
Recovery.gov used geospatial technology to “allow Americans to drill down to their zip codes [and see] exactly where government money was being spent in their neighborhood.” It’s this micro-level of attention that increased accountability, according to Devaney.
“The degree of transparency forced them to get it right because they didn’t want to be embarrassed by their neighbors who they knew were going to these Web sites and could see what they were doing with the money.”
As to the second question of what data to collect: “I finally put my foot down and said no more than 100 pieces of data,” Devaney recalls. “So naturally, we came up to 99.” Of course, even limited to that number of data points, transparency and fraud prevention were a daunting task, with some 300,000 grantees to keep tabs on.
But having those data points in an open format was what allowed investigators to use “sophisticated cyber-technology and software to review and analyze Recovery-related data and information for any possible concerns or issues.” And they were remarkably successful on that end. A status report in October 2010 showed “less than 0.2 percent of all reported awards currently have active fraud investigations.” Indeed, Devaney says that throughout his tenure leading the board, the level of fraud hovered somewhere below half of one percent of all awards.”

OpenPrism


Thomas Levine: “There are loads of open data portals. There’s even a portal about data portals. And each of these portals has loads of datasets.
OpenPrism is my most recent attempt at understanding what is going on in all of these portals. Read on if you want to see why I made it, or just go to the site and start playing with it.

People don’t know much about open data

Nobody seems to know what is in the data portals. Many people know about datasets that are relevant to their work, municipality, &c., but nobody seems to know about the availability of data on broader topics, and nobody seems to have a good way of finding out what is available.
If someone does know any of this, he probably works for an open data portal. Still, he probably doesn’t know much about what is going on in other portals.

Naive search method

One difficulty in discovering open data is the search paradigm.
Open data portals approach searching data as if data were normal prose; your search terms are some keywords, a category, &c., and your results are dataset titles and descriptions.
There are other approaches. For example, AppGen searches for datasets with the same variables as each other, and the results are automatically generated app prototypes.
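To make this keyword paradigm concrete, a single-portal search amounts to something like the following minimal Python sketch, which queries CKAN’s package_search action and prints dataset titles and descriptions (demo.ckan.org stands in for any CKAN portal):

    import requests

    def ckan_search(portal, query):
        # CKAN matches keywords against dataset titles, descriptions,
        # and tags -- the "normal prose" search paradigm described above.
        resp = requests.get(
            portal + "/api/3/action/package_search",
            params={"q": query, "rows": 10},
            timeout=10,
        )
        resp.raise_for_status()
        return [(d["title"], (d.get("notes") or "")[:80])
                for d in resp.json()["result"]["results"]]

    for title, notes in ckan_search("https://demo.ckan.org", "parking"):
        print(title, "-", notes)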

Siloed open data portals

Another issue is that people tend to use data from only one portal; they use their local government’s portal or their organization’s portal.
Let me give you a couple of examples of why this might be different. Perhaps I’m considering making an app to help people find parking, and I want to see what parking lot data are available before I put much work into the app. Or maybe I want to find all of the data about sewer overflows so that I can expand my initiative to reduce water pollution.
OpenPrism is one small attempt at making it easier to search. Rather than going to all of the different portals and making a separate search for each portal, you type your search in one search bar, and you get results from a bunch of different Socrata, CKAN and Junar portals.”
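The fan-out idea behind that kind of cross-portal search can be sketched in a few lines of Python: send the query to each portal family’s own search API and merge the results. (The portal list below is illustrative, Socrata’s catalog is reached here through its shared Discovery API, and Junar, whose API requires a key, is omitted; this is a sketch of the idea, not OpenPrism’s actual code.)

    import requests

    CKAN_PORTALS = ["https://demo.ckan.org", "https://catalog.data.gov"]

    def search_ckan(portal, query):
        # Every CKAN portal exposes the same package_search action.
        r = requests.get(portal + "/api/3/action/package_search",
                         params={"q": query, "rows": 5}, timeout=10)
        r.raise_for_status()
        return [(portal, d["title"]) for d in r.json()["result"]["results"]]

    def search_socrata(query):
        # Socrata-hosted portals are searchable together via one catalog API.
        r = requests.get("https://api.us.socrata.com/api/catalog/v1",
                         params={"q": query, "limit": 5}, timeout=10)
        r.raise_for_status()
        return [(d["metadata"]["domain"], d["resource"]["name"])
                for d in r.json()["results"]]

    def federated_search(query):
        # One search bar, many portals: merge hits from every source.
        hits = search_socrata(query)
        for portal in CKAN_PORTALS:
            hits.extend(search_ckan(portal, query))
        return hits

    for source, title in federated_search("sewer overflow"):
        print(source, "-", title)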

Open Access


Reports by the UK House of Commons Business, Innovation and Skills Committee: “Open access refers to the immediate, online availability of peer reviewed research articles, free at the point of access (i.e. without subscription charges or paywalls). Open access relates to scholarly articles and related outputs. Open data (which is a separate area of Government policy and outside the scope of this inquiry) refers to the availability of the underlying research data itself. At the heart of the open access movement is the principle that publicly funded research should be publicly accessible. Open access expanded rapidly in the late twentieth century with the growth of the internet and digitisation (the transcription of data into a digital form), as it became possible to disseminate research findings more widely, quickly and cheaply.
Whilst there is widespread agreement that the transition to open access is essential in order to improve access to knowledge, there is a lack of consensus about the best route to achieve it. To achieve open access at scale in the UK, there will need to be a shift away from the dominant subscription-based business model. Inevitably, this will involve a transitional period and considerable change within the scholarly publishing market.
For the UK to transition to open access, an effective, functioning and competitive market in scholarly communications will be vital. The evidence we saw over the course of this inquiry shows that this is currently far from the case, with journal subscription prices rising at rates that are unsustainable for UK universities and other subscribers. There is a significant risk that the Government’s current open access policy will inadvertently encourage and prolong the dysfunctional elements of the scholarly publishing market, which are a major barrier to access.
See Volume I and Volume II.

Understanding the impact of releasing and re-using open government data


New report by the European Public Sector Information Platform: “While the last decade has seen a tremendously rapid proliferation of open data portals and of tools and applications for re-using data, research on and understanding of the impact of opening up public sector information and open government data (OGD hereinafter) has lagged behind.
Until now, there have been some research efforts to structure the concept of the impact of OGD, suggesting various theories of change, methodologies for measuring it, or, in some cases, concrete calculations of the financial benefits that opening government data brings to the table. For instance, the European Commission conducted a study on the pricing of public sector information, which attempted to evaluate the direct and indirect economic impact of opening public data and identified key indicators to monitor the effects of open data portals. Also, the Open Data Research Network issued a background report in April 2012 suggesting a general framework of key indicators to measure the impact of open data initiatives at both the provision and re-use stages.
Building on the research efforts to date, this report will reflect upon the main types of impact OGD may have and will also present key measuring frameworks to observe the change OGD initiatives may bring about.”

Open data for accountable governance: Is data literacy the key to citizen engagement?


at UNDP’s Voices of Eurasia blog: “How can technology connect citizens with governments, and how can we foster, harness, and sustain the citizen engagement that is so essential to anti-corruption efforts?
UNDP has worked on a number of projects that use technology to make it easier for citizens to report corruption to authorities:

These projects are showing some promising results, and provide insights into how a more participatory, interactive government could develop.
At the heart of the projects is the ability to use citizen generated data to identify and report problems for governments to address….

Wanted: Citizen experts

As Kenneth Cukier, The Economist’s Data Editor, has discussed, data literacy will become the new computer literacy. Big data is still nascent and it is impossible to predict exactly how it will affect society as a whole. What we do know is that it is here to stay and data literacy will be integral to our lives.
It is essential that we understand how to interact with big data and the possibilities it holds.
Data literacy needs to be integrated into the education system. Educating non-experts to analyze data is critical to enabling broad participation in this new data age.
As technology advances, key government functions become automated, and government data sharing increases, newer ways for citizens to engage will multiply.
Technology changes rapidly, but human minds and societal habits do not. After years of closed government and bureaucratic inefficiency, adopting a new approach to governance will take time and education.
We need to bring up a generation that sees being involved in government decisions as normal, and that views participatory government as a right, not an ‘innovative’ service extended by governments.

What now?

In the meantime, while data literacy lies in the hands of a few, we must continue to connect those who have the technological skills with citizen experts seeking to change their communities for the better – as has been done at many Social Innovation Camps recently (in Montenegro, Ukraine, and Armenia at Mardamej and Mardamej Reloaded, and across the region at Hurilab).
The social innovation camp and hackathon models are an increasingly debated topic (covered by Susannah Vila, David Eaves, Alex Howard and Clay Johnson).
On the whole, evaluations are leading to newer models that focus on greater integration of mentorship to increase sustainability – which I readily support. However, I do have one comment:
Social innovation camps are often criticized for a lack of sustainability – a claim based on the limited number of apps that go beyond the prototype phase. I find a certain sense of irony in this, for isn’t this what innovation is about: Opening oneself up to the risk of failure in the hope of striking something great?
In the words of Vinod Khosla:

“No failure means no risk, which means nothing new.”

As more data is released, the opportunity for new apps and new ways for citizen interaction will multiply and, who knows, someone might come along and transform government just as TripAdvisor transformed the travel industry.”

Public Open Data: The Good, the Bad, the Future


at IDEALAB: “Some of the most powerful tools combine official public data with social media or other citizen input, such as the recent partnership between Yelp and the public health departments in New York and San Francisco for restaurant hygiene inspection ratings. In other contexts, such tools can help uncover and ultimately reduce corruption by making it easier to “follow the money.”
Despite the opportunities offered by “free data,” this trend also raises new challenges and concerns, among them, personal privacy and security. While attention has been devoted to the unsettling power of big data analysis and “predictive analytics” for corporate marketing, similar questions could be asked about the value of public data. Does it contribute to community cohesion that I can find out with a single query how much my neighbors paid for their house or (if employed by public agencies) their salaries? Indeed, some studies suggest that greater transparency leads not to greater trust in government but to resignation and apathy.
Exposing certain law enforcement data also increases the possibility of vigilantism. California law requires the registration and publication of the home addresses of known sex offenders, for instance. Or consider the controversy and online threats that erupted when, shortly after the Newtown tragedy, a newspaper in New York posted an interactive map of gun permit owners in nearby counties.
…Policymakers and officials must still mind the “big data gap.” So what does the future hold for open data? Publishing data is only one part of the information ecosystem. To be useful, tools must be developed for cleaning, sorting, analyzing and visualizing it as well. …
For-profit companies and non-profit watchdog organizations will continue to emerge and expand, building on the foundation of this data flood. Public-private partnerships such as those between San Francisco and Appallicious or Granicus, startups created by Code for America’s Incubator, and non-partisan organizations like the Sunlight Foundation and MapLight rely on public data repositories for their innovative applications and analysis.
Making public data more accessible is an important goal and offers enormous potential to increase civic engagement. To make the most effective and equitable use of this resource for the public good, cities and other government entities should invest in the personnel and equipment — hardware and software — to make it universally accessible. At the same time, Chief Data Officers (or equivalent roles) should also be alert to the often hidden challenges of equity, inclusion, privacy, and security.”

The Other Side of Open is Not Closed


Dazza Greenwood at Civics.com: “Implicitly, the opposite of “open” is “closed,” but the other side of open data, open APIs, and open access is usually still about enabling access, only when allowed or required. Open government also needs to include adequate methods to access and work with data and other resources that are not fully open. In fact, access to much (most?) high-value, mission-critical, and societally important data is restricted in some way. If a dataset is not a fully public record, then a good practice is to think of it as “protected” and to ensure access according to proper controls.
As a metaphorical illustration, you could look at an open data system as a village square or agora, architected and intended to be broadly accessible. On the other side of the spectrum, you could see a protected data system as more like a castle or garrison: architected to be secure from intruders, but featuring guarded gates and controlled access points in order to function.
In fact, this same conceptual approach applies well beyond data and includes everything you could consider a resource on the Internet. In other words, any asset, service, process or other item that can exist at a URL (or URI) is a resource and can be positioned somewhere on a spectrum from openly accessible to access protected. It is easy to forget that the “R” in URL stands for “Resource,” and the whole wonderful web connects to resources of every nature and description. Data – structured, raw or otherwise – is just the tip of the iceberg.
Resources on the web could be apps and other software, large-scale enterprise network services, or just a single text file with a few lines of HTML. The concept of enabling permissioned access to “protected resources” on the web is the cornerstone of OAuth2 and is now being extended by the OpenID Connect standard, the User Managed Access protocol and other specifications to enable a powerful array of REST-based authorization possibilities…”
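A minimal sketch of that cornerstone pattern (the endpoints, client credentials, and scope below are hypothetical): the client first obtains an access token at the guarded gate, then presents it as a Bearer credential when requesting the protected resource.

    import requests

    # Hypothetical endpoints; any OAuth2-protected resource follows this shape.
    TOKEN_URL = "https://auth.example.gov/oauth2/token"
    RESOURCE_URL = "https://data.example.gov/protected/inspections.json"

    def get_token(client_id, client_secret):
        # Client-credentials grant (RFC 6749, section 4.4): the controlled gate.
        r = requests.post(TOKEN_URL, data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "read:inspections",   # hypothetical scope
        }, timeout=10)
        r.raise_for_status()
        return r.json()["access_token"]

    def fetch_protected_resource(token):
        # Bearer token usage (RFC 6750): access is enabled, but only as allowed.
        r = requests.get(RESOURCE_URL,
                         headers={"Authorization": "Bearer " + token},
                         timeout=10)
        r.raise_for_status()
        return r.json()

    token = get_token("my-client-id", "my-client-secret")
    print(fetch_protected_resource(token))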

A promising phenomenon of open data: A case study of the Chicago open data project


Paper by Maxat Kassen in Government Information Quarterly: “This article presents a case study of the open data project in the Chicago area. The main purpose of the research is to explore the empowering potential of the open data phenomenon at the local level as a platform for promoting civic engagement projects, and to provide a framework for future research and hypothesis testing. Today the main challenge in realizing any e-government project is the traditional top-down administrative mechanism by which it is implemented, practically without any input from members of civil society. In this respect, the author argues that the open data concept, realized at the local level, may provide a real platform for promoting proactive civic engagement. By harnessing the collective wisdom of local communities, their knowledge, and their visions of local challenges, governments could react to and meet citizens’ needs in a more productive and cost-efficient manner. Open data-driven projects focused on the visualization of environmental issues, the mapping of utility management, the evaluation of political lobbying, social benefits, closing the digital divide, etc. are only some examples of such perspectives. These projects are perhaps harbingers of a new political reality in which interactions among citizens at the local level will play a more important role than communication between civil society and government, due to the empowering potential of the open data concept.”