Digital Enlightenment Yearbook 2014


Book edited by O’Hara, K., Nguyen, M-H.C. and Haynes, P.: “Tracking the evolution of digital technology is no easy task; changes happen so fast that keeping pace presents quite a challenge. This is, nevertheless, the aim of the Digital Enlightenment Yearbook.
This book is the third in the series which began in 2012 under the auspices of the Digital Enlightenment Forum. This year, the focus is on the relationship of individuals with their networks, and explores “Social networks and social machines, surveillance and empowerment”. In what is now the well-established tradition of the yearbook, different stakeholders in society and various disciplinary communities (technology, law, philosophy, sociology, economics, policymaking) bring their very different opinions and perspectives to bear on this topic.
The book is divided into four parts: the individual as data manager; the individual, society and the market; big data and open data; and new approaches. These are bookended by a Prologue and an Epilogue, which provide illuminating perspectives on the discussions in between. The division of the book is not definitive; it suggests one narrative, but others are clearly possible.
The 2014 Digital Enlightenment Yearbook gathers together the science, social science, law and politics of the digital environment in order to help us reformulate and address the timely and pressing questions which this new environment raises. We are all of us affected by digital technology, and the subjects covered here are consequently of importance to us all. (Contents)”

Would You Share Private Data for the Good of City Planning?


Henry Grabar at NextCity: “The proliferation of granular data on automobile movement, drawn from smartphones, cab companies, sensors and cameras, is sharpening our sense of how cars travel through cities. Panglossian seers believe the end of traffic jams is nigh.
This information will change cities beyond their roads. Real-time traffic data may lead to reworked intersections and new turning lanes, but understanding cars is in some ways a stand-in for understanding people. There’s traffic as traffic and traffic as proxy, notes Brett Goldstein, an urban science fellow at the University of Chicago who served as that city’s first data officer from 2011 to 2013. “We’d be really naive, in thinking about how we make cities better,” he says, “to only consider traffic for what it is.”
Even a small subset of a city’s car data goes a long way. Consider the raft of discrete findings that have emerged from the records of New York City taxis.
Researchers at the Massachusetts Institute of Technology, led by Paolo Santi, showed that cab-sharing could reduce taxi mileage by 40 percent. Their counterparts at NYU, led by Claudio Silva, mapped activity around hubs like train stations and airports and during hurricanes.
“You start to build actual models of how people move, and where they move,” observes Silva, the head of disciplines at NYU’s Center for Urban Science and Progress (CUSP). “The uses of this data for non-traffic engineering are really substantial.”…
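The intuition behind the MIT cab-sharing result can be sketched with a toy model: pair up trips whose combined route is shorter than serving each one separately. The sketch below is an illustration only (straight-line distances, one fixed stop ordering, greedy pairing), not the shareability-network method Santi's team actually used:

```python
import math
from itertools import combinations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def solo_miles(trip):
    pickup, dropoff = trip
    return dist(pickup, dropoff)

def shared_miles(t1, t2):
    # One simple shared tour: pickup1 -> pickup2 -> dropoff1 -> dropoff2.
    # A real matcher would try all feasible orderings and enforce delay limits.
    p1, d1 = t1
    p2, d2 = t2
    return dist(p1, p2) + dist(p2, d1) + dist(d1, d2)

def greedy_pairing(trips):
    """Greedily pair trips by largest mileage saving; return total miles saved."""
    savings = []
    for i, j in combinations(range(len(trips)), 2):
        s = solo_miles(trips[i]) + solo_miles(trips[j]) - shared_miles(trips[i], trips[j])
        if s > 0:
            savings.append((s, i, j))
    savings.sort(reverse=True)
    used, saved = set(), 0.0
    for s, i, j in savings:
        if i not in used and j not in used:
            used.update((i, j))
            saved += s
    return saved

# Two near-identical trips share almost perfectly; the third is a poor match.
trips = [((0, 0), (10, 0)), ((0.1, 0), (10.1, 0)), ((0, 5), (0, 9))]
total_solo = sum(solo_miles(t) for t in trips)
saved = greedy_pairing(trips)
print(f"Fleet miles saved by sharing: {saved / total_solo:.0%}")
```

Even this crude version shows how quickly overlapping trips compound into large fleet-level savings.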
Many of these ideas are hypothetical, for the moment, because so-called “granular” data is so hard to come by. That’s one reason the release of New York’s taxi cab data spurred so many studies — it’s an oasis of information in a desert of undisclosed records. Corporate entreaties, like Uber’s pending data offering to Boston, don’t always meet researchers’ standards. “It’s going to be a lot of superficial data, and it’s not clear how usable it’ll be at this point,” explains Sarah Kaufman, the digital manager at NYU’s Rudin Center for Transportation….
Yet Americans seem much more alarmed by the collection of location data than by other privacy breaches.
How can data utopians convince the hoi polloi to share their comings and goings? One thought: make the data secure. Mike Flowers, the founder of New York City’s Office of Data Analytics and a fellow at NYU’s CUSP, told me it might be time to consider establishing a quasi-governmental body that people would trust to make personal data anonymous before it is channeled into government projects. (New York City’s Taxi and Limousine Commission did not do a very good job at this, which led to Gawker publishing a dozen celebrity cab rides.)
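The TLC's failure is instructive: the released trip records pseudonymised medallion numbers with an unsalted hash, and because valid medallion numbers form a tiny keyspace, the hashes could be brute-forced back to the original IDs. The sketch below demonstrates the attack on a toy four-character medallion format and contrasts it with a keyed HMAC, which a trusted body could use instead; the format and key here are invented for illustration:

```python
import hashlib
import hmac
import itertools
import string

def weak_pseudonym(medallion):
    # Unsalted hash: deterministic, so a small ID space can be enumerated.
    return hashlib.md5(medallion.encode()).hexdigest()

def crack(pseudonym):
    # Enumerate the entire toy medallion format (digit, letter, digit, digit):
    # only 26,000 candidates, recoverable in well under a second.
    for d1, l, d2, d3 in itertools.product(
            string.digits, string.ascii_uppercase, string.digits, string.digits):
        candidate = f"{d1}{l}{d2}{d3}"
        if weak_pseudonym(candidate) == pseudonym:
            return candidate
    return None

SECRET_KEY = b"held-by-the-trusted-body"  # hypothetical key, never published

def keyed_pseudonym(medallion):
    # HMAC with a secret key: without the key, brute force is infeasible,
    # yet the same medallion still maps to the same stable pseudonym.
    return hmac.new(SECRET_KEY, medallion.encode(), hashlib.sha256).hexdigest()

leaked = weak_pseudonym("5X41")
print("recovered medallion:", crack(leaked))
```

The keyed variant preserves exactly the linkability researchers need (one stable ID per cab) while leaving outsiders nothing to enumerate.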
Another idea is to frame open data as a beneficial trade-off. “When people provide information, they want to realize the benefit of the information,” Goldstein says.
Users tell the routing company Waze where they are and get a smoother commute in return. Progressive Insurance offers drivers a “Snapshot” tracker. If it likes the way you drive, the company will lower your rates. It’s not hard to imagine that, in the long run, drivers will be penalized for refusing such a device…. (More).”

Open Data Is Finally Making A Dent In Cities


Brooks Rainwater at Co-Exist: “As with a range of leading issues, cities are at the vanguard of this shifting environment. Through increased measurement, analysis, and engagement, open data will further solidify the centrality of cities.
In Chicago, the voice of the mayor counts for a lot, and Mayor Emanuel has been at the forefront in supporting and encouraging open data in the city, resulting in a strong open government community. The city has more than 600 datasets online, and has seen millions of page views on its data portal. The public benefits have accrued widely with civic initiatives like Chicagolobbyists.org, as well as a myriad of other open data-led endeavors.
Transparency is one of the great promises of open data. Petitioning the government is a fundamental tenet of democracy, and many government relations professionals perform this task brilliantly. Transparency is good for the city; it is also good for citizens and democracy. Through Chicagolobbyists.org, anyone can now see how many lobbyists are working in the city, how much they are spending, who they are talking to, and when it is happening.
Throughout the country, we are seeing data-driven sites and apps like this that engage citizens, enhance services, and provide a rich understanding of government operations. In Austin, a grassroots movement has formed around the advocacy organization Open Austin. Through hackathons and other opportunities, citizens are getting involved, services are improving, and businesses are being built.
Data can even find your dog, reducing the number of stray animals being sheltered, with StrayMapper.com. The site has a simple map-based web portal where you can type in whether you are missing a dog or cat, when you lost them, and where. That information is then plugged into the data being collected by the city on stray animals. This project, developed by a Code for America brigade team, helps the city improve its rate of returning pets to owners.
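A matcher in the spirit of StrayMapper can be sketched in a few lines: compare a lost-pet report against the city's stray intake records by species, date window, and distance. The field names and thresholds below are invented for illustration and are not StrayMapper's actual schema:

```python
import math
from dataclasses import dataclass
from datetime import date

@dataclass
class PetRecord:
    species: str        # "dog" or "cat"
    seen_on: date
    lat: float
    lon: float

def km_between(a, b):
    # Haversine great-circle distance in kilometres.
    lat1, lon1, lat2, lon2 = map(math.radians, (a.lat, a.lon, b.lat, b.lon))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def candidate_matches(report, strays, max_km=3.0, max_days=14):
    """City intake records plausibly matching a lost-pet report."""
    return [s for s in strays
            if s.species == report.species
            and abs((s.seen_on - report.seen_on).days) <= max_days
            and km_between(report, s) <= max_km]

report = PetRecord("dog", date(2015, 3, 1), 41.602, -87.337)
strays = [
    PetRecord("dog", date(2015, 3, 4), 41.610, -87.340),  # close in space and time
    PetRecord("cat", date(2015, 3, 4), 41.610, -87.340),  # wrong species
    PetRecord("dog", date(2015, 6, 1), 41.610, -87.340),  # months too late
]
print(len(candidate_matches(report, strays)), "candidate match(es)")
```

The real value, as the article notes, comes from the city side: the intake records already exist, so the matcher only needs the resident's report.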
It’s not only animals that get lost or at least can’t find the best way home. I’ve found myself in that situation too. Thanks to Ridescout, incubated in Washington, D.C., at 1776, I have been able to easily find the best way home. Through the use of open data available from both cities and the Department of Transportation, Ridescout created an app that is an intuitive mobility tool. By showing me all of the available options from transit to ridesharing to my own two feet, it frequently helps me get from place to place in the city. It looks like it wasn’t just me that found this app to be handy; Daimler recently acquired Ridescout as the auto giant continues its own expansion into the data driven mobility space.”

The downside of Open Data


Joshua Chambers at FutureGov: “…Inaccurate public datasets can cause big problems, because apps that feed off them could be giving out false information. I was struck by this when we reported on an app in Australia that was issuing alerts for forest fires that didn’t exist. The data came from public emergency calls, but wasn’t verified before being displayed. This meant that app users would be alerted to all possible fires, but could also be panicked unnecessarily. The government takes the view that more alerts are better than slower, verified ones, but there is the potential for people to stop trusting the app’s alerts altogether.
No-one wants to publish inaccurate data, but accuracy takes time and costs money. So we come to a central tension in discussions about open data: is it better to publish more data, with the risk of inaccuracy, or limit publication to datasets which are accurate?
The United Kingdom takes the view that more data is best. I interviewed the UK’s lead official on open data, Paul Maltby, a couple of years ago, and he told me that: “There’s a misnomer here that everything has to be perfect before you can put it out,” adding that “what we’re finding is that, actually, some of the datasets are a bit messy. We try to keep them as high-quality as we can; but other organisations then clean up the data and sell it on”.
Indeed, he noted that some officials use data accuracy as an excuse to not publish information that could hold their departments to account. “There’s sometimes a reluctance to get data out from the civil service; and whilst we see many examples of people understanding the reasons why data has been put to use, I’d say the general default is still not pro-release”.
Other countries take a different view, however. Singapore, for example, publishes much less data than Britain, but has more of a push on making its data accurate to assist startups and app builders….(More)”

Open Data Barometer (second edition)


The second edition of the Open Data Barometer: “A global movement to make government “open by default” picked up steam in 2013 when the G8 leaders signed an Open Data Charter – promising to make public sector data openly available, without charge and in re-useable formats. In 2014 the G20 largest industrial economies followed up by pledging to advance open data as a tool against corruption, and the UN recognized the need for a “Data Revolution” to achieve global development goals.
However, this second edition of the Open Data Barometer shows that there is still a long way to go to put the power of data in the hands of citizens. Core data on how governments are spending our money and how public services are performing remains inaccessible or paywalled in most countries. Information critical to fight corruption and promote fair competition, such as company registers, public sector contracts, and land titles, is even harder to get. In most countries, proactive disclosure of government data is not mandated in law or policy as part of a wider right to information, and privacy protections are weak or uncertain.
Our research suggests some of the key steps needed to ensure the “Data Revolution” will lead to a genuine revolution in the transparency and performance of governments:

  • High-level political commitment to proactive disclosure of public sector data, particularly the data most critical to accountability
  • Sustained investment in supporting and training a broad cross-section of civil society and entrepreneurs to understand and use data effectively
  • Contextualizing open data tools and approaches to local needs, for example by making data visually accessible in countries with lower literacy levels
  • Support for city-level open data initiatives as a complement to national-level programmes
  • Legal reform to ensure that guarantees of the right to information and the right to privacy underpin open data initiatives

Over the next six months, world leaders have several opportunities to agree these steps, starting with the United Nations’ high-level conference on the data revolution in Africa in March, Canada’s global International Open Data Conference in May and the G7 summit in Germany this June. It is crucial that these gatherings result in concrete actions to address the political and resource barriers that threaten to stall open data efforts….(More)”.

Exploring the Factors Influencing the Adoption of Open Government Data by Private Organisations


Article by Maaike Kaasenbrood et al. in the International Journal of Public Administration in the Digital Age (IJPADA): “Governments are increasingly opening their datasets, allowing others to use them. Drawing on a multi-method approach, this paper develops a framework for identifying factors influencing the adoption of Open Government Data (OGD) by private organisations. Subsequently, the framework was used to analyse five cases. The findings reveal that for private organisations to use OGD, the content and source of the data need to be clear, a usable open data license must be present and continuity of data updates needs to be ensured. For none of the investigated private organisations was OGD key to their existence; organisations use OGD in addition to, or as an enhancement of, their core activities. As official OGD channels are often bypassed, trustworthy relationships between the data user and the data provider were found to play an important role in finding and using OGD. The findings of this study can help government agencies in developing OGD policies and stimulating OGD use….(More).”

The Next 5 Years in Open Data: 3 Key Trends to Watch


Kevin Merritt (Socrata Inc.) at GovTech: “2014 was a pivotal year in the evolution of open data for one simple and powerful reason – it went mainstream and was widely adopted on just about every continent. Open data is now table stakes. Any government that is not participating in open data is behind its peers…The move toward data-driven government will absolutely accelerate between 2015 and 2020, thanks to three key trends.

1. Comparative Analytics for Government Employees

The first noteworthy trend that will drive open data change in 2015 is that open data technology offerings will deliver first-class benefits to public-sector employees. This means government employees will be able to derive enormous insights from their own data and act on them in a deep, meaningful and analytical way. Until only recently, the primary beneficiaries of open data initiatives were external stakeholders: developers and entrepreneurs; scientists, researchers, analysts, journalists and economists; and ordinary citizens lacking technical training. The open data movement, until now, has ignored an important class of stakeholders – government employees….

2. Increased Global Expansion for Open Data

The second major trend fueling data-driven government is that 2015 will be a year of accelerating adoption of open data internationally.
Right now, for example, open data is being adopted prolifically in Europe, Latin America, Australia, New Zealand and Canada.
….
We will continue to see international governments adopt open data in 2015 for a variety of reasons. Northern European governments, for instance, are interested in efficiency and performance right now; Southern European governments, on the other hand, are currently focused on transparency, trust, and credibility. Despite the different motivations, the open data technology solutions are the same. And, looking out beyond 2015, it’s important to note that Southern European governments will also adopt open data to help increase job creation and improve delivery of services.

3. “Open Data” Will Simply Become “Government Data”

The third trend that we’ll see in the arena of open data lies a little further out on the horizon, and it will be surprising. In my opinion, the term “open data” may disappear within a decade; and in its place will simply be the term “government data.”
That’s because virtually all government data will be open data by 2020; and government data will be everywhere it needs to be – available to the public as fast as it’s created, processed and accumulated….(More).”

The Participatory Approach to Open Data


At the Smart Chicago Collaborative: “…Having vast stores of government data is great, but making this data useful – powerful – takes a different type of approach. The next step in the open data movement will be about participatory data.

Systems that talk back

One of the great advantages of Chicago’s 311 ServiceTracker is that when you submit something to the system, it has the capacity to talk back, giving you a tracking number and an option to get email updates about your request. What also happens is that as soon as you enter your request, the data gets automatically uploaded to the city’s data portal, giving other 311 apps, like SeeClickFix, access to the information as well…
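The "talk back" pattern is simple to sketch: issue a tracking number at intake, and mirror the record to an open feed in the same step so publication is a side effect of filing, not an afterthought. The class below is a hypothetical miniature, not Chicago's actual 311 system:

```python
import uuid

class ServiceTracker:
    """Toy 311-style intake: every request gets a tracking number
    and is mirrored to an open-data feed in the same step."""

    def __init__(self):
        self.open_data_feed = []   # stands in for the city's data portal

    def submit(self, category, address):
        tracking_id = uuid.uuid4().hex[:8].upper()
        record = {"tracking_id": tracking_id,
                  "category": category,
                  "address": address,
                  "status": "open"}
        self.open_data_feed.append(record)  # published the moment it is filed
        return tracking_id                  # the system "talks back"

    def status(self, tracking_id):
        for r in self.open_data_feed:
            if r["tracking_id"] == tracking_id:
                return r["status"]
        return None

tracker = ServiceTracker()
tid = tracker.submit("pothole", "1234 W Example Ave")
print(tid, tracker.status(tid))
```

Because the open feed and the internal record are one and the same, third-party apps see exactly what the resident sees, with no separate export pipeline to fall out of date.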

Participatory Legislative Apps

We already see a number of apps that allow users to actively participate using legislative data.
At the federal level, apps like PopVox allow users to find and track legislation making its way through Congress. The app then lets users vote on whether they approve or disapprove of a particular bill, and explain their reasoning in a message that will be sent to all of their elected officials. The app makes it easier for residents to send feedback on legislation by creating a user interface that cuts through the somewhat difficult process of keeping tabs on legislation.
At the state level, New York’s OpenLegislation site allows users to search for state legislation and provide commentary on each resolution.
At the local level, apps like Councilmatic allow users to post comments on city legislation – but these comments aren’t mailed or sent to aldermen the way PopVox’s are. The interaction only works if the aldermen are also using Councilmatic to receive feedback…

Crowdsourced Data

Chicago has hardwired several datasets into its computer systems, meaning that this data is automatically updated as the city does the people’s business.
But city governments can’t be everywhere at once. There are a number of apps designed to gather information from residents to better understand what’s going on in their cities.
In Gary, Indiana, the city partnered with the University of Chicago and LocalData to collect information on the state of its buildings. LocalData is also being used in Chicago, Houston, and Detroit by both city governments and non-profit organizations.
Another method the City of Chicago has been using to crowdsource data has been to put several of their datasets on GitHub and accept pull requests on that data. (A pull request is when one developer makes a change to a code repository and asks the original owner to merge the new changes into the original repository.) An example of this is bikers adding private bike rack locations to the city’s own bike rack dataset.
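Accepting pull requests on a dataset implies that a maintainer, or an automated check, validates contributed rows before merging them. A hypothetical validation pass for a contributed bike-rack CSV might look like the sketch below; the column names and bounding box are invented for illustration, not the city's actual schema:

```python
import csv
import io

REQUIRED = ("rack_id", "latitude", "longitude")

# Rough bounding box around Chicago, used to reject out-of-town coordinates.
CHICAGO_BOUNDS = {"lat": (41.6, 42.1), "lon": (-87.95, -87.5)}

def validate_contribution(csv_text):
    """Return (accepted_rows, errors) for a contributed bike-rack CSV."""
    accepted, errors = [], []
    for n, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=1):
        if any(not row.get(f) for f in REQUIRED):
            errors.append(f"row {n}: missing required field")
            continue
        try:
            lat, lon = float(row["latitude"]), float(row["longitude"])
        except ValueError:
            errors.append(f"row {n}: coordinates are not numbers")
            continue
        lat_lo, lat_hi = CHICAGO_BOUNDS["lat"]
        lon_lo, lon_hi = CHICAGO_BOUNDS["lon"]
        if not (lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi):
            errors.append(f"row {n}: coordinates outside the city")
            continue
        accepted.append(row)
    return accepted, errors

sample = """rack_id,latitude,longitude
R-100,41.88,-87.63
R-101,40.71,-74.00
"""
ok, errs = validate_contribution(sample)
print(len(ok), "accepted;", len(errs), "rejected")
```

A check like this is what makes the GitHub model workable for official data: the city keeps editorial control while still merging outside contributions.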

Going from crowdsourced to participatory

Shareabouts is a mapping platform by OpenPlans that gives cities the ability to collect resident input on city infrastructure. Chicago’s Divvy bikeshare program is using the tool to collect resident feedback on where new Divvy stations should go. The app allows users to comment on suggested locations and share the discussion on social media.
But perhaps the most unique participatory app has been piloted by the City of South Bend, Indiana. CityVoice is a Code for America fellowship project designed to get resident feedback on abandoned buildings in South Bend…. (More)”

Businesses dig for treasure in open data


Lindsay Clark in ComputerWeekly: “Open data, a movement which promises access to vast swaths of information held by public bodies, has started getting its hands dirty, or rather its feet.
Before a spade goes in the ground, construction and civil engineering projects face a great unknown: what is down there? In the UK, should someone discover anything of archaeological importance, a project can be halted – sometimes for months – while researchers study the site and remove artefacts….
During an open innovation day hosted by the Science and Technologies Facilities Council (STFC), open data services and technology firm Democrata proposed analytics could predict the likelihood of unearthing an archaeological find in any given location. This would help developers understand the likely risks to construction and would assist archaeologists in targeting digs more accurately. The idea was inspired by a presentation from the Archaeological Data Service in the UK at the event in June 2014.
The proposal won support from the STFC which, together with IBM, provided a nine-strong development team and access to the Hartree Centre’s supercomputer – a 131,000-core high-performance computing facility. For natural language processing of historic documents, the system uses two components of IBM’s Watson – the AI service that famously won the US TV quiz show Jeopardy!. The system uses SPSS modelling software, the language R for algorithm development and Hadoop data repositories….
The proof of concept draws together data from the University of York’s archaeological data, the Department of the Environment, English Heritage, Scottish Natural Heritage, Ordnance Survey, Forestry Commission, Office for National Statistics, the Land Registry and others….The system analyses sets of indicators of archaeology, including historic population dispersal trends, specific geology, flora and fauna considerations, as well as proximity to a water source, a trail or road, standing stones and other archaeological sites. Earlier studies created a list of 45 indicators which was whittled down to seven for the proof of concept. The team used logistic regression to assess the relationship between input variables and come up with its prediction….”
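To give a flavour of that final modelling step, here is a minimal logistic regression fitted by plain batch gradient descent on synthetic "indicator" data. The indicators, their effect sizes and the training setup are invented for illustration and bear no relation to Democrata's actual model or data:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(rows, labels, lr=0.5, epochs=2000):
    """Batch gradient descent on the log-loss; returns (weights, bias)."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for x, y in zip(rows, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for k in range(len(w)):
                gw[k] += err * x[k]
            gb += err
        for k in range(len(w)):
            w[k] -= lr * gw[k] / len(rows)
        b -= lr * gb / len(rows)
    return w, b

# Synthetic indicators: [near_water, near_road] as 0/1 flags.
random.seed(7)
rows, labels = [], []
for _ in range(400):
    water, road = random.randint(0, 1), random.randint(0, 1)
    # In this toy world, water proximity strongly raises find probability.
    p = sigmoid(2.5 * water - 0.5 * road - 1.0)
    rows.append([water, road])
    labels.append(1 if random.random() < p else 0)

w, b = fit_logistic(rows, labels)
print(f"water weight {w[0]:+.2f}, road weight {w[1]:+.2f}")
```

The fitted weights recover the direction and rough strength of each indicator's relationship to finds, which is exactly the kind of output the Democrata team would turn into a risk prediction for a given site.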