Statistics and Open Data: Harvesting unused knowledge, empowering citizens and improving public services


House of Commons Public Administration Committee (Tenth Report):
“1. Open data is playing an increasingly important role in Government and society. It is data that is accessible to all, free of restrictions on use or redistribution and also digital and machine-readable so that it can be combined with other data, and thereby made more useful. This report looks at how the vast amounts of data generated by central and local Government can be used in open ways to improve accountability, make Government work better and strengthen the economy.

2. In this inquiry, we examined progress against a series of major government policy announcements on open data in recent years, and considered the prospects for further development. We heard of government open data initiatives going back some years, including the decision in 2009 to release some Ordnance Survey (OS) data as open data, and the Public Sector Mapping Agreement (PSMA), which makes OS data available for free to the public sector. The 2012 Open Data White Paper ‘Unleashing the Potential’ says that transparency through open data is “at the heart” of the Government’s agenda and that opening up data would “foster innovation and reform public services”. In 2013, the independently-chaired review of the use, re-use, funding and regulation of Public Sector Information, led by Stephan Shakespeare, Chief Executive of the market research and polling company YouGov, urged Government to move fast to make use of data. He criticised traditional public service attitudes to data before setting out his vision:

    • To paraphrase the great retailer Sir Terry Leahy, to run an enterprise without data is like driving by night with no headlights. And yet that is what Government often does. It has a strong institutional tendency to proceed by hunch, or prejudice, or by the easy option. So the new world of data is good for government, good for business, and above all good for citizens. Imagine if we could combine all the data we produce on education and health, tax and spending, work and productivity, and use that to enhance the myriad decisions which define our future; well, we can, right now. And Britain can be first to make it happen for real.

3. This was followed by publication in October 2013 of a National Action Plan which sets out the Government’s view of the economic potential of open data as well as its aspirations for greater transparency.

4. This inquiry is part of our wider programme of work on statistics and their use in Government. A full description of the studies is set out under the heading “Statistics” in the inquiries section of our website, which can be found at www.parliament.uk/pasc. For this inquiry we received 30 pieces of written evidence and took oral evidence from 12 witnesses. We are grateful to all those who have provided evidence and to our Specialist Adviser on statistics, Simon Briscoe, for his assistance with this inquiry.”

Table of Contents:

Summary
1 Introduction
2 Improving accountability through open data
3 Open data and economic growth
4 Improving Government through open data
5 Moving faster to make a reality of open data
6 A strategic approach to open data?
Conclusion
Conclusions and recommendations

How Twitter Could Help Police Departments Predict Crime


Eric Jaffe in Atlantic Cities: “Initially, Matthew Gerber didn’t believe Twitter could help predict where crimes might occur. For one thing, Twitter’s 140-character limit leads to slang and abbreviations and neologisms that are hard to analyze from a linguistic perspective. Beyond that, while criminals occasionally taunt law enforcement via Twitter, few are dumb or bold enough to tweet their plans ahead of time. “My hypothesis was there was nothing there,” says Gerber.
But then, that’s why you run the data. Gerber, a systems engineer at the University of Virginia’s Predictive Technology Lab, did indeed find something there. He reports in a new research paper that public Twitter data improved the predictions for 19 of 25 crime types that occurred early last year in metropolitan Chicago, compared with predictions based on historical crime patterns alone. Predictions for stalking, criminal damage, and gambling saw the biggest bump…
Of course, the method says nothing about why Twitter data improved the predictions. Gerber speculates that people are tweeting about plans that correlate highly with illegal activity, as opposed to tweeting about crimes themselves.
Let’s use criminal damage as an example. The algorithm identified 700 Twitter topics related to criminal damage; of these, one topic involved the words “united center blackhawks bulls” and so on. Gather enough sports fans with similar tweets and some are bound to get drunk enough to damage public property after the game. Again, this scenario extrapolates far more than the data tell us, but it offers a possible window into the algorithm’s predictive power.

The map on the left shows predicted crime threat based on historical patterns; the one on the right includes Twitter data. (Via Decision Support Systems)
From a logistical standpoint, it wouldn’t be too difficult for police departments to use this method in their own predictions; both the Twitter data and the modeling software Gerber used are freely available. The big question, he says, is whether a department uses the same historical crime “hot spot” data as a baseline for comparison. If not, a new round of tests would be needed to show that adding Twitter data still offers a predictive upgrade.
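
The ingredients are indeed off the shelf. The sketch below is a simplified, synthetic-data illustration of the pipeline the paper describes – a kernel-density “hot spot” baseline from historical incidents, augmented with topic-model features from nearby tweets – not Gerber’s actual code or Chicago data.

```python
# Simplified sketch: kernel-density hotspot baseline + LDA topic features
# from tweets, combined in a classifier. All data here is synthetic.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cells = 50  # grid cells covering the city

# Baseline: log-density of past incidents, scored at each grid cell
past_incidents = rng.random((200, 2))         # (x, y) of historical crimes
grid = rng.random((n_cells, 2))               # cell centroids to score
kde = KernelDensity(bandwidth=0.05).fit(past_incidents)
hotspot = kde.score_samples(grid)             # one hotspot score per cell

# Twitter features: topic mixture of tweets posted near each cell
vocab = ["united", "center", "blackhawks", "bulls", "game", "bar", "home"]
docs = [" ".join(rng.choice(vocab, size=12)) for _ in range(n_cells)]
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_mix = lda.fit_transform(counts)         # cells x topic proportions

# Combine both feature sets; labels say whether a crime later occurred
X = np.column_stack([hotspot, topic_mix])
y = rng.integers(0, 2, size=n_cells)          # synthetic outcome labels
model = LogisticRegression(max_iter=1000).fit(X, y)
threat = model.predict_proba(X)[:, 1]         # ranked threat surface
```

In practice a department’s own incident records would supply the labels, and the model would be scored on held-out periods rather than its training data.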
There’s also the matter of public acceptance. Data-driven crime prediction tends to raise any number of civil rights concerns. In 2012, privacy advocates criticized the FBI for a similar plan to use Twitter for crime predictions. In recent months the Chicago Police Department’s own methods have been knocked as a high-tech means of racial profiling. Gerber says his algorithms don’t target any individuals and only cull data posted voluntarily to a public account.”

Building a More Open Government


Corinna Zarek at the White House: “It’s Sunshine Week again—a chance to celebrate transparency and participation in government and freedom of information. Every year in mid-March, we take stock of our progress and where we are headed to make our government more open for the benefit of citizens.
In December 2013, the Administration announced 23 ambitious commitments to further open up government over the next two years in the U.S. Government’s second Open Government National Action Plan. Those commitments are now all underway or in development, including:
·         Launching an improved Data.gov: The updated Data.gov debuted in January 2014 and continues to grow, with thousands of updated or new government data sets being proactively made available to the public (a sketch of querying the catalog programmatically follows this list).
·         Increasing public collaboration: Through crowdsourcing, citizen science, and other methods, Federal agencies continue to expand the ways they collaborate with the public. For example, the National Aeronautics and Space Administration recently launched its third Asteroid Grand Challenge, a broad call to action seeking the best and brightest ideas from non-traditional partners to enhance and accelerate the work NASA is already doing for planetary defense.
·         Improving We the People: The online petition platform We the People gives the public a direct way to participate in their government and is currently incorporating improvements to make it easier for the public to submit petitions and signatures.”
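
Data.gov runs on the open-source CKAN catalog software, which exposes a JSON search API alongside the website, so the growing inventory is scriptable as well as browsable. A minimal sketch – the search term is only an example:

```python
# Minimal sketch: searching the Data.gov catalog via CKAN's standard
# package_search action; the query term "climate" is just an example.
import requests

resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "climate", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]

print(result["count"], "matching datasets")
for dataset in result["results"]:
    print("-", dataset["title"])  # each dataset also lists its resources
```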

New Field Guide Explores Open Data Innovations in Disaster Risk and Resilience


World Bank: “From Indonesia to Bangladesh to Nepal, community members armed with smartphones and GPS systems are contributing to some of the most extensive and versatile maps ever created, helping inform policy and better prepare their communities for disaster risk.
In Jakarta, more than 500 community members have been trained to collect data on thousands of hospitals, schools, private buildings, and critical infrastructure. In Sri Lanka, government and academic volunteers mapped over 30,000 buildings and 450 km of roadways using a collaborative online resource called OpenStreetMap.
These are just a few of the projects that have been catalyzed by the Open Data for Resilience Initiative (OpenDRI), developed by the World Bank’s Global Facility for Disaster Reduction and Recovery (GFDRR). Launched in 2011, OpenDRI is active in more than 20 countries today, mapping tens of thousands of buildings and urban infrastructure, providing more than 1,000 geospatial datasets to the public, and developing innovative application tools.
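
Because OpenStreetMap’s underlying data is open in bulk, anyone can retrieve what these volunteers map. A rough sketch using the community-run Overpass API; the bounding box is an arbitrary illustrative area around Colombo:

```python
# Minimal sketch: counting mapped OpenStreetMap building footprints in a
# (south, west, north, east) bounding box via the public Overpass API.
import requests

query = """
[out:json][timeout:60];
way["building"](6.89, 79.85, 6.95, 79.90);
out count;
"""
resp = requests.post("https://overpass-api.de/api/interpreter",
                     data={"data": query}, timeout=90)
resp.raise_for_status()
# Overpass returns the totals as a pseudo-element with count tags
print(resp.json()["elements"][0]["tags"])
```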
To expand this work, the World Bank Group has launched the OpenDRI Field Guide as a showcase of successful projects and a practical guide for governments and other organizations to shape their own open data programs….
The field guide walks readers through the steps to build open data programs based on the OpenDRI methodology. One of the first steps is data collation: relevant datasets are often locked behind proprietary arrangements or fragmented across government bureaucracies. The field guide explores tools and methods to enable the participatory mapping projects that can fill in gaps and keep existing data relevant as cities rapidly expand.

GeoNode: Mapping Disaster Damage for Faster Recovery
One example is GeoNode, a locally controlled, open-source cataloguing tool that helps manage and visualize geospatial data. The tool, already in use in two dozen countries, can be modified and easily integrated into existing platforms, giving communities greater control over mapping information.
GeoNode was used extensively after Typhoon Yolanda (Haiyan) swept the Philippines last fall with 300 km/h winds and a storm surge of over six meters. The storm displaced nearly 11 million people and killed more than 6,000.
An event-specific GeoNode project was created immediately and ultimately collected more than 72 layers of geospatial data, from damage assessments to situation reports. The data and quick-analysis capability contributed to recovery efforts, and the platform is still operating in response mode at Yolandadata.org.
InaSAFE: Targeting Risk Reduction
A sister project, InaSAFE, is an open, easy-to-use tool for creating impact assessments for targeted risk reduction. The assessments are based on how a hazard layer – such as a tsunami, flood, or earthquake – affects exposure data, such as population or buildings.
With InaSAFE, users can generate maps and statistical information that can be easily disseminated and even fed back into projects like GeoNode for simple, open source sharing.
The initiative, developed in collaboration with AusAID and the Government of Indonesia, was put to the test in the 2012 flood season in Jakarta, and its successes prompted a rapid national rollout and widespread interest from the international community.
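
InaSAFE itself runs as a plugin inside the QGIS desktop application, but the overlay at its core – intersecting a hazard layer with an exposure layer – can be sketched with standard open-source geospatial libraries. The file and attribute names below are placeholders:

```python
# Minimal sketch of a hazard-vs-exposure overlay in the spirit of InaSAFE
# (which is actually a QGIS plugin); file/attribute names are placeholders.
import geopandas as gpd

hazard = gpd.read_file("flood_zones.geojson")    # hazard polygons
buildings = gpd.read_file("buildings.geojson")   # exposure: footprints

# Keep only the buildings that intersect a flood polygon
affected = gpd.sjoin(buildings, hazard, predicate="intersects")
print(f"{len(affected)} of {len(buildings)} buildings exposed to flooding")

# Breakdown by severity, assuming the hazard layer carries a
# 'depth_class' attribute (an assumption for illustration)
if "depth_class" in affected.columns:
    print(affected.groupby("depth_class").size())
```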
Open Cities: Improving Urban Planning & Resilience
The Open Cities project, another program operating under the OpenDRI platform, aims to catalyze the creation, management and use of open data to produce innovative solutions for urban planning and resilience challenges across South Asia.
In 2013, Kathmandu was chosen as a pilot city, in part because its population faces the world’s highest earthquake mortality risk. Under the project, teams from the World Bank assembled partners and community mobilizers to help execute the largest regional community mapping project to date. The project surveyed more than 2,200 schools and 350 health facilities, along with road networks, points of interest, and digitized building footprints – representing nearly 340,000 individual data nodes.”

After the Protests


Zeynep Tufekci in the New York Times on why social media is fueling a boom-and-bust cycle of political protest: “LAST Wednesday, more than 100,000 people showed up in Istanbul for a funeral that turned into a mass demonstration. No formal organization made the call. The news had come from Twitter: Berkin Elvan, 15, had died. He had been hit in the head by a tear-gas canister on his way to buy bread during the Gezi protests last June. During the 269 days he spent in a coma, Berkin’s face had become a symbol of civic resistance shared on social media from Facebook to Instagram, and the response, when his family tweeted “we lost our son” and then a funeral date, was spontaneous.

Protests like this one, fueled by social media and erupting into spectacular mass events, look like powerful statements of opposition against a regime. And whether these take place in Turkey, Egypt or Ukraine, pundits often speculate that the days of a ruling party or government, or at least its unpopular policies, must be numbered. Yet often these huge mobilizations of citizens inexplicably wither away without the impact on policy you might expect from their scale.

This muted effect is not because social media isn’t good at what it does, but, in a way, because it’s very good at what it does. Digital tools make it much easier to build up movements quickly, and they greatly lower coordination costs. This seems like a good thing at first, but it often results in an unanticipated weakness: Before the Internet, the tedious work of organizing that was required to circumvent censorship or to organize a protest also helped build infrastructure for decision making and strategies for sustaining momentum. Now movements can rush past that step, often to their own detriment….

But after all that, in the approaching local elections, the ruling party is expected to retain its dominance.

Compare this with what it took to produce and distribute pamphlets announcing the Montgomery bus boycott in 1955. Jo Ann Robinson, a professor at Alabama State College, and a few students sneaked into the duplicating room and worked all night to secretly mimeograph 52,000 leaflets to be distributed by hand with the help of 68 African-American political, religious, educational and labor organizations throughout the city. Even mundane tasks like coordinating car pools (in an era before there were spreadsheets) required endless hours of collaborative work.

By the time the United States government was faced with the March on Washington in 1963, the protest amounted to not just 300,000 demonstrators but the committed partnerships and logistics required to get them all there — and to sustain a movement for years against brutally enforced Jim Crow laws. That movement had the capacity to leverage boycotts, strikes and demonstrations to push its cause forward. Recent marches on Washington of similar sizes, including the 50th anniversary march last year, also signaled discontent and a desire for change, but just didn’t pose the same threat to the powers that be.

Social media can provide a huge advantage in assembling the strength in numbers that movements depend on. Those “likes” on Facebook, derided as slacktivism or clicktivism, can have long-term consequences by defining which sentiments are “normal” or “obvious” — perhaps among the most important levers of change. That’s one reason the same-sex marriage movement, which uses online and offline visibility as a key strategy, has been so successful, and it’s also why authoritarian governments try to ban social media.

During the Gezi protests, Prime Minister Recep Tayyip Erdogan called Twitter and other social media a “menace to society.” More recently, Turkey’s Parliament passed a law greatly increasing the government’s ability to censor online content and expand surveillance, and Mr. Erdogan said he would consider blocking access to Facebook and YouTube. It’s also telling that one of the first moves by President Vladimir V. Putin of Russia before annexing Crimea was to shut down the websites of dissidents in Russia.
Media in the hands of citizens can rattle regimes. It makes it much harder for rulers to maintain legitimacy by controlling the public sphere. But activists, who have made such effective use of technology to rally supporters, still need to figure out how to convert that energy into greater impact. The point isn’t just to challenge power; it’s to change it.”

The data gold rush


Neelie Kroes (European Commission):  “Nearly 200 years ago, the industrial revolution saw new networks take over. Not just a new form of transport, the railways connected industries, connected people, energised the economy, transformed society.
Now we stand facing a new industrial revolution: a digital one.
With cloud computing its new engine, big data its new fuel. Transporting the amazing innovations of the internet, and the internet of things. Running on broadband rails: fast, reliable, pervasive.
My dream is that Europe takes its full part. With European industry able to supply, European citizens and businesses able to benefit, European governments able and willing to support. But we must get all those components right.
What does it mean to say we’re in the big data era?
First, it means more data than ever at our disposal. Take all the information of humanity from the dawn of civilisation until 2003 – nowadays that is produced in just two days. We are also acting to have more and more of it become available as open data, for science, for experimentation, for new products and services.
Second, we have ever more ways – not just to collect that data – but to manage it, manipulate it, use it. That is the magic to find value amid the mass of data. The right infrastructure, the right networks, the right computing capacity and, last but not least, the right analysis methods and algorithms help us break through the mountains of rock to find the gold within.
Third, this is not just some niche product for tech-lovers. The impact and difference to people’s lives are huge: in so many fields.
Transforming healthcare, using data to develop new drugs, and save lives. Greener cities with fewer traffic jams, and smarter use of public money.
A business boost: like retailers who communicate smarter with customers, for more personalisation, more productivity, a better bottom line.
No wonder big data is growing 40% a year. No wonder data jobs grow fast. No wonder skills and profiles that didn’t exist a few years ago are now hot property: and we need them all, from data cleaner to data manager to data scientist.
This can make a difference to people’s lives. Wherever you sit in the data ecosystem – never forget that. Never forget that real impact and real potential.
Politicians are starting to get this. The EU’s Presidents and Prime Ministers have recognised the boost to productivity, innovation and better services from big data and cloud computing.
But those technologies need the right environment. We can’t go on struggling with poor quality broadband. With each country trying on its own. With infrastructure and research that are individual and ineffective, separate and subscale. With different laws and practices shackling and shattering the single market. We can’t go on like that.
Nor can we continue in an atmosphere of insecurity and mistrust.
Recent revelations show what is possible online. They show implications for privacy, security, and rights.
You can react in two ways. One is to throw up your hands and surrender. To give up and put big data in the box marked “too difficult”. To turn away from this opportunity, and turn your back on problems that need to be solved, from cancer to climate change. Or – even worse – to simply accept that Europe won’t figure on this map but will be reduced to importing the results and products of others.
Alternatively: you can decide that we are going to master big data – and master all its dependencies, requirements and implications, including cloud and other infrastructures, Internet of Things technologies, as well as privacy and security. And do it on our own terms.
And by the way – privacy and security safeguards do not just have to be about protecting and limiting. Data generates value, and unlocks the door to new opportunities: you don’t need to “protect” people from their own assets. What you need is to empower people, give them control, give them a fair share of that value. Give them rights over their data – and responsibilities too, and the digital tools to exercise them. And ensure that the networks and systems they use are affordable, flexible, resilient, trustworthy, secure.
One thing is clear: the answer to greater security is not just to build walls. Many millennia ago, the Greek people realised that. They realised that you can build walls as high and as strong as you like – it won’t make a difference, not without the right awareness, the right risk management, the right security, at every link in the chain. If only the Trojans had realised that too! The same is true in the digital age: keep our data locked up in Europe, engage in an impossible dream of isolation, and we lose an opportunity; without gaining any security.
But master all these areas, and we would truly have mastered big data. Then we would have shown that technology can take account of democratic values; and that a dynamic democracy can cope with technology. Then we would have a boost to benefit every European.
So let’s turn this asset into gold. With the infrastructure to capture and process. Cloud capability that is efficient, affordable, on-demand. Let’s tackle the obstacles, from standards and certification, trust and security, to ownership and copyright. With the right skills, so our workforce can seize this opportunity. With new partnerships, getting all the right players together. And investing in research and innovation. Over the next two years, we are putting 90 million euros on the table for big data and 125 million for the cloud.
I want to respond to this economic imperative. And I want to respond to the call of the European Council – looking at all the aspects relevant to tomorrow’s digital economy.
You can help us build this future. All of you. Helping to bring about the digital data-driven economy of the future. Expanding and deepening the ecosystem around data. New players, new intermediaries, new solutions, new jobs, new growth…”

Climate Data Initiative Launches with Strong Public and Private Sector Commitments


John Podesta and Dr. John P. Holdren at the White House blog:  “…today, delivering on a commitment in the President’s Climate Action Plan, we are launching the Climate Data Initiative, an ambitious new effort bringing together extensive open government data and design competitions with commitments from the private and philanthropic sectors to develop data-driven planning and resilience tools for local communities. This effort will help give communities across America the information and tools they need to plan for current and future climate impacts.
The Climate Data Initiative builds on the success of the Obama Administration’s ongoing efforts to unleash the power of open government data. Since data.gov, the central site to find U.S. government data resources, launched in 2009, the Federal government has released troves of valuable data that were previously hard to access in areas such as health, energy, education, public safety, and global development. Today these data are being used by entrepreneurs, researchers, tech innovators, and others to create countless new applications, tools, services, and businesses.
Data from NOAA, NASA, the U.S. Geological Survey, the Department of Defense, and other Federal agencies will be featured on climate.data.gov, a new section within data.gov that opens for business today. The first batch of climate data being made available will focus on coastal flooding and sea level rise. NOAA and NASA will also be announcing an innovation challenge calling on researchers and developers to create data-driven simulations to help plan for the future and to educate the public about the vulnerability of their own communities to sea level rise and flood events.
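
Much of this coastal data is already reachable through public web services as well as bulk downloads; for example, NOAA’s Tides & Currents (CO-OPS) service returns observed water levels as JSON. A rough sketch, with an illustrative station ID:

```python
# Rough sketch: fetching the latest observed water level from NOAA's
# Tides & Currents (CO-OPS) service; station and parameters illustrative.
import requests

resp = requests.get(
    "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter",
    params={
        "product": "water_level",
        "station": "8518750",   # The Battery, New York Harbor
        "date": "latest",
        "datum": "MLLW",
        "units": "metric",
        "time_zone": "gmt",
        "format": "json",
    },
    timeout=30,
)
resp.raise_for_status()
for obs in resp.json()["data"]:
    print(obs["t"], obs["v"], "m above MLLW")
```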
These and other Federal efforts will be amplified by a number of ambitious private commitments. For example, Esri, the company that produces the ArcGIS software used by thousands of city and regional planning experts, will be partnering with 12 cities across the country to create free and open “maps and apps” to help state and local governments plan for climate change impacts. Google will donate one petabyte—that’s 1,000 terabytes—of cloud storage for climate data, as well as 50 million hours of high-performance computing with the Google Earth Engine platform. The company is challenging the global innovation community to build a high-resolution global terrain model to help communities build resilience to anticipated climate impacts in decades to come. And the World Bank will release a new field guide for the Open Data for Resilience Initiative, which is working in more than 20 countries to map millions of buildings and urban infrastructure….”

Quantified Health – It’s Just A Phase, Get Over It. Please.


Geoff McCleary at PSFK: “The near-ubiquitous acceptance of smartphones and mobile internet access has ushered in a new wave of connected devices and smart objects that help us compile and track an unprecedented amount of previously unavailable data.
This quantification of self, which used to be the sole domain of fitness fanatics and professional athletes, is now being expanded out and applied to everything from how we drive and interface with our cars, to homes that adapt around us, to our daily interactions with others. But the most exciting application of this approach has to be the quantification of health – from how much time we spend on the couch, to how frequently a symptom flares up, even to how adherent we are with our medications.
But this new phase of quantified health is just that – it’s just a phase. How many steps a patient takes is a meaningless data point, unless the information means something to the patient. How many pills we take isn’t going to tell us if we are getting better.
Over time, we begin to see correlations between some of the data points: on the days a user takes their pill, they average 3,000 more steps, but that still doesn’t tell us what is getting better. We can see that when users get a daily pill reminder, they refill their prescriptions twice as often as other users. As marketers, that information makes us happy, but does it make the patient any healthier? Can’t we both be happy?
We can pretty the data up with shiny infographics and widgets, but unless there is meaningful context to that data it is just a nicely organized set of data points. So, what will make a difference? What will get us out of the dark ages of quantified health and into the enlightened age of Personalized Health? What will need to change to get me the treatment I need because of who I am – on a genetic level?…
Our history, our future, our uniqueness and our sameness mean nothing if we cannot get this information on-demand, in real time. This information has to be available when we need it (and when we don’t) on whatever screen is handy, in whatever setting we are in. Our physicians need access to our information, and they need it in the context of how others have dealt with the same situation.
This access can only be enabled by a cloud-based, open health profile. As quantified self gave way to quantified health, quantified health must give way to Qualitative Health. This cloud-based profile of our health past, present and future will need to be both quantified and qualitative: based not only on numbers and raw data, but on relevance, context and meaning; based not on a database or an app, but in the cloud, where personal information will be accessible by whomever we designate, our sameness open and shareable with all – with all contributing to the meaning of our data, and physicians interacting in an informed, consistent manner across our entire health being, instead of just the 20 minutes a year when they see us.
That is truly health care, and I cannot wait for it to get here.”

The Open Data/Environmental Justice Connection


Jeffrey Warren for the Wilson Center’s Commons Lab: “… Open data initiatives seem to assume that all data is born in the hallowed halls of government, industry and academia, and that open data is primarily about convincing such institutions to share it with the public.
It is laudable when institutions with important datasets — such as campaign finance, pollution or scientific data — see the benefit of opening it to the public. But why do we assume unilateral control over data production?
The revolution in user-generated content shows the public has a great deal to contribute – and to gain – from the open data movement. Likewise, citizen science projects that solicit submissions or “task completion” from the public rarely invite higher-level participation in research – let alone true collaboration.
This has to change. Data isn’t just something you’re given if you ask nicely, or a kind of community service we perform to support experts. Increasingly, new technologies make it possible for local groups to generate and control data themselves — especially in environmental health. Communities on the front line of pollution’s effects have the best opportunities to monitor it and the most to gain by taking an active role in the research process.
DIY Data
Luckily, an emerging alliance between the maker/Do-It-Yourself (DIY) movement and watchdog groups is starting to challenge the conventional model.
The Smart Citizen project, the Air Quality Egg and a variety of projects in the Public Lab network are recasting members of the general public as actors in the framing of new research questions and designers of a new generation of data tools.
The Riffle, a sub-$100 water-quality sensor built inside hardware-store pipe, can be left in a creek near an industrial site to collect data around the clock for weeks or months. In the near future, when pollution happens – like the ash spill in North Carolina or the chemical spill in West Virginia – the public will be alerted and able to track its effects without depending on expensive equipment or distant labs.
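
A logger like the Riffle typically writes a plain timestamped CSV, which makes community analysis straightforward. A minimal sketch that flags sudden departures from the running baseline – the file layout, column names and three-sigma threshold are assumptions for illustration, not part of the Riffle’s actual firmware:

```python
# Minimal sketch: flagging anomalies in a hypothetical Riffle-style
# conductivity log (CSV columns 'timestamp' and 'conductivity' assumed).
import pandas as pd

log = pd.read_csv("riffle_log.csv", parse_dates=["timestamp"])

# Rolling baseline: ~1 day of readings at a 15-minute logging interval
baseline = log["conductivity"].rolling(window=96, min_periods=24)
zscore = (log["conductivity"] - baseline.mean()) / baseline.std()

# Sudden departures from the baseline may indicate a pollution event
spikes = log[zscore.abs() > 3]
print(spikes[["timestamp", "conductivity"]])
```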
This emerging movement is recasting environmental issues not as intractably large problems, but as up-close-and-personal health issues — just what environmental justice (EJ) groups have been arguing for years. The difference is that these new initiatives hybridize EJ community organizing with the technology hacking of the open hardware movement. Just as the Homebrew Computer Club’s tinkering with early prototypes led to the personal computer, a new generation of tinkerers sees that their affordable, accessible techniques can make an immediate difference in investigating lead in their backyard soil, nitrates in their tap water and particulate pollution in the air they breathe.
These practitioners see that environmental data collection is not a distant problem in a developing country, but an issue that anyone in a major metropolitan area, or an area affected by oil and gas extraction, faces on a daily basis. Though underserved communities are often disproportionately affected, these threats often transcend socioeconomic boundaries…”

“Open-washing”: The difference between opening your data and simply making them available


Christian Villum at the Open Knowledge Foundation Blog:  “Last week, the Danish IT magazine Computerworld, in an article entitled “Check-list for digital innovation: These are the things you must know”, emphasised how more and more companies are discovering that giving your users access to your data is a good business strategy. Among other things, they wrote:

(Translation from Danish) According to Accenture, it is becoming clear to many progressive businesses that their data should be treated like any other supply chain: it should flow easily and unhindered through the whole organisation and perhaps even out into the whole ecosystem – for instance through fully open APIs.

They then use Google Maps as an example, which firstly isn’t entirely correct, as also pointed out by the Neogeografen, a geodata blogger, who explains how Google Maps isn’t offering raw data, but merely an image of the data. You are not allowed to download and manipulate the data – or run it off your own server.

But secondly, I don’t think it’s very appropriate to highlight Google and their Maps project as a golden example of a business that lets its data flow unhindered to the public. It’s true that they are offering some data, but only in a very limited way – and definitely not as open data – and thereby not as progressively as the article suggests.

To be sure, it’s hard to accuse Google of not being progressive in general. The article notes that Google Maps’ data are used by over 800,000 apps and businesses across the globe. So yes, Google has opened its silo a little bit, but only in a very controlled and limited way, which leaves these 800,000 businesses dependent on the continual flow of data from Google and thereby not in control of the very commodities they’re basing their business on. This particular way of releasing data brings me to the problem that we’re facing: knowing the difference between making data available and making them open.

Open data is characterized not only by being available, but by being both legally open (released under an open license that allows full and free reuse, conditioned at most on giving credit to its source and sharing under the same license) and technically available in bulk and in machine-readable formats – contrary to the case of Google Maps. It may be that their data are available, but they’re not open. This – among other reasons – is why the global community around the 100% open alternative OpenStreetMap is growing rapidly and an increasing number of businesses choose to base their services on this open initiative instead.

But why is it important that data are open and not just available? Open data strengthens society and builds a shared resource, where all users, citizens and businesses are enriched and empowered, not just the data collectors and publishers. “But why would businesses spend money on collecting data and then give them away?” you ask. Opening your data and making a profit are not mutually exclusive. A quick Google search reveals many businesses that both offer open data and drive a business on them – and I believe these are the ones that should be highlighted as particularly progressive in articles such as the one from Computerworld….

We are seeing a rising trend of what can be termed “open-washing” (inspired by “greenwashing”) – meaning data publishers that claim their data is open when it is in fact only available under limiting terms. If we – at this critical time in the formative period of the data-driven society – aren’t critically aware of the difference, we’ll end up putting our vital data streams in siloed infrastructure built and owned by international corporations, and giving our praise and support to the wrong kind of unsustainable technological development.”