Open Data Index provides first major assessment of state of open government data


Press Release from the Open Knowledge Foundation: “In the week of a major international summit on government transparency in London, the Open Knowledge Foundation has published its 2013 Open Data Index, showing that governments are still not providing enough information in an accessible form to their citizens and businesses.
The UK and US top the 2013 Index, which is the result of community-based surveys in 70 countries. They are followed by Denmark, Norway and the Netherlands. Of the countries assessed, Cyprus, St Kitts & Nevis, the British Virgin Islands, Kenya and Burkina Faso ranked lowest. Many countries whose governments are less open were not assessed at all, for lack of either openness or a sufficiently engaged civil society; these include 30 countries that are members of the Open Government Partnership.
The Index ranks countries based on the availability and accessibility of information in ten key areas, including government spending, election results, transport timetables, and pollution levels, and reveals that whilst some good progress is being made, much remains to be done.
Rufus Pollock, Founder and CEO of the Open Knowledge Foundation said:

Opening up government data drives democracy, accountability and innovation. It enables citizens to know and exercise their rights, and it brings benefits across society: from transport, to education and health. There has been a welcome increase in support for open data from governments in the last few years, but this Index reveals that too much valuable information is still unavailable.

The UK and US are leaders on open government data but even they have room for improvement: the US for example does not provide a single consolidated and open register of corporations, while the UK Electoral Commission lets down the UK’s good overall performance by not allowing open reuse of UK election data.
There is a very disappointing degree of openness of company registers across the board: only 5 out of the 20 leading countries have even basic information available via a truly open licence, and only 10 allow any form of bulk download. This information is critical for a range of reasons – including tackling tax evasion and other forms of financial crime and corruption.
Less than half of the key datasets in the top 20 countries are available to re-use as open data, showing that even the leading countries do not fully understand the importance of citizens and businesses being able to legally and technically use, reuse and redistribute data. Such rights enable them to build and share commercial and non-commercial services.
To see the full results: https://index.okfn.org. For graphs of the data: https://index.okfn.org/visualisations.”

Google’s flu fail shows the problem with big data


Adam Kucharski in The Conversation: “When people talk about ‘big data’, there is an oft-quoted example: a proposed public health tool called Google Flu Trends. It has become something of a pin-up for the big data movement, but it might not be as effective as many claim.
The idea behind big data is that large amounts of information can help us do things which smaller volumes cannot. Google first outlined the Flu Trends approach in a 2008 paper in the journal Nature. Rather than relying on the disease surveillance used by the US Centers for Disease Control and Prevention (CDC) – such as visits to doctors and lab tests – the authors suggested it would be possible to predict epidemics through Google searches. When suffering from flu, many Americans will search for information related to their condition….
Between 2003 and 2008, flu epidemics in the US had been strongly seasonal, appearing each winter. However, in 2009, the first cases (as reported by the CDC) started around Easter. Flu Trends had already made its predictions when the CDC data was published, but it turned out that the Google model didn’t match reality. It had substantially underestimated the size of the initial outbreak.
The problem was that Flu Trends could only measure what people search for; it didn’t analyse why they were searching for those words. By removing human input, and letting the raw data do the work, the model had to make its predictions using only search queries from the previous handful of years. Although those 45 terms matched the regular seasonal outbreaks from 2003–8, they didn’t reflect the pandemic that appeared in 2009.
Six months after the pandemic started, Google – who now had the benefit of hindsight – updated their model so that it matched the 2009 CDC data. Despite these changes, the updated version of Flu Trends ran into difficulties again last winter, when it overestimated the size of the influenza epidemic in New York State. The incidents in 2009 and 2012 raised the question of how good Flu Trends is at predicting future epidemics, as opposed to merely finding patterns in past data.
In a new analysis, published in the journal PLOS Computational Biology, US researchers report that there are “substantial errors in Google Flu Trends estimates of influenza timing and intensity”. This is based on a comparison of Google Flu Trends predictions with actual epidemic data at the national, regional and local levels between 2003 and 2013.
Even when search behaviour was correlated with influenza cases, the model sometimes misestimated important public health metrics such as peak outbreak size and cumulative cases. The predictions were particularly wide of the mark in 2009 and 2012:
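The failure mode described here can be sketched with a toy model. This is not Google's actual method, which selected and regressed millions of candidate query terms against CDC data in a far more elaborate pipeline; the single-predictor least-squares fit and every number below are invented purely to illustrate how a model trained only on seasonal winters can underestimate an off-season outbreak.

```python
# Illustrative sketch (not Google's actual model): a Flu Trends-style
# estimator fits a linear map from search-query frequency to the CDC's
# influenza-like-illness (ILI) rate using historical seasons.
# All data values are made up for illustration.

def fit_linear(xs, ys):
    """Ordinary least squares for y ~ a*x + b (single predictor)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Training data: winter seasons, where query volume for flu-related
# terms tracked ILI closely (hypothetical values).
search_volume = [1.0, 1.2, 0.9, 1.1, 1.3]   # relative query frequency
ili_rate      = [2.0, 2.4, 1.8, 2.2, 2.6]   # % of doctor visits

a, b = fit_linear(search_volume, ili_rate)

# An off-season pandemic: search behaviour differs outside winter,
# so the query signal understates the true outbreak.
spring_search = 1.4
predicted = a * spring_search + b
actual = 4.5  # hypothetical true ILI rate during the pandemic
print(f"predicted ILI: {predicted:.2f}%, actual: {actual:.1f}%")
```

On these invented figures the fitted model predicts roughly a 2.8% ILI rate for the off-season outbreak, well short of the assumed 4.5%: the search signal alone carries no information about *why* people are, or are not, searching.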

Original and updated Google Flu Trends (GFT) model compared with CDC influenza-like illness (ILI) data. PLOS Computational Biology 9:10

Although they criticised certain aspects of the Flu Trends model, the researchers think that monitoring internet search queries might yet prove valuable, especially if it were linked with other surveillance and prediction methods.
Other researchers have also suggested that other sources of digital data – from Twitter feeds to mobile phone GPS – have the potential to be useful tools for studying epidemics. As well as helping to analyse outbreaks, such methods could allow researchers to analyse human movement and the spread of public health information (or misinformation).
Although much attention has been given to web-based tools, there is another type of big data that is already having a huge impact on disease research. Genome sequencing is enabling researchers to piece together how diseases transmit and where they might come from. Sequence data can even reveal the existence of a new disease variant: earlier this week, researchers announced a new type of dengue fever virus….”

Text messages are saving Swedes from cardiac arrest


Philip A. Stephenson in Quartz: “Sweden has found a faster way to treat people experiencing cardiac emergencies through a text message and a few thousand volunteers.

A program called SMSlivräddare (or SMSLifesaver; link in Swedish) solicits people who’ve been trained in cardiopulmonary resuscitation (CPR). When a Stockholm resident dials 112 for emergency services, a text message is sent to all volunteers within 500 meters of the person in need. The volunteer then arrives at the location within the crucial first minutes to perform lifesaving CPR. The odds for surviving cardiac arrest drop 10% for every minute it takes first responders to arrive…
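The dispatch step described above amounts to a radius query over registered volunteers. The real SMSlivräddare internals are not public, so the function names, coordinates and phone numbers below are invented; the haversine distance filter is simply a standard way such a 500-metre geofence could be implemented.

```python
# Hypothetical sketch of the dispatch step: given the caller's
# coordinates, select every registered volunteer within 500 m.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    h = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(h))

def volunteers_to_alert(caller, volunteers, radius_m=500):
    """Return the volunteers inside the radius around the caller."""
    return [v for v in volunteers
            if haversine_m(caller[0], caller[1], v["lat"], v["lon"]) <= radius_m]

# Invented example: one volunteer a few hundred metres away,
# one roughly 2 km away (central Stockholm coordinates).
volunteers = [
    {"phone": "+46-70-000-0001", "lat": 59.3326, "lon": 18.0649},
    {"phone": "+46-70-000-0002", "lat": 59.3500, "lon": 18.0700},
]
caller = (59.3293, 18.0686)
nearby = volunteers_to_alert(caller, volunteers)
```

In a production system the per-volunteer scan would typically be replaced by a spatial index, and the resulting list would feed the SMS gateway rather than a print-out.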

With ambulance resources stretched thin, the average response time is some eight minutes, allowing SMS-livräddare-volunteers to reach victims before ambulances in 54% of cases.

Through a combination of techniques, including SMS-livräddare, Stockholm County has seen survival rates after cardiac arrest rise from 3% to nearly 11% over the last decade. Local officials have also enlisted fire and police departments to respond to cardiac emergencies, but the Lifesavers routinely arrive before them as well.

Currently 9,600 Stockholm residents are registered SMS-livräddare-volunteers, and there are plans to continue to increase enrollment. An estimated 200,000 Swedes have completed the necessary CPR training and could potentially join the program….

Medical officials in other countries, including Scotland, are now considering similar community-based programs for cardiac arrest.”

Our Privacy Problem is a Democracy Problem in Disguise


Evgeny Morozov in MIT Technology Review: “Intellectually, at least, it’s clear what needs to be done: we must confront the question not only in the economic and legal dimensions but also in a political one, linking the future of privacy with the future of democracy in a way that refuses to reduce privacy either to markets or to laws. What does this philosophical insight mean in practice?

First, we must politicize the debate about privacy and information sharing. Articulating the existence—and the profound political consequences—of the invisible barbed wire would be a good start. We must scrutinize data-intensive problem solving and expose its occasionally antidemocratic character. At times we should accept more risk, imperfection, improvisation, and inefficiency in the name of keeping the democratic spirit alive.
Second, we must learn how to sabotage the system—perhaps by refusing to self-track at all. If refusing to record our calorie intake or our whereabouts is the only way to get policy makers to address the structural causes of problems like obesity or climate change—and not just tinker with their symptoms through nudging—information boycotts might be justifiable. Refusing to make money off your own data might be as political an act as refusing to drive a car or eat meat. Privacy can then reëmerge as a political instrument for keeping the spirit of democracy alive: we want private spaces because we still believe in our ability to reflect on what ails the world and find a way to fix it, and we’d rather not surrender this capacity to algorithms and feedback loops.
Third, we need more provocative digital services. It’s not enough for a website to prompt us to decide who should see our data. Instead it should reawaken our own imaginations. Designed right, sites would not nudge citizens to either guard or share their private information but would reveal the hidden political dimensions to various acts of information sharing. We don’t want an electronic butler—we want an electronic provocateur. Instead of yet another app that could tell us how much money we can save by monitoring our exercise routine, we need an app that can tell us how many people are likely to lose health insurance if the insurance industry has as much data as the NSA, most of it contributed by consumers like us. Eventually we might discern such dimensions on our own, without any technological prompts.
Finally, we have to abandon fixed preconceptions about how our digital services work and interconnect. Otherwise, we’ll fall victim to the same logic that has constrained the imagination of so many well-­meaning privacy advocates who think that defending the “right to privacy”—not fighting to preserve democracy—is what should drive public policy. While many Internet activists would surely argue otherwise, what happens to the Internet is of only secondary importance. Just as with privacy, it’s the fate of democracy itself that should be our primary goal.

Why Nudge?: The Politics of Libertarian Paternalism


New and forthcoming book by Cass Sunstein: “Based on a series of pathbreaking lectures given at Yale University in 2012, this powerful, thought-provoking work by national best-selling author Cass R. Sunstein combines legal theory with behavioral economics to make a fresh argument about the legitimate scope of government, bearing on obesity, smoking, distracted driving, health care, food safety, and other highly volatile, high-profile public issues. Behavioral economists have established that people often make decisions that run counter to their best interests—producing what Sunstein describes as “behavioral market failures.” Sometimes we disregard the long term; sometimes we are unrealistically optimistic; sometimes we do not see what is in front of us. With this evidence in mind, Sunstein argues for a new form of paternalism, one that protects people against serious errors but also recognizes the risk of government overreaching and usually preserves freedom of choice.
Against those who reject paternalism of any kind, Sunstein shows that “choice architecture”—government-imposed structures that affect our choices—is inevitable, and hence that a form of paternalism cannot be avoided. He urges that there are profoundly moral reasons to ensure that choice architecture is helpful rather than harmful—and that it makes people’s lives better and longer.”

Bright Spots of open government to be recognised at global summit


Press Release of the UK Cabinet Office: “The 7 shortlisted initiatives vying for the Bright Spots award show how governments in Open Government Partnership countries are working with citizens to sharpen governance, harness new technologies to increase public participation and improve government responsiveness.
At the Open Government Partnership summit in London on 31 October 2013 and 1 November 2013, participants will be able to vote for one of the shortlisted projects. The winning project – the Bright Spot – will be announced in the summit’s final plenary session….
The shortlisted entries for the Bright Spots prize – which will be awarded at the London summit – are:

  • Chile – ChileAtiende

The aim of ChileAtiende has been to simplify government for citizens by providing a one-stop shop for accessing public services. Today, ChileAtiende has more than 190 offices across the whole country, a national call centre and a digital platform, through which citizens can access multiple services and benefits without having to navigate multiple government offices.

  • Estonia – People’s Assembly

The People’s Assembly is a deliberative democracy tool, designed to encourage input from citizens on the government’s legislative agenda. This web-based platform allows ordinary citizens to propose policy solutions to problems including fighting corruption. Within 3 weeks, 1,800 registered users posted nearly 6,000 ideas and comments. Parliament has since set a timetable for the most popular proposals to be introduced in the formal proceedings.

  • Georgia – improvements to the Freedom of Information Act

Civil society organisations in Georgia have successfully used the government’s participation in OGP to advocate improvements to the country’s Freedom of Information legislation. Government agencies are now obliged to proactively publish information in a way that is accessible to anyone, and to establish an electronic request system for information.

  • Indonesia – complaints portal

LAPOR! (meaning “to report” in Indonesian) is a social media channel where Indonesian citizens can submit complaints and enquiries about development programmes and public services. Comments are transferred directly to relevant ministries or government agencies, which can respond via the website. LAPOR! now has more than 225,350 registered users and receives an average of 1,435 inputs per day.

  • Montenegro – Be Responsible app

“Be Responsible” is a mobile app that allows citizens to report local problems – from illegal waste dumps, misuse of official vehicles and irregular parking, to failure to comply with tax regulations and issues over access to healthcare and education.

  • Philippines – citizen audits

The Citizen Participatory Audit (CPA) project is exploring ways in which citizens can be directly engaged in the audit process for government projects and contribute to ensuring greater efficiency and effectiveness in the use of public resources. 4 pilot audits are in progress, covering public works, welfare, environment and education projects.

  • Romania – transparency in public sector recruitment

The PublicJob.ro website was set up to counter corruption and lack of transparency in civil service recruitment. PublicJob.ro takes recruitment data from public organisations and e-mails it to more than 20,000 subscribers in a weekly newsletter. As a result, it has become more difficult to manipulate the recruitment process.”

Five Ways to Make Government Procurement Better


Mark Headd at Civic Innovations:  “Nothing in recent memory has focused attention on the need for wholesale reform of the government IT procurement system more than the troubled launch of healthcare.gov.
There has been a myriad of blog posts, stories and articles written in the last few weeks detailing all of the problems that led to the ignominious launch of the website meant to allow people to sign up for health care coverage.
Though the details of this high profile flop are in the latest headlines, the underlying cause has been talked about many times before – the process by which governments contract with outside parties to obtain IT services is broken…
With all of this in mind, here are – in no particular order – five suggested changes that can be adopted to improve the government procurement process.
Raise the threshold on simplified / streamlined procurement
Many governments use a separate, more streamlined process for smaller projects that do not require a full RFP (in the City of Philadelphia, professional services projects that do not exceed $32,000 annually go through this more streamlined bidding process). In Philadelphia, we’ve had great success in using these smaller projects to test new ideas and strategies for partnering with IT vendors. There is much we can learn from these experiments, and a modest increase to enable more experimentation would allow governments to gain valuable new insights.
Narrowing the focus of any enhanced thresholds for streamlined bidding to web-based projects would help mitigate risk and foster a quicker process for testing new ideas.
Identify clear standards for projects
Having a clear set of vendor-agnostic IT standards to use when developing RFPs and in performing work can make a huge difference in how a project turns out. Clearly articulating standards for:

  • The various components that a system will use.
  • The environment in which it will be housed.
  • The testing it must undergo prior to final acceptance.

…can go a long way to reducing the risk and uncertainty inherent in IT projects.
It’s worth noting that most governments probably already have a set of IT standards that are usually made part of any IT solicitation. But these standards documents can quickly become out of date – they must undergo constant review and refinement. In addition, many of the people writing these standards may confuse a specific vendor product or platform with a true standard.
Require open source
Requiring that IT projects be open source during development or after completion can be an effective way to reduce risk on an IT project and enhance transparency. This is particularly true of web-based projects.
In addition, government RFPs should encourage the use of existing open source tools – leveraging existing software components that are in use in similar projects and maintained by an active community – to foster external participation by vendors and volunteers alike. When governments make the code behind their projects open source, they enable anyone who understands software development to help make them better.
Develop a more robust internal capacity for IT project management and implementation
Governments must find ways to develop the internal capacity for developing, implementing and managing technology projects.
Part of the reason that governments make use of a variety of different risk mitigation provisions in public bidding is that there is a lack of people in government with hands-on experience building or maintaining technology. There is a dearth of makers in government, and there is a direct relationship between the perceived risk that governments take on with new technology projects and the lack of experienced technologists working in government.
Governments need to find ways to develop a maker culture within their workforces and should prioritize recruitment from the local technology and civic hacking communities.
Make contracting, lobbying and campaign contribution data public as open data
One of the more disheartening revelations to come out of the analysis of healthcare.gov implementation is that some of the firms that were awarded work as part of the project also spent non-trivial amounts of money on lobbying. It’s a good bet that this kind of thing also happens at the state and local level as well.
This can seriously undermine confidence in the bidding process, and may cause many smaller firms – who lack funds or interest in lobbying elected officials – to simply throw up their hands and walk away.
In the absence of statutory or regulatory changes to prevent this from happening, governments can enhance the transparency around the bidding process by working to ensure that all contracting data as well as data listing publicly registered lobbyists and contributions to political campaigns is open.
Ensuring that all prospective participants in the public bidding process have confidence that the process will be fair and transparent is essential to getting as many firms to participate as possible – including small firms more adept at agile software development methodologies. More bids typically equates to higher quality proposals and lower prices.
None of the changes listed above will be easy, and governments are positioned differently in how well they may achieve any one of them. Nor do they represent the entire universe of things we can do to improve the system in the near term – these are items that I personally think are important and very achievable.
One thing that could help speed the adoption of these and other changes is the development of a robust communication framework between government contracting and IT professionals in different cities and states. I think a “Municipal Procurement Academy” could go a long way toward achieving this.”

Democracy and Political Ignorance


Essay by Ilya Somin in the Cato Unbound special issue Is Smaller Government Smarter Government?: “Democracy is supposed to be rule of the people, by the people, and for the people. But in order to rule effectively, the people need political knowledge. If they know little or nothing about government, it becomes difficult to hold political leaders accountable for their performance. Unfortunately, public knowledge about politics is disturbingly low. In addition, the public also often does a poor job of evaluating the political information they do know. This state of affairs has persisted despite rising education levels, increased availability of information thanks to modern technology, and even rising IQ scores. It is mostly the result of rational behavior, not stupidity. Such widespread and persistent political ignorance and irrationality strengthens the case for limiting and decentralizing the power of government….
Political ignorance in America is deep and widespread. The current government shutdown fight provides some good examples. Although Obamacare is at the center of that fight and much other recent political controversy, 44 percent of the public do not even realize it is still the law. Some 80 percent, according to a recent Kaiser survey, say they have heard “nothing at all” or “only a little” about the controversial insurance exchanges that are a major part of the law….
Some people react to data like the above by thinking that the voters must be stupid. But political ignorance is actually rational for most of the public, including most smart people. If your only reason to follow politics is to be a better voter, that turns out not to be much of a reason at all. That is because there is very little chance that your vote will actually make a difference to the outcome of an election (about 1 in 60 million in a presidential race, for example). For most of us, it is rational to devote very little time to learning about politics, and instead focus on other activities that are more interesting or more likely to be useful. As former British Prime Minister Tony Blair puts it, “[t]he single hardest thing for a practising politician to understand is that most people, most of the time, don’t give politics a first thought all day long. Or if they do, it is with a sigh…. before going back to worrying about the kids, the parents, the mortgage, the boss, their friends, their weight, their health, sex and rock ‘n’ roll.” Most people don’t precisely calculate the odds that their vote will make a difference. But they probably have an intuitive sense that the chances are very small, and act accordingly.
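The expected-value point above can be made concrete with one line of arithmetic. The 1-in-60-million odds come from the essay; the dollar value attached to the election outcome is an illustrative assumption, not a figure from the source.

```python
# Back-of-the-envelope version of the rational-ignorance argument.
p_decisive = 1 / 60_000_000   # chance one vote swings a presidential race (from the essay)
personal_stake = 10_000.0     # suppose the "right" outcome is worth $10,000 to you (assumed)

# Expected payoff of casting an informed vote: a fraction of a cent,
# which is why heavy investment in political knowledge rarely pays.
expected_value = p_decisive * personal_stake
print(f"${expected_value:.6f}")
```

Even multiplying the assumed stake a thousandfold leaves the expected payoff well under a dollar, which is the sense in which devoting many hours to political study is individually irrational.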
In the book, I also consider why many rationally ignorant people often still bother to vote. The key factor is that voting is a lot cheaper and less time-consuming than studying political issues. For many, it is rational to take the time to vote, but without learning much about the issues at stake….
Political ignorance is far from the only factor that must be considered in deciding the appropriate size, scope, and centralization of government. For example, some large-scale issues, such as global warming, are simply too big to be effectively addressed by lower-level governments or private organizations. Democracy and Political Ignorance is not a complete theory of the proper role of government in society. But it does suggest that the problem of political ignorance should lead us to limit and decentralize government more than we would otherwise.”
See also:  Ilya Somin, Democracy and Political Ignorance: Why Smaller Government is Smarter, (Stanford: Stanford University Press, 2013)

Data Discrimination Means the Poor May Experience a Different Internet


MIT Technology Review: “Data analytics are being used to implement a subtle form of discrimination, while anonymous data sets can be mined to reveal health data and other private information, a Microsoft researcher warned this morning at MIT Technology Review’s EmTech conference.
Kate Crawford, principal researcher at Microsoft Research, argued that these problems could be addressed with new legal approaches to the use of personal data.
In a new paper, she and a colleague propose a system of “due process” that would give people more legal rights to understand how data analytics are used in determinations made against them, such as denial of health insurance or a job. “It’s the very start of a conversation about how to do this better,” Crawford, who is also a visiting professor at the MIT Center for Civic Media, said in an interview before the event. “People think ‘big data’ avoids the problem of discrimination, because you are dealing with big data sets, but in fact big data is being used for more and more precise forms of discrimination—a form of data redlining.”
During her talk this morning, Crawford added that with big data, “you will never know what those discriminations are, and I think that’s where the concern begins.”

The Best American Infographics 2013


New book by Gareth Cook:  “The rise of infographics across virtually all print and electronic media—from a striking breakdown of classic cocktails to a graphic tracking 200 influential moments that changed the world to visually arresting depictions of Twitter traffic—reveals patterns in our lives and our world in fresh and surprising ways. In the era of big data, where information moves faster than ever, infographics provide us with quick, often influential bursts of art and knowledge—on the environment, politics, social issues, health, sports, arts and culture, and more—to digest, to tweet, to share, to go viral.
The Best American Infographics captures the finest examples from the past year, including the ten best interactive infographics, of this mesmerizing new way of seeing and understanding our world.”
See also selection of some in Wired.