Fifty Shades of Open


Jeffrey Pomerantz and Robin Peek at First Monday: “Open source. Open access. Open society. Open knowledge. Open government. Even open food. Until quite recently, the word “open” had a fairly constant meaning. The over-use of the word “open” has led to its meaning becoming increasingly ambiguous. This presents a critical problem for this important word, as ambiguity leads to misinterpretation.

“Open” has been applied to a wide variety of words to create new terms, some of which make sense, and some not so much. When we started writing this essay, we thought our working title was simply amusing. But the working title became the actual title, as we found that there are at least 50 different terms in which the word “open” is used, encompassing nearly as many different criteria for openness. In this essay we will attempt to make sense of this open season on the word “open.”

Opening the door on open

The word “open” is, perhaps unsurprisingly, a very old one in the English language, harking back to Early Old English. Unlike some words in English, the definition of “open” has changed very little in the intervening thousand-plus years: the earliest recorded uses of the word are completely consistent with its modern usage as an adjective, indicating a passage through or an access into something (Oxford English Dictionary, 2016).

This meaning leads to the development in the fifteenth century of the phrases “open house,” meaning an establishment in which all are welcome, and “open air,” meaning unenclosed outdoor spaces. One such unenclosed outdoor space that figured large in the fifteenth century, and continues to do so today, is the Commons (Hardin, 1968): land or other resources that are not privately owned, but are available for use to all members of a community. The word “open” in these phrases indicates that all have access to a shared resource. All are welcome to visit an open house, but not to move in; all are welcome to walk in the open air or graze their sheep on the Commons, but not to fence the Commons as part of their backyard. (And the moment at which Commons land ceases to be open is precisely the moment it is fenced by an owner, which is in fact what happened in Great Britain during the Enclosure movement of the sixteenth through eighteenth centuries.)

Running against the grain of this cultural movement toward enclosure, the nineteenth century saw the circulating library become the norm — rather than libraries in which massive tomes were literally chained to desks. The interpretation of the word “open” to mean a shared resource to which all had access fit neatly into the philosophy of the modern library movement of the nineteenth century. The phrases “open shelves” and “open stacks” emerged at this time, referring to resources that were directly available to library users, without necessarily requiring intervention by a librarian. Naturally, however, not all library resources were made openly available, nor are they even today. Furthermore, resources are made openly available with the understanding that, like Commons land, they must be shared: library resources have a due date.

The twentieth century saw an increase in the use of the word “open,” as well as a hint of the confusion that was to come about the interpretation of the word. The term “open society” was coined prior to World War I, to indicate a society tolerant of religious diversity. An “open skies” policy allows other nations’ commercial aviation to fly through a nation’s airspace — though, importantly, without that nation giving up control of its airspace. The Open University was founded in the United Kingdom in 1969, to provide a university education to all, with no formal entry requirements. The meaning of the word “open” is quite different across these three terms — or perhaps it would be more accurate to say that these terms use different shadings of the word.

But it has been the twenty-first century that has seen the most dramatic increase in the number of terms that use “open.” The story of this explosion in the use of the word “open” begins, however, with a different word entirely: the word “free.”….

Contents of the full essay:

  • Introduction
  • Opening the door on open
  • Speech, beer, and puppies
  • Open means rights
  • Open means access
  • Open means use
  • Open means transparent
  • Open means participatory
  • Open means enabling openness
  • Open means philosophically aligned with open principles
  • Openwashing and its discontents
  • Conclusion

Is behavioural economics ready to save the world?


Book review by Trenton G Smith of Behavioral Economics and Public Health: “Modern medicine has long doled out helpful advice to ailing patients about not only drug treatments, but also diet, exercise, alcohol abuse, and many other lifestyle decisions. And for just as long, patients have been failing to follow doctors’ orders. Many of today’s most pressing public health problems would disappear if people would just make better choices.

Enter behavioural economics. A fairly recent offshoot of the dismal science, behavioural economics aims to take the coldly rational decision makers who normally populate economic theories, and instil in them a host of human foibles. Neoclassical (ie, conventional) economics, after all, is the study of optimising behaviour in the presence of material constraints—why not add constraints on cognitive capacity, or self-control, or susceptibility to the formation of bad habits? The hope is that by incorporating insights from other behavioural sciences (most notably cognitive psychology and neuroscience) while retaining the methodological rigour of neoclassical economics, behavioural economics will yield a more richly descriptive theory of human behaviour, and generate new and important insights to better inform public policy.
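A concrete illustration of one such constraint, self-control, is the quasi-hyperbolic (beta-delta) discounting model widely used in behavioural economics. The sketch below is illustrative only; the parameter values are assumptions, not taken from the book:

```python
# Quasi-hyperbolic (beta-delta) discounting: a present-biased agent
# values a reward u arriving t periods from now at
#   u                    if t == 0
#   beta * delta**t * u  if t > 0
# With beta < 1, choices planned in advance can reverse once the
# smaller reward becomes immediate.

def value(u, t, beta=0.7, delta=0.95):
    """Present value of a reward u arriving t periods from now."""
    return u if t == 0 else beta * (delta ** t) * u

# Choice: 10 units on day 1 vs. 12 units on day 2.
# Viewed from day 0, both rewards lie in the future, so the agent
# plans to wait for the larger one...
print(value(10, 1) < value(12, 2))   # True: 6.65 < 7.58

# ...but on day 1 the smaller reward is immediate, and the plan reverses.
print(value(10, 0) > value(12, 1))   # True: 10.0 > 7.98
```

The reversal (planning to wait, then giving in) is exactly the kind of foible a coldly rational discounter never exhibits.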

Policy makers have taken notice. In an era in which free-market rhetoric dominates the political landscape, the idea that small changes to public health policies might serve to nudge consumers towards healthier behaviours holds great appeal. Even though some (irrational) consumers might be better off, the argument goes, if certain unhealthy food products were banned (or worse, taxed), this approach would infringe on the rights of the many consumers who want to indulge occasionally, and fully understand the consequences. If governments could instead use evidence from consumer science to make food labels more effective, or to improve the way that healthy foods are presented in school cafeterias, more politically unpalatable interventions in the marketplace might not be needed. This idea, dubbed “libertarian paternalism” by Richard Thaler and Cass Sunstein, has been pursued with gusto in both the UK (David Cameron’s Government formed the Behavioural Insights Team—unofficially described as the Nudge Unit) and the USA (where Sunstein spent time in the Obama administration’s Office of Information and Regulatory Affairs).

Whatever public health practitioners might think about these developments—or indeed, of economics as a discipline—this turn of events has rather suddenly given scholars at the cutting edge of consumer science an influential voice in the regulatory process, and some of the best and brightest have stepped up to contribute. Behavioral Economics & Public Health (edited by Christina Roberto and Ichiro Kawachi) is the product of a 2014 Harvard University exploratory workshop on applying social science insights to public health. As might be expected in a volume that aims to bring together two such inherently multidisciplinary fields, the book’s 11 chapters offer an eclectic mix of perspectives. The editors begin with an excellent overview of the field of behavioural economics and its applications to public health, and an economic perspective can also be found in four of the other chapters: Justin White and William Dow write about intertemporal choice, Kristina Lewis and Jason Block review the use of incentives to promote health, Michael Sanders and Michael Hallsworth describe their experience working within the UK’s Behavioural Insights Team, and Frederick Zimmerman concludes with a thoughtful critique of the field of behavioural economics. The other contributions are largely from the perspectives of psychology and marketing: Dennis Runger and Wendy Wood discuss habit formation, Rebecca Ferrer and colleagues emphasise the importance of emotion in decision making, Brent McFerran discusses social norms in the context of obesity, Jason Riis and Rebecca Ratner explain why some public health communication strategies are more effective than others, and Zoe Chance and colleagues and Brian Wansink offer frameworks for designing environments (eg, in schools and workplaces) that are conducive to healthy choices.

This collection of essays holds many hidden gems, but the one that surprised me the most was the attention given (by Runger and Wood briefly, and Zimmerman extensively) to a dirty little secret that behavioural economists rarely mention: once it is acknowledged that sometimes-irrational consumers can be manipulated into making healthy choices, it does not require much of a leap to conclude that business interests can—and do—use the same methods to push back in the other direction. This conclusion leads Zimmerman to a discussion of power in the marketplace and in our collective political economy, and to a call to action on these larger structural issues in society that neoclassical theory has long neglected….(More; Book)

Yelp, Google Hold Pointers to Fix Governments


Christopher Mims at the Wall Street Journal: “When Kaspar Korjus was born, he was given a number before he was given a name, as are all babies in Estonia. “My name is 38712012796, which I got before my name of Kaspar,” says Mr. Korjus.

In Estonia, much of life—voting, digital signatures, prescriptions, taxes, bank transactions—is conducted with this number. The resulting services aren’t just more convenient, they are demonstrably better. It takes an Estonian three minutes to file his or her taxes.
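As a sketch of how much structure that one number carries: Estonia's personal code (isikukood) encodes sex, century and date of birth, a serial number, and a check digit. The decoder below follows the publicly documented field layout; it is an unofficial illustration, not a reference implementation.

```python
def decode_isikukood(code: str) -> dict:
    """Decode the publicly documented fields of an Estonian personal code.

    Layout (GYYMMDDSSSC):
      G      sex and century: odd = male, even = female;
             1-2 -> born 1800s, 3-4 -> 1900s, 5-6 -> 2000s
      YYMMDD birth date within that century
      SSS    serial number among people born the same day
      C      check digit
    """
    g = int(code[0])
    century = {1: 1800, 2: 1800, 3: 1900, 4: 1900, 5: 2000, 6: 2000}[g]
    return {
        "sex": "male" if g % 2 else "female",
        "birth_date": f"{century + int(code[1:3])}-{code[3:5]}-{code[5:7]}",
        "serial": code[7:10],
        "check_digit": code[10],
    }

# Mr. Korjus's number from the article:
print(decode_isikukood("38712012796"))
# {'sex': 'male', 'birth_date': '1987-12-01', 'serial': '279', 'check_digit': '6'}
```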

Americans are unlikely to accept a unified national ID system. But Estonia offers an example of the kind of innovation possible around government services, a competitive factor for modern nations.

The former Soviet republic—with a population of 1.3 million, roughly the size of San Diego—is regularly cited as a world leader in e-governance. At base, e-governance is about making government function as well as private enterprise, mostly by adopting the same information-technology infrastructure and management techniques as the world’s most technologically savvy corporations.

It isn’t that Estonia devotes more people to the problem—it took only 60 to build the identity system. It is that the country’s leaders are willing to empower those engineers. “There is a need for politicians to not only show leadership but also there is a need to take risks,” says Estonia’s prime minister, Taavi Rõivas.

In the U.S., Matt Lira, senior adviser for House Majority Leader Kevin McCarthy, says the gap between the government’s information technology and the private sector’s has grown larger than ever. Americans want to access government services—paying property taxes or renewing a driver’s license—as easily as they look up a restaurant on Yelp or a business on Alphabet’s Google, says Neil Kleiman, a professor of policy at New York University who collaborates with cities in this subject area.

The government is unlikely to catch up soon. The Government Accountability Office last year estimated that about 25% of the federal government’s 738 major IT investments—projected to cost a total of $42 billion—were in danger of significant delays or cost overruns.

One reason for such overruns is the government’s reliance on big, monolithic projects based on proposal documents that can run to hundreds of pages. It is an approach to software development that is at least 20 years out of date. Modern development emphasizes small chunks of code accomplished in sprints and delivered to end users quickly so that problems can be identified and corrected.

Two years ago, the Obama administration devised a novel way to address these issues: assembling a crack team of coders and project managers from the likes of Google, Amazon.com and Microsoft and assigning them to big government boondoggles to help existing IT staff run more like the private sector. Known as 18F, this organization and its sister group, the U.S. Digital Service, are set to hit 500 staffers by the end of 2016….(More)”

Can Crowdsourcing Help Make Life Easier For People With Disabilities?


Sean Captain at FastCompany: “These days GPS technology can get you as close as about 10 feet from your destination, close enough to see it—assuming you can see.

But those last few feet are a chasm for the blind (and GPS accuracy is sometimes only within about 30 feet).

“Actually finding the bus stop, not the right street, but standing in the right place when the bus comes, is pretty hard,” says Dave Power, president and CEO of the Perkins School for the Blind near Boston. Helen Keller’s alma mater is developing a mobile app that will provide audio directions—contributed by volunteers—so that blind people can get close enough to the stop for the bus driver to notice them.

Perkins’s app is one of 29 projects that recently received a total of $20 million in funding from Google.org’s Google Impact Challenge: Disabilities awards. Several of the winning initiatives rely on crowdsourced information to help the disabled—be they blind, in a wheelchair, or cognitively impaired. It’s a commonsense approach to tackling big logistical projects in a world full of people who have snippets of downtime during which they might perform bite-size acts of kindness online. But moving these projects from being just clever concepts to extensive services, based on the goodwill of volunteers, is going to be quite a hurdle.

People with limited mobility may have trouble traversing the last few feet between them and a wheelchair ramp, automatic doors, or other accommodations that aren’t easy to find (or may not even exist in some places). Wheelmap, based in Berlin, is trying to help by building online maps of accessible locations. Its website incorporates crowdsourced data. The site lets users type in a city and search for accessible amenities such as restaurants, hotels, and public transit.

Paris-based J’accede (which received 500,000 euros, the equivalent of about $565,000, from Google) provides similar capabilities in both a website and an app, with a slicker design somewhat resembling TripAdvisor.

Both services have a long way to go. J’accede lists 374 accessible bars/restaurants in its hometown and a modest selection in other French cities like Marseille. “We still have a lot of work to do to cover France,” says J’accede’s president Damien Birambeau in an email. The goal is to go global though, and the site is available in English, German, and Spanish, in addition to French. Likewise, Wheelmap (which got 825,000 euros, or $933,000) performs best in the German capital of Berlin and cities like Hamburg, but is less useful in other places.

These sites face the same challenge as many other volunteer-based, crowdsourced projects: getting a big enough crowd to contribute information to the service. J’accede hopes to make the process easier. In June, it will connect itself with Google Places, so contributors will only need to supply details about accommodations at a site; information like the location’s address and phone number will be pulled in automatically. But both J’accede and Wheelmap recognize that crowdsourcing has its limits. They are now going beyond voluntary contributions, setting up automated systems to scrape information from other databases of accessible locations, such as those maintained by governments.
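That division of labour, in which volunteers supply only the accessibility details while basic listing data is fetched automatically, might look roughly like the sketch below. The `fetch_place` endpoint, its URL, and the field names are hypothetical stand-ins for whatever places API is actually used.

```python
import requests

def fetch_place(place_id: str) -> dict:
    """Hypothetical stand-in for a places-API lookup returning basic
    listing data (name, address, phone) for a known place ID."""
    resp = requests.get(f"https://places.example.com/v1/{place_id}")
    resp.raise_for_status()
    return resp.json()

def submit_accessibility_report(place_id: str, details: dict) -> dict:
    """Combine a volunteer's accessibility details with automatically
    fetched listing data, so contributors never retype addresses."""
    place = fetch_place(place_id)
    return {
        "name": place["name"],
        "address": place["address"],   # pulled in automatically
        "phone": place.get("phone"),
        "accessibility": details,      # the only part volunteers supply
    }

report = submit_accessibility_report(
    "abc123",
    {"step_free_entrance": True, "accessible_toilet": False,
     "notes": "Ramp at side door"},
)
```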

Wheelmap and J’accede are dwarfed by general-interest crowdsourced sites like TripAdvisor and Yelp, which offer some information about accessibility, too. For instance, among the many filters they offer users searching for restaurants—such as price range and cuisine type—TripAdvisor and Yelp both offer a Wheelchair Accessible checkbox. Applying that filter to Parisian establishments brings up about 1,000 restaurants on TripAdvisor and 2,800 in Yelp.

So what can Wheelmap and J’accede provide that the big players can’t? Details. “A person in a wheelchair, for example, will face different obstacles than a partially blind person or a person with cognitive disabilities,” says Birambeau. “These different needs and profiles means that we need highly detailed information about the accessibility of public places.”…(More)”

What’s Wrong with Open-Data Sites–and How We Can Fix Them


César A. Hidalgo at Scientific American: “Imagine shopping in a supermarket where every item is stored in boxes that look exactly the same. Some are filled with cereal, others with apples, and others with shampoo. Shopping would be an absolute nightmare! The design of most open data sites—the (usually government) sites that distribute census, economic and other data to be used and redistributed freely—is not exactly equivalent to this nightmarish supermarket. But it’s pretty close.

During the last decade, such sites—data.gov, data.gov.uk, data.gob.cl, data.gouv.fr, and many others—have been created throughout the world. Most of them, however, still deliver data as sets of links to tables, or links to other sites that are also hard to comprehend. In the best cases, data is delivered through APIs, or application program interfaces, which are simple data query languages that require a user to have a basic knowledge of programming. So understanding what is inside each dataset requires downloading, opening, and exploring the set in ways that are extremely taxing for users. The analogy of the nightmarish supermarket is not that far off.
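To make the API point concrete: several of the portals named above are built on CKAN, whose action API answers simple HTTP queries. The minimal sketch below searches data.gov's catalogue (the search term is arbitrary), and already assumes exactly the programming knowledge the author describes:

```python
import requests

# catalog.data.gov runs on CKAN, whose "action API" answers HTTP queries.
resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "census population", "rows": 5},
)
resp.raise_for_status()

for dataset in resp.json()["result"]["results"]:
    # Each hit is a dataset whose contents remain opaque until the
    # user downloads and opens the underlying files.
    print(dataset["title"], "-", len(dataset.get("resources", [])), "files")
```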

[Image caption: The U.S. government’s open data site]

The consensus among those who have participated in the creation of open data sites is that current efforts have failed and we need new options. Pointing your browser to these sites should show you why. Most open data sites are badly designed, and here I am not talking about their aesthetics—which are also subpar—but about the conceptual model used to organize and deliver data to users. The design of most open data sites follows a throwing-spaghetti-against-the-wall strategy, where opening more data, instead of opening data better, has been the driving force.

Some of the design flaws of current open data sites are pretty obvious. The datasets that are more important, or could potentially be more useful, are not brought to the surface of these sites, nor are they properly organized. In our supermarket analogy, not only do all the boxes look the same, but they are also sorted in the order they arrived. This cannot be the best we can do.

There are other design problems that are important, even though they are less obvious. The first one is that most sites deliver data in the way it is collected, rather than the way it is used. People are often looking for data about a particular place, occupation, industry, or about an indicator (such as income, or population). Whether the data they need comes from the national survey of X or the bureau of Y is secondary, and often—although not always—irrelevant to the user. Yet, even though this is not the way we should be giving data back to users, this is often what open data sites do.

The second non-obvious design problem, which is probably the most important, is that most open data sites bury data in what is known as the deep web. The deep web is the fraction of the Internet that is not accessible to search engines, or that cannot be indexed properly. The surface of the web is made of text, pictures, and video, which search engines know how to index. But search engines are not good at knowing that the number that you are searching for is hidden in row 17,354 of a comma separated file that is inside a zip file linked in a poorly described page of an open data site. In some cases, pressing a radio button and selecting options from a number of dropdown menus can get you the desired number, but this does not help search engines either, because crawlers cannot explore dropdown menus. To make open data really open, we need to make it searchable, and for that we need to bring data to the surface of the web.
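To make the “row 17,354” example concrete, here is roughly what retrieving such a buried number demands of a user today. The URL and file names are placeholders; the point is the chain of manual steps that no search-engine crawler will ever follow:

```python
import csv
import io
import zipfile

import requests

# Placeholder URL and file names for a typical open-data download.
url = "https://data.example.gov/downloads/indicators.zip"
archive = zipfile.ZipFile(io.BytesIO(requests.get(url).content))

with archive.open("indicators.csv") as f:
    rows = list(csv.reader(io.TextIOWrapper(f, encoding="utf-8")))

# The one value the user actually wanted, buried where no crawler looks:
print(rows[17354])
```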

So how do we do that? The solution may not be simple, but it starts by taking design seriously. This is something that I’ve been doing for more than half a decade when creating data visualization engines at MIT. The latest iteration of our design principles is now embodied in DataUSA, a site we created in a collaboration between Deloitte, Datawheel, and my group at MIT.

So what is design, and how do we use it to improve open data sites? My definition of design is simple. Design is discovering the forms that best fulfill a function….(More)”

Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland


Tanja Aitamurto and Hélène Landemore in Policy & Internet: “This article examines the emergence of democratic deliberation in a crowdsourced law reform process. The empirical context of the study is a crowdsourced legislative reform in Finland, initiated by the Finnish government. The findings suggest that online exchanges in the crowdsourced process qualify as democratic deliberation according to the classical definition. We introduce the term “crowdsourced deliberation” to mean an open, asynchronous, depersonalized, and distributed kind of online deliberation occurring among self-selected participants in the context of an attempt by government or another organization to open up the policymaking or lawmaking process. The article helps to characterize the nature of crowdsourced policymaking and to understand its possibilities as a practice for implementing open government principles. We aim to make a contribution to the literature on crowdsourcing in policymaking, participatory and deliberative democracy and, specifically, the newly emerging subfield in deliberative democracy that focuses on “deliberative systems.”…(More)”

Citizen scientists aid Ecuador earthquake relief


Mark Zastrow at Nature: “After a magnitude-7.8 earthquake struck Ecuador’s Pacific coast on 16 April, a new ally joined the international relief effort: a citizen-science network called Zooniverse.

On 25 April, Zooniverse launched a website that asks volunteers to analyse rapidly snapped satellite imagery of the disaster, which led to more than 650 reported deaths and 16,000 injuries. The aim is to help relief workers on the ground to find the most heavily damaged regions and identify which roads are passable.

Several crisis-mapping programmes with thousands of volunteers already exist — but it can take days to train satellites on the damaged region and to transmit data to humanitarian organizations, and results have not always proven useful. The Ecuador quake marked the first live public test for an effort dubbed the Planetary Response Network (PRN), which promises to be both more nimble than previous efforts, and to use more rigorous machine-learning algorithms to evaluate the quality of crowd-sourced analyses.

The network relies on imagery from the satellite company Planet Labs in San Francisco, California, which uses an array of shoebox-sized satellites to map the planet. In order to speed up the crowd-sourced process, it uses the Zooniverse platform to distribute the tasks of spotting features in satellite images. Machine-learning algorithms employed by a team at the University of Oxford, UK, then classify the reliability of each volunteer’s analysis and weight their contributions accordingly.
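The article does not detail the algorithm, but weighting contributions by each volunteer's estimated reliability is often implemented as a weighted vote of roughly the following shape. This is a simplified sketch under that assumption, not the Planetary Response Network's published method:

```python
from collections import defaultdict

def weighted_label(votes, reliability):
    """Combine volunteers' labels for one image tile.

    votes:       list of (volunteer_id, label) pairs
    reliability: dict mapping volunteer_id -> weight in [0, 1], estimated
                 from each volunteer's past agreement with known answers
    """
    scores = defaultdict(float)
    for volunteer, label in votes:
        # Unknown volunteers get a neutral default weight.
        scores[label] += reliability.get(volunteer, 0.5)
    return max(scores, key=scores.get)

votes = [("anna", "damaged"), ("ben", "intact"),
         ("chen", "damaged"), ("dia", "intact")]
reliability = {"anna": 0.9, "ben": 0.4, "chen": 0.8, "dia": 0.6}
print(weighted_label(votes, reliability))  # "damaged" (1.7 vs 1.0)
```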

Rapid-fire data

Within two hours of the Ecuador test project going live with a first set of 1,300 images, each photo had been checked at least 20 times. “It was one of the fastest responses I’ve seen,” says Brooke Simmons, an astronomer at the University of California, San Diego, who leads the image processing. Steven Reece, who heads the Oxford team’s machine-learning effort, says that results — a “heat map” of damage with possible road blockages — were ready in another two hours.

In all, more than 2,800 Zooniverse users contributed to analysing roughly 25,000 square kilometres of imagery centred around the coastal cities of Pedernales and Bahia de Caraquez. That is where the London-based relief organization Rescue Global — which requested the analysis the day after the earthquake — currently has relief teams on the ground, including search dogs and medical units….(More)”

Opening up census data for research


Economic and Social Research Council (UK): “InFuse, an online search facility for census data, is enabling tailored search and investigation of UK census statistics – opening new opportunities for aggregating and comparing population counts.

Impacts

  • InFuse data were used for the ‘Smarter Travel’ research project studying how ‘smart choices’ for sustainable travel could be implemented and supported in transport planning. The research directly influenced UK climate-change agendas and policy, including:
    • the UK Committee on Climate Change recommendations on cost-effective-emission reductions
    • the Scottish Government’s targets and household advice for smarter travel
    • the UK Government’s Local Sustainable Transport Fund supporting 96 projects across England
    • evaluations for numerous Local Authority Transport Plans across the UK.
  • The Integration Hub, a web resource that was launched by Demos in 2015 to provide data about ethnic integration in England and Wales, uses data from InFuse to populate its interactive maps of the UK.
  • Census data downloaded from InFuse informed the Welsh Government for policies to engage Gypsy and Traveller families in education, showing that over 60 per cent aged over 16 from these communities had no qualifications.
  • Executive recruitment firm Sapphire Partners used census data from InFuse in a report on female representation on boards, revealing that 77 per cent of FTSE board members are men, and 70 per cent of new board appointments go to men.
  • A study by the Marie Curie charity into the differing needs of Black, Asian and minority ethnic groups in Scotland for end-of-life care used InFuse to determine that the minority ethnic population in Scotland has doubled since 2001 from 100,000 to 200,000 – highlighting the need for greater and more appropriate provision.
  • A Knowledge Transfer Partnership between homelessness charity Llamau and Cardiff University used InFuse data to show that Welsh young homeless people participating in the study were over twice as likely to have left school with no qualifications compared to UK-wide figures for their age group and gender….(More)”


Open Data Supply: Enriching the usability of information


Report by Phoensight: “With the emergence of increasing computational power, high cloud storage capacity and big data comes an eager anticipation of one of the biggest IT transformations of our society today.

Open data has an instrumental role to play in our digital revolution by creating unprecedented opportunities for governments and businesses to leverage previously unavailable information to strengthen their analytics and decision making for new client experiences. Whilst virtually every business recognises the value of data and the importance of the analytics built on it, the ability to realise the potential for maximising revenue and cost savings is not straightforward. The discovery of valuable insights often involves the acquisition of new data and an understanding of it. As we move towards an increasing supply of open data, technological and other entrepreneurs will look to better utilise government information for improved productivity.

This report uses a data-centric approach to examine the usability of information by considering ways in which open data could better facilitate data-driven innovations and further boost our economy. It assesses the state of open data today and suggests ways in which data providers could supply open data to optimise its use. A number of useful measures of information usability, such as accessibility, quantity, quality and openness, are presented; together these contribute to the Open Data Usability Index (ODUI). For the first time, a comprehensive assessment of open data usability has been developed and is expected to be a critical step in taking the open data agenda to the next level.
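The excerpt does not give the ODUI's formula, but composite indices of this kind are typically weighted combinations of normalized component scores. A minimal sketch, assuming equal weights (the report's actual weighting may differ):

```python
def usability_index(scores, weights=None):
    """Combine component scores in [0, 1] (e.g. accessibility, quantity,
    quality, openness) into a single composite index in [0, 1].

    Equal weights are assumed here; the ODUI's actual weighting is not
    given in the excerpt.
    """
    if weights is None:
        weights = {k: 1.0 for k in scores}
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

dataset = {"accessibility": 0.8, "quantity": 0.6,
           "quality": 0.7, "openness": 0.9}
print(round(usability_index(dataset), 2))  # 0.75
```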

With over two million government datasets assessed against the open data usability framework, and models developed to link entire countries’ datasets to key industry sectors, never before has such an extensive analysis been undertaken. Government open data across Australia, Canada, Singapore, the United Kingdom and the United States reveals that most countries have the capacity for improvements in their information usability. It was found that for 2015 the United Kingdom led the way, followed by Canada, Singapore, the United States and Australia. The global potential of government open data is expected to reach 20 exabytes by 2020, provided governments are able to release as much data as possible within legislative constraints….(More)”

The Open Data Barometer (3rd edition)


The Open Data Barometer: “Once the preserve of academics and statisticians, data has become a development cause embraced by everyone from grassroots activists to the UN Secretary-General. There’s now a clear understanding that we need robust data to drive democracy and development — and a lot of it.

Last year, the world agreed the Sustainable Development Goals (SDGs) — seventeen global commitments that set an ambitious agenda to end poverty, fight inequality and tackle climate change by 2030. Recognising that good data is essential to the success of the SDGs, the Global Partnership for Sustainable Development Data and the International Open Data Charter were launched as the SDGs were unveiled. These alliances mean the “data revolution” now has over 100 champions willing to fight for it. Meanwhile, Africa adopted the African Data Consensus — a roadmap to improving data standards and availability in a region that has notoriously struggled to capture even basic information such as birth registration.

But while much has been made of the need for bigger and better data to power the SDGs, this year’s Barometer follows the lead set by the International Open Data Charter by focusing on how much of this data will be openly available to the public.

Open data is essential to building accountable and effective institutions, and to ensuring public access to information — both goals of SDG 16. It is also essential for meaningful monitoring of progress on all 169 SDG targets. Yet the promise and possibilities offered by opening up data to journalists, human rights defenders, parliamentarians, and citizens at large go far beyond even these….

At a glance, here are this year’s key findings on the state of open data around the world:

    • Open data is entering the mainstream. The majority of the countries in the survey (55%) now have an open data initiative in place and a national data catalogue providing access to datasets available for re-use. Moreover, new open data initiatives are getting underway or are promised for the near future in a number of countries, including Ecuador, Jamaica, St. Lucia, Nepal, Thailand, Botswana, Ethiopia, Nigeria, Rwanda and Uganda. Demand is high: civil society and the tech community are using government data in 93% of countries surveyed, even in countries where that data is not yet fully open.
    • Despite this, there’s been little to no progress on the number of truly open datasets around the world. Even with the rapid spread of open government data plans and policies, too much critical data remains locked in government filing cabinets. For example, only two countries publish acceptable detailed open public spending data. Of all 1,380 government datasets surveyed, almost 90% are still closed — roughly the same as in the last edition of the Open Data Barometer (when only 130 out of 1,290 datasets, or 10%, were open). What is more, much of the approximately 10% of data that meets the open definition is of poor quality, making it difficult for potential data users to access, process and work with it effectively.
    • “Open-washing” is jeopardising progress. Many governments have advertised their open data policies as a way to burnish their democratic and transparent credentials. But open data, while extremely important, is just one component of a responsive and accountable government. Open data initiatives cannot be effective if not supported by a culture of openness where citizens are encouraged to ask questions and engage, and supported by a legal framework. Disturbingly, in this edition we saw a backslide on freedom of information, transparency, accountability, and privacy indicators in some countries. Until all these factors are in place, open data cannot be a true SDG accelerator.
    • Implementation and resourcing are the weakest links. Progress on the Barometer’s implementation and impact indicators has stalled or even gone into reverse in some cases. Open data can result in net savings for the public purse, but getting individual ministries to allocate the budget and staff needed to publish their data is often an uphill battle, and investment in building user capacity (both inside and outside of government) is scarce. Open data is not yet entrenched in law or policy, and the legal frameworks supporting most open data initiatives are weak. This is a symptom of the tendency of governments to view open data as a fad or experiment with little to no long-term strategy behind its implementation. This results in haphazard implementation, weak demand and limited impact.
    • The gap between data haves and have-nots needs urgent attention. Twenty-six of the top 30 countries in the ranking are high-income countries. Half of open datasets in our study are found in just the top 10 OECD countries, while almost none are in African countries. As the UN pointed out last year, such gaps could create “a whole new inequality frontier” if allowed to persist. Open data champions in several developing countries have launched fledgling initiatives, but too often those good open data intentions are not adequately resourced, resulting in weak momentum and limited success.
    • Governments at the top of the Barometer are being challenged by a new generation of open data adopters. Traditional open data stalwarts such as the USA and UK have seen their rate of progress on open data slow, signalling that new political will and momentum may be needed as more difficult elements of open data are tackled. Fortunately, a new generation of open data adopters, including France, Canada, Mexico, Uruguay, South Korea and the Philippines, are starting to challenge the ranking leaders and are adopting a leadership attitude in their respective regions. The International Open Data Charter could be an important vehicle to sustain and increase momentum in challenger countries, while also stimulating renewed energy in traditional open data leaders….(More)”