Open Budgets Portal


About: “The Open Budgets Portal is the first effort to create a one-stop shop for budget data worldwide with the hope of bringing visibility to countries’ efforts in this field, facilitating access and promoting use of spending data, and motivating other countries into action.

The portal offers the opportunity to showcase a subset of developing countries and subnational entities (identified by blue markers in the map) that have excelled in the exploration of new frontiers of fiscal transparency by choosing to disseminate their entire public spending datasets in accessible formats (i.e., soft copy), with the expectation that these efforts could motivate other countries into action. Users will be able to download the entire public expenditure landscape of the members of the portal in consolidated files, all of which were rigorously collected, cleaned and verified through the BOOST Initiative.

For each of these countries, the site also includes links to their original open data portals, which provide additional important information (e.g., higher-than-annual frequencies, links to output data and other socioeconomic indicators, etc.). While every effort has been made to certify the quality of these databases according to the BOOST approach and methodology, users are encouraged to refer back to the country-owned open data portals to ensure complete consistency of data with published official figures, as well as consult accompanying user manuals for potential caveats on uses of the data.

This portal represents a starting point to build momentum within the growing interest around fiscal transparency and the importance of data for enhanced decision-making processes and improved budget outcomes and accountability. While most initiatives on open budgets rightfully center on availability of key documents, little focus has been given to the quality of data dissemination and to the importance of its analytical use for incorporation into evidence-based decision-making processes.

This Open Budgets Portal aims to fill this gap by providing access to budget data worldwide and particularly to the most disaggregated and comprehensive data collected through the BOOST Initiative. The portal combines this information with a variety of tools, manuals, reports and best practices aimed at stimulating use by intermediaries, as well as easier-to-interpret visualizations for non-experts. Our objective is to encourage all potential uses of this data and unleash its analytical power.

The Open Budgets Portal was launched at the event “Boosting Fiscal Transparency for Better Policy Outcomes,” held on December 17, 2013 in Washington, DC. The following presentations were shown at the event:

Presentation of the Open Budgets Portal by Massimo Mastruzzi, Senior Economist, Open Government, World Bank.

Building a Citizen’s Budget Understanding – BudgetStories.md by Victoria Vlad, Economist of “Expert-Grup” from the Republic of Moldova.”
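
As a rough, hedged illustration of what can be done with the consolidated spending files described above, the short Python sketch below aggregates executed expenditure by ministry and year. The file name and column names (year, ministry, executed) are assumptions for illustration only; actual BOOST exports vary by country and should be checked against the accompanying user manuals.

```python
# Rough sketch: summarising a consolidated BOOST-style expenditure download.
# The file name and column names are assumptions for illustration; actual
# exports differ by country, so check the accompanying user manual.
import pandas as pd

spending = pd.read_csv("boost_expenditure.csv")  # consolidated file from the portal

# Total executed spending by year and ministry, largest items first
summary = (
    spending.groupby(["year", "ministry"], as_index=False)["executed"]
    .sum()
    .sort_values(["year", "executed"], ascending=[True, False])
)

print(summary.head(10))
```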

Selected Readings on Data Visualization


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data visualization was originally published in 2013.

Data visualization is a response to the ever-increasing amount of information in the world. With big data, informatics and predictive analytics, we have an unprecedented opportunity to revolutionize policy-making. Yet data by itself can be overwhelming. New tools and techniques for visualizing information can help policymakers clearly articulate insights drawn from data. Moreover, the rise of open data is enabling those outside of government to create informative and visually arresting representations of public information that can be used to support decision-making by those inside or outside governing institutions.

Annotated Selected Reading List (in alphabetical order)

Duke, D.J., K.W. Brodlie, D.A. Duce and I. Herman. “Do You See What I Mean? [Data Visualization].” IEEE Computer Graphics and Applications 25, no. 3 (2005): 6–9. http://bit.ly/1aeU6yA.

  • In this paper, the authors argue that a more systematic ontology for data visualization is needed to ensure the successful communication of meaning. “Visualization begins when someone has data that they wish to explore and interpret; the data are encoded as input to a visualization system, which may in its turn interact with other systems to produce a representation. This is communicated back to the user(s), who have to assess this against their goals and knowledge, possibly leading to further cycles of activity. Each phase of this process involves communication between two parties. For this to succeed, those parties must share a common language with an agreed meaning.”
  • The authors “believe that now is the right time to consider an ontology for visualization,” and that “as visualization move[s] from just a private enterprise involving data and tools owned by a research team into a public activity using shared data repositories, computational grids, and distributed collaboration…[m]eaning becomes a shared responsibility and resource. Through the Semantic Web, there is both the means and motivation to develop a shared picture of what we see when we turn and look within our own field.”

Friendly, Michael. “A Brief History of Data Visualization.” In Handbook of Data Visualization, 15–56. Springer Handbooks of Computational Statistics. Springer Berlin Heidelberg, 2008. http://bit.ly/17fM1e9.

  • In this paper, Friendly explores the “deep roots” of modern data visualization. “These roots reach into the histories of the earliest map making and visual depiction, and later into thematic cartography, statistics and statistical graphics, medicine and other fields. Along the way, developments in technologies (printing, reproduction), mathematical theory and practice, and empirical observation and recording enabled the wider use of graphics and new advances in form and content.”
  • Just as the visualization of data in general is far from a new practice, Friendly shows that the graphical representation of government information has a similarly long history. “The collection, organization and dissemination of official government statistics on population, trade and commerce, social, moral and political issues became widespread in most of the countries of Europe from about 1825 to 1870. Reports containing data graphics were published with some regularity in France, Germany, Hungary and Finland, and with tabular displays in Sweden, Holland, Italy and elsewhere.”

Graves, Alvaro and James Hendler. “Visualization Tools for Open Government Data.” In Proceedings of the 14th Annual International Conference on Digital Government Research, 136–145. Dg.o ’13. New York, NY, USA: ACM, 2013. http://bit.ly/1eNSoXQ.

  • In this paper, the authors argue that, “there is a gap between current Open Data initiatives and an important part of the stakeholders of the Open Government Data Ecosystem.” As it stands, “there is an important portion of the population who could benefit from the use of OGD but who cannot do so because they cannot perform the essential operations needed to collect, process, merge, and make sense of the data. The reasons behind these problems are multiple, the most critical one being a fundamental lack of expertise and technical knowledge. We propose the use of visualizations to alleviate this situation. Visualizations provide a simple mechanism to understand and communicate large amounts of data.”
  • The authors also describe a prototype of a tool to create visualizations based on OGD with the following capabilities (a rough sketch of this kind of workflow appears after the list):
    • Facilitate visualization creation
    • Exploratory mechanisms
    • Viralization and sharing
    • Repurpose of visualizations
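
The paper describes the prototype conceptually rather than as released code. As a rough sketch of the first capability above (facilitating visualization creation), the Python example below turns a hypothetical open-government CSV into a shareable chart; the file name and column names are assumptions for illustration only.

```python
# Rough sketch of the kind of workflow the prototype aims to simplify:
# turning a raw open-government CSV into a chart that is easy to share
# and repurpose. File and column names are assumptions for illustration.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("city_service_requests.csv")        # an OGD extract
top_types = data["request_type"].value_counts().head(10)

top_types.plot(kind="barh", title="Top 10 service request types")
plt.xlabel("Number of requests")
plt.tight_layout()
plt.savefig("service_requests.png")                     # shareable output
```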

Hidalgo, César A. “Graphical Statistical Methods for the Representation of the Human Development Index and Its Components.” United Nations Development Programme Human Development Reports, September 2010. http://bit.ly/166TKur.

  • In this paper for the United Nations Development Programme, Hidalgo argues that “graphical statistical methods could be used to help communicate complex data and concepts through universal cognitive channels that are heretofore underused in the development literature.”
  • To support his argument, representations are provided that “show how graphical methods can be used to (i) compare changes in the level of development experienced by countries (ii) make it easier to understand how these changes are tied to each one of the components of the Human Development Index (iii) understand the evolution of the distribution of countries according to HDI and its components and (iv) teach and create awareness about human development by using iconographic representations that can be used to graphically narrate the story of countries and regions.”

Stowers, Genie. “The Use of Data Visualization in Government.” IBM Center for The Business of Government, Using Technology Series, 2013. http://bit.ly/1aame9K.

  • This report seeks “to help public sector managers understand one of the more important areas of data analysis today — data visualization. Data visualizations are more sophisticated, fuller graphic designs than the traditional spreadsheet charts, usually with more than two variables and, typically, incorporating interactive features.”
  • Stowers also offers numerous examples of “visualizations that include geographical and health data, or population and time data, or financial data represented in both absolute and relative terms — and each communicates more than simply the data that underpin it. In addition to these many examples of visualizations, the report discusses the history of this technique, and describes tools that can be used to create visualizations from many different kinds of data sets.”

AU: Govt finds one third of open data was "junk"


IT News: “The number of datasets available on the Government’s open data website has slimmed by more than half after the agency discovered one third of the datasets were junk.
Since its official launch in 2011 data.gov.au grew to hold 1200 datasets from government agencies for public consumption.
In July this year the Department of Finance migrated the portal to a new open source platform – the Open Knowledge Foundation CKAN platform – for greater ease of use and publishing ability.
Since July the number of datasets fell from 1200 to 500.
Australian Government CTO John Sheridan said in his blog late yesterday the agency had needed to review the 1200 datasets as a result of the CKAN migration, and discovered a significant amount of them were junk.
“We unfortunately found that a third of the “datasets” were just links to webpages or files that either didn’t exist anymore, or redirected somewhere not useful to genuine seekers of data,” Sheridan said.
“In the second instance, the original 1200 number included each individual file. On the new platform, a dataset may have multiple files. In one case we have a dataset with 200 individual files where before it was counted as 200 datasets.”
The number of datasets following the clean out now sits at 529. Around 123 government bodies contributed data to the portal.
Sheridan said the number was still too low.
“A lot of momentum has built around open data in Australia, including within governments around the country and we are pleased to report that a growing number of federal agencies are looking at how they can better publish data to be more efficient, improve policy development and analysis, deliver mobile services and support greater transparency and public innovation,” he said….
The Federal Government’s approach to open data has previously been criticised as “patchy” and slow, due in part to several shortcomings in the data.gov.au website as well as slow progress in agencies adopting an open approach by default.
The Australian Information Commissioner’s February report on open data in government outlined the manual uploading and updating of datasets, lack of automated entry for metadata and a lack of specific search functions within data.gov.au as obstacles affecting the efforts pushing a whole-of-government approach to open data.
The introduction of the new CKAN platform is expected to go some way to addressing the highlighted concerns.”
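
As a hedged illustration of the dataset-versus-file distinction Sheridan describes, a CKAN portal such as data.gov.au can be queried through its JSON action API to count datasets and the individual files (resources) attached to them. The sketch below assumes the standard CKAN package_search endpoint is publicly exposed; endpoint availability and rate limits should be verified before relying on it.

```python
# Rough sketch: counting datasets vs. individual files (resources) on a CKAN
# portal via the standard action API. The endpoint and its availability on
# data.gov.au are assumptions to be verified; pagination uses rows/start.
import requests

BASE = "https://data.gov.au/api/3/action/package_search"
PAGE = 100

datasets = resources = start = 0
while True:
    result = requests.get(BASE, params={"rows": PAGE, "start": start}).json()["result"]
    batch = result["results"]
    if not batch:
        break
    datasets += len(batch)
    resources += sum(len(pkg.get("resources", [])) for pkg in batch)
    start += PAGE

print(f"{datasets} datasets containing {resources} individual files")
```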

Buenos Aires, A Pocket of Civic Innovation in Argentina


Rebecca Chao in TechPresident: “…In only a few years, the government, civil society and media in Buenos Aires have actively embraced open data. The Buenos Aires city government has been publishing data under a creative commons license and encouraging civic innovation through hackathons. NGOs have launched a number of tech-driven tools and Argentina’s second largest newspaper, La Nación, has published several hard-hitting data journalism projects. The result is a fledgling but flourishing open data culture in Buenos Aires, in a country that has not yet adopted a freedom of information law.

A Wikipedia for Open Government Data

In late August of this year, the Buenos Aires government declared a creative commons license for all of its digital content, which allows it to be used for free, like Wikipedia content, with proper attribution. This applies to their new open data catalog that allows users to visualize the data, examine apps that have been created using the data and even includes a design lab for posting app ideas. Launched only in March, the government has already published fairly substantial data sets, including the salaries of city officials. The website also embodies the principles of openness in its design; it is built with open-source software and its code is available for reuse via GitHub.
“We were the first city in Argentina doing open government,” Rudi Borrmann tells techPresident over Skype. Borrmann is the Director of Buenos Aires’ Open Government Initiative. Previously, he was the social media editor at the city’s New Media Office but he also worked for many years in digital media…
While the civil society and media sectors have forged ahead in using open data, Borrmann tells techPresident that up in the ivory tower, openness to open data has been lagging. “Only technical schools are starting to create areas focused on working on open data,” he says.
In an interview with NYU’s govlab, Borrmann explained the significance of academia in using and pushing for more open data. “They have the means, the resources, the methodology to analyze…because in government you don’t have that time to analyze,” he said.
Another issue with open data is getting other branches of the government to modernize. Borrmann says that a lot of the Open Government’s work is done behind the scenes. “In general, you have very poor IT infrastructure all over Latin America” that interferes with the gathering and publishing of data, he says. “So in some cases it’s not about publishing or not publishing,” but about “having robust infrastructure for the information.”
It seems that the behind the scenes work is bearing some fruit. Just last week, on Dec. 6, the team behind the Buenos Aires open data website launched an impressive, interactive timeline, based on a similar timelapse map developed by a 2013 Knight-Mozilla Fellow, Noah Veltman. Against faded black and white photos depicting the subway from different decades over the last century, colorful pops of the Subterráneo lines emerge alongside factoids that go all the way back to 1910.”

Data isn't a four-letter word


Speech by Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda: “I want to talk about data too: the opportunity as well as the threat.
Making data the engine of the European economy: safeguarding fundamental rights, capturing the data boost, and strengthening our defences.
Data is at a crossroads. We have opportunities: open data, big data, data mining, cloud computing. Tim Berners-Lee, creator of the world wide web, saw the massive potential of open data. As he put it, if you put that data online, it will be used by other people to do wonderful things, in ways that you could never imagine.
On the other hand, we have threats: to our privacy and our values, and to the openness that makes it possible to innovate, trade and exchange.
Get it right and we can safeguard a better economic future. Get it wrong, and we cut competitiveness without protecting privacy. So we remain dependent on the digital developments of others: and just as vulnerable to them.
How do we find that balance? Not with hysteria; nor by paralysis. Not by stopping the wonderful things, simply to prevent the not-so-wonderful. Not by seeing data as a dirty word.
We are seeing a whole economy develop around data and cloud computing. Businesses using them, whole industries depending on them, data volumes are increasing exponentially. Data is not just an economic sideshow, it is a whole new asset class; requiring new skills and creating new jobs.
And with a huge range of applications. From decoding human genes to predicting the traffic, and even the economy. Whatever you’re doing these days, chances are you’re using big data (like translation, search, apps, etc).
There is increasing recognition of the data boost on offer. For example, open data can make public administrations more transparent and stimulate a rich innovative market. That is what the G8 Leaders recognised in June, with their Open Data Charter. For scientists too, open data and open access offer new ways to research and progress.
That is a philosophy the Commission has shared for some time. And that is what our ‘Open Data’ package of December 2011 is all about. With new EU laws to open up public administrations, and a new EU Open Data Portal. And all EU-funded scientific publications available under open access.
Now not just the G8 and the Commission are seeing this data opportunity: but the European Council too. Last October, they recognised the potential of big data innovation, the need for a single market in cloud computing; and the urgency of Europe capitalising on both.
We will be acting on that. Next spring, I plan a strategic agenda for research on data. Working with private partners and national research funders to shape that agenda, and get the most bang for our research euro.
And, beyond research, there is much we can do to align our work and support secure big data. From training skilled workers, to modernising copyright for data and text mining, to different actors in the value chain working together: for example through a public-private partnership.
…Empowering people is not always easy in this complex online world. I want to see technical solutions emerge that can do that: give users control over their desired level of privacy and how their data will be used, and make it easier to verify that online rights are respected.
How can we do that? How can we ensure systems that are empowering, transparent, and secure? There are a number of subtleties in play. Here’s my take.
First, companies engaged in big data will need to start thinking about privacy protection at every stage: and from system development, to procedures and practices.
This is the principle of “privacy by design”, set out clearly in the proposed Data Protection Regulation. In other words, from now on new business ideas have two purposes: delivering a service and protecting privacy at the right level.
Second, also under the regulation, big data applications that might put fundamental rights at risk would require the company to carry out a “Privacy Impact Assessment”. This is another good way to combine innovation and privacy: ensuring you think about any risks from the start.
Third, sometimes, particularly for personal data, a company might realise they need user consent. Consent is a cornerstone of data protection rules, and should stay that way.
But we need to get smart, and apply common sense to consent. Users can’t be expected to know everything. Nor asked to consent to what they cannot realistically understand. Nor presented with false dilemmas, a black-and-white choice between consenting or getting shut out of services.
Fourth, we can also get smart when it comes to anonymisation. Sometimes, full anonymisation means losing important information, so you can no longer make the links between data. That could make the difference between progress and paralysis. But using pseudonyms can let you analyse large amounts of data: to spot, for example, that people with genetic pattern X also respond well to therapy Y.
So it is understandable why the European Parliament has proposed a more flexible data protection regime for this type of data. Companies would be able to process the data on grounds of legitimate interest, rather than consent. That could make all the positive difference to big data: without endangering privacy.
Of course, in those cases, companies still need to minimise privacy risks. Their internal processes and risk assessments must show how they comply with the guiding principles of data protection law. And – if something does go wrong – the company remains accountable.
Indeed company accountability is another key element of our proposal. And here again we welcome the European Parliament’s efforts to reinforce that. Clearly, you might assure accountability in different ways for different companies. But standards for compliance and processes could make a real difference.
A single data protection law for Europe would be a big step forward. National fortresses and single market barriers just make it harder for Europe to lead in digital, harder for Europe to become the natural home of secure online services. Data protection cannot mean data protectionism. Rather, it means safeguarding privacy does not come at the expense of innovation: with laws both flexible and future proof, pragmatic and proportionate, for a changing world….
But data protection rules are really just the start. They are only part of our response to the Snowden revelations….”
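
As a minimal illustration of the pseudonymisation idea raised in the speech (not a statement of what the proposed Data Protection Regulation requires), the Python sketch below replaces a direct identifier with a keyed hash so that records about the same person can still be linked for analysis without storing names. Keyed hashing alone does not guarantee anonymity; it is only one ingredient of a pseudonymisation scheme.

```python
# Rough sketch of pseudonymisation: a direct identifier is replaced with a
# keyed hash (HMAC-SHA256) so records about the same person can still be
# linked for analysis without storing names. The key must be managed
# separately; this is illustrative, not a complete anonymisation scheme.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-stored-key"  # assumption: key management exists

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for an identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

records = [
    {"patient": "Alice Example", "pattern": "X", "therapy_response": "good"},
    {"patient": "Alice Example", "pattern": "X", "therapy_response": "good"},
    {"patient": "Bob Example",   "pattern": "Y", "therapy_response": "poor"},
]

for record in records:
    record["patient_id"] = pseudonymise(record.pop("patient"))

print(records)  # the same person maps to the same pseudonym, so links survive
```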

We must create a culture of “open data makers”


Rufus Pollock (@rufuspollock), Founder and Director of the Open Knowledge Foundation: “Open data and open knowledge are fundamentally about empowerment, about giving people – citizens, journalists, NGOs, companies and policy-makers – access to the information they need to understand and shape the world around them.

Through openness, we can ensure that technology and data improve science, governance, and society. Without it, we may see the increasing centralisation of knowledge – and therefore power – in the hands of the few, and a huge loss in our potential, individually and collectively, to innovate, understand, and improve the world around us.

Open data is data that can be freely accessed, used, built upon and shared by anyone, for any purpose. With digital technology – from mobiles to the internet – increasingly everywhere, we’re seeing a data revolution. It’s a revolution both in the amount of data available and in our ability to use, and share, that data. And it’s changing everything we do – from how we travel home from work to how scientists do research, to how governments set policy….

It’s about people, the people who use data, and the people who use the insights from that data to drive change. We need to create a culture of “open data makers”, people able and ready to make apps and insights with open data. We need to connect open data with those who have the best questions and the biggest needs – a healthcare worker in Zambia, the London commuter travelling home – and go beyond the data geeks and the tech savvy.”

David Cameron's Transparency Revolution? The Impact of Open Data in the UK


Paper by Ben Worthy: “This article examines the impact of the UK Government’s Transparency agenda, focusing on the publication of spending data at local government level. It measures the democratic impact in terms of creating transparency and accountability, public participation and everyday information. The study uses a survey of local authorities, interviews and FOI requests to build a picture of use and impact.
It argues that the spending data has led to some accountability, though from those already monitoring government rather than citizens. It has not led to increased participation, as it lacks the narrative or accountability instruments to fully use it. Nor has it created a new stream of information to underpin citizen choice, though new innovations offer this possibility.
The evidence points to third party innovations as the key. They can contextualise and ‘localise’ information and may also provide the comparisons that are the first step toward more effective accountability.
The superficially simple and neutral reforms conceal complex political dynamics. The very design lends itself to certain framing effects, further compounded by assumptions, blurred concepts and a lack of accountability instruments to act on problems raised by the data.”

The United States Releases its Second Open Government National Action Plan


Nick Sinai and Gayle Smith at the White House: “Since his first full day in office, President Obama has prioritized making government more open and accountable and has taken substantial steps to increase citizen participation, collaboration, and transparency in government. Today, the Obama Administration released the second U.S. Open Government National Action Plan, announcing 23 new or expanded open-government commitments that will advance these efforts even further.
…, in September 2011, the United States released its first Open Government National Action Plan, setting a series of ambitious goals to create a more open government. The United States has continued to implement and improve upon the open-government commitments set forth in the first Plan, along with many more efforts underway across government, including implementing individual Federal agency Open Government Plans. The second Plan builds on these efforts, in part through a series of key commitments highlighted in a preview report issued by the White House in October 2013, in conjunction with the Open Government Partnership Annual Summit in London.
Among the highlights of the second National Action Plan:

  • “We the People”: The White House will introduce new improvements to the We the People online petitions platform aimed at making it easier to collect and submit signatures and increase public participation in using this platform. Improvements will enable the public to perform data analysis on the signatures and petitions submitted to We the People, as well as include a more streamlined process for signing petitions and a new Application Programming Interface (API) that will allow third-parties to collect and submit signatures from their own websites.
  • Freedom of Information Act (FOIA) Modernization: The FOIA encourages accountability through transparency and represents an unwavering national commitment to open government principles. Improving FOIA administration is one of the most effective ways to make the U.S. Government more open and accountable. Today, we announced five commitments to further modernize FOIA processes, including launching a consolidated online FOIA service to improve customers’ experience, creating and making training resources available to FOIA professionals and other Federal employees, and developing common FOIA standards for agencies across government.
  • The Global Initiative on Fiscal Transparency (GIFT): The United States will join GIFT, an international network of governments and non-government organizations aimed at enhancing financial transparency, accountability, and stakeholder engagement. The U.S. Government will actively participate in the GIFT Working Group and seek opportunities to collaborate with stakeholders and champion greater fiscal openness and transparency in domestic and global spending.
  • Open Data to the Public: Over the past few years, government data has been used by journalists to uncover variations in hospital billings, by citizens to learn more about the social services provided by charities in their communities, and by entrepreneurs building new software tools to help farmers plan and manage their crops.  Building on the U.S. Government’s ongoing open data efforts, new commitments will make government data even more accessible and useful for the public, including by reforming how Federal agencies manage government data as a strategic asset, launching a new version of Data.gov to make it even easier to discover, understand, and use open government data, and expanding access to agriculture and nutrition data to help farmers and communities.
  • Participatory Budgeting: The United States will promote community-led participatory budgeting as a tool for enabling citizens to play a role in identifying, discussing, and prioritizing certain local public spending projects, and for giving citizens a voice in how taxpayer dollars are spent in their communities. This commitment will include steps by the U.S. Government to help raise awareness of the fact that participatory budgeting may be used for certain eligible Federal community development grant programs.

Other initiatives launched or expanded today include: increasing open innovation by encouraging expanded use of challenges, incentive prizes, citizen science, and crowdsourcing to harness American ingenuity, as well as modernizing the management of government records by leveraging technology to make records less burdensome to manage and easier to use and share. There are many other exciting open-government initiatives described in the second Plan — and you can view them all here.”

Government Digital Service: the best startup in Europe we can't invest in


Saul Klein in the Guardian: “Everyone is rightly excited about the wall of amazing tech-enabled startups being born in Europe and Israel, disrupting massive industries including media, marketing, fashion, retail, travel, finance and transportation. However, there’s one incredibly disruptive startup based in London that is going after one of the biggest markets of all, and is so opaque it is largely unknown in the world of business – and, much to my chagrin, it’s also impossible to invest in.
It’s not a private company, it wasn’t started by “conventional” tech entrepreneurs and the market (though huge) is decidedly unsexy.
Its name is the Government Digital Service (GDS) and it is disrupting the British public sector in an energetic, creative and effective way. In less than two years GDS has hired over 200 staff (including some of the UK’s top digital talent), shipped an award-winning service, and begun the long and arduous journey of completely revolutionising the way that 62 million citizens interact with more than 700 services from 24 government departments and their 331 agencies.
It’s a strange world we live in when the government is pioneering the way that large complex corporations reinvent themselves to not just massively reduce cost and complexity, but to deliver better and more responsive services to their customers and suppliers.
So what is it that GDS knows that every chairman and chief executive of a FTSE100 should know? Open innovation.
1. Open data
• Leads to radical and remarkable transparency like the amazing Transactions Explorer designed by Richard Sargeant and his team. I challenge any FTSE100 to deliver the same by December 2014, or even start to show basic public performance data – if not to the internet, at least to their shareholders and analysts.
• Leads to incredible and unpredictable innovation where public data is shared and brought together in new ways. In fact, the Data.gov.uk project is one of the world’s largest sources of public data, with over 9,000 data sets for anyone to use.
2. Open standards
• Deliver interoperability across devices and suppliers
• Provide freedom from lock-in to any one vendor
• Enable innovation from a level playing field of many companies, including cutting-edge startups
• The Standards Hub from the Cabinet Office is an example of how the government aims to achieve open standards
3. Cloud and open source software and services
• Use of open source, cloud and software-as-a-service solutions radically reduces cost, improves delivery and enables innovation
4. Open procurement
• In March 2011, the UK government set a target to award 25% of spend with third-party suppliers to SMEs by March 2015.”