Assessing the Evidence: The Effectiveness and Impact of Public Governance-Oriented Multi-Stakeholder Initiatives


Paper by Brandon Brockmyer and Jonathan A. Fox: “Transnational multi-stakeholder initiatives (MSIs) – voluntary partnerships between governments, civil society, and the private sector – are an increasingly prevalent strategy for promoting government responsiveness and accountability to citizens. While most transnational MSIs involve using voluntary standards to encourage socially and environmentally responsible private sector behavior, a handful of these initiatives – the Extractive Industries Transparency Initiative (EITI), the Construction Sector Transparency Initiative (CoST), the Open Government Partnership (OGP), the Global Initiative on Fiscal Transparency (GIFT) and the Open Contracting Partnership (OCP) – focus on information disclosure and participation in the public sector. Unlike private sector MSIs, which attempt to supplement weak government capacity to enforce basic social and environmental standards through partnerships between businesses and civil society, public sector MSIs ultimately seek to bolster public governance. But how exactly are these MSIs supposed to work? And how much has actually been achieved?

The purpose of this study is to identify and consolidate the current state of the evidence for public governance-oriented MSI effectiveness and impact. Researchers collected over 300 documents and interviewed more than two dozen MSI stakeholders about their experiences with five public governance-oriented multi-stakeholder initiatives.

This report provides a ‘snapshot’ of the evidence related to these five MSIs, and suggests that the process of leveraging transparency and participation through these initiatives for broader accountability gains remains uncertain. The report highlights the ongoing process of defining MSI success and impact, and how these initiatives intersect with other accountability actors and processes in complex ways. The study closes with key recommendations for MSI stakeholders….(More)”

Do We Need to Educate Open Data Users?


Tony Hirst at IODC: “Whilst promoting the publication of open data is a key, indeed necessary, ingredient in driving the global open data agenda, promoting initiatives that support the use of open data is perhaps an even more pressing need….

This, then, is the first issue we need to address: improving basic levels of literacy in interpreting – and manipulating (for example, sorting and grouping) – simple tables and charts. Sensemaking, in other words: what does the chart you’ve just produced actually say? What story does it tell? And there’s an added benefit that arises from learning to read and critique charts better – it makes you better at creating your own.
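To make the “sorting and grouping” point concrete, here is a minimal sketch in pandas; the table of local spending, its column names, and its figures are all hypothetical:

```python
import pandas as pd

# A hypothetical table of local spending, of the kind a reader
# might download from an open data portal.
spend = pd.DataFrame({
    "department": ["Health", "Transport", "Health", "Education", "Transport"],
    "supplier":   ["A", "B", "C", "A", "D"],
    "amount":     [12000, 45000, 3000, 21000, 9500],
})

# Sorting: which individual payments are largest?
print(spend.sort_values("amount", ascending=False))

# Grouping: what does each department spend in total?
print(spend.groupby("department")["amount"].sum())
```

Reading the two outputs against each other – one unusually large payment versus a consistently high-spending department – is exactly the kind of sensemaking the paragraph above describes.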

Associated with reading stories from data comes the reason for telling the story and putting the data to work. How does “data” help you make a decision, or track the impact of a particular intervention? (Your original question should also have informed the data you searched for in the first place). Here we have a need to develop basic skills in how to actually use data, from finding anomalies to hold publishers to account, to using the data as part of a positive advocacy campaign.
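As one illustration of “finding anomalies to hold publishers to account”, here is a very simple approach sketched with made-up payment data; real audit work would use more robust methods than a fixed multiple of the median.

```python
import pandas as pd

# Hypothetical payments data.
payments = pd.DataFrame({
    "supplier": ["A", "B", "C", "D", "E"],
    "amount":   [1200, 980, 1100, 15500, 1050],
})

# Flag anything far above the typical payment (here: more than 3x the
# median) as worth a question to the publisher.
threshold = 3 * payments["amount"].median()
print(payments[payments["amount"] > threshold])
```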

After a quick read, on site, of some of the stories the data might have to tell, there may be a need to do further analysis, or more elaborate visualization work. At this point, a range of technical craft skills often come into play, as well as statistical knowledge.

Many openly published datasets just aren’t that good – they’re “dirty”, full of misspellings, missing data, things in the wrong place or wrong format, even if the data they do contain is true. A significant amount of time that should be spent analyzing the data gets spent cleaning the dataset and getting it into a form where it can be worked with. I would argue here that a data technician, with a wealth of craft knowledge about how to repair what is essentially a broken dataset, can play an important timesaving role, getting data into a state where an analyst can actually start to do their job: analyzing the data.

But at the same time, there are a range of tools and techniques that can help the everyday user improve the quality of their data. Many of these tools require an element of programming knowledge, but less than you might at first think. In the Open University/FutureLearn MOOC “Learn to Code for Data Analysis” we use an interactive notebook style of computing to show how you can use code literally one line at a time to perform powerful data cleaning, analysis, and visualization operations on a range of open datasets, including data from the World Bank and Comtrade.
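In that one-line-at-a-time spirit – and to be clear, this is a sketch in the style of the course, not its actual materials – a notebook cleaning session might look like this, on a made-up table rather than a real World Bank or Comtrade extract:

```python
import pandas as pd

# A hypothetical "dirty" table, not a real World Bank or Comtrade extract.
raw = pd.DataFrame({
    "country": ["Indonesia", "indonesia ", "Brazil", None],
    "gdp_growth": ["5.0", "n/a", "2.3", "1.1"],
})

raw["country"] = raw["country"].str.strip().str.title()               # fix stray spaces and casing
raw["gdp_growth"] = pd.to_numeric(raw["gdp_growth"], errors="coerce") # turn "n/a" into NaN
clean = raw.dropna()                                                  # drop rows that can't be used
print(clean.describe())                                               # one-line summary statistics
```

Each line does one visible, checkable thing, which is precisely what makes the notebook style approachable for beginners.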

Here, then, is yet another area where skills development may be required: statistical literacy. At its heart, statistics simply provide us with a range of tools for comparing sets of numbers. But knowing what comparisons to make, or the basis on which particular comparisons can be made, knowing what can be said about those comparisons or how they might be interpreted, in short, understanding what story the stats appear to be telling, can quickly become bewildering. Just as we need to improve sensemaking skills associated with reading charts, so too we need to develop skills in making sense of statistics, even if not actually producing those statistics ourselves.
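To ground this: the comparison of two sets of numbers is usually the easy part – a line or two of code – while the literacy lies in knowing what the result does and does not license you to conclude. A toy example with entirely made-up figures:

```python
from scipy import stats

# Two made-up sets of numbers: say, pothole-repair times in days,
# sampled before and after a new reporting system.
before = [31, 28, 35, 30, 33, 29, 34]
after  = [27, 25, 30, 26, 28, 24, 29]

# An independent-samples t-test asks: is the difference in means plausibly
# just chance? It does NOT say the new system caused the change.
t, p = stats.ttest_ind(before, after)
print(f"t = {t:.2f}, p = {p:.3f}")
```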

As more data gets published, there are more opportunities for more people to make use of that data. In many cases, what’s likely to hold back that final data use is a skills gap: primary among these are the skills required to interpret simple datasets and their associated statistics, and to develop knowledge about how to make decisions or track progress based on that interpretation. However, the path from the originally published open dataset to the statistics or visualizations used by end-users may also be a winding one, requiring skills not only in analyzing data and uncovering – and then telling – the stories it contains, but also in more mundane technical operational concerns such as actually accessing, and cleaning, dirty datasets….(More)”

Uninformed: Why People Seem to Know So Little about Politics and What We Can Do about It


Book by Arthur Lupia: “Research polls, media interviews, and everyday conversations reveal an unsettling truth: citizens, while well-meaning and even passionate about current affairs, appear to know very little about politics. Hundreds of surveys document vast numbers of citizens answering even basic questions about government incorrectly. Given this unfortunate state of affairs, it is not surprising that more knowledgeable people often deride the public for its ignorance. Some experts even think that less informed citizens should stay out of politics altogether.

As Arthur Lupia shows in Uninformed, this is not constructive. At root, critics of public ignorance fundamentally misunderstand the problem. Many experts believe that simply providing people with more facts will make them more competent voters. However, these experts fail to understand how most people learn, and hence don’t really know what types of information are even relevant to voters. Feeding them information they don’t find relevant does not address the problem. In other words, before educating the public, we need to educate the educators.

Lupia offers not just a critique, though; he also has solutions. Drawing from a variety of areas of research on topics like attention span and political psychology, he shows how we can actually increase issue competence among voters in areas ranging from gun regulation to climate change. To attack the problem, he develops an arsenal of techniques to effectively convey to people information they actually care about.

Citizens sometimes lack the knowledge that they need to make competent political choices, and it is undeniable that greater knowledge can improve decision making. But we need to understand that voters either don’t care about or pay attention to much of the information that experts think is important. Uninformed provides the keys to improving political knowledge and civic competence: understanding what information is important to others and knowing how to best convey it to them….(More)”

Jakarta’s Participatory Budget


Ramda Yanurzha in GovInsider: “…This is a map of Musrenbang 2014 in Jakarta. Red is a no-go, green means the proposal is approved.

To give you a brief background, musrenbang is Indonesia’s flavor of participatory, bottom-up budgeting. The idea is that people can propose any development for their neighbourhood through a multi-stage budgeting process, thus actively participating in shaping the final budget for the city level, which will then determine the allocation for each city at the provincial level, and so on.

The catch is, I’m confident enough to say that not many people (especially in big cities) are actually aware of this process. While civic activists tirelessly lament that the process itself is neither inclusive nor transparent, I’m leaning towards a simpler explanation that most people simply couldn’t connect the dots.

People know that the public works agency fixed that 3-foot pothole last week. But it’s less clear how they can determine who is responsible for fixing a new streetlight in that dark alley and where the money comes from. Someone might have complained to the neighbourhood leader (Pak RT) and somehow the message gets through, but it’s very hard to trace how it got through. Just keep complaining to the black box until you don’t have to. There are very few people (mainly researchers) who get to see the whole picture.

This has now changed because the brand-new Jakarta open data portal provides musrenbang data from 2009. Who proposed what to whom, for how much, where it should be implemented (geotagged!), down to kelurahan/village level, and whether the proposal is accepted into the final city budget. For someone who advocates for better availability of open data in Indonesia and is eager to practice my data wrangling skills, it’s a goldmine.

Diving In

[Data screenshot: all the different units of goods proposed.]

The data is also, as expected, incredibly messy. While, surprisingly, most of the projects proposed are geotagged, there are a lot of formatting inconsistencies that make the clean-up stage painful. Some of them are minor (m? meter? meter2? m2? meter persegi?) while others are perplexing (latitude: -6,547,843,512,000 – yes, that’s a value of more than a billion). Annoyingly, hundreds of proposals point to the center of the National Monument, so it’s not exactly a representative dataset.
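For a flavor of what that repair work looks like in code, here is a minimal sketch; the column names and rows are hypothetical stand-ins for the portal’s real schema:

```python
import re
import pandas as pd

# Toy rows mimicking the inconsistencies described above.
df = pd.DataFrame({
    "unit": ["m", "meter", "meter2", "m2", "meter persegi"],
    "latitude": ["-6.2", "-6,547,843,512,000", "-6.18", "-6.21", "-6.3"],
})

# Collapse the unit spelling variants into two canonical values.
df["unit"] = df["unit"].map({
    "m": "m", "meter": "m",
    "m2": "m2", "meter2": "m2", "meter persegi": "m2",
})

def fix_latitude(s):
    """Drop stray separators, then shift the decimal point until the
    value is a plausible latitude (Jakarta sits around -6.2)."""
    x = float(re.sub(r"[^0-9.\-]", "", s))
    while abs(x) > 90:
        x /= 10
    return x

df["latitude"] = df["latitude"].apply(fix_latitude)
print(df)
```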

For fellow data wranglers, pull requests to improve the data are gladly welcome over here. Ibam generously wrote an RT extractor to yield further location data, and I’m looking into OpenStreetMap RW boundary data to create a reverse geocoder for the points.
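A reverse geocoder along those lines can be sketched as a point-in-polygon lookup with shapely – assuming, hypothetically, a GeoJSON file of RW boundary polygons that carries a name property:

```python
import json
from shapely.geometry import Point, shape

# Hypothetical input: a GeoJSON FeatureCollection of RW boundaries,
# each feature carrying a "name" property.
with open("rw_boundaries.geojson") as f:
    boundaries = [
        (feat["properties"]["name"], shape(feat["geometry"]))
        for feat in json.load(f)["features"]
    ]

def reverse_geocode(lon, lat):
    """Return the name of the first RW polygon containing the point, if any."""
    point = Point(lon, lat)
    for name, polygon in boundaries:
        if polygon.contains(point):
            return name
    return None

print(reverse_geocode(106.8456, -6.2088))  # a point in central Jakarta
```

A linear scan is fine at this scale; a spatial index (an R-tree, for example) would be the next step if the lookup became a bottleneck.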

A couple of hours of scrubbing in OpenRefine yields a dataset clean enough for me to generate the CartoDB map I embedded at the beginning of this piece. More precisely, it is a map of geotagged projects where each point is colored depending on whether it’s rejected or accepted.

Numbers and Patterns

40,511 proposals, some of them merged into broader ones, which gives us a grand total of 26,364 projects valued at over IDR 3,852,162,060,205, just over $250 million at the current exchange rate. This amount represents over 5% of Jakarta’s annual budget for 2015, with projects ranging from an IDR 27,500 (~$2) trash bin (that doesn’t sound right, does it?) in Sumur Batu to an IDR 54 billion, 1.5-kilometer drainage improvement in Koja….(More)”

Fudging Nudging: Why ‘Libertarian Paternalism’ is the Contradiction It Claims It’s Not


Paper by Heidi M. Hurd: “In this piece I argue that so-called “libertarian paternalism” is as self-contradictory as it sounds. The theory of libertarian paternalism originally advanced by Richard Thaler and Cass Sunstein, and given further defense by Sunstein alone, is itself just a sexy ad campaign designed to nudge gullible readers into thinking that there is no conflict between libertarianism and welfare utilitarianism. But no one should lose sight of the fact that welfare utilitarianism just is welfare utilitarianism only if it sacrifices individual liberty whenever it is at odds with maximizing societal welfare. And thus no one who believes that people have rights to craft their own lives through the exercise of their own choices ought to be duped into thinking that just because paternalistic nudges are cleverly manipulative and often invisible, rather than overtly coercive, standard welfare utilitarianism can lay claim to being libertarian.

After outlining four distinct strains of libertarian theory and sketching their mutual incompatibility with so-called “libertarian paternalism,” I go on to demonstrate at some length how the two most prevalent strains — namely, opportunity set libertarianism and motivational libertarianism — make paternalistically-motivated nudges abuses of state power. As I argue, opportunity set libertarians should recognize nudges for what they are — namely, state incursions into the sphere of liberty in which individual choice is a matter of moral right, the boundaries of which are rightly defined, in part, by permissions to do actions that do not maximize welfare. And motivational libertarians should similarly recognize nudges for what they are — namely, illicitly motivated forms of legislative intervention that insult autonomy no less than do flat bans that leave citizens with no choice but to substitute the state’s agenda for their own. As I conclude, whatever its name, a political theory that recommends to state officials the use of “nudges” as a means of ensuring that citizens advance the state’s understanding of their own best interests is no more compatible with libertarianism than is a theory that recommends more coercive means of paternalism….(More)”

Anonymous hackers could be Islamic State’s online nemesis


At The Conversation: “One of the key issues the West has had to face in countering Islamic State (IS) is the jihadi group’s mastery of online propaganda, seen in hundreds of thousands of messages celebrating the atrocities against civilians and spreading the message of radicalisation. It seems clear that efforts to counter IS online are missing the mark.

An internal US State Department assessment noted in June 2015 how the violent narrative of IS had “trumped” the efforts of the world’s richest and most technologically advanced nations. Meanwhile in Europe, Interpol moved to track and take down social media accounts linked to IS, as if that would solve the problem – when in fact doing so meant potentially missing out on intelligence-gathering opportunities.

Into this vacuum has stepped Anonymous, a loose, fragmented network of hacktivists that has for years launched occasional cyberattacks against government, corporate and civil society organisations. The group announced its intention to take on IS and its propaganda online, using its networks to crowd-source the identities of IS-linked accounts. Under the banner of #OpIsis and #OpParis, Anonymous published lists of thousands of Twitter accounts claimed to belong to IS members or sympathisers, claiming more than 5,500 had been removed.

The group pursued a similar approach following the attacks on Charlie Hebdo magazine in January 2015, with @OpCharlieHebdo taking down more than 200 jihadist Twitter accounts, bringing down the website Ansar-Alhaqq.net and publishing a list of 25,000 accounts alongside a guide on how to locate pro-IS material online….

Members of Anonymous have been prosecuted for cyberattacks in many countries under cybercrime laws, as their activities are not seen as legitimate protest. It is worth mentioning the ethical debate around hacktivism, as some see cyberattacks that take down accounts or websites as infringing on others’ freedom of expression, while others argue that hacktivism should instead create technologies to circumvent censorship, enable digital equality and open access to information….(More)”

E-Gov’s Untapped Potential for Cutting the Public Workforce


Robert D. Atkinson at Governing: “Since the flourishing of the Internet in the mid-1990s, e-government advocates have promised that information technology not only would make it easier to access public services but also would significantly increase government productivity and lower costs. Compared to the private sector, however, this promise has remained largely unfulfilled, in part because of a resistance to employing technology to replace government workers.

It’s not surprising, then, that state budget directors and budget committees usually look at IT as a cost rather than as a strategic investment that can produce a positive financial return for taxpayers. Until governments make a strong commitment to using IT to increase productivity — including as a means of workforce reduction — it will remain difficult to bring government into the 21st-century digital economy.

The benefits can be sizeable. My organization, the Information Technology and Innovation Foundation, estimates that if states focus on using IT to drive productivity, they stand to save more than $11 billion over the next five years. States can achieve these productivity gains in two primary ways:

First, they can use e-government to substitute for person-to-person interactions. For example, by moving just nine state services online — from one-stop business registration to online vehicle-license registration — Utah reduced the need for government employees to interact with citizens, saving an average of $13 per transaction.

And second, they can use IT to optimize performance and cut costs. In 2013, for example, Pennsylvania launched a mobile app to streamline the inspection process for roads and bridges, reducing the time it took for manual data entry. Inspectors saved about 15 minutes per survey, which added up to a savings of over $550,000 in 2013.

So if technology can cut costs, why has e-government not lived up to its original promise? One key reason is that most state governments have focused first and foremost on using IT to improve service quality and access rather than to increase productivity. In part, this is because boosting productivity involves reducing headcount, and state chief information officers and other policymakers often are unwilling to openly advocate for using technology in this way for fear that it will generate opposition from government workers and their unions. This is why replacing labor with modern IT tools has long been the third rail for the public-sector IT community.

This is not necessarily the case in some other nations that have moved to aggressively deploy IT to reduce headcount. The first goal of the Danish Agency for Digitisation’s strategic plan is “a productive and efficient public sector.” To get there, the agency plans to focus on automation of public administrative procedures. Denmark even introduced a rule that all communications with government must be conducted electronically, eliminating telephone receptionists at municipal offices. Likewise, the United Kingdom’s e-government strategy set a goal of increasing productivity by 2.5 percent, including through headcount cuts.

Another reason e-government has not lived up to its full promise is that many state IT systems are woefully out of date, especially compared to the systems the corporate sector uses. But if CIOs and other advocates of modern digital government are going to be able to make their case effectively for resources to bring their technology into the 21st century, they will need to make a more convincing bottom-line case to appropriators. This argument should be about saving money, including through workforce reduction.

Policymakers should base this case not just on savings for government but also for the state’s businesses and citizens….(More)”

Privacy in a Digital, Networked World: Technologies, Implications and Solutions


Book edited by Zeadally, Sherali and Badra, Mohamad: “This comprehensive textbook/reference presents a focused review of the state of the art in privacy research, encompassing a range of diverse topics. The first book of its kind designed specifically to cater to courses on privacy, this authoritative volume provides technical, legal, and ethical perspectives on privacy issues from a global selection of renowned experts. Features: examines privacy issues relating to databases, P2P networks, big data technologies, social networks, and digital information networks; describes the challenges of addressing privacy concerns in various areas; reviews topics of privacy in electronic health systems, smart grid technology, vehicular ad-hoc networks, mobile devices, location-based systems, and crowdsourcing platforms; investigates approaches for protecting privacy in cloud applications; discusses the regulation of personal information disclosure and the privacy of individuals; presents the tools and the evidence to better understand consumers’ privacy behaviors….(More)”

Predictive policing is ‘technological racism’


Shaun King at the New York Daily News: “The future is here.

For years now, the NYPD, the Miami PD, and many police departments around the country have been using new technology that claims it can predict where crime will happen and where police should focus their energies. They call it predictive policing. Months ago, I raised several red flags about such software because it does not appear to properly account for the presence of racism or racial profiling in how it predicts where crimes will be committed.

See, these systems claim to predict where crimes will happen based on prior arrest data. What they don’t account for is the widespread reality that race and racial profiling have everything to do with who is arrested and where they are arrested. For instance, study after study has shown that white people actually are more likely to sell drugs and do drugs than black people, but are exponentially less likely to be arrested for either crime. But, and this is where these systems fail, if the only data being entered into these systems is based not on the more complex reality of who sells and purchases drugs, but on a racial stereotype, then the system will only perpetuate the racism that preceded it…

In essence, it’s not predicting who will sell drugs and where they will sell it, as much as it is actually predicting where a certain race of people may sell or purchase drugs. It’s technological racism at its finest.
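The feedback loop King describes can be made concrete with a toy simulation – entirely made-up numbers, intended only to show the mechanism, not to model any real system:

```python
import random

# Two areas with the SAME true offence rate; area B starts with more
# recorded arrests because it was policed more heavily in the past.
random.seed(0)
arrests = {"A": 10, "B": 20}
TRUE_OFFENCE_RATE = 0.3   # identical in both areas by construction

for week in range(52):
    patrolled = max(arrests, key=arrests.get)   # "predict" from past arrest counts
    if random.random() < TRUE_OFFENCE_RATE:     # offences occur equally everywhere,
        arrests[patrolled] += 1                 # but are only recorded where police look

print(arrests)  # area B's head start only widens: the data confirms its own bias
```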

Now, in addition to predictive policing, the state of Pennsylvania is pioneering predictive prison sentencing. Through complex questionnaires and surveys completed not by inmates, but by prison staff members, inmates may be given a lower bail and shorter sentence, or a higher bail and a lengthier prison sentence. The surveys focus on family background, economic background, prior crimes, education levels and more.

When all of the data is scored, the result classifies prisoners as low, medium or high risk. While this may sound benign, it isn’t. No prisoner should ever be given a harsh sentence or an outrageous bail amount because of their family background or economic status. Even these surveys lend themselves to being racist and putting black and brown women and men in positions where it’s nearly impossible to get a good score because of prevalent problems in communities of color….(More)”

Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government


Paper by Jan Whittington et al: “Cities hold considerable information, including details about the daily lives of residents and employees, maps of critical infrastructure, and records of the officials’ internal deliberations. Cities are beginning to realize that this data has economic and other value: If done wisely, the responsible release of city information can also release greater efficiency and innovation in the public and private sector. New services are cropping up that leverage open city data to great effect.

Meanwhile, activist groups and individual residents are placing increasing pressure on state and local government to be more transparent and accountable, even as others sound an alarm over the privacy issues that inevitably attend greater data promiscuity. This takes the form of political pressure to release more information, as well as increased requests for information under the many public records acts across the country.

The result of these forces is that cities are beginning to open their data as never before. It turns out there is surprisingly little research to date into the important and growing area of municipal open data. This article is among the first sustained, cross-disciplinary assessments of an open municipal government system. We are a team of researchers in law, computer science, information science, and urban studies. We have worked hand-in-hand with the City of Seattle, Washington for the better part of a year to understand its current procedures from each disciplinary perspective. Based on this empirical work, we generate a set of recommendations to help the city manage risk latent in opening its data….(More)”