G20/OECD Compendium of good practices on the use of open data for Anti-corruption


OECD: “This compendium of good practices was prepared by the OECD at the request of the G20 Anti-corruption Working Group (ACWG), to raise awareness of the benefits of open data policies and initiatives in: 

  • fighting corruption,
  • increasing public sector transparency and integrity,
  • fostering economic development and social innovation.

This compendium provides an overview of initiatives for the publication and re-use of open data to fight corruption across OECD and G20 countries and underscores the impact that a digital transformation of the public sector can deliver in terms of better governance across policy areas.  The practices illustrate the use of open data as a way of fighting corruption and show how open data principles can be translated into concrete initiatives.

The publication is divided into three sections:

Section 1 discusses the benefits of open data for greater public sector transparency and performance, national competitiveness and social engagement, and how these initiatives contribute to greater public trust in government.

Section 2 highlights the preconditions necessary across different policy areas related to anti-corruption (e.g. open government, public procurement) to sustain the implementation of an “Open by default” approach that could help government move from a perspective that focuses on increasing access to public sector information to one that enhances the publication of open government data for re-use and value co-creation. 

Section 3 presents the results of the OECD survey administered across OECD and G20 countries, good practices on the publishing and reusing of open data for anti-corruption in G20 countries, and lessons learned from the definition and implementation of these initiatives. This chapter also discusses the implications for broader national matters such as freedom of press, and the involvement of key actors of the open data ecosystem (e.g. journalists and civil society organisations) as key partners in open data re-use for anti-corruption…(More)”.

Data Flow in the Smart City: Open Data Versus the Commons


Chapter by Richard Beckwith, John Sherry and David Prendergast in The Hackable City: “Much of the recent excitement around data, especially ‘Big Data,’ focuses on the potential commercial or economic value of data. How that data will affect people isn’t much discussed. People know that smart cities will deploy Internet-based monitoring and that flows of the collected data promise to produce new values. Less considered is that smart cities will be sites of new forms of citizen action—enabled by an ‘economy’ of data that will lead to new methods of collectivization, accountability, and control which, themselves, can provide both positive and negative values to the citizenry. Therefore, smart city design needs to consider not just measurement and publication of data but also the implications of city-wide deployment, data openness, and the possibility of unintended consequences if data leave the city….(More)”.

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence


Blog by Julia Powles and Helen Nissenbaum: “Serious thinkers in academia and business have swarmed to the A.I. bias problem, eager to tweak and improve the data and algorithms that drive artificial intelligence. They’ve latched onto fairness as the objective, obsessing over competing constructs of the term that can be rendered in measurable, mathematical form. If the hunt for a science of computational fairness was restricted to engineers, it would be one thing. But given our contemporary exaltation and deference to technologists, it has limited the entire imagination of ethics, law, and the media as well.

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

What has been remarkably underappreciated is the key interdependence of the twin stories of A.I. inevitability and A.I. bias. Against the corporate projection of an otherwise sunny horizon of unstoppable A.I. integration, recognizing and acknowledging bias can be seen as a strategic concession — one that subdues the scale of the challenge. Bias, like job losses and safety hazards, becomes part of the grand bargain of innovation.

The reality that bias is primarily a social problem and cannot be fully solved technically becomes a strength, rather than a weakness, for the inevitability narrative. It flips the script. It absorbs and regularizes the classification practices and underlying systems of inequality perpetuated by automation, allowing relative increases in “fairness” to be claimed as victories — even if all that is being done is to slice, dice, and redistribute the makeup of those negatively affected by actuarial decision-making.

In short, the preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?…(More)”.

Harnessing Digital Tools to Revitalize European Democracy


Article by Elisa Lironi: “…Information and communication technology (ICT) can be used to implement more participatory mechanisms and foster democratic processes. Often referred to as e-democracy, there is a large range of very different possibilities for online engagement, including e-initiatives, e-consultations, crowdsourcing, participatory budgeting, and e-voting. Many European countries have started exploring ICT’s potential to reach more citizens at a lower cost and to tap into the so-called wisdom of the crowd, as governments attempt to earn citizens’ trust and revitalize European democracy by developing more responsive, transparent, and participatory decisionmaking processes.

For instance, when Anne Hidalgo was elected mayor of Paris in May 2014, one of her priorities was to make the city more collaborative by allowing Parisians to propose policy and develop projects together. In order to build a stronger relationship with the citizens, she immediately started to implement a citywide participatory budgeting project for the whole of Paris, including all types of policy issues. It started as a small pilot, with the city of Paris putting forward fifteen projects that could be funded with up to about 20 million euros and letting citizens vote on which projects to invest in, via ballot box or online. Parisians and local authorities deemed this experiment successful, so Hidalgo decided it was worth taking further, with more ideas and a bigger pot of money. Within two years, the level of participation grew significantly—from 40,000 voters in 2014 to 92,809 in 2016, representing 5 percent of the total urban population. Today, Paris Budget Participatif is an official platform that lets Parisians decide how to spend 5 percent of the investment budget from 2014 to 2020, amounting to around 500 million euros. In addition, the mayor also introduced two e-democracy platforms—Paris Petitions, for e-petitions, and Idée Paris, for e-consultations. Citizens in the French capital now have multiple channels to express their opinions and contribute to the development of their city.

In Latvia, civil society has played a significant role in changing how legislative procedures are organized. ManaBalss (My Voice) is a grassroots NGO that creates tools for better civic participation in decisionmaking processes. Its online platform, ManaBalss.lv, is a public e-participation website that lets Latvian citizens propose, submit, and sign legislative initiatives to improve policies at both the national and municipal level. …

In Finland, the government itself introduced an element of direct democracy into the Finnish political system, through the 2012 Citizens’ Initiative Act (CI-Act) that allows citizens to submit initiatives to the parliament. …

Other civic tech NGOs across Europe have been developing and experimenting with a variety of digital tools to reinvigorate democracy. These include initiatives like Science For You (SCiFY) in Greece, Netwerk Democratie in the Netherlands, and the Citizens Foundation in Iceland, which got its start when citizens were asked to crowdsource their constitution in 2010.

Outside of civil society, several private tech companies are developing digital platforms for democratic participation, mainly at the local government level. One example is the Belgian start-up CitizenLab, an online participation platform that has been used by more than seventy-five municipalities around the world. The young founders of CitizenLab have used technology to innovate the democratic process by listening to what politicians need and including a variety of functions, such as crowdsourcing mechanisms, consultation processes, and participatory budgeting. Numerous other European civic tech companies have been working on similar concepts—Cap Collectif in France, Delib in the UK, and Discuto in Austria, to name just a few. Many of these digital tools have proven useful to elected local or national representatives….

While these initiatives are making a real impact on the quality of European democracy, most of the EU’s formal policy focus is on constraining the power of the tech giants rather than positively aiding digital participation….(More)”.

When a Nudge Backfires: Using Observation with Social and Economic Incentives to Promote Pro-Social Behavior


Paper by Gary Bolton, Eugen Dimant and Ulrich Schmidt: “Both theory and recent empirical evidence on nudging suggest that observability of behavior acts as an instrument for promoting (discouraging) pro-social (anti-social) behavior.

Our study questions the universality of these claims. We employ a novel four-party setup to disentangle the roles three observational mechanisms play in mediating behavior. We systematically vary the observability of one’s actions by others as well as the (non-)monetary relationship between observer and observee. Observability involving economic incentives crowds out anti-social behavior in favor of more pro-social behavior.

Surprisingly, social observation without economic incentives fails to achieve any aggregate pro-social effect, and if anything it backfires. Additional experiments confirm that observability without additional monetary incentives can indeed backfire. However, they also show that the effect of observability on pro-social behavior is increased when social norms are made salient….(More)”.

Bad Landlord? These Coders Are Here to Help


Luis Ferré-Sadurní in the New York Times: “When Dan Kass moved to New York City in 2013 after graduating from college in Boston, his introduction to the city was one that many New Yorkers are all too familiar with: a bad landlord….

Examples include an app called Heatseek, created by students at a coding academy, that allows tenants to record and report the temperature in their homes to ensure that landlords don’t skimp on the heat. There’s also the Displacement Alert Project, built by a coalition of affordable housing groups, that maps out buildings and neighborhoods at risk of displacement.

Now, many of these civic coders are trying to band together and formalize a community.

For more than a year, Mr. Kass and other housing-data wonks have met each month at a shared work space in Brooklyn to exchange ideas about projects and talk about data sets over beer and snacks. Some come from prominent housing advocacy groups; others work unrelated day jobs. They informally call themselves the Housing Data Coalition.

“The real estate industry has many more programmers, many more developers, many more technical tools at their disposal,” said Ziggy Mintz, 30, a computer programmer who is part of the coalition. “It never quite seems fair that the tenant side of the equation doesn’t have the same tools.”

“Our collaboration is a counteracting force to that,” said Lucy Block, a research and policy associate at the Association for Neighborhood & Housing Development, the group behind the Displacement Alert Project. “We are trying to build the capacity to fight the displacement of low-income people in the city.”

This week, Mr. Kass and his team at JustFix.nyc, a nonprofit technology start-up, launched a new database for tenants that was built off ideas raised during those monthly meetings.

The tool, called Who Owns What, allows tenants to punch in an address and look up other buildings associated with the landlord or management company. It might sound inconsequential, but the tool goes a long way in piercing the veil of secrecy that shrouds the portfolios of landlords….(More)”.
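
The core idea is a simple grouping step: index the city’s building-registration records by the contact details a landlord files, so that one address can surface the rest of a portfolio. The sketch below is a rough, hypothetical illustration of that step using invented records, not JustFix.nyc’s actual code or data pipeline.

```python
from collections import defaultdict

# Hypothetical building-registration records; the real tool draws on NYC HPD
# registration data and does far more entity matching than this sketch.
registrations = [
    {"address": "654 Park Pl, Brooklyn", "owner_business_address": "123 Main St, Suite 500"},
    {"address": "12 Linden Blvd, Brooklyn", "owner_business_address": "123 Main St, Suite 500"},
    {"address": "88 Ocean Ave, Brooklyn", "owner_business_address": "9 Broad St, Floor 2"},
]

# Index buildings by the business address registered for the landlord; a shared
# business address is one signal that buildings belong to the same portfolio.
by_owner_contact = defaultdict(list)
for rec in registrations:
    by_owner_contact[rec["owner_business_address"]].append(rec["address"])

def related_buildings(address):
    """Return other buildings registered to the same business address."""
    for buildings in by_owner_contact.values():
        if address in buildings:
            return [b for b in buildings if b != address]
    return []

print(related_buildings("654 Park Pl, Brooklyn"))  # ['12 Linden Blvd, Brooklyn']
```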

To Reduce Privacy Risks, the Census Plans to Report Less Accurate Data


Mark Hansen at the New York Times: “When the Census Bureau gathered data in 2010, it made two promises. The form would be “quick and easy,” it said. And “your answers are protected by law.”

But mathematical breakthroughs, easy access to more powerful computing, and widespread availability of large and varied public data sets have made the bureau reconsider whether the protection it offers Americans is strong enough. To preserve confidentiality, the bureau’s directors have determined they need to adopt a “formal privacy” approach, one that adds uncertainty to census data before it is published and achieves privacy assurances that are provable mathematically.

The census has always added some uncertainty to its data, but a key innovation of this new framework, known as “differential privacy,” is a numerical value describing how much privacy loss a person will experience. It determines the amount of randomness — “noise” — that needs to be added to a data set before it is released, and sets up a balancing act between accuracy and privacy. Too much noise would mean the data would not be accurate enough to be useful — in redistricting, in enforcing the Voting Rights Act or in conducting academic research. But too little, and someone’s personal data could be revealed.
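
To make that balancing act concrete, the sketch below shows the textbook Laplace mechanism for a single counting query: noise scaled to a privacy-loss parameter (often written epsilon) is added before release. This is a minimal illustration under simplifying assumptions (a counting query with sensitivity 1, released once), not the Census Bureau’s production mechanism, which is far more elaborate.

```python
import numpy as np

def noisy_count(true_count, epsilon, rng=None):
    """Release a count with Laplace noise calibrated to the privacy-loss parameter.

    For a counting query, one person's record changes the result by at most 1
    (sensitivity = 1), so noise drawn from Laplace(scale = 1/epsilon) satisfies
    epsilon-differential privacy for a single release.
    """
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means stronger privacy but noisier, less accurate counts.
true_block_population = 37
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: published count ~ {noisy_count(true_block_population, eps):.1f}")
```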

On Thursday, the bureau will announce the trade-off it has chosen for data publications from the 2018 End-to-End Census Test it conducted in Rhode Island, the only dress rehearsal before the actual census in 2020. The bureau has decided to enforce stronger privacy protections than companies like Apple or Google had when they each first took up differential privacy….

In presentation materials for Thursday’s announcement, special attention is paid to lessening any problems with redistricting: the potential complications of using noisy counts of voting-age people to draw district lines. (By contrast, in 2000 and 2010 the swapping mechanism produced exact counts of potential voters down to the block level.)

The Census Bureau has been an early adopter of differential privacy. Still, instituting the framework on such a large scale is not an easy task, and even some of the big technology firms have had difficulties. For example, shortly after Apple’s announcement in 2016 that it would use differential privacy for data collected from its macOS and iOS operating systems, it was revealed that the actual privacy loss of its systems was much higher than advertised.

Some scholars question the bureau’s abandonment of techniques like swapping in favor of differential privacy. Steven Ruggles, Regents Professor of history and population studies at the University of Minnesota, has relied on census data for decades. Through the Integrated Public Use Microdata Series, he and his team have regularized census data dating to 1850, providing consistency between questionnaires as the forms have changed, and enabling researchers to analyze data across years.

“All of the sudden, Title 13 gets equated with differential privacy — it’s not,” he said, adding that if you make a guess about someone’s identity from looking at census data, you are probably wrong. “That has been regarded in the past as protection of privacy. They want to make it so that you can’t even guess.”

“There is a trade-off between usability and risk,” he added. “I am concerned they may go far too far on privileging an absolutist standard of risk.”

In a working paper published Friday, he said that with the number of private services offering personal data, a prospective hacker would have little incentive to turn to public data such as the census “in an attempt to uncover uncertain, imprecise and outdated information about a particular individual.”…(More)”.

Motivating Participation in Crowdsourced Policymaking: The Interplay of Epistemic and Interactive Aspects


Paper by Tanja Aitamurto and Jorge Saldivar in Proceedings of ACM Human-Computer Interaction (CSCW ’18):  “…we examine the changes in motivation factors in crowdsourced policymaking. By drawing on longitudinal data from a crowdsourced law reform, we show that people participated because they wanted to improve the law, learn, and solve problems. When crowdsourcing reached a saturation point, the motivation factors weakened and the crowd disengaged. Learning was the only factor that did not weaken. The participants learned while interacting with others, and the more actively the participants commented, the more likely they stayed engaged. Crowdsourced policymaking should thus be designed to support both epistemic and interactive aspects. While the crowd’s motives were rooted in self-interest, their knowledge perspective showed common-good orientation, implying that rather than being dichotomous, motivation factors move on a continuum. The design of crowdsourced policymaking should support the dynamic nature of the process and the motivation factors driving it….(More)”.

Chatbots Are a Danger to Democracy


Jamie Susskind in the New York Times: “As we survey the fallout from the midterm elections, it would be easy to miss the longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process.

Chatbots are software programs that are capable of conversing with human beings on social media using natural language. Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.

Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation. Politics, however, is not Mitsuku’s strong suit. When asked “What do you think of the midterms?” Mitsuku replies, “I have never heard of midterms. Please enlighten me.” Reflecting the imperfect state of the art, Mitsuku will often give answers that are entertainingly weird. Asked, “What do you think of The New York Times?” Mitsuku replies, “I didn’t even know there was a new one.”

Most political bots these days are similarly crude, limited to the repetition of slogans like “#LockHerUp” or “#MAGA.” But a glance at recent political history suggests that chatbots have already begun to have an appreciable impact on political discourse. In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.

In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.

Chatbots aren’t a recent phenomenon. Two years ago, around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots. And a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side….

We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human? Bots peddling suspect information could be challenged by moderator-bots to provide recognized sources for their claims within seconds. Those that fail would face removal.
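
As a sketch of how such a platform-side cap might work, the example below limits a hypothetical bot account to a fixed number of contributions per day. The class, threshold and API are invented for illustration; a real platform would need authenticated bot registration and persistent, server-side enforcement.

```python
from collections import defaultdict
from datetime import date

class DailyContributionCap:
    """Hypothetical per-bot limit: at most `max_per_day` contributions per calendar day."""

    def __init__(self, max_per_day=50):
        self.max_per_day = max_per_day
        self._counts = defaultdict(int)  # (account_id, date) -> posts so far today

    def allow_post(self, account_id, today=None):
        today = today or date.today()
        key = (account_id, today)
        if self._counts[key] >= self.max_per_day:
            return False  # cap reached: reject, or queue for moderator-bot review
        self._counts[key] += 1
        return True

limiter = DailyContributionCap(max_per_day=3)
print([limiter.allow_post("suspected_bot_42") for _ in range(5)])
# [True, True, True, False, False]
```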

We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate. For both those reasons, the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake….(More)”.

Prototyping for policy


Camilla Buchanan at Policy Lab Blog: “…Prototyping is common in the product and industrial design process – it has also extended to less tangible design sub-specialisms like service design. Prototypes are low-fidelity mockups of an imagined idea or solution and they allow for testing before implementation. A product can be tested in cardboard form, a website can be tested through a hand-drawn wireframe, a service interaction can be tested with roleplay….

Policy is a hazier concept: it implies a message or statement of intent which sets a direction of work. Before a policy statement is made there will be some form of strategic conversation. In governments this usually takes place at the political level amongst ministers or within political parties, and there is little scope for outsiders to enter these spaces. Policies set by elected officials tend to be high-level statements – as short as a line or two in a manifesto – expressed through speeches or other policy documents like White Papers.

A policy statement therefore expresses a goal and it sets in motion realisations of that goal through laws, programmes or other activities. A short policy statement can determine major programmes of government work for many years. Policy programmes have their own problem spaces to define and there is much to do in order to translate a policy goal into practical activities. Whether consciously or not, policy programmes touch the lives of millions of people and the unintended consequences or conflicting results from the enactment of poor policies can be extremely harmful. The potential benefits of testing policy goals before they are put in place are therefore huge.

The idea of design interacting directly with policy making has been explored in the last five or so years, and the first book on this subject was published in 2014. In government terms this work is very new and there is relatively little precision in current explanations. Prototyping for Policy made space to explore this better….

It is still early days for articulating exactly how and why the “physical making” aspect of design is so important in government contexts, but almost all designers working in this way will emphasise it. An obvious benefit of building something real is that operational errors become more evident. And because prototypes make ideas manifest, they can help to build consensus or reveal where it is absent. They are also a way of asking questions and the presence of a prototype often prompts discussion of broader issues.

As an example, the picture below shows staff from the Service Design team at the consultancy OpenRoad in Vancouver considering advanced prototypes of changes to transit fare policy for the city for their client TransLink….(More)”.

Prototypes of changes to transit fares by OpenRoad