New possibilities for cutting corruption in the public sector


Rema Hanna and Vestal McIntyre at VoxDev: “In their day-to-day dealings with the government, citizens of developing countries frequently encounter absenteeism, demands for bribes, and other forms of low-level corruption. When researchers used unannounced visits to gauge public-sector attendance across six countries, they found that 19% of teachers and 35% of health workers were absent during work hours (Chaudhury et al. 2006). A recent survey found that nearly 70% of Indians reported paying a bribe to access public services.

Corruption can set vicious cycles in motion: the government is starved of the resources it needs to provide services, and citizens are deprived of the things they need. For the poor, this might mean living without quality education, electricity, healthcare, and so forth. In contrast, the rich can simply pay the bribe or obtain the service privately, furthering inequality.

Much of the discourse around corruption focuses on punishing corrupt offenders. But punitive measures can only go so far, especially when corruption is seen as the ‘norm’ and is thus ingrained in institutions. 

What if we could find ways of identifying the ‘goodies’ – those who enter the public sector out of a sense of civic responsibility, and serve honestly – and weeding out the ‘baddies’ before they are hired? New research shows this may be possible....

You can test personality

For decades, questionnaires have dissected personality into the ‘Big Five’ traits of openness, conscientiousness, extraversion, agreeableness, and neuroticism. These traits have been shown to be predictors of behaviour and outcomes in the workplace (Heckman 2011). As a result, private sector employers often use them in recruiting. Nobel laureate James Heckman and colleagues found that standardized adolescent measures of locus of control and self-esteem (components of neuroticism) predict adult earnings to a similar degree as intelligence (Kautz et al. 2014).

Personality tests have also been put to use for the good of the poor: our colleague at Harvard’s Evidence for Policy Design (EPoD), Asim Ijaz Khwaja, and his collaborators have tested, and subsequently expanded, personality tests as a basis for identifying reliable borrowers. This way, lenders can offer products to poor entrepreneurs who lack traditional credit histories but who are nonetheless creditworthy. (See the Entrepreneurial Finance Lab’s website.)
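
As a rough illustration of the idea, the sketch below uses purely synthetic data and a hypothetical model (not EFL’s actual scoring method) to show how Big Five trait scores could feed a simple repayment model for applicants without a credit history:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Purely illustrative, synthetic example (not EFL's actual model):
# Big Five trait scores used as features to predict loan repayment
# for applicants who lack a conventional credit history.
n = 500
traits = rng.normal(size=(n, 5))  # openness, conscientiousness, extraversion,
                                  # agreeableness, neuroticism (standardised)

# Synthetic ground truth: conscientiousness raises, and neuroticism
# lowers, the probability of repaying the loan.
latent = 0.8 * traits[:, 1] - 0.5 * traits[:, 4] + rng.normal(scale=0.5, size=n)
repaid = (latent > 0).astype(int)

model = LogisticRegression().fit(traits, repaid)
print("estimated trait weights:", model.coef_.round(2))
print("repayment probability for first applicant:",
      model.predict_proba(traits[:1])[0, 1].round(2))
```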

You can test for civic-mindedness and honesty

Out of the personality-test literature grew the Perry Public Service Motivation questionnaire (Perry 1996), which comprises a series of statements with which respondents can express their level of agreement or disagreement, measuring civic-mindedness. The questionnaire has six modules: “Attraction to Policy Making”, “Commitment to Public Interest”, “Social Justice”, “Civic Duty”, “Compassion”, and “Self-Sacrifice.” Studies have found that scores on the instrument correlate positively with job performance, ethical behaviour, participation in civic organisations, and a host of other good outcomes (for a review, see Perry and Hondeghem 2008).
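
As an illustration of how such an instrument is typically scored, here is a minimal sketch assuming hypothetical responses on a 1–5 agreement scale; the item wording and scoring rules of the actual Perry instrument are not reproduced here:

```python
from statistics import mean

# A minimal sketch of scoring a Likert-style public service motivation
# questionnaire; the responses and 1-5 agreement scale are hypothetical,
# not the official Perry instrument.
responses = {
    "Attraction to Policy Making": [4, 3, 5],
    "Commitment to Public Interest": [5, 4, 4],
    "Social Justice": [3, 4, 4],
    "Civic Duty": [5, 5, 4],
    "Compassion": [4, 4, 3],
    "Self-Sacrifice": [3, 3, 4],
}

# Average within each module, then across modules, for an overall score.
module_scores = {module: mean(items) for module, items in responses.items()}
overall = mean(module_scores.values())

for module, score in module_scores.items():
    print(f"{module}: {score:.2f}")
print(f"Overall score: {overall:.2f}")
```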

You can also measure honesty in different ways. For example, Fischbacher and Föllmi-Heusi (2013) formulated a game in which subjects roll a die and write down the number that they get, receiving higher cash rewards for larger reported numbers. While this does not reveal with certainty whether any one subject lied, since no one else sees the die, it does reveal how far the reported numbers deviate from the uniform distribution. Those who report high numbers have a higher probability of having cheated. Implementing this, the authors found that “about 20% of inexperienced subjects lie to the fullest extent possible while 39% of subjects are fully honest.”
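
To make the logic concrete, here is a minimal simulation sketch, assuming a fair six-sided die and a simple mix of fully honest subjects and maximal liars (parameters chosen for illustration, not taken from the paper), that compares the reported numbers against the uniform distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper): 1,000 subjects,
# 20% lie to the fullest extent (always report the highest face),
# everyone else reports the true roll of a fair six-sided die.
n_subjects = 1000
share_full_liars = 0.20

true_rolls = rng.integers(1, 7, size=n_subjects)
is_liar = rng.random(n_subjects) < share_full_liars
reports = np.where(is_liar, 6, true_rolls)

# Test how far the reported numbers deviate from the uniform
# distribution expected under full honesty.
observed = np.bincount(reports, minlength=7)[1:]
expected = np.full(6, n_subjects / 6)
chi2, p_value = stats.chisquare(observed, expected)
print(f"chi-square = {chi2:.1f}, p = {p_value:.4g}")

# An honest subject reports a 6 with probability 1/6, so the excess
# mass on the highest face, rescaled by 5/6, gives a rough estimate of
# the share of subjects who lied to the fullest extent possible.
implied_liar_share = (observed[-1] / n_subjects - 1 / 6) / (5 / 6)
print(f"implied share of maximal liars ≈ {implied_liar_share:.0%}")
```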

These and a range of other tools for psychological profiling have opened up new possibilities for improving governance. Here are a few lessons this new literature has yielded….(More)”.

Beyond GDP: Measuring What Counts for Economic and Social Performance


OECD Book: “Metrics matter for policy and policy matters for well-being. In this report, the co-chairs of the OECD-hosted High Level Expert Group on the Measurement of Economic Performance and Social Progress, Joseph E. Stiglitz, Jean-Paul Fitoussi and Martine Durand, show how over-reliance on GDP as the yardstick of economic performance misled policy makers who did not see the 2008 crisis coming. When the crisis did hit, concentrating on the wrong indicators meant that governments made inadequate policy choices, with severe and long-lasting consequences for many people.

While GDP is the best-known and most powerful economic indicator, it can’t tell us everything we need to know about the health of countries and societies. In fact, it can’t even tell us everything we need to know about economic performance. We need to develop dashboards of indicators that reveal who is benefitting from growth, whether that growth is environmentally sustainable, how people feel about their lives, and what factors contribute to an individual’s or a country’s success. This book looks at progress made over the past 10 years in collecting well-being data, and in using them to inform policies. An accompanying volume, For Good Measure: Advancing Research on Well-being Metrics Beyond GDP, presents the latest findings from leading economists and statisticians on selected issues within the broader agenda on defining and measuring well-being….(More)”

Time to step away from the ‘bright, shiny things’? Towards a sustainable model of journalism innovation in an era of perpetual change


Paper by Julie Posetti: “The news industry has a focus problem. ‘Shiny Things Syndrome’ – obsessive pursuit of technology in the absence of clear and research-informed strategies – is the diagnosis offered by participants in this research. The cure suggested involves a conscious shift by news publishers from being technology-led to audience-focused and technology-empowered.

This report presents the first research from the Journalism Innovation Project anchored within the Reuters Institute for the Study of Journalism at the University of Oxford. It is based on analysis of discussions with 39 leading journalism innovators from around the world, representing 27 different news publishers. The main finding of this research is that relentless, high-speed pursuit of technology-driven innovation could be almost as dangerous as stagnation. While ‘random acts of innovation’, organic experimentation, and willingness to embrace new technology remain valuable features of an innovation culture, there is evidence of an increasingly urgent requirement for the cultivation of sustainable innovation frameworks and clear, longer-term strategies within news organisations.

Such a ‘pivot’ could also address the growing problem of burnout associated with ‘innovation fatigue’. To be effective, such strategies need to be focused on engaging audiences – the ‘end users’ – and they would benefit from research-informed innovation ‘indicators’.

The key themes identified in this report are:
a. The risks of ‘Shiny Things Syndrome’ and the impacts of ‘innovation fatigue’ in an era of perpetual change
b. Audiences: starting (again) with the end user
c. The need for a ‘user-led’ approach to researching journalism innovation and developing foundational frameworks to support it

Additionally, new journalism innovation considerations are noted, such as the implications of digital technologies’ ‘unintended consequences’, and the need to respond innovatively to media freedom threats – such as gendered online harassment, privacy breaches, and orchestrated disinformation campaigns….(More)”.

Open Government Data for Inclusive Development


Chapter by F. van Schalkwyk and M. Cañares in “Making Open Development Inclusive” (MIT Press), edited by Matthew L. Smith and Ruhiya Kris Seward: “This chapter examines the relationship between open government data and social inclusion. Twenty-eight open data initiatives from the Global South are analyzed to find out how and in what contexts the publication of open government data tends to result in the inclusion of habitually marginalized communities in governance processes such that they may lead better lives.

The relationship between open government data and social inclusion is examined by presenting an analysis of the outcomes of open data projects. This analysis is based on a constellation of factors identified as having a bearing on open data initiatives with respect to inclusion. The findings indicate that open data can contribute to an increase in access and participation, both components of inclusion. Where this happens, a more open, participatory approach to governance practice is taking root. However, the findings also show that access and participation approaches to open government data have, in the cases studied here, not successfully disrupted the concentration of power in political and other networks, and this has placed limits on open data’s contribution to a more inclusive society.

The chapter starts by presenting a theoretical framework for the analysis of the relationship between open data and inclusion. The framework sets out the complex relationship between social actors, information and power in the network society. This is critical, we suggest, in developing a realistic analysis of the contexts in which open data activates its potential for transformation. The chapter then articulates the research questions and presents the methodology used to operationalize them. The findings and discussion section that follows examines the factors affecting the relationship between open data and inclusion, and how these factors are observed to play out across several open data initiatives in different contexts. The chapter ends with concluding remarks and an attempt to synthesize the insights that emerged in the preceding sections….(More)”.

Better Data for Doing Good: Responsible Use of Big Data and Artificial Intelligence


Report by the World Bank: “Describes opportunities for harnessing the value of big data and artificial intelligence (AI) for social good and how new families of AI algorithms now make it possible to obtain actionable insights automatically and at scale. Beyond internet business or commercial applications, multiple examples already exist of how big data and AI can help achieve shared development objectives, such as the 2030 Agenda for Sustainable Development and the Sustainable Development Goals (SDGs). But ethical frameworks in line with increased uptake of these new technologies remain necessary—not only concerning data privacy but also relating to the impact and consequences of using data and algorithms. Public recognition has grown concerning AI’s potential to create both opportunities for societal benefit and risks to human rights. Development calls for seizing the opportunity to shape future use as a force for good, while at the same time ensuring the technologies address inequalities and avoid widening the digital divide….(More)”.

Common-Knowledge Attacks on Democracy


Paper by Henry Farrell and Bruce Schneier:  “Existing approaches to cybersecurity emphasize either international state-to-state logics (such as deterrence theory) or the integrity of individual information systems. Neither provides a good understanding of new “soft cyber” attacks that involve the manipulation of expectations and common understandings. We argue that scaling up computer security arguments to the level of the state, so that the entire polity is treated as an information system with associated attack surfaces and threat models, provides the best immediate way to understand these attacks and how to mitigate them.

We demonstrate systematic differences between how autocracies and democracies work as information systems, because they rely on different mixes of common and contested political knowledge. Stable autocracies will have common knowledge over who is in charge and their associated ideological or policy goals, but will generate contested knowledge over who the various political actors in society are, and how they might form coalitions and gain public support, so as to make it more difficult for coalitions to displace the regime. Stable democracies will have contested knowledge over who is in charge, but common knowledge over who the political actors are, and how they may form coalitions and gain public support. These differences are associated with notably different attack surfaces and threat models. Specifically, democracies are vulnerable to measures that “flood” public debate and disrupt shared decentralized understandings of actors and coalitions, in ways that autocracies are not….(More)”.

The Constitution of Knowledge


Jonathan Rauch at National Affairs: “America has faced many challenges to its political culture, but this is the first time we have seen a national-level epistemic attack: a systematic attack, emanating from the very highest reaches of power, on our collective ability to distinguish truth from falsehood. “These are truly uncharted waters for the country,” wrote Michael Hayden, former CIA director, in the Washington Post in April. “We have in the past argued over the values to be applied to objective reality, or occasionally over what constituted objective reality, but never the existence or relevance of objective reality itself.” To make the point another way: Trump and his troll armies seek to undermine the constitution of knowledge….

The attack, Hayden noted, is on “the existence or relevance of objective reality itself.” But what is objective reality?

In everyday vernacular, reality often refers to the world out there: things as they really are, independent of human perception and error. Reality also often describes those things that we feel certain about, things that we believe no amount of wishful thinking could change. But, of course, humans have no direct access to an objective world independent of our minds and senses, and subjective certainty is in no way a guarantee of truth. Philosophers have wrestled with these problems for centuries, and today they have a pretty good working definition of objective reality. It is a set of propositions: propositions that have been validated in some way, and have thereby been shown to be at least conditionally true — true, that is, unless debunked. Some of these propositions reflect the world as we perceive it (e.g., “The sky is blue”). Others, like claims made by quantum physicists and abstract mathematicians, appear completely removed from the world of everyday experience.

It is worth noting, however, that the locution “validated in some way” hides a cheat. In what way? Some Americans believe Elvis Presley is alive. Should we send him a Social Security check? Many people believe that vaccines cause autism, or that Barack Obama was born in Africa, or that the murder rate has risen. Who should decide who is right? And who should decide who gets to decide?

This is the problem of social epistemology, which concerns itself with how societies come to some kind of public understanding about truth. It is a fundamental problem for every culture and country, and the attempts to resolve it go back at least to Plato, who concluded that a philosopher king (presumably someone like Plato himself) should rule over reality. Traditional tribal communities frequently use oracles to settle questions about reality. Religious communities use holy texts as interpreted by priests. Totalitarian states put the government in charge of objectivity.

There are many other ways to settle questions about reality. Most of them are terrible because they rely on authoritarianism, violence, or, usually, both. As the great American philosopher Charles Sanders Peirce said in 1877, “When complete agreement could not otherwise be reached, a general massacre of all who have not thought in a certain way has proved a very effective means of settling opinion in a country.”

As Peirce implied, one way to avoid a massacre would be to attain unanimity, at least on certain core issues. No wonder we hanker for consensus. Something you often hear today is that, as Senator Ben Sasse put it in an interview on CNN, “[W]e have a risk of getting to a place where we don’t have shared public facts. A republic will not work if we don’t have shared facts.”

But that is not quite the right answer, either. Disagreement about core issues and even core facts is inherent in human nature and essential in a free society. If unanimity on core propositions is not possible or even desirable, what is necessary to have a functional social reality? The answer is that we need an elite consensus, and hopefully also something approaching a public consensus, on the method of validating propositions. We needn’t and can’t all agree that the same things are true, but a critical mass needs to agree on what it is we do that distinguishes truth from falsehood, and more important, on who does it.

Who can be trusted to resolve questions about objective truth? The best answer turns out to be no one in particular….(More)”.

Data Collaboration, Pooling and Hoarding under Competition Law


Paper by Bjorn Lundqvist: “In the Internet of Things era, devices will monitor and collect data, whilst device-producing firms will store, distribute, analyse and re-use data on a grand scale. A great deal of data analytics will be used to enable firms to understand and make use of the collected data. The infrastructure around the collected data is controlled, and access to the data flow is thus restricted on technical as well as legal grounds. Legally, the data are obscured behind a thicket of property rights, including intellectual property rights. Therefore, there is no general “data commons” for everyone to enjoy.

If firms would like to combine data, they need to give each other access either by sharing, trading, or pooling the data. On the one hand, industry-wide pooling of data could increase the efficiency of certain services and contribute to the innovation of other services, such as self-driving cars or personalized medicine. On the other hand, firms combining business data may use the data not to advance their services or products, but to collude, to exclude competitors, or to abuse their market position. Indeed, by combining their data in a pool, they can gain market power and, hence, the ability to violate competition law. Moreover, we also see firms hoarding data from various sources, creating de facto data pools. This article discusses the implications that combining data in pools might have for competition, and when competition law should be applicable. It develops the idea that data pools harbour great opportunities, whilst acknowledging that there are still risks to take into consideration, and to regulate….(More)”.

Why We Need to Audit Algorithms


James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian and Vic Katyal at Harvard Business Review: “Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public, which could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?

Indeed, some forward-thinking regulators are beginning to explore this possibility. For example, the EU’s General Data Protection Regulation (GDPR) requires that organizations be able to explain their algorithmic decisions. The city of New York recently assembled a task force to study possible biases in algorithmic decision systems. It is reasonable to anticipate that emerging regulations might be met with market pull for services involving algorithmic accountability.

So what might an algorithm auditing discipline look like? First, it should adopt a holistic perspective. Computer science and machine learning methods will be necessary, but likely not sufficient, foundations for an algorithm auditing discipline. Strategic thinking, contextually informed professional judgment, communication, and the scientific method are also required.
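
As one illustration of the quantitative side of such an audit, the sketch below shows a simple disparity check across demographic groups; it is an assumption-laden example on toy data, not an established audit standard:

```python
import pandas as pd

# A minimal, illustrative check (not an established audit standard):
# compare an algorithm's positive-decision rates across demographic
# groups, one of many quantitative tests an auditor might run.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   0,   0,   1,   0],
})

approval_rates = decisions.groupby("group")["approved"].mean()
disparate_impact_ratio = approval_rates.min() / approval_rates.max()

print(approval_rates)
print(f"disparate impact ratio: {disparate_impact_ratio:.2f}")
# A ratio well below 1.0 flags the system for closer, contextual review;
# the statistic is a starting point for the audit, not a verdict.
```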

As a result, algorithm auditing must be interdisciplinary in order for it to succeed….(More)”.

Reimagining Public-Private Partnerships: Four Shifts and Innovations in Sharing and Leveraging Private Assets and Expertise for the Public Good


Blog by Stefaan G. Verhulst and Andrew J. Zahuranec: “For years, public-private partnerships (PPPs) have promised to help governments do more for less. Yet, the discussion and experimentation surrounding PPPs often focus on outdated models and narratives, and the field of experimentation has not fully embraced the opportunities provided by an increasingly networked and data-rich private sector.

Private-sector actors (including businesses and NGOs) have expertise and assets that, if brought to bear in collaboration with the public sector, could spur progress in addressing public problems or providing public services. Challenges to date have largely involved the identification of effective and legitimate means for unlocking the public value of private-sector expertise and assets. Those interested in creating public value through PPPs are faced with a number of questions, including:

  • How do we broaden and deepen our understanding of PPPs in the 21st Century?
  • How can we innovate and improve the ways that PPPs tap into private-sector assets and expertise for the public good?
  • How do we connect actors in the PPP space with open governance developments and practices, especially given that PPPs have not played a major role in the governance innovation space to date?

The PPP Knowledge Lab defines a PPP as a “long-term contract between a private party and a government entity, for providing a public asset or service, in which the private party bears significant risk and management responsibility and remuneration is linked to performance.”…

To maximize the value of PPPs, we don’t just need new tools or experiments but new models for using assets and expertise in different sectors. We need to bring that capacity to public problems.

At the latest convening of the MacArthur Foundation Research Network on Opening Governance, Network members and experts from across the field tried to chart this new course by exploring questions about the future of PPPs.

The group explored the new research and thinking that enables many new types of collaboration beyond the typical “contract”-based approaches. Through their discussions, Network members identified four shifts representing ways that cross-sector collaboration could evolve in the future:

  1. From Formal to Informal Trust Mechanisms;
  2. From Selection to Iterative and Inclusive Curation;
  3. From Partnership to Platform; and
  4. From Shared Risk to Shared Outcome….(More)”.