Using insights from behavioral economics to nudge individuals towards healthier choices when eating out


Paper by Stéphane Bergeron, Maurice Doyon, Laure Saulais and JoAnne Labrecque: “Using a controlled experiment in a restaurant with naturally occurring clients, this study investigates how nudging can be used to design menus that guide consumers to make healthier choices. It examines the use of default options, focusing specifically on two types of defaults that can be found when ordering food in a restaurant: automatic and standard defaults. Both types of defaults significantly affected choices but did not adversely affect satisfaction with those choices. The results suggest that menu design could effectively use non-informational strategies such as nudging to promote healthier individual choices without restricting the offer or reducing satisfaction….(More)”.

G20/OECD Compendium of good practices on the use of open data for Anti-corruption


OECD: “This compendium of good practices was prepared by the OECD at the request of the G20 Anti-corruption Working Group (ACWG), to raise awareness of the benefits of open data policies and initiatives in: 

  • fighting corruption,
  • increasing public sector transparency and integrity,
  • fostering economic development and social innovation.

This compendium provides an overview of initiatives for the publication and re-use of open data to fight corruption across OECD and G20 countries and underscores the impact that a digital transformation of the public sector can deliver in terms of better governance across policy areas.  The practices illustrate the use of open data as a way of fighting corruption and show how open data principles can be translated into concrete initiatives.

The publication is divided into three sections:

Section 1 discusses the benefits of open data for greater public sector transparency and performance, national competitiveness and social engagement, and how these initiatives contribute to greater public trust in government.

Section 2 highlights the preconditions necessary across different policy areas related to anti-corruption (e.g. open government, public procurement) to sustain the implementation of an “Open by default” approach that could help government move from a perspective that focuses on increasing access to public sector information to one that enhances the publication of open government data for re-use and value co-creation. 

Section 3 presents the results of the OECD survey administered across OECD and G20 countries, good practices on the publishing and reusing of open data for anti-corruption in G20 countries, and lessons learned from the definition and implementation of these initiatives. This chapter also discusses the implications for broader national matters such as freedom of press, and the involvement of key actors of the open data ecosystem (e.g. journalists and civil society organisations) as key partners in open data re-use for anti-corruption…(More)”.

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence


Blog by Julia Powles and Helen Nissenbaum: “Serious thinkers in academia and business have swarmed to the A.I. bias problem, eager to tweak and improve the data and algorithms that drive artificial intelligence. They’ve latched onto fairness as the objective, obsessing over competing constructs of the term that can be rendered in measurable, mathematical form. If the hunt for a science of computational fairness were restricted to engineers, it would be one thing. But given our contemporary exaltation of and deference to technologists, it has limited the entire imagination of ethics, law, and the media as well.

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

What has been remarkably underappreciated is the key interdependence of the twin stories of A.I. inevitability and A.I. bias. Against the corporate projection of an otherwise sunny horizon of unstoppable A.I. integration, recognizing and acknowledging bias can be seen as a strategic concession — one that subdues the scale of the challenge. Bias, like job losses and safety hazards, becomes part of the grand bargain of innovation.

The reality that bias is primarily a social problem and cannot be fully solved technically becomes a strength, rather than a weakness, for the inevitability narrative. It flips the script. It absorbs and regularizes the classification practices and underlying systems of inequality perpetuated by automation, allowing relative increases in “fairness” to be claimed as victories — even if all that is being done is to slice, dice, and redistribute the makeup of those negatively affected by actuarial decision-making.

In short, the preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?…(More)”.

When a Nudge Backfires: Using Observation with Social and Economic Incentives to Promote Pro-Social Behavior


Paper by Gary Bolton, Eugen Dimant and Ulrich Schmidt: “Both theory and recent empirical evidence on nudging suggest that observability of behavior acts as an instrument for promoting (discouraging) pro-social (anti-social) behavior.

Our study questions the universality of these claims. We employ a novel four-party setup to disentangle the roles three observational mechanisms play in mediating behavior. We systematically vary the observability of one’s actions by others as well as the (non-)monetary relationship between observer and observee. Observability involving economic incentives crowds out anti-social behavior in favor of more pro-social behavior.

Surprisingly, social observation without economic incentives fails to achieve any aggregate pro-social effect, and if anything it backfires. Additional experiments confirm that observability without additional monetary incentives can indeed backfire. However, they also show that the effect of observability on pro-social behavior is increased when social norms are made salient….(More)”.

Chatbots Are a Danger to Democracy


Jamie Susskind in the New York Times: “As we survey the fallout from the midterm elections, it would be easy to miss the longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process.

Chatbots are software programs that are capable of conversing with human beings on social media using natural language. Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.
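
The retrieval-style approach described above can be made concrete with a toy sketch: the bot simply picks the stored reply whose prompt is statistically closest to the incoming message. The corpus, the similarity measure, and the fallback threshold below are all illustrative assumptions, not how Mitsuku or any production system is actually built.

```python
import math
import re
from collections import Counter

# Hypothetical toy corpus of (prompt, reply) pairs standing in for the
# "large data sets" a real system would learn from.
CORPUS = [
    ("what do you think of the midterms", "Turnout decides everything."),
    ("who should I vote for", "Read each candidate's record, not the ads."),
    ("is the economy doing well", "Depends on which indicator you trust."),
]

def _bag(text):
    # crude tokenisation: lower-cased words only
    return Counter(re.findall(r"[a-z']+", text.lower()))

def _cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def respond(message):
    """Pick the stored reply whose prompt is statistically closest to the message."""
    best_score, best_reply = max(
        (_cosine(_bag(message), _bag(prompt)), reply) for prompt, reply in CORPUS
    )
    # fall back to a canned line when nothing in the corpus is close enough
    return best_reply if best_score > 0.2 else "I have never heard of that. Please enlighten me."

print(respond("What do you think of the midterms?"))  # -> "Turnout decides everything."
```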

Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation. Politics, however, is not Mitsuku’s strong suit. When asked “What do you think of the midterms?” Mitsuku replies, “I have never heard of midterms. Please enlighten me.” Reflecting the imperfect state of the art, Mitsuku will often give answers that are entertainingly weird. Asked, “What do you think of The New York Times?” Mitsuku replies, “I didn’t even know there was a new one.”

Most political bots these days are similarly crude, limited to the repetition of slogans like “#LockHerUp” or “#MAGA.” But a glance at recent political history suggests that chatbots have already begun to have an appreciable impact on political discourse. In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.

In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.

Chatbots aren’t a recent phenomenon. Two years ago, around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots. And a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side….

We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make no more than a specified number of online contributions per day, or a specified number of responses to a particular human? Bots peddling suspect information could be challenged by moderator-bots to provide recognized sources for their claims within seconds. Those that fail would face removal.
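
A platform-side cap of the kind proposed could look something like the minimal sketch below, assuming hypothetical limits of 100 contributions per bot per day and 5 replies per bot to any one human per day; the essay proposes the rule, not these numbers or this design.

```python
from collections import defaultdict
from datetime import date

DAILY_POST_LIMIT = 100      # assumed cap on a bot's total contributions per day
PER_HUMAN_REPLY_LIMIT = 5   # assumed cap on replies to any one human per day

class BotRateLimiter:
    def __init__(self):
        self._day = date.today()
        self._posts = defaultdict(int)    # bot_id -> contributions today
        self._replies = defaultdict(int)  # (bot_id, human_id) -> replies today

    def _roll_over(self):
        # reset all counters when the calendar day changes
        if date.today() != self._day:
            self._day = date.today()
            self._posts.clear()
            self._replies.clear()

    def allow(self, bot_id, human_id=None):
        """Return True and record the contribution if the bot is under its caps."""
        self._roll_over()
        if self._posts[bot_id] >= DAILY_POST_LIMIT:
            return False
        if human_id is not None and self._replies[(bot_id, human_id)] >= PER_HUMAN_REPLY_LIMIT:
            return False
        self._posts[bot_id] += 1
        if human_id is not None:
            self._replies[(bot_id, human_id)] += 1
        return True

limiter = BotRateLimiter()
print(limiter.allow("bot_42"))                    # True until the daily cap is reached
print(limiter.allow("bot_42", human_id="alice"))  # also counted against the per-human cap
```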

We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate. For both those reasons, the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake….(More)”.

New possibilities for cutting corruption in the public sector


Rema Hanna and Vestal McIntyre at VoxDev: “In their day-to-day dealings with the government, citizens of developing countries frequently encounter absenteeism, demands for bribes, and other forms of low-level corruption. When researchers used unannounced visits to gauge public-sector attendance across six countries, they found that 19% of teachers and 35% of health workers were absent during work hours (Chaudhury et al. 2006). A recent survey found that nearly 70% of Indians reported paying a bribe to access public services.

Corruption can set into motion vicious cycles: the government is impoverished of resources to provide services, and citizens are deprived of the things they need. For the poor, this might mean that they live without quality education, electricity, healthcare, and so forth. In contrast, the rich can simply pay the bribe or obtain the service privately, furthering inequality.

Much of the discourse around corruption focuses on punishing corrupt offenders. But punitive measures can only go so far, especially when corruption is seen as the ‘norm’ and is thus ingrained in institutions. 

What if we could find ways of identifying the ‘goodies’ – those who enter the public sector out of a sense of civic responsibility, and serve honestly – and weeding out the ‘baddies’ before they are hired? New research shows this may be possible....

You can test personality

For decades, questionnaires have dissected personality into the ‘Big Five’ traits of openness, conscientiousness, extraversion, agreeableness, and neuroticism. These traits have been shown to be predictors of behaviour and outcomes in the workplace (Heckman 2011). As a result, private sector employers often use them in recruiting. Nobel laureate James Heckman and colleagues found that standardized adolescent measures of locus of control and self-esteem (components of neuroticism) predict adult earnings to a similar degree as intelligence (Kautz et al. 2014).

Personality tests have also been put to use for the good of the poor: our colleague at Harvard’s Evidence for Policy Design (EPoD), Asim Ijaz Khwaja, and collaborators have tested, and subsequently expanded, personality tests as a basis for identifying reliable borrowers. This way, lenders can offer products to poor entrepreneurs who lack traditional credit histories but are nonetheless creditworthy. (See the Entrepreneurial Finance Lab’s website.)

You can test for civic-mindedness and honesty

Out of the personality-test literature grew the Perry Public Sector Motivation questionnaire (Perry 1996), which comprises a series of statements with which respondents indicate their level of agreement or disagreement as a measure of civic-mindedness. The questionnaire has six modules: “Attraction to Policy Making”, “Commitment to Public Interest”, “Social Justice”, “Civic Duty”, “Compassion”, and “Self-Sacrifice”. Studies have found that scores on the instrument correlate positively with job performance, ethical behaviour, participation in civic organisations, and a host of other good outcomes (for a review, see Perry and Hondeghem 2008).

You can also measure honesty in different ways. For example, Fischbacher and Föllmi-Heusi (2013) formulated a game in which subjects roll a die and write down the number that they get, receiving higher cash rewards for larger reported numbers. While this does not reveal with certainty whether any one subject lied, since no one else sees the die, it does reveal how far the reported numbers deviate from the uniform distribution. Those who report high numbers have a higher probability of having cheated. Implementing this, the authors found that “about 20% of inexperienced subjects lie to the fullest extent possible while 39% of subjects are fully honest.”
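
The distributional logic behind this measure can be sketched in a few lines: if a fraction p of subjects lie “to the fullest extent possible” (always reporting the top number) while the rest report honestly, the expected share of top reports is p + (1 − p)/6, so p can be backed out from the observed excess over the uniform benchmark of 1/6. The sketch below is a toy illustration under that simplifying assumption, not the authors’ analysis, and the sample data are invented.

```python
from collections import Counter

def estimate_max_liars(reports, top=6, sides=6):
    """Estimate the share of maximal liars from the excess mass on the top number."""
    counts = Counter(reports)
    observed_top_share = counts[top] / len(reports)
    uniform_share = 1 / sides
    estimate = (observed_top_share - uniform_share) / (1 - uniform_share)
    return max(0.0, estimate)  # sampling noise can push the raw estimate below zero

# Invented example: 300 reported rolls with a visible pile-up on 6.
reports = [1] * 35 + [2] * 38 + [3] * 40 + [4] * 42 + [5] * 45 + [6] * 100
print(f"Estimated share of maximal liars: {estimate_max_liars(reports):.0%}")
```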

These and a range of other tools for psychological profiling have opened up new possibilities for improving governance. Here are a few lessons this new literature has yielded….(More)”.

Beyond GDP: Measuring What Counts for Economic and Social Performance


OECD Book: “Metrics matter for policy and policy matters for well-being. In this report, the co-chairs of the OECD-hosted High Level Expert Group on the Measurement of Economic Performance and Social Progress, Joseph E. Stiglitz, Jean-Paul Fitoussi and Martine Durand, show how over-reliance on GDP as the yardstick of economic performance misled policy makers who did not see the 2008 crisis coming. When the crisis did hit, concentrating on the wrong indicators meant that governments made inadequate policy choices, with severe and long-lasting consequences for many people.

While GDP is the most well-known, and most powerful economic indicator, it can’t tell us everything we need to know about the health of countries and societies. In fact, it can’t even tell us everything we need to know about economic performance. We need to develop dashboards of indicators that reveal who is benefitting from growth, whether that growth is environmentally sustainable, how people feel about their lives, what factors contribute to an individual’s or a country’s success. This book looks at progress made over the past 10 years in collecting well-being data, and in using them to inform policies. An accompanying volume, For Good Measure: Advancing Research on Well-being Metrics Beyond GDP, presents the latest findings from leading economists and statisticians on selected issues within the broader agenda on defining and measuring well-being….(More)”

Time to step away from the ‘bright, shiny things’? Towards a sustainable model of journalism innovation in an era of perpetual change


Paper by Julie Posetti: “The news industry has a focus problem. ‘Shiny Things Syndrome’ – obsessive pursuit of technology in the absence of clear and research-informed strategies – is the diagnosis offered by participants in this research. The cure suggested involves a conscious shift by news publishers from being technology-led to audience-focused and technology-empowered.

This report presents the first research from the Journalism Innovation Project anchored within the Reuters Institute for the Study of Journalism at the University of Oxford. It is based on analysis of discussions with 39 leading journalism innovators from around the world, representing 27 different news publishers. The main finding of this research is that relentless, high-speed pursuit of technology-driven innovation could be almost as dangerous as stagnation. While ‘random acts of innovation’, organic experimentation, and willingness to embrace new technology remain valuable features of an innovation culture, there is evidence of an increasingly urgent requirement for the cultivation of sustainable innovation frameworks and clear, longer-term strategies within news organisations.

Such a ‘pivot’ could also address the growing problem of burnout associated with ‘innovation fatigue’. To be effective, such strategies need to be focused on engaging audiences – the ‘end users’ – and they would benefit from research-informed innovation ‘indicators’.

The key themes identified in this report are:
a. The risks of ‘Shiny Things Syndrome’ and the impacts of ‘innovation fatigue’ in an era of perpetual change
b. Audiences: starting (again) with the end user
c. The need for a ‘user-led’ approach to researching journalism innovation and developing foundational frameworks to support it

Additionally, new journalism innovation considerations are noted, such as the implications of digital technologies’ ‘unintended consequences’, and the need to respond innovatively to media freedom threats – such as gendered online harassment, privacy breaches, and orchestrated disinformation campaigns….(More)”.

Open Government Data for Inclusive Development


Chapter by F. van Schalkwyk and M. Cañares in “Making Open Development Inclusive”, MIT Press, by Matthew L. Smith and Ruhiya Kris Seward (Eds): “This chapter examines the relationship between open government data and social inclusion. Twenty-eight open data initiatives from the Global South are analyzed to find out how and in what contexts the publication of open government data tends to result in the inclusion of habitually marginalized communities in governance processes such that they may lead better lives.

The relationship between open government data and social inclusion is examined by presenting an analysis of the outcomes of open data projects. This analysis is based on a constellation of factors that were identified as having a bearing on open data initiatives with respect to inclusion. The findings indicate that open data can contribute to an increase in access and participation – both components of inclusion. In these cases, this particular finding indicates that a more open, participatory approach to governance practice is taking root. However, the findings also show that access and participation approaches to open government data have, in the cases studied here, not successfully disrupted the concentration of power in political and other networks, and this has placed limits on open data’s contribution to a more inclusive society.

The chapter starts by presenting a theoretical framework for the analysis of the relationship between open data and inclusion. The framework sets out the complex relationship between social actors, information and power in the network society. This is critical, we suggest, in developing a realistic analysis of the contexts in which open data activates its potential for transformation. The chapter then articulates the research questions and presents the methodology used to operationalize those questions. The findings and discussion section that follows examines the factors affecting the relationship between open data and inclusion, and how these factors are observed to play out across several open data initiatives in different contexts. The chapter ends with concluding remarks and an attempt to synthesize the insights that emerged in the preceding sections….(More)”.

Better Data for Doing Good: Responsible Use of Big Data and Artificial Intelligence


Report by the World Bank: “Describes opportunities for harnessing the value of big data and artificial intelligence (AI) for social good and how new families of AI algorithms now make it possible to obtain actionable insights automatically and at scale. Beyond internet business or commercial applications, multiple examples already exist of how big data and AI can help achieve shared development objectives, such as the 2030 Agenda for Sustainable Development and the Sustainable Development Goals (SDGs). But ethical frameworks in line with increased uptake of these new technologies remain necessary—not only concerning data privacy but also relating to the impact and consequences of using data and algorithms. Public recognition has grown concerning AI’s potential to create both opportunities for societal benefit and risks to human rights. Development calls for seizing the opportunity to shape future use as a force for good, while at the same time ensuring the technologies address inequalities and avoid widening the digital divide….(More)”.