Ethical Dilemmas in Cyberspace


Paper by Martha Finnemore: “This essay steps back from the more detailed regulatory discussions in other contributions to this roundtable on “Competing Visions for Cyberspace” and highlights three broad issues that raise ethical concerns about our activity online. First, the commodification of people—their identities, their data, their privacy—that lies at the heart of business models of many of the largest information and communication technologies companies risks instrumentalizing human beings. Second, concentrations of wealth and market power online may be contributing to economic inequalities and other forms of domination. Third, long-standing tensions between the security of states and the human security of people in those states have not been at all resolved online and deserve attention….(More)”.

The role of Ombudsman Institutions in Open Government


Report by K. Zuegel, E. Cantera, and A. Bellantoni: “Ombudsman institutions (OIs) act as the guardians of citizens’ rights and as a mediator between citizens and the public administration. While the very existence of such institutions is rooted in the notion of open government, the role they can play in promoting openness throughout the public administration has not been adequately recognized or exploited. Based on a survey of 94 OIs, this report examines the role they play in open government policies and practices. It also provides recommendations on how, given their privileged contact with both people and governments, OIs can better promote transparency, integrity, accountability, and stakeholder participation; how their role in national open government strategies and initiatives can be strengthened; and how they can be at the heart of a truly open state….(More)”.

Towards matching user mobility traces in large-scale datasets


Paper by Daniel Kondor, Behrooz Hashemian, Yves-Alexandre de Montjoye and Carlo Ratti: “The problem of unicity and reidentifiability of records in large-scale databases has been studied in different contexts and approaches, with focus on preserving privacy or matching records from different data sources. With an increasing number of service providers nowadays routinely collecting location traces of their users on unprecedented scales, there is a pronounced interest in the possibility of matching records and datasets based on spatial trajectories. Extending previous work on reidentifiability of spatial data and trajectory matching, we present the first large-scale analysis of user matchability in real mobility datasets on realistic scales, i.e. between two datasets that consist of several million people’s mobility traces, coming from a mobile network operator and transportation smart card usage. We extract the relevant statistical properties which influence the matching process and analyze their impact on the matchability of users. We show that for individuals with typical activity in the transportation system (those making 3-4 trips per day on average), a matching algorithm based on the co-occurrence of their activities is expected to achieve a 16.8% success rate after only one week of observation of their mobility traces, and over 55% after four weeks. We show that the main determinant of matchability is the expected number of co-occurring records in the two datasets. Finally, we discuss different scenarios in terms of data collection frequency and give estimates of matchability over time. We show that with higher frequency data collection becoming more common, we can expect much higher success rates in even shorter intervals….(More)”.
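
The co-occurrence idea at the heart of the matching algorithm is easy to illustrate. The sketch below is not the authors’ implementation; it is a minimal illustration, assuming each user’s trace has already been reduced to a set of (location, time-bin) tuples, of how shared records between two datasets can be counted and the best-scoring candidate picked:

```python
from collections import defaultdict

def cooccurrence_scores(traces_a, traces_b):
    """Count co-occurring records between two mobility datasets.

    traces_a, traces_b: dicts mapping user id -> set of (location, time_bin)
    tuples. Returns a dict mapping (user_a, user_b) -> number of shared
    records, a simple proxy for matchability.
    """
    # Index dataset B by (location, time_bin) so each record in A is only
    # compared against users seen at the same place and time.
    index_b = defaultdict(set)
    for user_b, records in traces_b.items():
        for rec in records:
            index_b[rec].add(user_b)

    scores = defaultdict(int)
    for user_a, records in traces_a.items():
        for rec in records:
            for user_b in index_b.get(rec, ()):
                scores[(user_a, user_b)] += 1
    return scores

def best_match(scores, user_a):
    """Return the dataset-B user with the most co-occurrences with user_a."""
    candidates = {b: s for (a, b), s in scores.items() if a == user_a}
    return max(candidates, key=candidates.get) if candidates else None
```

As the paper notes, the reliability of such a match grows with the expected number of co-occurring records, which is why four weeks of observation performs so much better than one.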

We Need an FDA For Algorithms


Interview with Hannah Fry on the promise and danger of an AI world by Michael Segal: “…Why do we need an FDA for algorithms?

It used to be the case that you could just put any old colored liquid in a glass bottle and sell it as medicine and make an absolute fortune. And then not worry about whether or not it’s poisonous. We stopped that from happening because, well, for starters it’s kind of morally repugnant. But also, it harms people. We’re in that position right now with data and algorithms. You can harvest any data that you want, on anybody. You can infer any data that you like, and you can use it to manipulate them in any way that you choose. And you can roll out an algorithm that genuinely makes massive differences to people’s lives, both good and bad, without any checks and balances. To me that seems completely bonkers. So I think we need something like the FDA for algorithms. A regulatory body that can protect the intellectual property of algorithms, but at the same time ensure that the benefits to society outweigh the harms.

Why is the regulation of medicine an appropriate comparison?

If you swallow a bottle of colored liquid and then you keel over the next day, then you know for sure it was poisonous. But there are much more subtle things in pharmaceuticals that require expert analysis to be able to weigh up the benefits and the harms. To study the chemical profile of these drugs that are being sold and make sure that they actually are doing what they say they’re doing. With algorithms it’s the same thing. You can’t expect the average person in the street to study Bayesian inference or be totally well read in random forests, and have the kind of computing prowess to look up a code and analyze whether it’s doing something fairly. That’s not realistic. Simultaneously, you can’t have some code of conduct that every data science person signs up to, and agrees that they won’t tread over some lines. It has to be a government, really, that does this. It has to be government that analyzes this stuff on our behalf and makes sure that it is doing what it says it does, and in a way that doesn’t end up harming people.

How did you come to write a book about algorithms?

Back in 2011, we had these really bad riots in London. I’d been working on a project with the Metropolitan Police, trying mathematically to look at how these riots had spread and to use algorithms to ask how the police could have done better. I went to give a talk in Berlin about this paper we’d published about our work, and they completely tore me apart. They were asking questions like, “Hang on a second, you’re creating this algorithm that has the potential to be used to suppress peaceful demonstrations in the future. How can you morally justify the work that you’re doing?” I’m kind of ashamed to say that it just hadn’t occurred to me at that point in time. Ever since, I have really thought a lot about the point that they made. And started to notice around me that other researchers in the area weren’t necessarily treating the data that they were working with, and the algorithms that they were creating, with the ethical concern they really warranted. We have this imbalance where the people who are making algorithms aren’t talking to the people who are using them. And the people who are using them aren’t talking to the people who are having decisions made about their lives by them. I wanted to write something that united those three groups….(More)”.

Using insights from behavioral economics to nudge individuals towards healthier choices when eating out


Paper by Stéphane Bergeron, Maurice Doyon, Laure Saulais and JoAnne Labrecque: “Using a controlled experiment in a restaurant with naturally occurring clients, this study investigates how nudging can be used to design menus that guide consumers to make healthier choices. It examines the use of default options, focusing specifically on two types of defaults that can be found when ordering food in a restaurant: automatic and standard defaults. Both types of defaults significantly affected choices, but did not adversely impact the satisfaction of individual choices. The results suggest that menu design could effectively use non-informational strategies such as nudging to promote healthier individual choices without restricting the offer or reducing satisfaction….(More)”.

G20/OECD Compendium of good practices on the use of open data for Anti-corruption


OECD: “This compendium of good practices was prepared by the OECD at the request of the G20 Anti-corruption Working Group (ACWG), to raise awareness of the benefits of open data policies and initiatives in: 

  • fighting corruption,
  • increasing public sector transparency and integrity,
  • fostering economic development and social innovation.

This compendium provides an overview of initiatives for the publication and re-use of open data to fight corruption across OECD and G20 countries and underscores the impact that a digital transformation of the public sector can deliver in terms of better governance across policy areas.  The practices illustrate the use of open data as a way of fighting corruption and show how open data principles can be translated into concrete initiatives.

The publication is divided into three sections:

Section 1 discusses the benefits of open data for greater public sector transparency and performance, national competitiveness and social engagement, and how these initiatives contribute to greater public trust in government.

Section 2 highlights the preconditions necessary across different policy areas related to anti-corruption (e.g. open government, public procurement) to sustain the implementation of an “Open by default” approach that could help government move from a perspective that focuses on increasing access to public sector information to one that enhances the publication of open government data for re-use and value co-creation. 

Section 3 presents the results of the OECD survey administered across OECD and G20 countries, good practices on the publishing and reusing of open data for anti-corruption in G20 countries, and lessons learned from the definition and implementation of these initiatives. This chapter also discusses the implications for broader national matters such as freedom of press, and the involvement of key actors of the open data ecosystem (e.g. journalists and civil society organisations) as key partners in open data re-use for anti-corruption…(More)”.

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence


Blog by Julia Powles and Helen Nissenbaum: “Serious thinkers in academia and business have swarmed to the A.I. bias problem, eager to tweak and improve the data and algorithms that drive artificial intelligence. They’ve latched onto fairness as the objective, obsessing over competing constructs of the term that can be rendered in measurable, mathematical form. If the hunt for a science of computational fairness was restricted to engineers, it would be one thing. But given our contemporary exaltation and deference to technologists, it has limited the entire imagination of ethics, law, and the media as well.

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

What has been remarkably underappreciated is the key interdependence of the twin stories of A.I. inevitability and A.I. bias. Against the corporate projection of an otherwise sunny horizon of unstoppable A.I. integration, recognizing and acknowledging bias can be seen as a strategic concession — one that subdues the scale of the challenge. Bias, like job losses and safety hazards, becomes part of the grand bargain of innovation.

The reality that bias is primarily a social problem and cannot be fully solved technically becomes a strength, rather than a weakness, for the inevitability narrative. It flips the script. It absorbs and regularizes the classification practices and underlying systems of inequality perpetuated by automation, allowing relative increases in “fairness” to be claimed as victories — even if all that is being done is to slice, dice, and redistribute the makeup of those negatively affected by actuarial decision-making.

In short, the preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?…(More)”.

When a Nudge Backfires: Using Observation with Social and Economic Incentives to Promote Pro-Social Behavior


Paper by Gary Bolton, Eugen Dimant and Ulrich Schmidt: “Both theory and recent empirical evidence on nudging suggest that observability of behavior acts as an instrument for promoting (discouraging) pro-social (anti-social) behavior. Our study questions the universality of these claims. We employ a novel four-party setup to disentangle the roles three observational mechanisms play in mediating behavior. We systematically vary the observability of one’s actions by others as well as the (non-)monetary relationship between observer and observee. Observability involving economic incentives crowds out anti-social behavior in favor of more pro-social behavior. Surprisingly, social observation without economic incentives fails to achieve any aggregate pro-social effect, and if anything it backfires. Additional experiments confirm that observability without additional monetary incentives can indeed backfire. However, they also show that the effect of observability on pro-social behavior is increased when social norms are made salient….(More)”.

Chatbots Are a Danger to Democracy


Jamie Susskind in the New York Times: “As we survey the fallout from the midterm elections, it would be easy to miss the longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process.

Chatbots are software programs that are capable of conversing with human beings on social media using natural language. Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.

Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation. Politics, however, is not Mitsuku’s strong suit. When asked “What do you think of the midterms?” Mitsuku replies, “I have never heard of midterms. Please enlighten me.” Reflecting the imperfect state of the art, Mitsuku will often give answers that are entertainingly weird. Asked, “What do you think of The New York Times?” Mitsuku replies, “I didn’t even know there was a new one.”

Most political bots these days are similarly crude, limited to the repetition of slogans like “#LockHerUp” or “#MAGA.” But a glance at recent political history suggests that chatbots have already begun to have an appreciable impact on political discourse. In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.

In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.

Chatbots aren’t a recent phenomenon. Two years ago, around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots. And a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side….

We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human? Bots peddling suspect information could be challenged by moderator-bots to provide recognized sources for their claims within seconds. Those that fail would face removal.
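
To make the proposal concrete, such a rule could in principle amount to a few counters checked before a bot’s contribution is accepted. The sketch below is purely illustrative: the caps, identifiers, and function names are assumptions for the sake of the example, not any platform’s actual rules or API:

```python
from collections import defaultdict
from datetime import date

DAILY_CONTRIBUTION_CAP = 50   # illustrative: max posts per bot per day
REPLY_CAP_PER_HUMAN = 3       # illustrative: max replies per bot to one human per day

_daily_counts = defaultdict(int)   # (bot_id, day) -> contributions so far
_reply_counts = defaultdict(int)   # (bot_id, human_id, day) -> replies so far

def may_post(bot_id, human_id=None, day=None):
    """Return True if the bot is still under its daily caps."""
    day = day or date.today()
    if _daily_counts[(bot_id, day)] >= DAILY_CONTRIBUTION_CAP:
        return False
    if human_id is not None and _reply_counts[(bot_id, human_id, day)] >= REPLY_CAP_PER_HUMAN:
        return False
    return True

def record_post(bot_id, human_id=None, day=None):
    """Update the counters once a bot contribution has been accepted."""
    day = day or date.today()
    _daily_counts[(bot_id, day)] += 1
    if human_id is not None:
        _reply_counts[(bot_id, human_id, day)] += 1
```

The harder part, as the piece suggests, is the surrounding moderation machinery: challenging suspect bots for recognized sources and removing those that fail would need its own pipeline.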

We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate. For both those reasons, the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake….(More)”.

New possibilities for cutting corruption in the public sector


Rema Hanna and Vestal McIntyre at VoxDev: “In their day-to-day dealings with the government, citizens of developing countries frequently encounter absenteeism, demands for bribes, and other forms of low-level corruption. When researchers used unannounced visits to gauge public-sector attendance across six countries, they found that 19% of teachers and 35% of health workers were absent during work hours (Chaudhury et al. 2006). A recent survey found that nearly 70% of Indians reported paying a bribe to access public services.

Corruption can set into motion vicious cycles: the government is impoverished of resources to provide services, and citizens are deprived of the things they need. For the poor, this might mean that they live without quality education, electricity, healthcare, and so forth. In contrast, the rich can simply pay the bribe or obtain the service privately, furthering inequality.

Much of the discourse around corruption focuses on punishing corrupt offenders. But punitive measures can only go so far, especially when corruption is seen as the ‘norm’ and is thus ingrained in institutions. 

What if we could find ways of identifying the ‘goodies’ – those who enter the public sector out of a sense of civic responsibility, and serve honestly – and weeding out the ‘baddies’ before they are hired? New research shows this may be possible....

You can test personality

For decades, questionnaires have dissected personality into the ‘Big Five’ traits of openness, conscientiousness, extraversion, agreeableness, and neuroticism. These traits have been shown to be predictors of behaviour and outcomes in the workplace (Heckman 2011). As a result, private sector employers often use them in recruiting. Nobel laureate James Heckman and colleagues found that standardized adolescent measures of locus of control and self-esteem (components of neuroticism) predict adult earnings to a similar degree as intelligence (Kautz et al. 2014).

Personality tests have also been put to use for the good of the poor: our colleague at Harvard’s Evidence for Policy Design (EPoD), Asim Ijaz Khwaja, and his collaborators have tested, and subsequently expanded, personality tests as a basis for identifying reliable borrowers. This way, lenders can offer products to poor entrepreneurs who lack traditional credit histories, but who are nonetheless creditworthy. (See the Entrepreneurial Finance Lab’s website.)

You can test for civic-mindedness and honesty

Out of the personality-test literature grew the Perry Public Sector Motivation questionnaire (Perry 1996), which comprises a series of statements with which respondents indicate their level of agreement or disagreement as measures of civic-mindedness. The questionnaire has six modules: “Attraction to Policy Making”, “Commitment to Public Interest”, “Social Justice”, “Civic Duty”, “Compassion”, and “Self-Sacrifice.” Studies have found that scores on the instrument correlate positively with job performance, ethical behaviour, participation in civic organisations, and a host of other good outcomes (for a review, see Perry and Hondeghem 2008).
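
Scoring such an instrument is mechanically simple: each module is a set of Likert-scale items whose responses are averaged into a module score. The sketch below only illustrates that structure; the item identifiers and groupings are placeholders, not the actual Perry (1996) items:

```python
from statistics import mean

# Illustrative module structure only; item ids and groupings are placeholders,
# not the wording or composition of the actual Perry (1996) instrument.
MODULES = {
    "Commitment to Public Interest": ["q1", "q2", "q3"],
    "Compassion": ["q4", "q5"],
    "Self-Sacrifice": ["q6", "q7"],
}

def module_scores(responses):
    """Average 1-5 Likert responses within each module.

    responses: dict mapping item id -> integer in 1..5.
    Returns a dict mapping module name -> mean of the answered items.
    """
    scores = {}
    for module, items in MODULES.items():
        answered = [responses[item] for item in items if item in responses]
        if answered:  # skip modules with no answered items
            scores[module] = mean(answered)
    return scores
```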

You can also measure honesty in different ways. For example, Fischbacher and Föllmi-Heusi (2013) formulated a game in which subjects roll a die and write down the number that they get, receiving higher cash rewards for larger reported numbers. While this does not reveal with certainty whether any one subject lied, since no one else sees the die, it does reveal how far the reported numbers deviate from the uniform distribution; those reporting high numbers have a higher probability of having cheated. Implementing this, the authors found that “about 20% of inexperienced subjects lie to the fullest extent possible while 39% of subjects are fully honest.”
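
The aggregate logic behind that estimate can be written out in a few lines: under full honesty each face of a fair die is reported with probability 1/6, so the excess mass on the highest-payoff report bounds the share of subjects lying to the fullest extent. This is our back-of-the-envelope illustration, not the authors’ analysis code, and the high-payoff face is an assumed parameter:

```python
from collections import Counter

def estimate_full_liars(reports, high_payoff_face=5):
    """Estimate the share of subjects lying to the fullest extent.

    reports: iterable of reported die rolls (integers 1..6).
    Under honesty the high-payoff face is reported with probability 1/6;
    any excess is attributed to subjects who always report that face:
        observed_share = liar_share * 1 + (1 - liar_share) * 1/6
    so liar_share = (observed_share - 1/6) / (1 - 1/6).
    """
    counts = Counter(reports)
    n = sum(counts.values())
    observed_share = counts[high_payoff_face] / n
    return max(0.0, (observed_share - 1 / 6) / (1 - 1 / 6))
```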

These and a range of other tools for psychological profiling have opened up new possibilities for improving governance. Here are a few lessons this new literature has yielded….(More)”.