Nudging people to make good choices can backfire


Bruce Bower in ScienceNews: “Nudges are a growth industry. Inspired by a popular line of psychological research and introduced in a best-selling book a decade ago, these inexpensive behavior changers are currently on a roll.

Policy makers throughout the world, guided by behavioral scientists, are devising ways to steer people toward decisions deemed to be in their best interests. These simple interventions don’t force, teach or openly encourage anyone to do anything. Instead, they nudge, exploiting for good — at least from the policy makers’ perspective — mental tendencies that can sometimes lead us astray.

But new research suggests that low-cost nudges aimed at helping the masses have drawbacks. Even simple interventions that work at first can lead to unintended complications, creating headaches for nudgers and nudgees alike…

Promising results of dozens of nudge initiatives appear in two government reports issued last September. One came from the White House, which released the second annual report of its Social and Behavioral Sciences Team. The other came from the United Kingdom’s Behavioural Insights Team. Created by the British government in 2010, the U.K. group is often referred to as the Nudge Unit.

In a September 20, 2016, Bloomberg View column, Sunstein said the new reports show that nudges work, but often increase by only a few percentage points the number of people who, say, receive government benefits or comply with tax laws. He called on choice architects to tackle bigger challenges, such as finding ways to nudge people out of poverty or into higher education.

Missing from Sunstein’s comments and from the government reports, however, was any mention of a growing conviction among some researchers that well-intentioned nudges can have negative as well as positive effects. Accepting automatic enrollment in a company’s savings plan, for example, can later lead to regret among people who change jobs frequently or who realize too late that a default savings rate was set too low for their retirement needs. E-mail reminders to donate to a charity may work at first, but annoy recipients into unsubscribing from the donor list.

“I don’t want to get rid of nudges, but we’ve been a bit too optimistic in applying them to public policy,” says behavioral economist Mette Trier Damgaard of Aarhus University in Denmark.

Nudges, like medications for physical ailments, require careful evaluation of intended and unintended effects before being approved, she says. Policy makers need to know when and with whom an intervention works well enough to justify its side effects.

Default downer

That warning rings especially true for what is considered a shining star in the nudge universe — automatic enrollment of employees in retirement savings plans. The plans, called defaults, take effect unless workers decline to participate….

But little is known about whether automatic enrollees are better or worse off as time passes and their personal situations change, says Harvard behavioral economist Brigitte Madrian. She coauthored the 2001 paper on the power of default savings plans.

Although automatic plans increase savings for those who otherwise would have squirreled away little or nothing, others may lose money because they would have contributed more to a self-directed retirement account, Madrian says. In some cases, having an automatic savings account may encourage irresponsible spending or early withdrawals of retirement money (with penalties) to cover debts. Such possibilities are plausible but have gone unstudied.

In line with Madrian’s concerns, mathematical models developed by finance professor Bruce Carlin of the University of California, Los Angeles and colleagues suggest that people who default into retirement plans learn less about money matters, and share less financial information with family and friends, than those who join plans that require active investment choices.

Opt-out savings programs “have been oversimplified to the public and are being sold as a great way to change behavior without addressing their complexities,” Madrian says. Research needs to address how well these plans mesh with individuals’ personalities and decision-making styles, she recommends….

Researchers need to determine how defaults and other nudges instigate behavior changes before unleashing them on the public, says philosopher of science Till Grüne-Yanoff of the Royal Institute of Technology in Stockholm….

Sometimes well-intentioned, up-front attempts to get people to do what seems right come back to bite nudgers on the bottom line.

Consider e-mail prompts and reminders…. A case in point is a study submitted for publication by Damgaard and behavioral economist Christina Gravert of the University of Gothenburg in Sweden. E-mailed donation reminders sent to people who had contributed to a Danish anti-poverty charity increased the number of donations in the short term, but also triggered an upturn in the number of people unsubscribing from the list.

People’s annoyance at receiving reminders perceived as too frequent or pushy cost the charity money over the long haul, Damgaard holds. Losses of list subscribers more than offset the financial gains from the temporary uptick in donations, she and Gravert conclude.

“Researchers have tended to overlook the hidden costs of nudging,” Damgaard says….(More)”

Drones used in fight against plastic pollution on UK beaches


Tom Cheshire at SkyNews: “On a beach in Kent, Peter Koehler and Ellie Mackay are teaching a drone how to see.

Their project, Plastic Tide, aims to create software that will automatically pick out the pieces of plastic that wash up here on the shingle.

“One of the major challenges we face is that we can only account for 1% of those millions and millions of tonnes [of plastic] that are coming into our oceans every year,” Mr Koehler told Sky News.

“So the question is, where is that 99% going?”

He added: “We just don’t know. It could be in the water, it could be in wildlife, or it could be on beaches.

“And so what the Plastic Tide is doing, it’s using drone technology to image beaches in a way that’s never been done before, on a scientific scale. So that you can build up a picture of how much of that missing 99% is washing up on our beaches.”

Mr Koehler and Ms Mackay use an off-the-shelf drone. They select the area of beach they want to film and a free app comes up with a survey pattern flight path – the drone moves systematically up and down the beach as if it were ploughing it.

The images taken are then uploaded to a scientific crowd-sourcing platform called Zooniverse.

Anyone can log on, look at the images and tag bits of plastic in them.

That will build up a huge amount of data, which will be used to train a machine-learning algorithm to spot plastic by itself – no humans required.

The hope is that, eventually, anyone will be able to fly a drone, take images, then computers will automatically scan the images and determine the levels of plastic pollution on a beach.

This summer, Mr Koehler and Ms Mackay will travel all 3,200 miles of the UK coastline, surveying beaches….

There’s no new, groundbreaking piece of technology here.

Just off-the-shelf components, smart thinking and a desire to put a small dent in a huge problem….(More)”
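The pipeline this excerpt describes — crowd volunteers tag plastic in drone images on Zooniverse, and those tags train a model to spot plastic unaided — can be sketched in miniature as below. This is an illustration of the loop, not Plastic Tide's actual code: the colour-based tile features, the class labels and the nearest-centroid rule are hypothetical stand-ins for whatever the real model uses.

```python
# Minimal sketch of the crowd-label -> classifier loop described above.
# Tile features, labels and the decision rule are illustrative assumptions.
from statistics import mean

def centroid(samples):
    """Average feature vector of a list of (r, g, b) tile features."""
    return tuple(mean(s[i] for s in samples) for i in range(3))

def train(tagged_tiles):
    """tagged_tiles: list of (features, label) pairs from crowd volunteers."""
    plastic = [f for f, lab in tagged_tiles if lab == "plastic"]
    beach = [f for f, lab in tagged_tiles if lab == "beach"]
    return {"plastic": centroid(plastic), "beach": centroid(beach)}

def classify(model, features):
    """Nearest-centroid decision: which class centre is this tile closest to?"""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Crowd tags in, automatic classifications out:
tagged = [((200, 40, 180), "plastic"), ((190, 60, 170), "plastic"),
          ((120, 110, 90), "beach"), ((130, 120, 80), "beach")]
model = train(tagged)
print(classify(model, (195, 50, 175)))  # → plastic
```

In practice the features would come from a trained image model rather than raw tile colours, but the shape of the loop is the same: human tags accumulate, then the machine takes over.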

Can social media, loud and inclusive, fix world politics?


At the Conversation: “Privacy is no longer a social norm, said Facebook founder Mark Zuckerberg in 2010, as social media took a leap to bring more private information into the public domain.

But what does it mean for governments, citizens and the exercise of democracy? Donald Trump is clearly not the first leader to use his Twitter account as a way to both proclaim his policies and influence the political climate. Social media presents novel challenges to strategic policy, and has become a managerial issue for many governments.

But it also offers a free platform for public participation in government affairs. Many argue that the rise of social media technologies can give citizens and observers a better opportunity to identify pitfalls of government and their politics.

As governments embrace the role of social media and the influence of negative or positive feedback on the success of their projects, they are also using this tool to their advantage by spreading fabricated news.

This much freedom of expression and opinion can be a double-edged sword.

A tool that triggers change

On the positive side, social media include social networking applications such as Facebook and Google+, microblogging services such as Twitter, blogs, video blogs (vlogs), wikis, and media-sharing sites such as YouTube and Flickr, among others.

Social media, as a collaborative and participatory tool, connects users with each other and helps shape various communities. Playing a key role in delivering public service value to citizens, it also helps people engage in politics and policy-making, making processes easier to understand, through information and communication technologies (ICTs).

Today four out of five countries in the world have social media features on their national portals to promote interactive networking and communication with citizens. Although we don’t have any information about the effectiveness of such tools or whether they are used to their full potential, 20% of these countries report that they have “resulted in new policy decisions, regulation or service”.

Social media can be an effective tool to trigger changes in government policies and services if well used. It can be used to prevent corruption, as it is a direct method of reaching citizens. In developing countries, corruption is often linked to governmental services that lack automated processes or transparency in payments.

The UK is taking the lead on this issue. Its anti-corruption innovation hub aims to connect several stakeholders – including civil society, law enforcement and technologies experts – to engage their efforts toward a more transparent society.

With social media, governments can improve and change the way they communicate with their citizens – and even question government projects and policies. In Kazakhstan, for example, a migration-related legislative amendment entered into force early January 2017 and compelled property owners to register people residing in their homes immediately or else face a penalty charge starting in February 2017.

Citizens were unprepared for this requirement, and many responded with indignation on social media. At first the government ignored this reaction. However, as anger swelled on social media, the government took action and introduced a new service to facilitate the registration of temporary citizens….

But the campaigns that result do not always evolve into positive change.

Egypt and Libya have faced several major crises over the last few years, along with political instability and domestic terrorism. The social media influence that triggered the Arab Spring did not help these political systems turn from autocracy to democracy.

Brazil exemplifies a government’s failure to react properly to a massive social media outburst. In June 2013 people took to the streets to protest the rising fares of public transportation. Citizens channelled their anger and outrage through social media to mobilise networks and generate support.

The Brazilian government didn’t understand that “the message is the people”. Though the riots some called the “Tropical Spring” disappeared rather abruptly in the months to come, they had a major and devastating impact on Brazil’s political power, culminating in the impeachment of President Rousseff in late 2016 and the worst recession in Brazil’s history.

As in the Arab Spring countries, the use of social media in Brazil did not result in economic improvement. The country has tumbled into depression, and unemployment has risen to 12.6%….

Governments typically ask, “How can we adapt social media to the way in which we do e-services?” and then try to shape their policies accordingly. They would be wiser to ask, “How can social media enable us to do things differently, in a way they’ve never been done before?” – that is, policy-making in collaboration with people….(More)”.


The Problem With Facts


Tim Harford: “…In 1995, Robert Proctor, a historian at Stanford University who has studied the tobacco case closely, coined the word “agnotology”. This is the study of how ignorance is deliberately produced; the entire field was started by Proctor’s observation of the tobacco industry. The facts about smoking — indisputable facts, from unquestionable sources — did not carry the day. The indisputable facts were disputed. The unquestionable sources were questioned. Facts, it turns out, are important, but facts are not enough to win this kind of argument.

Agnotology has never been more important. “We live in a golden age of ignorance,” says Proctor today. “And Trump and Brexit are part of that.”

In the UK’s EU referendum, the Leave side pushed the false claim that the UK sent £350m a week to the EU. It is hard to think of a previous example in modern western politics of a campaign leading with a transparent untruth, maintaining it when refuted by independent experts, and going on to triumph anyway. That performance was soon to be eclipsed by Donald Trump, who offered wave upon shameless wave of demonstrable falsehood, only to be rewarded with the presidency. The Oxford Dictionaries declared “post-truth” the word of 2016. Facts just didn’t seem to matter any more.

The instinctive reaction from those of us who still care about the truth — journalists, academics and many ordinary citizens — has been to double down on the facts. Fact-checking organisations, such as Full Fact in the UK and PolitiFact in the US, evaluate prominent claims by politicians and journalists. I should confess a personal bias: I have served as a fact checker myself on the BBC radio programme More or Less, and I often rely on fact-checking websites. They judge what’s true rather than faithfully reporting both sides as a traditional journalist would. Public, transparent fact checking has become such a feature of today’s political reporting that it’s easy to forget it’s barely a decade old.

Mainstream journalists, too, are starting to embrace the idea that lies or errors should be prominently identified. Consider a story on the NPR website about Donald Trump’s speech to the CIA in January: “He falsely denied that he had ever criticised the agency, falsely inflated the crowd size at his inauguration on Friday . . . —” It’s a bracing departure from the norms of American journalism, but then President Trump has been a bracing departure from the norms of American politics.

Facebook has also drafted in the fact checkers, announcing a crackdown on the “fake news” stories that had become prominent on the network after the election. Facebook now allows users to report hoaxes. The site will send questionable headlines to independent fact checkers, flag discredited stories as “disputed”, and perhaps downgrade them in the algorithm that decides what each user sees when visiting the site.

We need some agreement about facts or the situation is hopeless. And yet: will this sudden focus on facts actually lead to a more informed electorate, better decisions, a renewed respect for the truth? The history of tobacco suggests not. The link between cigarettes and cancer was supported by the world’s leading medical scientists and, in 1964, the US surgeon general himself. The story was covered by well-trained journalists committed to the values of objectivity. Yet the tobacco lobbyists ran rings round them.

In the 1950s and 1960s, journalists had an excuse for their stumbles: the tobacco industry’s tactics were clever, complex and new. First, the industry appeared to engage, promising high-quality research into the issue. The public were assured that the best people were on the case. The second stage was to complicate the question and sow doubt: lung cancer might have any number of causes, after all. And wasn’t lung cancer, not cigarettes, what really mattered? Stage three was to undermine serious research and expertise. Autopsy reports would be dismissed as anecdotal, epidemiological work as merely statistical, and animal studies as irrelevant. Finally came normalisation: the industry would point out that the tobacco-cancer story was stale news. Couldn’t journalists find something new and interesting to say?

Such tactics are now well documented — and researchers have carefully examined the psychological tendencies they exploited. So we should be able to spot their re-emergence on the political battlefield.

“It’s as if the president’s team were using the tobacco industry’s playbook,” says Jon Christensen, a journalist turned professor at the University of California, Los Angeles, who wrote a notable study in 2008 of the way the tobacco industry tugged on the strings of journalistic tradition.

One infamous internal memo from the Brown & Williamson tobacco company, typed up in the summer of 1969, sets out the thinking very clearly: “Doubt is our product.” Why? Because doubt “is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy.” Big Tobacco’s mantra: keep the controversy alive.

Doubt is usually not hard to produce, and facts alone aren’t enough to dispel it. We should have learnt this lesson already; now we’re going to have to learn it all over again.

Tempting as it is to fight lies with facts, there are three problems with that strategy….(More)”

iGod


Novel by Willemijn Dicke and Dirk Helbing: “iGod is a science fiction novel with heroes, love, defeat and hope. But it is much more than that. This book aims to explore how societies may develop, given the technologies that we see at present. As Dirk Helbing describes it in his introduction:

We have come to the conclusion that neither a scientific study nor an investigative report would allow one to talk about certain things that, we believe, need to be thought and talked about. So, a science fiction story appeared to be the right approach. It seems the perfect way to think “what if scenarios” through. It is not the first time that this avenue has been taken. George Orwell’s “1984” and “Animal Farm” come to mind, or Dave Eggers’ “The Circle”. The film ‘The Matrix’ and the Netflix series ‘Black Mirror’ are good examples too.

“iGod” outlines how life could be in a couple of years from now, certainly in our lifetime. At some places, this story about our future society seems far-fetched. For example, in “iGod”, all citizens have a Social Citizen Score. This score is established based on their buying habits, their communication in social media and social contacts they maintain. It is obtained by mass-surveillance and has a major impact on everyone’s life. It determines whether you are entitled to get a loan, what jobs you are offered, and even how long you will receive medical care.

The book is set in the near future in Amsterdam, the Netherlands. Lex is an unemployed biologist. One day he is contacted by a computer which gradually reveals the machinery behind the reality we see. It is a bleak world. Together with his girlfriend Diana and Seldon, a Professor at Amsterdam Tech, he starts the quest to regain freedom….(More) (Individual chapters)”

Google DeepMind and healthcare in an age of algorithms


Julia Powles and Hal Hodson in Health and Technology: “Data-driven tools and techniques, particularly machine learning methods that underpin artificial intelligence, offer promise in improving healthcare systems and services. One of the companies aspiring to pioneer these advances is DeepMind Technologies Limited, a wholly-owned subsidiary of the Google conglomerate, Alphabet Inc. In 2016, DeepMind announced its first major health project: a collaboration with the Royal Free London NHS Foundation Trust, to assist in the management of acute kidney injury. Initially received with great enthusiasm, the collaboration has suffered from a lack of clarity and openness, with issues of privacy and power emerging as potent challenges as the project has unfolded. Taking the DeepMind-Royal Free case study as its pivot, this article draws a number of lessons on the transfer of population-derived datasets to large private prospectors, identifying critical questions for policy-makers, industry and individuals as healthcare moves into an algorithmic age….(More)”

Digital Democracy in Belgium and the Netherlands. A Socio-Legal Analysis of Citizenlab.be and Consultatie.nl


Chapter by Koen Van Aeken in: Prins, C. et. al (eds.) Digital Democracy in a Globalized World (Edward Elgar, 2017), Forthcoming: “The research question is how technologies characterized by ubiquitous Web 2.0 interactivity may contribute to democracy. Following a case study design, two applications were evaluated: the Belgian CitizenLab, a mobile, social and local private application to support public decision making in cities, and the Dutch governmental website Internetconsultatie. Available data suggest that the Dutch consultation platform is mainly visited by the ‘usual suspects’ and lacks participatory functionalities. In contrast, CitizenLab explicitly aims at policy co-creation through broad participation. Its novelty, however, prevents making sound empirical statements.

A comprehensive conceptualization precedes the case studies. To avoid instrumentalist reduction, the social setting of the technologies is reconstructed. Since its constituents, embedding and expectations – initially represented as the nation state and representative democracy – are increasingly challenged, their transformations are consequently discussed. The new embedding emerges as a governance constellation; new expectations concern the participatory dimension of politics. Future assessments of technologies may benefit from this conceptualization….(More)”

Did artificial intelligence deny you credit?


In The Conversation: “People who apply for a loan from a bank or credit card company, and are turned down, are owed an explanation of why that happened. It’s a good idea – because it can help teach people how to repair their damaged credit – and it’s a federal law, the Equal Credit Opportunity Act. Getting an answer wasn’t much of a problem in years past, when humans made those decisions. But today, as artificial intelligence systems increasingly assist or replace people making credit decisions, getting those explanations has become much more difficult.

Traditionally, a loan officer who rejected an application could tell a would-be borrower there was a problem with their income level, or employment history, or whatever the issue was. But computerized systems that use complex machine learning models are difficult to explain, even for experts.

Consumer credit decisions are just one way this problem arises. Similar concerns exist in health care, online marketing and even criminal justice. My own interest in this area began when a research group I was part of discovered gender bias in how online ads were targeted, but could not explain why it happened.

All those industries, and many others, who use machine learning to analyze processes and make decisions have a little over a year to get a lot better at explaining how their systems work. In May 2018, the new European Union General Data Protection Regulation takes effect, including a section giving people a right to get an explanation for automated decisions that affect their lives. What shape should these explanations take, and can we actually provide them?

Identifying key reasons

One way to describe why an automated decision came out the way it did is to identify the factors that were most influential in the decision. How much of a credit denial decision was because the applicant didn’t make enough money, or because he had failed to repay loans in the past?

My research group at Carnegie Mellon University, including PhD student Shayak Sen and then-postdoc Yair Zick, created a way to measure the relative influence of each factor. We call it Quantitative Input Influence.

In addition to giving better understanding of an individual decision, the measurement can also shed light on a group of decisions: Did an algorithm deny credit primarily because of financial concerns, such as how much an applicant already owes on other debts? Or was the applicant’s ZIP code more important – suggesting more basic demographics such as race might have come into play?…(More)”
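The intervention idea behind such influence measures can be illustrated in miniature: replace one input with values drawn from the wider population and count how often the decision flips. The sketch below is a simplified rendering of that general approach, not the published Quantitative Input Influence algorithm; the toy credit rule and the field names are invented for illustration.

```python
# Intervention-style influence sketch: how often does re-sampling one
# input flip the decision? Toy model and field names are hypothetical.
import random

def decision(applicant):
    """Toy credit rule (invented): approve on income unless a past default."""
    return applicant["income"] >= 40000 and not applicant["defaulted"]

def influence(population, feature, trials=200, seed=0):
    """Share of decisions that flip when `feature` is randomly re-sampled
    from the population, holding everything else fixed."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        person = rng.choice(population)
        swapped = dict(person)
        swapped[feature] = rng.choice(population)[feature]
        flips += decision(person) != decision(swapped)
    return flips / trials
```

On this toy rule, an input the model never consults (say, a ZIP code field) gets influence zero, while past defaults score well above it — the kind of contrast that could distinguish financial factors from proxies for demographics.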

Fighting Corruption in Health Care? There’s an App for That


Akjibek Beishebaeva at Voices (OSF): “As an industry that relies heavily on approvals from government officials, the pharmaceutical field in places like Ukraine and Kyrgyzstan—which lack strong mechanisms for public oversight—is particularly susceptible to corruption.

The problem in those countries is exacerbated by the absence of any reliable system to monitor market prices for drugs. For example, a hospital manager bribed by a pharmaceutical representative could agree to procure a drug at a price 10 times higher than at a neighboring hospital. In addition, those medicines procured by the state and meant to be dispensed freely to patients often appear for sale at hospital-based pharmacies instead.

These aren’t victimless crimes. The most needy patients are often the first to suffer when funds are diverted away from lifesaving treatments and medicines.

To tackle this issue, last year the Soros Foundation–Kyrgyzstan and the International Renaissance Foundation jointly conducted the Health Data Hackathon in the Yssyk-Kul region of Kyrgyzstan. Two teams from Ukraine and three teams from Kyrgyzstan—consisting of coders, journalists, and activists—took part. Their goal was to find innovative solutions to address corruption in public procurements and access to health services for vulnerable populations.

Over the two-and-a-half-day effort, one of the Ukrainian teams developed a prototype for a software application to improve the e-tendering platform for all public procurement in Ukraine—ProZorro.

ProZorro itself revolutionized the tender process when it first launched in 2015. It combined a centralized database with online marketplaces and was made accessible to the public. Journalists, activists, and patients today can log in to the system and scrutinize tenders approved by the government. The transparency provided by the system has already shown savings of more than a billion UAH (US$37 million). However, the database is huge and can be tricky to navigate without training.

The application developed at the hackathon makes it even easier to monitor the purchase prices of medicines in Ukraine. Specifically, it will allow users to automatically and instantly compare prices for the same products — a process which previously took many days of manual effort.

The application also offers a more intuitive interface and improved search functionality that will help further reduce corruption and save money—savings that can be redirected towards treatments for people living with HIV, cancer, and hepatitis C. The team is now testing the software and working with the government to introduce it early this year.

Another team came up with the idea to let patients monitor supplies of medicine at facilities in real time. If a hospital representative says that a patient needs to buy drugs that should be readily available, for example, the patient can check online and hold the hospital accountable if the medicines are meant to be provided for free. The tool, called WikiLiky, has already been implemented in the Sumy region of Ukraine.

Likewise, one of the Kyrgyz teams looked at price monitoring in their own country, focusing on the inefficient and mistake-prone acquisition process. For instance, the name of one drug might be misspelled in several different ways, making it difficult to track prices accurately. The team redesigned the functionality of the government e-procurement portal called Codifier, creating uniformity across the system of names, dosages, and other medical specifications….(More)”
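The comparison both teams are after — group tenders under a normalised drug name so spelling variants match, then flag purchases priced far above the going rate — can be sketched as follows. The normalisation rule, the record shape and the threshold are illustrative assumptions, not the actual ProZorro or Codifier logic.

```python
# Sketch of cross-tender price comparison: normalise drug names so
# misspellings group together, then flag outlier prices. All details
# here (data shape, threshold) are illustrative assumptions.
from collections import defaultdict
from statistics import median

def normalise(name):
    """Crude normalisation so spelling variants group together."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def flag_overpriced(tenders, threshold=2.0):
    """tenders: list of (drug_name, unit_price). Returns the tenders whose
    price exceeds `threshold` times the median for the same normalised drug."""
    groups = defaultdict(list)
    for name, price in tenders:
        groups[normalise(name)].append(price)
    return [(name, price) for name, price in tenders
            if price > threshold * median(groups[normalise(name)])]
```

A hospital paying ten times the median price for the same product surfaces immediately, instead of after days of manual cross-checking — which is the saving the excerpt describes.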

Big data helps Belfort, France, allocate buses on routes according to demand


In Digital Trends: “As modern cities smarten up, the priority for many will be transportation. Belfort, a mid-sized French industrial city of 50,000, serves as proof of concept for improved urban transportation that does not require the time and expense of covering the city with sensors and cameras.

Working with Tata Consultancy Services (TCS) and GFI Informatique, the Board of Public Transportation of Belfort overhauled bus service management of the city’s 100-plus buses. The project entailed a combination of ID cards, GPS-equipped card readers on buses, and big data analysis. The collected data was used to measure bus speed from stop to stop, passenger flow to observe when and where people got on and off, and bus route density. From start to finish, the proof of concept project took four weeks.

Using the TCS Intelligent Urban Exchange system, operations managers were able to detect when and where about 20 percent of all bus passengers boarded and got off on each city bus route. Utilizing big data and artificial intelligence, the city’s urban planners were able to use that data analysis to make cost-effective adjustments, including the allocation of additional buses on routes and during times of greater passenger demand. They were also able to cut back on buses for minimally used routes and stops. In addition, the system provided feedback on the effect of city construction projects on bus service….

Going forward, continued data analysis will help the city budget wisely for infrastructure changes and new equipment purchases. The goal is to put the money where the needs are greatest rather than just spending and then waiting to see if usage justified the expense. The push for smarter cities has to be not just about improved services, but also smart resource allocation — in the Belfort project, the use of big data showed how to do both….(More)”
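At its core, the analysis the excerpt describes — turning tap-on records from the GPS-equipped card readers into boardings per stop and hour — is a simple aggregation, which planners can then rank to decide where extra buses pay off. The record format below is a hypothetical stand-in; Belfort’s actual schema isn’t given in the article.

```python
# Sketch of demand analysis from smart-card tap-ons. The (card_id,
# stop, hour) record format is a hypothetical stand-in for the real schema.
from collections import Counter

def demand_by_stop_hour(events):
    """events: (card_id, stop, hour) tap-on records from bus card readers.
    Returns a Counter of boardings per (stop, hour) slot."""
    return Counter((stop, hour) for _, stop, hour in events)

def busiest(events, n=1):
    """Top-n (stop, hour) slots by boardings -- candidates for extra buses;
    the slots at the bottom of the ranking are candidates for cuts."""
    return demand_by_stop_hour(events).most_common(n)
```

Ranking slots this way supports exactly the trade-off the article ends on: add capacity where demand is measured to be greatest, trim it where usage doesn’t justify the expense.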