How people update beliefs about climate change: good news and bad news


Paper by Cass R. Sunstein, Sebastian Bobadilla-Suarez, Stephanie C. Lazzaro & Tali Sharot: “People are frequently exposed to competing evidence about climate change. We examined how new information alters people’s beliefs. We find that people who are not sure that man-made climate change is occurring, and who do not favor an international agreement to reduce greenhouse gas emissions, show a form of asymmetrical updating: They change their beliefs in response to unexpected good news (suggesting that average temperature rise is likely to be less than previously thought) and fail to change their beliefs in response to unexpected bad news (suggesting that average temperature rise is likely to be greater than previously thought). By contrast, people who strongly believe that manmade climate change is occurring, and who favor an international agreement, show the opposite asymmetry: They change their beliefs far more in response to unexpected bad news (suggesting that average temperature rise is likely to be greater than previously thought) than in response to unexpected good news (suggesting that average temperature rise is likely to be smaller than previously thought). The results suggest that exposure to varied scientific evidence about climate change may increase polarization within a population due to asymmetrical updating. We explore the implications of our findings for how people will update their beliefs upon receiving new evidence about climate change, and also for other beliefs relevant to politics and law….(More)”.
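
To see how this kind of asymmetry can pull two groups apart even when they see the same stream of evidence, here is a minimal simulation sketch. It is not taken from the paper; the learning rates and the Gaussian evidence stream are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each agent holds an estimate of expected temperature rise (°C) and nudges it
# toward new evidence with a learning rate that depends on whether the news is
# "good" (lower than expected) or "bad" (higher than expected).
def update(belief, evidence, lr_good, lr_bad):
    error = evidence - belief              # > 0 means worse-than-expected news
    lr = lr_bad if error > 0 else lr_good
    return belief + lr * error

# Two stylized agents starting from the same estimate: a "doubtful" agent that
# weights good news more heavily, and a "concerned" agent that weights bad news more.
doubtful, concerned = 2.0, 2.0
for _ in range(50):
    evidence = rng.normal(2.0, 1.0)        # mixed evidence centered on the shared prior
    doubtful = update(doubtful, evidence, lr_good=0.3, lr_bad=0.05)
    concerned = update(concerned, evidence, lr_good=0.05, lr_bad=0.3)

print(f"doubtful agent:  {doubtful:.2f} °C")   # tends to drift downward
print(f"concerned agent: {concerned:.2f} °C")  # tends to drift upward
```

Identical evidence plus different weighting of surprises yields diverging beliefs, which is the polarization mechanism the authors describe.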

How “Big Data” Went Bust


The problem with “big data” is not that data is bad. It’s not even that big data is bad: Applied carefully, massive data sets can reveal important trends that would otherwise go undetected. It’s the fetishization of data, and its uncritical use, that tends to lead to disaster, as Julia Rose West recently wrote for Slate. And that’s what “big data,” as a catchphrase, came to represent.

By its nature, big data is hard to interpret. When you’re collecting billions of data points—clicks or cursor positions on a website; turns of a turnstile in a large public space; hourly wind speed observations from around the world; tweets—the provenance of any given data point is obscured. This in turn means that seemingly high-level trends might turn out to be artifacts of problems in the data or methodology at the most granular level possible. But perhaps the bigger problem is that the data you have are usually only a proxy for what you really want to know. Big data doesn’t solve that problem—it magnifies it….

Aside from swearing off data and reverting to anecdote and intuition, there are at least two viable ways to deal with the problems that arise from the imperfect relationship between a data set and the real-world outcome you’re trying to measure or predict.

One is, in short: moar data. This has long been Facebook’s approach. When it became apparent that users’ “likes” were a flawed proxy for what they actually wanted to see more of in their feeds, the company responded by adding more and more proxies to its model. It began measuring other things, like the amount of time they spent looking at a post in their feed, the amount of time they spent reading a story they had clicked on, and whether they hit “like” before or after they had read the piece. When Facebook’s engineers had gone as far as they could in weighting and optimizing those metrics, they found that users were still unsatisfied in important ways. So the company added yet more metrics to the sauce: It started running huge user-survey panels, added new reaction emojis by which users could convey more nuanced sentiments, and started using A.I. to detect clickbait-y language in posts by pages and publishers. The company knows none of these proxies are perfect. But by constantly adding more of them to the mix, it can theoretically edge ever closer to an algorithm that delivers to users the posts that they most want to see.
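
As a rough illustration of what folding many proxies into one ranking signal might look like, here is a toy sketch. The feature names and weights are hypothetical and are not Facebook's actual model.

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    liked: bool
    dwell_seconds: float        # time spent looking at the post in the feed
    read_seconds: float         # time spent on the clicked-through story
    liked_before_reading: bool
    survey_score: float         # 0-1, from user-survey panels
    clickbait_prob: float       # 0-1, from a clickbait classifier

def relevance_score(s: PostSignals) -> float:
    """Combine imperfect proxies into a single score; the weights are made up
    and would in practice be learned and constantly retuned."""
    score = 0.0
    score += 1.0 if s.liked else 0.0
    score += 0.02 * min(s.dwell_seconds, 60)
    score += 0.01 * min(s.read_seconds, 300)
    score -= 0.5 if s.liked_before_reading else 0.0   # a pre-read "like" counts for less
    score += 2.0 * s.survey_score
    score -= 1.5 * s.clickbait_prob
    return score

print(relevance_score(PostSignals(True, 12.0, 90.0, False, 0.8, 0.1)))
```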

One downside of the moar data approach is that it’s hard and expensive. Another is that the more variables are added to your model, the more complex, opaque, and unintelligible its methodology becomes. This is part of the problem Frank Pasquale articulated in The Black Box Society. Even the most sophisticated algorithm, drawing on the best data sets, can go awry—and when it does, diagnosing the problem can be nigh-impossible. There are also the perils of “overfitting” and false confidence: The more sophisticated your model becomes, the more perfectly it seems to match up with all your past observations, and the more faith you place in it, the greater the danger that it will eventually fail you in a dramatic way. (Think mortgage crisis, election prediction models, and Zynga.)
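
The overfitting danger is easy to demonstrate on synthetic data: as model complexity grows, the fit to past observations keeps improving while performance on new data typically gets worse. A minimal sketch, with polynomial degree standing in for model complexity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of a simple underlying relationship (y = 2x + noise).
x_train = np.linspace(0, 1, 15)
y_train = 2 * x_train + rng.normal(0, 0.3, size=x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test + rng.normal(0, 0.3, size=x_test.size)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # In-sample error shrinks as the degree rises; out-of-sample error typically does not.
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```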

Another possible response to the problems that arise from biases in big data sets is what some have taken to calling “small data.” Small data refers to data sets that are simple enough to be analyzed and interpreted directly by humans, without recourse to supercomputers or Hadoop jobs. Like “slow food,” the term arose as a conscious reaction to the prevalence of its opposite….(More)”


A Brief History of Living Labs: From Scattered Initiatives to Global Movement


Paper by Seppo Leminen, Veli-Pekka Niitamo, and Mika Westerlund presented at the Open Living Labs Day Conference: “This paper analyses the emergence of living labs based on a literature review and interviews with early living labs experts. Our study makes a contribution to the growing literature of living labs by analysing the emergence of living labs from the perspectives of (i) early living lab pioneers, (ii) early living lab activities in Europe and especially Nokia Corporation, (iii) framework programs of the European Union supporting the development of living labs, (iv) emergence of national living lab networks, and (v) emergence of the European Network of Living Labs (ENoLL). Moreover, the paper highlights major events in the emergence of living lab movement and labels three consecutive phases of the global living lab movement as (i) toward a new paradigm, (ii) practical experiences, and (iii) professional living labs….(More)”.

Open Space: The Global Effort for Open Access to Environmental Satellite Data


Book by Mariel Borowitz: “Key to understanding and addressing climate change is continuous and precise monitoring of environmental conditions. Satellites play an important role in collecting climate data, offering comprehensive global coverage that can’t be matched by in situ observation. And yet, as Mariel Borowitz shows in this book, much satellite data is not freely available but restricted; this remains true despite the data-sharing advocacy of international organizations and a global open data movement. Borowitz examines policies governing the sharing of environmental satellite data, offering a model of data-sharing policy development and applying it in case studies from the United States, Europe, and Japan—countries responsible for nearly half of the unclassified government Earth observation satellites.

Borowitz develops a model that centers on the government agency as the primary actor while taking into account the roles of such outside actors as other government officials and non-governmental actors, as well as the economic, security, and normative attributes of the data itself. The case studies include the U.S. National Aeronautics and Space Administration (NASA), the U.S. National Oceanic and Atmospheric Administration (NOAA), and the United States Geological Survey (USGS); the European Space Agency (ESA) and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT); and the Japan Aerospace Exploration Agency (JAXA) and the Japan Meteorological Agency (JMA). Finally, she considers the policy implications of her findings for the future and provides recommendations on how to increase global sharing of satellite data….(More)”.

Our Gutenberg Moment: It’s Time To Grapple With The Internet’s Effect On Democracy


Alberto Ibargüen at HuffPost: “When clashes wracked Charlottesville, many Americans saw neo-Nazi demonstrators as the obvious instigators. But others focused on counter-demonstrators, a view amplified by the president blaming “many sides.” The rift in perception underscored an uncomfortable but unavoidable truth about the flow of information today: Americans no longer have a shared foundation of facts upon which we can agree.

Politics has long been a messy, divisive business. I lived through the 1960s, a period of similar dissatisfaction, disillusionment, and disunity, brilliantly chronicled by Ken Burns’ new film “The Vietnam War” on PBS. But common, local knowledge — of history and current events — has always been the great equalizer in American society. Today, however, a decrease in shared knowledge has led to a collapse in trust. Over the past few years, we have watched our capacity to compromise wane as not only our politics, but also our most basic value systems, have become polarized.

The key difference between then and now is how news is delivered and consumed. At the beginning of our Republic, the reach of media was local and largely verifiable. That direct relationship between media outlets and their communities — local newspapers and, later, radio and TV stations — held until the second half of the 20th century. Network TV began to create a sense of national community, but it fractured with the sudden ability to offer targeted, membership-based models via cable.

But cable was nothing compared to Internet. Internet’s unique ability to personalize and to create virtual communities of interest accelerated the decline of newspapers and television business models and altered the flow of information in ways that we are still uncovering. “Media” now means digital and cable, cool mediums that require hot performance. Trust in all media, including traditional media, is at an all-time low, and we’re just now beginning to grapple with the threat to democracy posed by this erosion of trust.

Internet is potentially the greatest democratizing tool in history. It is also democracy’s greatest challenge. In offering access to information that can support any position and confirm any bias, social media has propelled the erosion of our common set of everyday facts….(More)”.

The Unexamined Algorithm Is Not Worth Using


Ruben Mancha & Haslina Ali at Stanford Social Innovation Review: “In 1983, at the height of the Cold War, just one man stood between an algorithm and the outbreak of nuclear war. Stanislav Petrov, a lieutenant colonel of the Soviet Air Defence Forces, was on duty in a secret command center when early-warning alarms went off indicating the launch of intercontinental ballistic missiles from an American base. The systems reported that the alarm was of the highest possible reliability. Petrov’s role was to advise his superiors on the veracity of the alarm, advice that would in turn affect their decision to launch a retaliatory nuclear attack. Instead of trusting the algorithm, Petrov went with his gut and reported that the alarm was a malfunction. He turned out to be right.

This historical nugget represents an extreme example of the effect that algorithms have on our lives. The detection algorithm, it turns out, mistook the sun’s reflection for a missile launch. It is a sobering thought that a poorly designed or malfunctioning algorithm could have changed the course of history and resulted in millions of deaths….

We offer five recommendations to guide the ethical development and evaluation of algorithms used in your organization:

  1. Consider ethical outcomes first, speed and efficiency second. Organizations seeking speed and efficiency through algorithmic automation should remember that customer value comes through higher strategic speed, not higher operational speed. When implementing algorithms, organizations should never forget their ultimate goal is creating customer value, and fast yet potentially unethical algorithms defile that objective.
  2. Make ethical guiding principles salient to your organization. Your organization should reflect on the ethical principles guiding it and convey them clearly to employees, business partners, and customers. A corporate social responsibility framework is a good starting point for any organization ready to articulate its ethical principles.
  3. Employ programmers well versed in ethics. The computer engineers responsible for designing and programming algorithms should understand the ethical implications of the products of their work. While some ethical decisions may seem intuitive (such as do not use an algorithm to steal data from a user’s computer), most are not. The study of ethics and the practice of ethical inquiry should be part of every coding project.
  4. Interrogate your algorithms against your organization’s ethical standards. Through careful evaluation of your algorithms’ behavior and outcomes, your organization can identify those circumstances, real or simulated, in which they do not meet those standards (see the sketch after this list).
  5. Engage your stakeholders. Transparently share with your customers, employees, and business partners details about the processes and outcomes of your algorithms. Stakeholders can help you identify and address ethical gaps….(More).
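
As a concrete (and deliberately simplified) illustration of recommendation 4, the sketch below audits a toy decision rule against one possible standard: approval rates across two groups should not differ by more than an agreed tolerance. The decision rule, the simulated data, and the 10-point tolerance are all assumptions made for the example, not a prescribed methodology.

```python
import random

random.seed(42)

# A stand-in for the algorithm under audit: approve applicants above a score cutoff.
def decide(applicant: dict) -> bool:
    return applicant["score"] >= 600

# Simulated (or replayed historical) cases, tagged with a group label for auditing.
def make_applicant() -> dict:
    group = random.choice(["A", "B"])
    mean = 620 if group == "A" else 590        # assumed difference in the input data
    return {"group": group, "score": random.gauss(mean, 50)}

applicants = [make_applicant() for _ in range(10_000)]

# Ethical standard for this example: approval rates across groups should not
# differ by more than 10 percentage points.
rates = {}
for g in ("A", "B"):
    cases = [a for a in applicants if a["group"] == g]
    rates[g] = sum(decide(a) for a in cases) / len(cases)

gap = abs(rates["A"] - rates["B"])
print(rates, f"gap = {gap:.2%}")
if gap > 0.10:
    print("Flag for human review: outcome disparity exceeds the agreed standard.")
```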

Data for Development


New Report by the OECD: “The 2017 volume of the Development Co-operation Report focuses on Data for Development. “Big Data” and “the Internet of Things” are more than buzzwords: the data revolution is transforming the way that economies and societies are functioning across the planet. The Sustainable Development Goals, along with the data revolution, are opportunities that should not be missed: more and better data can help boost inclusive growth, fight inequalities and combat climate change. These data are also essential to measure and monitor progress against the Sustainable Development Goals.

The value of data in enabling development is uncontested. Yet, there continue to be worrying gaps in basic data about people and the planet and weak capacity in developing countries to produce the data that policy makers need to deliver reforms and policies that achieve real, visible and long-lasting development results. At the same time, investing in building statistical capacity – which represented about 0.30% of ODA in 2015 – is not a priority for most providers of development assistance.

There is a need for stronger political leadership, greater investment and more collective action to bridge the data divide for development. With the unfolding data revolution, developing countries and donors have a unique chance to act now to boost data production and use for the benefit of citizens. This report sets out priority actions and good practices that will help policy makers and providers of development assistance to bridge the global data divide, notably by strengthening statistical systems in developing countries to produce better data for better policies and better lives….(More)”

A Systematic Scoping Review of the Choice Architecture Movement: Towards Understanding When and Why Nudges Work


Barnabas Imre Szaszi et al in the Journal of Behavioral Decision Making: “In this paper, we provide a domain-general scoping review of the nudge movement by reviewing 422 choice architecture interventions in 156 empirical studies. We report the distribution of the studies across countries, years, domains, subdomains of applicability, intervention types, and the moderators associated with each intervention category to review the current state of the nudge movement. Furthermore, we highlight certain characteristics of the studies and experimental and reporting practices which can hinder the accumulation of evidence in the field. Specifically, we found that 74% of the studies were mainly motivated to assess the effectiveness of the interventions in one specific setting, while only 24% of the studies focused on the exploration of moderators or underlying processes. We also observed that only 7% of the studies applied power analysis, 2% used guidelines aiming to improve the quality of reporting, no study in our database was preregistered, and the intervention nomenclatures used were non-exhaustive and often had overlapping categories. Building on our current observations and proposed solutions from other fields, we provide directly applicable recommendations for future research to support the accumulation of evidence on why and when nudges work….(More)”.
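
To make the power-analysis finding concrete, the a priori calculation that 93% of the reviewed studies skipped can be as simple as the sketch below (a normal approximation for a two-sided, two-sample comparison of means; the effect sizes shown are illustrative, not taken from the review):

```python
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate sample size per arm needed to detect a standardized mean
    difference (Cohen's d) at the given significance level and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

# Nudge effects are often modest; these effect sizes are purely illustrative.
for d in (0.2, 0.3, 0.5):
    print(f"Cohen's d = {d}: about {n_per_group(d):.0f} participants per group")
```

For a small effect (d = 0.2) this works out to roughly 400 participants per arm, which is the kind of check that reporting guidelines and preregistration help enforce.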

Open data, democracy and public service reform


Mark Thompson at Computer Weekly: “Discussion around reforming public services is as important as better information sharing rules if government is to make the most of public data…

Our public services face two paradoxes in relation to data sharing. First, on the demand side, “Zuckerberg’s law” – which claims that the amount of data we’re happy to share with companies increases exponentially year-on-year – flies in the face of our wariness, as citizens, about sharing with the state….

The upcoming General Data Protection Regulation (GDPR) – a beefed-up version of the existing Data Protection Act (DPA) – is likely to only exacerbate a fundamental problem, therefore: citizens don’t want the state to know much about them, and public servants don’t want to share. Each behaviour is paradoxical, and thus complex to address culturally.

Worse, we need to accelerate our public conversation considerably if we are to maintain pace with accelerating technological developments.

Existing complexity in the data space will shortly be exacerbated by new abilities to process unstructured data such as images and natural language – abilities which offer entirely new opportunities for commercial exploitation as well as surveillance…(More)”.

The Challenges of Prediction: Lessons from Criminal Justice


Paper by David G. Robinson: “Government authorities at all levels increasingly rely on automated predictions, grounded in statistical patterns, to shape people’s lives. Software that wields government power deserves special attention, particularly when it uses historical data to decide automatically what ought to happen next.

In this article, I draw examples primarily from the domain of criminal justice — and in particular, the intersection of civil rights and criminal justice — to illustrate three structural challenges that can arise whenever law or public policy contemplates adopting predictive analytics as a tool:

1) What matters versus what the data measure;
2) Current goals versus historical patterns; and
3) Public authority versus private expertise.

After explaining each of these challenges and illustrating each with concrete examples, I describe feasible ways to avoid these problems and to do prediction more successfully…(More)”
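
To illustrate the first challenge with a toy example of our own (not one drawn from the article): suppose what matters is whether a person reoffends, but what the data measure is whether they are re-arrested, and detection rates differ across neighborhoods. A model trained on the measured label will then learn the policing pattern rather than the behavior it is meant to predict.

```python
import random

random.seed(7)

# "What matters": whether a person actually reoffends.
# "What the data measure": whether they are re-arrested, which also depends on
# how heavily their neighborhood is policed (the rates below are assumptions).
def simulate(n=100_000):
    groups = ("heavily policed", "lightly policed")
    counts = {g: {"true": 0, "measured": 0, "n": 0} for g in groups}
    for _ in range(n):
        group = random.choice(groups)
        reoffends = random.random() < 0.30                   # same underlying rate in both groups
        detection = 0.70 if group == "heavily policed" else 0.40
        rearrested = reoffends and random.random() < detection
        counts[group]["n"] += 1
        counts[group]["true"] += reoffends
        counts[group]["measured"] += rearrested
    return counts

for group, c in simulate().items():
    print(f"{group:16s} true rate {c['true']/c['n']:.2f}   measured rate {c['measured']/c['n']:.2f}")
```

Both groups reoffend at the same underlying rate, but the recorded data say otherwise; a risk score optimized on those records will reproduce the disparity.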