How to stop being so easily manipulated by misleading statistics


Q&A by Akshat Rathi in Quartz: “There are three kinds of lies: Lies, damned lies, and statistics.” Few people know the struggle of correcting such lies better than David Spiegelhalter. Since 2007, he has been the Winton professor for the public understanding of risk (though he prefers “statistics” to “risk”) at the University of Cambridge.

In a sunlit hotel room in Washington DC, Quartz caught up with Spiegelhalter recently to talk about his unique job. The conversation sprawled from the wisdom of eating bacon (would you swallow any other known carcinogen?), to the serious crime of manipulating charts, to the right way to talk about rare but scary diseases.

When he isn’t fixing people’s misunderstandings of numbers, he works to communicate numbers better so that misunderstandings can be avoided from the beginning. The interview is edited and condensed for clarity….
What’s a recent example of misrepresentation of statistics that drove you bonkers?
I got very grumpy at an official graph of British teenage pregnancy rates that apparently showed they had declined to nearly zero. Until I realized that the bottom part of the axis had been cut off, which made it impossible to visualize the (very impressive) 50% reduction since 2000.

You once said graphical representation of data does not always communicate what we think it communicates. What do you mean by that?
Graphs can be as manipulative as words. Using tricks such as cutting axes, rescaling things, changing data from positive to negative, etc. Sometimes putting zero on the y-axis is wrong. So to be sure that you are communicating the right things, you need to evaluate the message that people are taking away. There are no absolute rules. It all depends on what you want to communicate….
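
To make the axis trick concrete, here is a minimal sketch in Python with matplotlib (an illustration added here, not drawn from the interview; the plotted figures are invented to mimic the roughly 50% decline described above). The same series reads as a steady, moderate decline on a zero-based axis and as a collapse to nearly nothing when the bottom of the axis is cut off.

```python
# Illustrative only: invented data mimicking a ~50% decline since 2000,
# plotted once with a zero-based y-axis and once with a truncated one.
import matplotlib.pyplot as plt

years = list(range(2000, 2016))
rate = [44 - 1.5 * i for i in range(len(years))]  # falls from 44 to 21.5

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharex=True)

ax1.plot(years, rate)
ax1.set_ylim(0, 50)  # axis anchored at zero: a clear but moderate decline
ax1.set_title("Zero-based y-axis")

ax2.plot(years, rate)
ax2.set_ylim(min(rate) - 1, 50)  # bottom cut off: the same data look like a plunge to zero
ax2.set_title("Truncated y-axis")

for ax in (ax1, ax2):
    ax.set_xlabel("Year")
    ax.set_ylabel("Rate per 1,000")

plt.tight_layout()
plt.show()
```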

Poorly communicated risk can have a severe effect. For instance, a news story about the risk that pregnant women expose their unborn children to when they drink alcohol caused stress to one of our news editors, who had consumed wine moderately throughout her pregnancy.

I think it’s irresponsible to say there is a risk when they actually don’t know if there is one. There is scientific uncertainty about that.

In such situations of unknown risk, there is a phrase that is often used: “Absence of evidence is not evidence of absence.” I hate that phrase. I get so angry when people use that phrase. It’s always used in a manipulative way. I say to them that it’s not evidence of absence, but if you’ve looked hard enough, you’ll see that most of the time the evidence shows a very small effect, if any at all.

So on the risks of drinking alcohol while being pregnant, the UK’s health authority said that as a precautionary step it’s better not to drink. That’s fair enough. This honesty is important. To say that we don’t definitely know if drinking is harmful, but to be safe we say you shouldn’t. That’s treating people as adults and allowing them to use their own judgement.

Science is a bigger and bigger part of our lives. What is the limitation in science journalism right now and how can we improve it?...(More)

The Function of—and Need for—Institutional Review Boards


Review of The Censor’s Hand: The Misregulation of Human-Subject Research (Carl E. Schneider, The MIT Press): “Scientific research can be a laborious and frustrating process even before it gets started—especially when it involves living human subjects. Universities and other research institutions maintain Institutional Review Boards that scrutinize research proposals and their methodologies, consent and privacy procedures, and so on. Similarly intensive reviews are required when the intention is to use human tissue—if, say, tissue from diagnostic cancer biopsies could potentially be used to gauge the prevalence of some other illness across the population. These procedures can generate absurdities. A doctor who wanted to know which television characters children recognized, for example, was advised to seek ethics committee approval, and told that he needed to do a pilot study as a precursor.

Today’s IRB system is the response to a historic problem: academic researchers’ tendency to behave abominably when left unmonitored. Nazi medical and pseudomedical experiments provide an obvious and well-known reference, but such horrors are not found only in totalitarian regimes. The Tuskegee syphilis study, for example, deliberately left black men untreated over the course of decades so researchers could study the natural course of the disease. On a much smaller but equally disturbing scale is the case of Dan Markingson, a 26-year-old University of Michigan graduate. Suffering from psychotic illness, Markingson was coercively enrolled in a study of antipsychotics to which he could not consent, and concerns about his deteriorating condition were ignored. In 2004, he was found dead, having almost decapitated himself with a box cutter.

Many thoughtful ethicists are aware of the imperfections of IRBs. They have worried publicly for some time that the IRB system, or parts of it, may claim an authority with which even many bioethicists are uncomfortable, and hinder science for no particularly good reason. Does the system need re-tuning, a total re-build, or something even more drastic?

When it comes to IRBs, Carl E. Schneider, a professor of law and internal medicine at the University of Michigan, belongs to the abolitionist camp. In The Censor’s Hand: The Misregulation of Human-Subject Research, he presents the case against the IRB system plainly. It is a case that rests on seven related charges.

IRBs, Schneider posits, cannot be shown to do good, with regulators able to produce “no direct evidence that IRBs prevented harm”; that an IRB at least went through the motions of reviewing the trial in which Markingson died might be cited as evidence of this. On top of that, he claims, IRBs sometimes cause harm, at least insofar as they slow down medical innovation. They are built to err on the side of caution, since “research on humans” can cover a vast range of activities and disciplines, and they struggle to take this range into proper account. Correspondingly, they “lack a legible and convincing ethics”; the autonomy of IRBs means that they come to different decisions on identical cases. (In one case, an IRB thought that providing supplemental vitamin A in a study was so dangerous that it should not be allowed; another thought that withholding it in the same study was so dangerous that it should not be allowed.) IRBs have unrealistically high expectations of their members, who are often fairly ad hoc groupings with no obvious relevant expertise. They overemphasize informed consent, with the unintended consequence that cramming every possible eventuality into a consent form makes it utterly incomprehensible. Finally, Schneider argues, IRBs corrode free expression by restricting what researchers can do and how they can do it….(More)”

Innovation Prizes in Practice and Theory


Paper by Michael J. Burstein and Fiona Murray: “Innovation prizes in reality are significantly different from innovation prizes in theory. The former are familiar from popular accounts of historical prizes like the Longitude Prize: the government offers a set amount for a solution to a known problem, like £20,000 for a method of calculating longitude at sea. The latter are modeled as compensation to inventors in return for donating their inventions to the public domain. Neither the economic literature nor the policy literature that led to the 2010 America COMPETES Reauthorization Act — which made prizes a prominent tool of government innovation policy — provides a satisfying justification for the use of prizes, nor does either literature address their operation. In this article, we address both of these problems. We use a case study of one canonical, high-profile innovation prize — the Progressive Insurance Automotive X Prize — to explain how prizes function as institutional means to achieve exogenously defined innovation policy goals in the face of significant uncertainty and information asymmetries. Focusing on the structure and function of actual innovation prizes as an empirical matter enables us to make three theoretical contributions to the current understanding of prizes. First, we offer a stronger normative justification for prizes grounded in their status as a key institutional arrangement for solving a specified innovation problem. Second, we develop a model of innovation prize governance and then situate that model in the administrative state, as a species of “new governance” or “experimental” regulation. Third, we derive from those analyses a novel framework for choosing among prizes, patents, and grants, one in which the ultimate choice depends on a trade-off between the efficacy and scalability of the institutional solution….(More)”

Big data, meet behavioral science


At Brookings: “America’s community colleges offer the promise of a more affordable pathway to a bachelor’s degree. Students can pay substantially less for the first two years of college, transfer to a four-year college or university, and still earn their diploma in the same amount of time. At least in theory. Most community college students—80 percent of them—enter with the intention to transfer, but only 20 percent actually do so within five years of entering college. This divide represents a classic case of what behavioralists call an intention-action gap.

Why would so many students who enter community colleges intending to transfer fail to actually do so? Put yourself in the shoes of a 20-something community college student. You’ve worked hard for the past couple years, earning credits and paying a lot less in tuition than you would have if you had enrolled immediately in a four-year college or university. But now you want to transfer, so that you can complete your bachelor’s degree. How do you figure out where to go? Ideally you’d probably like to find a college that would take most of your credits, where you’re likely to graduate from, and where the degree is going to count for something in the labor market. A college advisor could probably help you figure this out, but at many community colleges there are at least 1,000 other students assigned to your advisor, so you might have a hard time getting a quality meeting. Some states have articulation agreements between two- and four-year institutions that guarantee admission for students who complete certain course sequences and perform at a high enough level. But these agreements are often dense and inaccessible.

The combination of big data and behavioral insights has the potential to help students navigate these complex decisions and successfully follow through on their intentions. Big data analytic techniques allow us to identify concrete transfer pathways where students are positioned to succeed; behavioral insights ensure we communicate these options in a way that maximizes students’ engagement and responsiveness…. A growing body of innovative research has demonstrated that, by applying behavioral science insights to the way we communicate with students and families about the opportunities and resources available to them, we can help people navigate these complex decisions and experience better outcomes as a result. A combination of simplified information, reminders, and access to assistance has improved achievement and attainment up and down the education pipeline, nudging parents to practice early-literacy activities with their kids or check in with their high schoolers about missed assignments, and encouraging students to renew their financial aid for college….

These types of big data techniques are already being used in some education sectors. For instance, a growing number of colleges use predictive analytics to identify struggling students who need additional assistance, so faculty and administrators can intervene before the student drops out. But frequently, once the results of these predictive analyses are in hand, there is insufficient attention to how to communicate the information in a way that is likely to lead to behavior change among students or educators. And much of the predictive analytics work has been on the side of plugging leaks in the pipeline (e.g., preventing drop-outs from higher education), rather than on the side of proactively sending students and families personalized information about educational and career pathways where they are likely to flourish…(More)”
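
As a rough illustration of the predictive-analytics step described above, here is a minimal sketch in Python using scikit-learn. The feature names, synthetic data, and choice of logistic regression are assumptions for demonstration, not a description of any college’s actual early-warning system; the point is that the model’s output is a ranked list of at-risk students, and everything the excerpt criticizes happens after that list exists.

```python
# A toy early-warning model: fit on synthetic records, then rank students
# by predicted drop-out risk so advisors know whom to contact first.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical features: GPA, credits earned, share of assignments missed.
X = np.column_stack([
    rng.uniform(0.0, 4.0, n),
    rng.integers(0, 60, n),
    rng.uniform(0.0, 1.0, n),
])
# Hypothetical outcome: 1 if the student dropped out, 0 otherwise.
y = ((X[:, 0] < 2.0) & (X[:, 2] > 0.4)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# The actionable artifact: the 25 students with the highest predicted risk.
risk = model.predict_proba(X_test)[:, 1]
flagged = np.argsort(risk)[::-1][:25]
print(f"Mean predicted risk among flagged students: {risk[flagged].mean():.2f}")
```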

Accelerating Discovery with New Tools and Methods for Next Generation Social Science


DARPA: “The explosive growth of global digital connectivity has opened new possibilities for designing and conducting social science research. Once limited by practical constraints to experiments involving just a few dozen participants—often university students or other easily available groups—or to correlational studies of large datasets without any opportunity for determining causation, scientists can now engage thousands of diverse volunteers online and explore an expanded range of important topics and questions. If new tools and methods for harnessing virtual or alternate reality and massively distributed platforms could be developed and objectively validated, many of today’s most important and vexing challenges in social science—such as identifying the primary drivers of social cooperation, instability and resilience—might be made more tractable, with benefits for domains as broad as national security, public health, and economics.

To begin to assess the research opportunities provided by today’s web-connected world and advanced technologies, DARPA today launched its Next Generation Social Science (NGS2) program. The program aims to build and evaluate new methods and tools to advance rigorous, reproducible social science studies at scales necessary to develop and validate causal models of human social behaviors. The program will draw upon and build across a wide array of disciplines—including social sciences like sociology, economics, political science, anthropology, and psychology, as well as information and computer sciences, physics, biology and math.

As an initial focus, NGS2 will challenge researchers to develop and use these new tools and methods to identify causal mechanisms of “collective identity” formation—how a group of individuals becomes a unified whole, and how under certain circumstances that community breaks down into a chaotic mix of disconnected individuals.

“Social science has done a remarkable job of helping us understand ourselves as the highly social creatures we are, but the field has long acknowledged and rued some frustrating research limitations, including technical and logistical limits to experimentally studying large, representative populations and the challenges of replicating key studies to better understand the limits of our knowledge,” said Adam Russell, DARPA program manager. “As a result, it’s been difficult for social scientists to determine what variables matter most in explaining their observations of human social systems and to move from documenting correlation to identifying causation.”

On top of those methodological and analytic limitations, Russell said, the field is inherently challenged because of its subject matter: human beings, with all their complex variability and seeming unpredictability. “Physicists have joked about how much more difficult their field would be if atoms or electrons had personalities, but that’s exactly the situation faced by social scientists,” he said.

By developing and applying new methods and models to larger, more diverse, and more representative groups of individuals—such as through web-based global gaming and alternate reality platforms—NGS2 seeks to validate new tools that may empower social science in the same way that sophisticated telescopes and microscopes have helped advance astronomy and biology….(More)”

Revolutionizing Innovation: Users, Communities, and Open Innovation


Book edited by Dietmar Harhoff and Karim R. Lakhani: “The last two decades have witnessed an extraordinary growth of new models of managing and organizing the innovation process that emphasizes users over producers. Large parts of the knowledge economy now routinely rely on users, communities, and open innovation approaches to solve important technological and organizational problems. This view of innovation, pioneered by the economist Eric von Hippel, counters the dominant paradigm, which cast the profit-seeking incentives of firms as the main driver of technical change. In a series of influential writings, von Hippel and colleagues found empirical evidence that flatly contradicted the producer-centered model of innovation. Since then, the study of user-driven innovation has continued and expanded, with further empirical exploration of a distributed model of innovation that includes communities and platforms in a variety of contexts and with the development of theory to explain the economic underpinnings of this still emerging paradigm. This volume provides a comprehensive and multidisciplinary view of the field of user and open innovation, reflecting advances in the field over the last several decades.

The contributors—including many colleagues of Eric von Hippel—offer both theoretical and empirical perspectives from such diverse fields as economics, the history of science and technology, law, management, and policy. The empirical contexts for their studies range from household goods to financial services. After discussing the fundamentals of user innovation, the contributors cover communities and innovation; legal aspects of user and community innovation; new roles for user innovators; user interactions with firms; and user innovation in practice, describing experiments, toolkits, crowdsourcing, and crowdfunding…(More)”

How Google Optimized Healthy Office Snacks


Zoe Chance, Ravi Dhar, Michelle Hatzis, and Michiel Bakker at Harvard Business Review: “Employers need simple, low-cost ways of helping employees make healthy choices. The effects of poor health and obesity cost U.S. companies $225 billion every year, according to the Centers for Disease Control, and this number is quickly rising. Although some employer-sponsored wellness programs have yielded high returns — Johnson & Johnson reported a 170% return on wellness spending in the 2000s — the employee wellness industry as a whole has struggled to prove its value.

Wellness initiatives often fail because they rely on outdated methods of engagement, placing too much emphasis on providing information. Extensive evidence from behavioral economics has shown that information rarely succeeds in changing behavior or building new habits for fitness and food choices. Telling people why and how to improve their health fails to elicit behavior changes because behavior often diverges from intentions. This is particularly true for food choices because our self-control is taxed by any type of depletion, including hunger. And the necessity of making food decisions many times a day means we can’t devote much processing power to each choice, so our eating behaviors tend to be habit- and instinct-driven. With a clearer understanding of the influences on choice — context and impulsivity, for instance — companies can design environments that reinforce employees’ healthy choices, limit potential lapses, and save on health care costs.

Jointly, the Google Food Team and the Yale Center for Customer Insights have been studying how behavioral economics can improve employee health choices. We’ve run multiple field experiments to understand how small “tweaks” can nudge behavior toward desirable outcomes and yield outsized benefits. To guide these interventions, we distilled scattered findings from behavioral science into a simple framework, the four Ps of behavior change:

  • Process
  • Persuasion
  • Possibilities
  • Person

The framework helped us structure a portfolio of strategies for making healthy choices easier and more enticing and making unhealthy choices harder and less tempting. Below, we present a brief example of each point of intervention….(More)”

Design for policy and public services


The Centre for Public Impact: “Traditional approaches to policymaking have left policymakers and citizens looking for alternative solutions. Despite the best of intentions, the standard model of dispassionate expert analysis and subsequent implementation by a professional bureaucracy has, generally, led to siloed solutions and outcomes for citizens that fall short of what might be possible.

The discipline of design may well provide an answer to this problem by offering a collection of methods which allow civil servants to generate insights based on citizens’ needs, aspirations and behaviours. In doing so, it changes the view of citizens from seeing them as anonymous entities to complex humans with complex needs to match. The potential of this new approach is already becoming clear – just ask the medical teams and patients at Norway’s Oslo University Hospital. Women with a heightened risk of developing breast cancer had previously been forced to wait up to three months before receiving an appointment for examination and diagnosis. A redesign reduced this wait to just three days.

In-depth user research identified the principal issues and pinpointed the lack of information about the referral process as a critical problem. The designers also interviewed 40 hospital employees of all levels to find out about their daily schedules and processes. Governments have always drawn inspiration from fields such as sociology and economics. Design methods are not (yet) part of the policymaking canon, but such examples help explain why this may be about to change….(More)”

Private Data and Public Value: Governance, Green Consumption, and Sustainable Supply Chains


Book edited by Holly Jarman and Luis F. Luna-Reyes: “Pulled by rapidly developing technology and pushed by budget cuts, politicians and public managers are attempting to find ways to increase the public value of their actions. Policymakers are increasingly acknowledging the potential that lies in publicly disclosing more of the data that they hold, as well as incentivizing individuals and organizations to access, use, and combine it in new ways. Due to technological advances which include smarter phones, better ways to track objects and people as they travel, and more efficient data processing, it is now possible to build systems which use shared, transparent data in creative ways. This book investigates the ways in which these systems can promote public value by encouraging the disclosure and reuse of privately-held data in ways that support collective values such as environmental sustainability. Supported by funding from the National Science Foundation, the authors’ research team has been working on one such system, designed to enhance consumers’ ability to access information about the sustainability of the products that they buy and the supply chains that produce them. The book adds to the current conversation among academics and practitioners about how to promote public value through data disclosure, focusing particularly on the roles that governments, businesses and non-profit actors can play in this process, making it of interest to both scholars and policy-makers….(More)”

Citizen Science and the Flint Water Crisis


The Wilson Center’s Commons Lab: “In April 2014, the city of Flint, Michigan decided to switch its water supply source from the Detroit water system to a cheaper alternative, the Flint River. But in exchange for the cheaper price tag, the Flint residents paid a greater price with one of the worst public health crises of the past decade.

Despite concerns from Flint citizens about the quality of the water, the Michigan Department of Environmental Quality repeatedly attributed the problem to the plumbing system. It was LeeAnne Walters, a 37-year-old mother of four, who, after noticing physical and behavioral changes in her children and herself, set off a chain of events that exposed the national scandal. Eventually, with the support of Dr. Marc Edwards, an environmental engineering professor at Virginia Tech (VT), Walters discovered lead concentration levels of 13,200 parts per billion in her water, 880 times the maximum concentration allowed by law and more than twice the level the Environmental Protection Agency considers to be hazardous waste.
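
A quick check of the article’s arithmetic (the regulatory thresholds below are widely reported figures, stated here as assumptions rather than taken from the article itself):

```python
# Verifying the reported multiples for Walters's 13,200 ppb sample.
sample_ppb = 13_200          # lead level measured in Walters's water
action_level_ppb = 15        # EPA Lead and Copper Rule action level (assumed)
hazardous_waste_ppb = 5_000  # commonly cited hazardous-waste threshold for lead (assumed)

print(sample_ppb / action_level_ppb)     # 880.0 -> "880 times the maximum allowed"
print(sample_ppb / hazardous_waste_ppb)  # 2.64  -> "more than twice" the hazardous-waste level
```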

Citizen science emerged as an important piece of combating the Flint water crisis. Alarmed by the government’s neglect and the health issues spreading all across Flint, Edwards and Walters began the Flint Water Study, a collaboration between Flint residents and a research team from VT. Using citizen science, the VT researchers provided the Flint residents with kits to sample and test their homes’ drinking water and then analyzed the results to unearth the truth behind Flint’s water quality.

The citizen-driven project illustrates the capacity for nonprofessional scientists to use science in order to address problems that directly affect themselves and their community. While the VT team needed the Flint residents to provide water samples, the Flint residents in turn needed the VT team to conduct the analysis. In short, both parties achieved mutually beneficial results and the partnership helped expose the scandal. Surprisingly, the “traditional” problems associated with citizen science, including the inability to mobilize the local constituent base and the lack of collaboration between citizens and professional scientists, were not the obstacles in Flint….(More)”