Science is best when the data is an open book


At The Conversation: “It was 1986, and the American space agency, NASA, was reeling from the loss of seven lives. The space shuttle Challenger had broken apart about one minute after its launch.

A Congressional commission was formed to report on the tragedy. The physicist Richard Feynman was one of its members.

NASA officials had testified to Congress that the chance of a shuttle failure was around 1 in 100,000. Feynman wanted to look beyond the official testimony to the numbers and data that backed it up.

After completing his investigation, Feynman summed up his findings in an appendix to the Commission’s official report, in which he declared that NASA officials had “fooled themselves” into thinking that the shuttle was safe.

After a launch, shuttle parts sometimes came back damaged or behaved in unexpected ways. In many of those cases, NASA came up with convenient explanations that minimised the importance of these red flags. The people at NASA badly wanted the shuttle to be safe, and this coloured their reasoning.

To Feynman, this sort of behaviour was not surprising. In his career as a physicist, Feynman had observed that not just engineers and managers, but also basic scientists have biases that can lead to self-deception.

Feynman believed that scientists should constantly remind themselves of their biases. “The first principle” of being a good researcher, according to Feynman, “is that you must not fool yourself, and you are the easiest person to fool”…. In the official report to Congress, Feynman and his colleagues recommended an independent oversight group be established to provide a continuing analysis of risk that was less biased than could be provided by NASA itself. The agency needed input from people who didn’t have a stake in the shuttle being safe.

Individual scientists also need that kind of input. The system of science ought to be set up in such a way that researchers subscribing to different theories can give independent interpretations of the same data set.

This would help protect the scientific community from the tendency for individuals to fool themselves into seeing support for their theory that isn’t there.

To me it’s clear: researchers should routinely examine others’ raw data. But in many fields today there is no opportunity to do so.

Scientists communicate their findings to each other via journal articles. These articles provide summaries of the data, often with a good deal of detail, but in many fields the raw numbers aren’t shared. And the summaries can be artfully arranged to conceal contradictions and maximise the apparent support for the author’s theory.

Occasionally, an article is true to the data behind it, showing warts and all. But we shouldn’t count on it. As the chemist Matthew Todd has said to me, that would be like expecting a real estate agent’s brochure for a property to show the property’s flaws. You wouldn’t buy a house without seeing it with your own eyes. It can be unwise to buy into a theory without seeing the unfiltered data.

Many scientific societies recognise this. For many years now, some of the journals they oversee have had a policy of requiring authors to provide the raw data when other researchers request it.

Unfortunately, this policy has failed spectacularly, at least in some areas of science. Studies have found that when one researcher requests the data behind an article, that article’s authors respond with the data in fewer than half of cases. This is a major deficiency in the system of science, an embarrassment really.

The well-intentioned policy of requiring that data be provided upon request has turned out to be a formula for unanswered emails, for excuses, and for delays. A data-before-request policy, however, can be effective.

A few journals have implemented this, requiring that data be posted online upon publication of the article…(More)”

Partnership Governance in Public Management


A Public Solutions Handbook by Seth A. Grossman and Marc Holzer: “The ability to create and sustain partnerships is a skill and a strategic capacity that utilizes the strengths and offsets the weaknesses of each actor. Partnerships between the public and private sectors allow each to enjoy the benefits of the other: the public sector benefits from increased entrepreneurship and the private sector utilizes public authority and processes to achieve economic and community revitalization. Partnership Governance in Public Management describes what partnership is in the public sector, as well as how it is managed, measured, and evaluated. Both a theoretical and practical text, this book is a what, why, and how examination of a key function of public management. Examining governing capacity, community building, downtown revitalization, and partnership governance through the lens of formalized public-private partnerships – specifically, how these partnerships are understood and sustained in our society – this book is essential reading for students and practitioners with an interest in partnership governance and public administration and management more broadly. Chapters explore partnering technologies as a way to bridge sectors, to produce results and a new sense of public purpose, and to form a stable foundation for governance to flourish….(More)”

Handbook of Digital Politics


Book edited by Stephen Coleman: “Politics continues to evolve in the digital era, spurred in part by the accelerating pace of technological development. This cutting-edge Handbook includes the very latest research on the relationship between digital information, communication technologies and politics.

Written by leading scholars in the field, the chapters explore in seven parts: theories of digital politics, government and policy, collective action and civic engagement, political talk, journalism, internet governance and new frontiers in digital politics research. The contributors focus on the politics behind the implementation of digital technologies in society today.

All students in the fields of politics, media and communication studies, journalism, science and sociology will find this book to be a useful resource in their studies. Political practitioners seeking digital strategies, as well as web and other digital practitioners wanting to know more about political applications for their work will also find this book to be of interest….(More)”

The deception that lurks in our data-driven world


Alexis C. Madrigal at Fusion: “…There’s this amazing book called Seeing Like a State, which shows how governments and other big institutions try to reduce the vast complexity of the world into a series of statistics that their leaders use to try to comprehend what’s happening.

The author, James C. Scott, opens the book with an extended anecdote about the Normalbaum. In the second half of the 18th century, Prussian rulers wanted to know how many “natural resources” they had in the tangled woods of the country. So, they started counting. And they came up with these huge tables that would let them calculate how many board-feet of wood they could pull from a given plot of forest. All the rest of the forest, everything it did for the people and the animals and the general ecology of the place, was discarded from the analysis.

But the world proved too unruly. Their data wasn’t perfect. So they started creating new forests, the Normalbaum, planting all the trees at the same time, and monoculturing them so that there were no trees in the forest that couldn’t be monetized for wood. “The fact is that forest science and geometry, backed by state power, had the capacity to transform the real, diverse, and chaotic old-growth forest into a new, more uniform forest that closely resembled the administrative grid of its techniques,” Scott wrote.

[Image: normal forest plan]

The spreadsheet became the world! They even planted the trees in rows, like a grid.

German foresters got very scientific with their fertilizer applications and management practices. And the scheme really worked—at least for a hundred years. Pretty much everyone across the world adopted their methods.

Then the forests started dying.

“In the German case, the negative biological and ultimately commercial consequences of the stripped-down forest became painfully obvious only after the second rotation of conifers had been planted,” Scott wrote.

The complex ecosystem that underpinned the growth of these trees through generations—all the microbial and inter-species relationships—was torn apart by the rigor of the Normalbaum. The nutrient cycles were broken. Resilience was lost. The hidden underpinnings of the world were revealed only when they were gone. The Germans, like they do, came up with a new word for what happened: Waldsterben, or forest death.

Sometimes, when I look out at our world—at the highest level—in which thin data have come to stand in for huge complex systems of human and biological relationships, I wonder if we’re currently deep in the Normalbaum phase of things, awaiting the moment when Waldsterben sets in.

Take the ad-supported digital media ecosystem. The idea is brilliant: capture data on people all over the web and then use what you know to show them relevant ads, ads they want to see. Not only that, but because it’s all tracked, unlike broadcast or print media, an advertiser can measure what they’re getting more precisely. And certainly the digital advertising market has grown, taking share from most other forms of media. The spreadsheet makes a ton of sense—which is one reason for the growth predictions that underpin the massive valuations of new media companies.

But scratch the surface, like Businessweek recently did, and the problems are obvious. A large percentage of the traffic to many stories and videos consists of software pretending to be human.

“The art is making the fake traffic look real, often by sprucing up websites with just enough content to make them appear authentic,” Businessweek says. “Programmatic ad-buying systems don’t necessarily differentiate between real users and bots, or between websites with fresh, original work, and Potemkin sites camouflaged with stock photos and cut-and-paste articles.”

Of course, that’s not what high-end media players are doing. But the cheap programmatic ads, fueled by fake traffic, drive down the prices across the digital media industry, making it harder to support good journalism. Meanwhile, users of many sites are rebelling against the business model by installing ad blockers.

The advertisers and ad-tech firms just wanted to capture user data to show them relevant ads. They just wanted to measure their ads more effectively. But placed into the real world, the system that grew up around these desires has reshaped the media landscape in unpredictable ways.

We’ve deceived ourselves into thinking data is a camera, but it’s really an engine. Capturing data about something changes the way that something works. Even the mere collection of stats is not a neutral act, but a way of reshaping the thing itself….(More)”

Nudge 2.0


Philipp Hacker: “This essay is both a review of the excellent book “Nudge and the Law. A European Perspective”, edited by Alberto Alemanno and Anne-Lise Sibony, and an assessment of the major themes and challenges that the behavioural analysis of law will and should face in the immediate future.

The book makes important and novel contributions in a range of topics, both on a theoretical and a substantial level. Regarding theoretical issues, four themes stand out: First, it highlights the differences between the EU and the US nudging environments. Second, it questions the reliance on expertise in rulemaking. Third, it unveils behavioural trade-offs that have too long gone unnoticed in behavioural law and economics. And fourth, it discusses the requirement of the transparency of nudges and the related concept of autonomy. Furthermore, the different authors discuss the impact of behavioural regulation on a number of substantial fields of law: health and lifestyle regulation, privacy law, and the disclosure paradigm in private law.

This paper aims to take some of the book’s insights one step further in order to point at crucial challenges – and opportunities – for the future of the behavioural analysis of law. In the last years, the movement has gained tremendously in breadth and depth. It is now time to make it scientifically even more rigorous, e.g. by openly embracing empirical uncertainty and by moving beyond the neo-classical/behavioural dichotomy. Simultaneously, the field ought to discursively readjust its normative compass. Finally and perhaps most strikingly, however, the power of big data holds the promise of taking behavioural interventions to an entirely new level. If these challenges can be overcome, this paper argues, the intersection between law and behavioural sciences will remain one of the most fruitful approaches to legal analysis in Europe and beyond….(More)”

Digital Research Confidential


New book edited by Eszter Hargittai and Christian Sandvig: “The realm of the digital offers both new methods of research and new objects of study. Because the digital environment for scholarship is constantly evolving, researchers must sometimes improvise, change their plans, and adapt. These details are often left out of research write-ups, leaving newcomers to the field frustrated when their approaches do not work as expected. Digital Research Confidential offers scholars a chance to learn from their fellow researchers’ mistakes—and their successes.

The book—a follow-up to Eszter Hargittai’s widely read Research Confidential—presents behind-the-scenes, nuts-and-bolts stories of digital research projects, written by established and rising scholars. They discuss such challenges as archiving, Web crawling, crowdsourcing, and confidentiality. They do not shrink from specifics, describing such research hiccups as an ethnographic interview so emotionally draining that afterward the researcher retreated to a bathroom to cry, and the seemingly simple research question about Wikipedia that mushroomed into years of work on millions of data points. Digital Research Confidential will be an essential resource for scholars in every field….(More)”

Gamification and Sustainable Consumption: Overcoming the Limitations of Persuasive Technologies


Paper by Martina Z. Huber and Lorenz M. Hilty: “The current patterns of production and consumption in the industrialized world are not sustainable. The goods and services we consume cause resource extractions, greenhouse gas emissions and other environmental impacts that are already affecting the conditions of living on Earth. To support the transition toward sustainable consumption patterns, ICT applications that persuade consumers to change their behavior into a “green” direction have been developed in the field of Persuasive Technology (PT).

Such persuasive systems, however, have been criticized for two reasons. First, they are often based on the assumption that information (e.g., information on individual energy consumption) causes behavior change, or a change in awareness and attitude that then changes behavior. Second, PT approaches assume that the designer of the system starts from objective criteria for “sustainable” behavior and is able to operationalize them in the context of the application.

In this chapter, we are exploring the potential of gamification to overcome the limitations of persuasive systems. Gamification, the process of using game elements in a non-game context, opens up a broader design space for ICT applications created to support sustainable consumption. In particular, a gamification-based approach may give the user more autonomy in selecting goals and relating individual action to social interaction. The idea of gamification may also help designers to view the user’s actions in a broader context and to recognize the relevance of different motivational aspects of social interaction, such as competition and cooperation. Based on this discussion we define basic requirements to be used as guidance in gamification-based motivation design for sustainable consumption….(More)”

This free online encyclopedia has achieved what Wikipedia can only dream of


Nikhil Sonnad at Quartz: “The Stanford Encyclopedia of Philosophy may be the most interesting website on the internet. Not because of the content—which includes fascinating entries on everything from ambiguity to zombies—but because of the site itself.

Its creators have solved one of the internet’s fundamental problems: How to provide authoritative, rigorously accurate knowledge, at no cost to readers. It’s something the encyclopedia, or SEP, has managed to do for two decades.

The internet is an information landfill. Somewhere in it—buried under piles of opinion, speculation, and misinformation—is virtually all of human knowledge. But sorting through the trash is difficult work. Even when you have something you think is valuable, it often turns out to be a cheap knock-off.

The story of how the SEP is run, and how it came to be, shows that it is possible to create a less trashy internet—or at least a less trashy corner of it. A place where actual knowledge is sorted into a neat, separate pile instead of being thrown into the landfill. Where the world can go to learn everything that we know to be true. Something that would make humans a lot smarter than the internet we have today.

The impossible trinity of information

The online SEP has humble beginnings. Edward Zalta, a philosopher at Stanford’s Center for the Study of Language and Information, launched it way back in September 1995, with just two entries.

[Image: Philosophizing, pre-internet. (Flickr/Erik Drost, CC BY 2.0)]

That makes it positively ancient in internet years. Even Wikipedia is only 14. ….

John Perry, the director of the center, was the one who first suggested a dictionary of philosophical terms. But Zalta had bigger ideas. He and two co-authors later described the challenge in a 2002 paper (pdf, p. 1):

A fundamental problem faced by the general public and the members of an academic discipline in the information age is how to find the most authoritative, comprehensive, and up-to-date information about an important topic.

That paper is so old that it mentions “CD-ROMs” in the second sentence. But for all the years that have passed, the basic problem remains unsolved. The three requirements the authors list—“authoritative, comprehensive, and up-to-date”—are to information what the “impossible trinity” is to economics. You can only ever have one or two at once. It is like having your cake, eating it, and then bringing it to another party.

Yet if the goal is to share with people what is true, it is extremely important for a resource to have all of these things. It must be trusted. It must not leave anything out. And it must reflect the latest state of knowledge. Unfortunately, all of the other current ways of designing an encyclopedia very badly fail to meet at least one of these requirements.

Where other encyclopedias fall short

Book

Authoritative: √ | Comprehensive: X | Up-to-date: X

[Image: Printed encyclopedias: still a thing. (Princeton University Press)]

Printed books are authoritative: Readers trust articles they know have been written and edited by experts. Books also produce a coherent overview of a subject, as the editors consider how each entry fits into the whole. But they become obsolete whenever new research comes out. Nor can a book (or even a set of volumes) be comprehensive, except perhaps for a very narrow discipline; there’s simply too much to print.

Crowdsourcing

Authoritative: X | Comprehensive: X | Up-to-date: √

A crowdsourced online encyclopedia has the virtue of timeliness. Thanks to Wikipedia’s vibrant community of non-experts, its entries on breaking-news events are often updated as they happen. But except perhaps in a few areas in which enough well-informed people care for errors to get weeded out, Wikipedia is not authoritative. One math professor reviewed basic mathematics entries and found them to be “a hot mess of error, arrogance, obscurity, and nonsense.” Nor is it comprehensive: Though it has nearly 5 million articles in the English-language version alone, seemingly in every sphere of knowledge, fewer than 10,000 are “A-class” or better, the status awarded to articles considered “essentially complete.”

Speaking of holes, the SEP has a rather detailed entry on the topic of holes, and it rather nicely illustrates one of Wikipedia’s key shortcomings. Holes present a tricky philosophical problem, the SEP entry explains: A hole is nothing, but we refer to it as if it were something. (Achille Varzi, the author of the holes entry, was called upon in the US presidential election in 2000 to weigh in on the existential status of hanging chads.) If you ask Wikipedia for holes it gives you the young-adult novel Holes and the band Hole.

In other words, holes as philosophical notions are too abstract for a crowdsourced venue that favors clean, factual statements like a novel’s plot or a band’s discography. Wikipedia’s bottom-up model could never produce an entry on holes like the SEP’s.

Crowdsourcing + voting

Authoritative: ? | Comprehensive: X | Up-to-date: ?

A variation on the wiki model is question-and-answer sites like Quora (general interest) and StackOverflow (computer programming), on which users can pose questions and write answers. These are slightly more authoritative than Wikipedia, because users also vote answers up or down according to how helpful they find them; and because answers are given by single, specific users, who are encouraged to say why they’re qualified (“I’m a UI designer at Google,” say).

But while there are sometimes ways to check people’s accreditation, it’s largely self-reported and unverified. Moreover, these sites are far from comprehensive. Any given answer is only as complete as its writer decides or is able to make it. And the questions asked and answered tend to reflect the interests of the sites’ users, which in both Quora and StackOverflow’s cases skew heavily male, American, and techie.

Moreover, the sites aren’t up-to-date. While they may respond quickly to new events, answers that become outdated aren’t deleted or changed but stay there, burdening the site with a growing mass of stale information.

The Stanford solution

So is the impossible trinity just that—impossible? Not according to Zalta. He imagined a different model for the SEP: the “dynamic reference work.”

Dynamic reference work

Authoritative: √ | Comprehensive: √ | Up-to-date: √

To achieve authority, several dozen subject editors—responsible for broad areas like “ancient philosophy” or “formal epistemology”—identify topics in need of coverage, and invite qualified philosophers to write entries on them. If the invitation is accepted, the author sends an outline to the relevant subject editors.

“An editor works with the author to get an optimal outline before the author begins to write,” says Susanna Siegel, subject editor for philosophy of mind. “Sometimes there is a lot of back and forth at this stage.” Editors may also reject entries. Zalta and Uri Nodelman, the SEP’s senior editor, say that this almost never happens. In the rare cases when it does, the reason is usually that an entry is overly biased. In short, this is not somebody randomly deciding to answer a question on Quora.

An executive editorial board—Zalta, Nodelman, and Colin Allen—works to make the SEP comprehensive….(More)”

Routledge International Handbook of Ignorance Studies


Book edited by Matthias Gross and Linsey McGoey: “Once treated as the absence of knowledge, ignorance today has become a highly influential topic in its own right, commanding growing attention across the natural and social sciences, where a wide range of scholars have begun to explore the social life and political issues involved in the distribution and strategic use of not knowing. The field is growing fast and this handbook reflects this interdisciplinary field of study by drawing contributions from economics, sociology, history, philosophy, cultural studies, anthropology, feminist studies, and related fields in order to serve as a seminal guide to the political, legal and social uses of ignorance in social and political life….(More)”

Drones and Aerial Observation: New Technologies for Property Rights, Human Rights, and Global Development


New America Foundation: “Clear and secure rights to property—land, natural resources, and other goods and assets—are crucial to human prosperity. Most people lack such rights. That lack is in part a consequence of political and social breakdowns, and in part driven by informational deficits. Unmanned Aerial Vehicles (UAVs), also known as drones, are able to gather large amounts of information cheaply and efficiently by virtue of their aerial perspective, as can unpowered platforms like kites and balloons.

That information, in the form of images, maps, and other data, can be used by communities to improve the quality and character of their property rights. These same tools are also useful in other, related aspects of global development. Drone surveillance can help conservationists protect endangered wildlife and aid scientists in understanding the changing climate; drone imagery can be used by advocates and analysts to document and deter human rights violations; UAVs can be used by first responders to search for lost people or to evaluate the extent of damage after natural disasters like earthquakes or hurricanes.

This primer discusses the capabilities and limitations of unmanned aerial vehicles in advancing property rights, human rights and development more broadly. It contains both nuts-and-bolts advice to drone operators and policy guidance….(More)”