Book edited by Jason Chilvers and Matthew Kearnes: “Changing relations between science and democracy – and controversies over issues such as climate change, energy transitions, genetically modified organisms and smart technologies – have led to a rapid rise in new forms of public participation and citizen engagement. While most existing approaches adopt fixed meanings of ‘participation’ and are consumed by questions of method or critiquing the possible limits of democratic engagement, this book offers new insights that rethink public engagements with science, innovation and environmental issues as diverse, emergent and in the making. Bringing together leading scholars on science and democracy, working between science and technology studies, political theory, geography, sociology and anthropology, the volume develops relational and co-productionist approaches to studying and intervening in spaces of participation. New empirical insights into the making, construction, circulation and effects of participation across cultures are illustrated through examples ranging from climate change and energy to nanotechnology and mundane technologies, from institutionalised deliberative processes to citizen-led innovation and activism, and from the global north to global south. This new way of seeing participation in science and democracy opens up alternative paths for reconfiguring and remaking participation in more experimental, reflexive, anticipatory and responsible ways….(More)”
Behavioural Science, Randomized Evaluations and the Transformation of Public Policy: The Case of the UK Government
Chapter by Peter John: “Behaviour change policy conveys a powerful image: groups of psychologists and scientists, maybe wearing white coats, messing with the minds of citizens, doing experiments on them without their consent, and seeking to manipulate their behaviours. Huddled in an office in the heart of Whitehall, or maybe working out of a windowless room in the White House, behavioural scientists are redesigning the messages and regulations that governments make, operating very far from the public’s view. The unsuspecting citizen becomes something akin to the subjects of science fiction novels, such as Huxley’s Brave New World or Zamyatin’s We. The emotional response to these developments is to cry out for a more humanistic form of public policy, a more participatory form of governance, and to base public policy on the common sense and good judgements of citizens and their elected representatives.
Of course, such an account is a massive stereotype, but something of this viewpoint has emerged as a backdrop to critical academic work on the use of behavioural science in government, in what has been described as the rise of the psychological state (Jones et al. 2013a, b), which might be seen to represent a step-change in the use of psychological and other forms of behavioural research to design public policies. Such a claim speaks more generally to the use of scientific ideas by government since the eighteenth century, which has been subject to a considerable amount of theoretical work in recent years, drawing on the work of Foucault, and which has developed into explorations of the theory and practice of governmentality (see Jones et al. 2013: 182-188).
With behaviour change, the ‘central concern has been to critically evaluate the broader ethical concerns of behavioural governance, which includes tracing geo-historical contingencies of knowledge mobilized in the legitimation of the behavior change agenda itself’ (190). This line of work presents a subtle set of arguments and claims that an empirical account, such as that presented in this chapter, cannot, nor should, challenge. Nonetheless, it is instructive to find out more about the phenomenon under study and to understand how the uses of behavioural ideas and randomized evaluations are limited and structured by the institutions and actors in the political process, which follow political and organizational ends. Of particular interest is the incremental and patchy nature of the diffusion of ideas, and how the use of the behavioural sciences meshes with existing standard operating procedures and routines of bureaucracies. That said, the behavioural sciences can make progress within the fragmented and decentralized policy process, and have the power to create innovations in public policies, often helped by articulate advocates of such measures.
The path of ideas in public policy is usually slow, one of gradual diffusion and small changes in operating assumptions, and this route is likely for the use of the behavioural sciences. The implication of this line of argument is that agency as well as structure plays an important role in the adoption and diffusion of ideas from the behavioural sciences. It implies a more limited and less uniform use of ideas and evidence than the critical writers in this field suggest, but one where public argument and debate play a central role….(More)”
Statistical objectivity is a cloak spun from political yarn
Angus Deaton at the Financial Times: “The word data means things that are “given”: baseline truths, not things that are manufactured, invented, tailored or spun. Especially not by politics or politicians. Yet this absolutist view can be a poor guide to using the numbers well. Statistics are far from politics-free; indeed, politics is encoded in their genes. This is ultimately a good thing.
We like to deal with facts, not factoids. We are scandalised when politicians try to censor numbers or twist them, and most statistical offices have protocols designed to prevent such abuse. Headline statistics often seem simple but typically have many moving parts. A clock has two hands and 12 numerals yet underneath there may be thousands of springs, cogs and wheels. Politics is not only about telling the time, or whether the clock is slow or fast, but also about how to design the cogs and wheels. Down in the works, even where the decisions are delegated to bureaucrats and statisticians, there is room for politics to masquerade as science. A veneer of apolitical objectivity can be an effective disguise for a political programme.
Just occasionally, however, the mask drops and the design of the cogs and wheels moves into the glare of frontline politics. Consumer price indexes are leading examples of this. Britain’s first consumer price index was based on spending patterns from 1904. Long before the second world war, these weights were grotesquely outdated. During the war, the cabinet was worried about a wage-price spiral and the Treasury committed to hold the now-irrelevant index below the astonishingly precise value of 201.5 (1914=100) through a policy of food subsidies. It would, for example, respond to an increase in the price of eggs by lowering the price of sugar. Reform of the index would have jeopardised the government’s ability to control it and was too politically risky. The index was not updated until 1947….
These examples show that the role of politics needs to be understood, and built into any careful interpretation of the data. We must always work from multiple sources, and look deep into the cogs and wheels. James Scott, the political scientist, noted that statistics are how the state sees. The state decides what it needs to see and how to see it. That politics infuses every part of this is a testament to the importance of the numbers; lives depend on what they show.
For global poverty or hunger statistics, there is no state and no one’s material wellbeing depends on them. Politicians are less likely to interfere with such data, but this also removes a vital source of monitoring and accountability. Politics is a danger to good data; but without politics data are unlikely to be good, or at least not for long….(More)”
Interdisciplinary Perspectives on Trust
Book edited by Shockley, E., Neal, T.M.S., PytlikZillig, L.M., and Bornstein, B.H.: “This timely collection explores trust research from many angles while ably demonstrating the potential of cross-discipline collaboration to deepen our understanding of institutional trust. Citing, among other things, current breakdowns of trust in prominent institutions, the book presents a multilevel model identifying universal aspects of trust as well as domain- and context-specific variations deserving further study. Contributors analyze similarities and differences in trust across public domains from politics and policing to medicine and science, and across languages and nations. Innovative strategies for measuring and assessing trust also shed new light on this essentially human behavior.
Highlights of the coverage:
- Consensus on conceptualizations and definitions of trust: are we there yet?
- Differentiating between trust and legitimacy in public attitudes towards legal authority.
- Examining the relationship between interpersonal and institutional trust in political and health care contexts.
- Trust as a multilevel phenomenon across contexts.
- Institutional trust across cultures.
- The “dark side” of institutional trust…(More)”
When Lobbyists Write Legislation, This Data Mining Tool Traces The Paper Trail
FastCoExist: “Most kids learn the grade school civics lesson about how a bill becomes a law. What those lessons usually neglect to show is how legislation today is often birthed on a lobbyist’s desk.
But even for expert researchers, journalists, and government transparency groups, tracing a bill’s lineage isn’t easy—especially at the state level. Last year alone, there were 70,000 state bills introduced in 50 states. It would take one person five weeks just to read them all. Groups that do track state legislation usually focus narrowly on a single topic, such as abortion, or perhaps a single lobby group.
Computers can do much better. A prototype tool, presented in September at Bloomberg’s Data for Good Exchange 2015 conference, mines the Sunlight Foundation’s database of more than 500,000 bills and 200,000 resolutions for the 50 states from 2007 to 2015. It also compares them to 1,500 pieces of “model legislation” written by a few lobbying groups that made their work available, such as the conservative group ALEC (American Legislative Exchange Council) and the liberal group the State Innovation Exchange (formerly called ALICE).
The results are interesting. In one example of the program in use, the team—all from the Data Science for Social Good fellowship program in Chicago—created a graphic (above) that presents the relative influence of ALEC and ALICE in different states. The thickness of each line in the graphic correlates to the percentage of bills introduced in each state that are modeled on either group’s legislation. So a relatively liberal state like New York is mostly ALICE bills, while a “swing” state like Illinois has a lot from both groups….
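The core of such a tool is document similarity. Here is a minimal sketch, not the team’s actual pipeline, of how comparing state bills against model legislation might work, assuming TF-IDF vectors over word n-grams and cosine similarity; the bill names, texts and threshold are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for real documents; the actual tool mines
# hundreds of thousands of bills from the Sunlight Foundation database.
model_bills = {
    "ALEC-style model act": "An act relating to the regulation of commerce",
    "ALICE-style model act": "An act to expand access to public services",
}
state_bills = {
    "IL HB 0001 (hypothetical)": "A bill for an act relating to the regulation of commerce",
}

# Represent every document as a TF-IDF vector over word n-grams.
vectorizer = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
corpus = list(model_bills.values()) + list(state_bills.values())
tfidf = vectorizer.fit_transform(corpus)

# Compare each state bill against each piece of model legislation.
n_models = len(model_bills)
similarity = cosine_similarity(tfidf[n_models:], tfidf[:n_models])

for bill_name, row in zip(state_bills, similarity):
    for model_name, score in zip(model_bills, row):
        if score > 0.5:  # arbitrary threshold for this sketch
            print(f"{bill_name} resembles {model_name} (cosine similarity {score:.2f})")
```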
Along with researchers from the University of Chicago, Wikimedia Foundation, Microsoft Research, and Northwestern University, Walsh is also co-author of another paper presented at the Bloomberg conference that shows how data science can increase government transparency.
Walsh and these co-authors developed software that automatically identifies earmarks in U.S. Congressional bills, showing how representatives are benefiting their own states with pork barrel projects. They verified that it works by comparing it to the results of a massive effort from the U.S. Office of Management and Budget to analyze earmarks for a few limited years. Their results, extended back to 1995 in a public database, showed that there may be many more earmarks than anyone thought.
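As a rough illustration of what automatic earmark detection can involve, here is a minimal sketch, not the authors’ published method: it flags sentences that pair a dollar amount with a capitalized name, a crude proxy for earmark-style provisions. The pattern and sample text are invented.

```python
import re

# Crude proxy for earmark-style provisions: a dollar amount followed,
# within a short span, by "in/for/at" plus a capitalized name.
EARMARK_PATTERN = re.compile(
    r"\$\d[\d,]*"                      # a dollar amount, e.g. $2,500,000
    r"[^.;]{0,120}?"                   # a little surrounding context
    r"\b(?:in|for|at)\s+[A-Z][a-z]+"   # a capitalized place or project name
)

def candidate_earmarks(bill_text):
    """Return sentences that look like earmark provisions."""
    sentences = re.split(r"(?<=[.;])\s+", bill_text)
    return [s for s in sentences if EARMARK_PATTERN.search(s)]

# Invented sample text for demonstration only.
sample = ("The Secretary shall allocate $2,500,000 for the renovation of "
          "the harbor facility in Duluth. General funds are appropriated.")
print(candidate_earmarks(sample))
```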
“Governments are making more data available. It’s something like a needle in a haystack problem, trying to extract all that information out,” says Walsh. “Both of these projects are really about shining light to these dark places where we don’t know what’s going on.”
The state legislation tracker data is available for download here, and the team is working on an expanded system that automatically downloads new state legislation so it can stay up to date…(More)”
Science is best when the data is an open book
It was 1986, and the American space agency, NASA, was reeling from the loss of seven lives. The space shuttle Challenger had broken apart about one minute after its launch.
A Congressional commission was formed to report on the tragedy. The physicist Richard Feynman was one of its members.
NASA officials had testified to Congress that the chance of a shuttle failure was around 1 in 100,000. Feynman wanted to look beyond the official testimony to the numbers and data that backed it up.
After completing his investigation, Feynman summed up his findings in an appendix to the Commission’s official report, in which he declared that NASA officials had “fooled themselves” into thinking that the shuttle was safe.
After a launch, shuttle parts sometimes came back damaged or behaved in unexpected ways. In many of those cases, NASA came up with convenient explanations that minimised the importance of these red flags. The people at NASA badly wanted the shuttle to be safe, and this coloured their reasoning.
To Feynman, this sort of behaviour was not surprising. In his career as a physicist, Feynman had observed that not just engineers and managers, but also basic scientists have biases that can lead to self-deception.
Feynman believed that scientists should constantly remind themselves of their biases. “The first principle” of being a good researcher, according to Feynman, “is that you must not fool yourself, and you are the easiest person to fool”…. In the official report to Congress, Feynman and his colleagues recommended an independent oversight group be established to provide a continuing analysis of risk that was less biased than could be provided by NASA itself. The agency needed input from people who didn’t have a stake in the shuttle being safe.
Individual scientists also need that kind of input. The system of science ought to be set up in such a way that researchers subscribing to different theories can give independent interpretations of the same data set.
This would help protect the scientific community from the tendency for individuals to fool themselves into seeing support for their theory that isn’t there.
To me it’s clear: researchers should routinely examine others’ raw data. But in many fields today there is no opportunity to do so.
Scientists communicate their findings to each other via journal articles. These articles provide summaries of the data, often with a good deal of detail, but in many fields the raw numbers aren’t shared. And the summaries can be artfully arranged to conceal contradictions and maximise the apparent support for the author’s theory.
Occasionally, an article is true to the data behind it, showing warts and all. But we shouldn’t count on it. As the chemist Matthew Todd has said to me, that would be like expecting a real estate agent’s brochure for a property to show the property’s flaws. You wouldn’t buy a house without seeing it with your own eyes. It can be unwise to buy into a theory without seeing the unfiltered data.
Many scientific societies recognise this. For many years now, some of the journals they oversee have had a policy of requiring authors to provide the raw data when other researchers request it.
Unfortunately, this policy has failed spectacularly, at least in some areas of science. Studies have found that when one researcher requests the data behind an article, that article’s authors respond with the data in fewer than half of cases. This is a major deficiency in the system of science, an embarrassment really.
The well-intentioned policy of requiring that data be provided upon request has turned out to be a formula for unanswered emails, for excuses, and for delays. A ‘data before request’ policy, however, can be effective.
A few journals have implemented this, requiring that data be posted online upon publication of the article…(More)”
Advancing Open and Citizen-Centered Government
The White House: “Today, the United States released our third Open Government National Action Plan, announcing more than 40 new or expanded initiatives to advance the President’s commitment to an open and citizen-centered government…. In the third Open Government National Action Plan, the Administration both broadens and deepens efforts to help government become more open and more citizen-centered. The plan includes new and impactful steps the Administration is taking to openly and collaboratively deliver government services and to support open government efforts across the country. These efforts prioritize a citizen-centric approach to government, including improved access to publicly available data to provide everyday Americans with the knowledge and tools necessary to make informed decisions.
One example is the College Scorecard, which shares data through application programming interfaces (APIs) to help students and families make informed choices about education. Open APIs help create an ecosystem around government data in which civil society can provide useful visual tools, making this data more accessible, and commercial developers can extract even more value to further empower students and their families. In addition to these newer approaches, the plan also highlights significant longstanding open government priorities such as access to information, fiscal transparency, and records management, and continues to push for greater progress in that work.
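As an illustration of what consuming such an API looks like, here is a minimal sketch of querying the College Scorecard API on api.data.gov; the filter and field names below are assumptions that may not match the current schema, and a free API key is required.

```python
import requests

# Sketch of a College Scorecard API call; field names are illustrative
# assumptions and may differ from the currently documented schema.
API_KEY = "YOUR_API_KEY"  # obtain a free key from api.data.gov
BASE_URL = "https://api.data.gov/ed/collegescorecard/v1/schools"

params = {
    "api_key": API_KEY,
    "school.name": "Example University",                  # filter by name
    "fields": "school.name,latest.cost.tuition.in_state"  # assumed field names
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()
for school in response.json().get("results", []):
    print(school)
```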
The plan also focuses on supporting implementation of the landmark 2030 Agenda for Sustainable Development, which sets out a vision and priorities for global development over the next 15 years and was adopted last month by 193 world leaders including President Obama. The plan includes commitments to harness open government to support progress toward the Sustainable Development Goals (SDGs) both in the United States and globally, including in the areas of education, health, food security, climate resilience, science and innovation, and justice and law enforcement. It also includes a commitment to take stock of existing U.S. government data that relates to the 17 SDGs, and to create and use data to support progress toward the SDGs.
Some examples of open government efforts newly included in the plan:
- Promoting employment by unlocking workforce data, including training, skill, job, and wage listings.
- Enhancing transparency and participation by expanding available Federal services to the Open311 platform currently available to cities, giving the public a seamless way to report problems and request assistance.
- Releasing public information from the electronically filed tax forms of nonprofit and charitable organizations (990 forms) as open, machine-readable data.
- Expanding access to justice through the White House Legal Aid Interagency Roundtable.
- Promoting open and accountable implementation of the Sustainable Development Goals….(More)”
Anyone can help with crowdsourcing future antibiotics
Springwise: “We’ve seen examples of researchers utilizing crowdsourcing to expand their datasets, such as a free mobile app where users help find data patterns in cancer research by playing games. Now a pop-up home lab is harnessing the power of citizen scientists to find future antibiotics in their backyards.
By developing a small home lab, UK-based Post/Biotics is encouraging anyone, including school children, to help find solutions to the growing antibiotic resistance crisis. Post/Biotics is a citizen science platform, which provides the toolkit, knowledge and science network so anyone can support antibiotic development. Participants can test samples of basically anything they find in natural areas, from soil to mushrooms, and if a sample has antibacterial properties, the tool will change color. They can then send the results, along with a photo and GPS location, to an online database. When the database notices a submission that may be interesting, it alerts researchers, who can then ask for samples. An open-source library of potential antimicrobials is thus established, and users simultaneously benefit from learning how to conduct microbiology experiments.
Post/Biotics is using the power of an unlimited number of citizen scientists to increase the research potential of antibiotic discovery….(More)”
Toward a manifesto for the ‘public understanding of big data’
Mike Michael and Deborah Lupton in Public Understanding of Science: “….we sketch a ‘manifesto’ for the ‘public understanding of big data’. On the one hand, this entails questions tinged by public understanding of science and public engagement with science and technology, such as the following: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data’s trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1, which outlines three novel methods for addressing some of the issues raised in the article….(More)”
Big data problems we face today can be traced to the social ordering practices of the 19th century.
Hamish Robertson and Joanne Travaglia in LSE’s The Impact Blog: “This is not the first ‘big data’ era but the second. The first was the explosion in data collection that occurred from the early 19th century – Hacking’s ‘avalanche of numbers’, precisely situated between 1820 and 1840. This was an analogue big data era, different to our current digital one but characterized by some very similar problems and concerns. Contemporary problems of data analysis and control include a variety of accepted factors that make them ‘big’, and these generally include size, complexity and technology issues. We also suggest that digitisation is a central process in this second big data era, one that seems obvious but which also appears to have reached a new threshold. Until a decade or so ago, ‘big data’ looked just like a digital version of conventional analogue records and systems, ones whose management had become normalised through statistical and mathematical analysis. Now, however, we see a level of concern and anxiety similar to the concerns that were faced in the first big data era.
This situation brings with it a socio-political dimension of interest to us, one in which our understanding of people and our actions on individuals, groups and populations are deeply implicated. The collection of social data had a purpose – understanding and controlling the population in a time of significant social change. To achieve this, new kinds of information and new methods for generating knowledge were required. Many ideas, concepts and categories developed during that first data revolution remain intact today, some uncritically accepted more now than when they were first developed. In this piece we draw out some connections between these two data ‘revolutions’ and the implications for the politics of information in contemporary society. It is clear that many of the problems in this first big data age and, more specifically, their solutions persist down to the present big data era…. Our question, then, is how we go about re-writing the ideological inheritance of that first data revolution. Can we or will we unpack the ideological sequelae of that past revolution during this present one? The initial indicators are not good, in that there is a pervasive assumption in this broad interdisciplinary field that reductive categories are both necessary and natural. Our social ordering practices have influenced our social epistemology. We run the risk in the social sciences of perpetuating the ideological victories of the first data revolution as we progress through the second. The need for critical analysis grows apace not just with the production of each new technique or technology but with the uncritical acceptance of the concepts, categories and assumptions that emerged from that first data revolution. That first data revolution proved to be a successful anti-revolutionary response to the numerous threats to social order posed by the incredible changes of the nineteenth century, rather than the Enlightenment emancipation that was promised. (More)”
This is part of a wider series on the Politics of Data. For more on this topic, also see Mark Carrigan’s Philosophy of Data Science interview series and the Discover Society special issue on the Politics of Data (Science).