Manufacturing Consensus

Essay by M. Anthony Mills: “…Yet, the achievement of consensus within science, however rare and special, rarely translates into consensus in social and political contexts. Take nuclear physics, a well-established field of natural science if ever there were one, in which there is a high degree of consensus. But agreement on the physics of nuclear fission is not sufficient for answering such complex social, political, and economic questions as whether nuclear energy is a safe and viable alternative energy source, whether and where to build nuclear power plants, or how to dispose of nuclear waste. Expertise in nuclear physics and literacy in its consensus views is obviously important for answering such questions, but inadequate. That’s because answering them also requires drawing on various other kinds of technical expertise — from statistics to risk assessment to engineering to environmental science — within which there may or may not be disciplinary consensus, not to mention grappling with practical challenges and deep value disagreements and conflicting interests.

It is in these contexts — where multiple kinds of scientific expertise are necessary but not sufficient for solving controversial political problems — that the dependence of non-experts on scientific expertise becomes fraught, as our debates over pandemic policies amply demonstrate. Here scientific experts may disagree about the meaning, implications, or limits of what they know. As a result, their authority to say what they know becomes precarious, and the public may challenge or even reject it. To make matters worse, we usually do not have the luxury of a scientific consensus in such controversial contexts anyway, because political decisions often have to be made long before a scientific consensus can be reached — or because the sciences involved are those in which a consensus is simply not available, and may never be.

To be sure, scientific experts can and do weigh in on controversial political decisions. For instance, scientific institutions, such as the National Academies of Sciences, will sometimes issue “consensus reports” or similar documents on topics of social and political significance, such as risk assessment, climate change, and pandemic policies. These usually draw on existing bodies of knowledge from widely varied disciplines and take considerable time and effort to produce. Such documents can be quite helpful and are frequently used to aid policy and regulatory decision-making, although they are not always available when needed for making a decision.

Yet the kind of consensus expressed in these documents is importantly distinct from the kind we have been discussing so far, even though they are both often labeled as such. The difference is between what philosopher of science Stephen P. Turner calls a “scientific consensus” and a “consensus of scientists.” A scientific consensus, as described earlier, is a relatively stable paradigm that structures and organizes scientific research. By contrast, a consensus of scientists is an organized, professional opinion, created in response to an explicit political or social need, often an official government request…(More)”.

Slowed canonical progress in large fields of science

Paper by Johan S. G. Chu and James A. Evans: “The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas…(More)”.

Expertise, ‘Publics’ and the Construction of Government Policy

Introduction to Special Issue of Discover Society about the role of expertise and professional knowledge in democracy by John Holmwood: “In the UK, the vexed nature of the issue was, perhaps, best illustrated by (then Justice Secretary) Michael Gove’s comment during the Brexit campaign that he thought, “the people of this country have had enough of experts.” The comment is oft cited, and derided, especially in the context of the Covid-19 pandemic, where the public has, or so it is argued, found a new respect for a science that can guide public policy and deliver solutions.

Yet, Michael Gove’s point was more nuanced than is usually credited. It wasn’t scientific advice that he claimed people were fed up with, but “experts with organisations with acronyms saying that they know what is best and getting it consistently wrong.” In other words, his complaint was about specific organised advocacy groups and their intervention in public debate and reporting in the media.

… the Government has consistently mobilised the claimed expert opinion of organisations in justification of their policies

Michael Gove’s extended comment was disingenuous. After all, the Brexit campaign, no less than the Remain campaign, drew upon arguments from think tanks and lobby groups. Moreover, since the referendum, the Government has consistently mobilised the claimed expert opinion of organisations in justification of their policies. Indeed, as Layla Aitlhadj and John Holmwood in this special issue argue, they have deliberately ‘managed’ civil society groups and supposedly independent reviews, such as that currently underway into the Prevent counter extremism policy.

In fact, there is nothing straightforward about the relationship between expertise and democracy as Stephen Turner (2003) has observed. The development of liberal democracy involves the rise of professional and expert knowledge which underpins the everyday governance of public institutions. At the same time, wider publics are asked to trust that knowledge even where it impinges directly upon their preferences; they are not in a position to evaluate it, except through the mediation of other experts. Elected politicians and governments, in turn, are dependent on expert knowledge to guide their policy choices, which are duly constrained by what is possible on the basis of technical judgements….(More)”

Seek diversity to solve complexity

Katrin Prager at Nature: “As a social scientist, I know that one person cannot solve a societal problem on their own — and even a group of very intelligent people will struggle to do it. But we can boost our chances of success if we ensure not only that the team members are intelligent, but also that the team itself is highly diverse.

By ‘diverse’ I mean demographic diversity encompassing things such as race, gender identity, class, ethnicity, career stage and age, and cognitive diversity, including differences in thoughts, insights, disciplines, perspectives, frames of reference and thinking styles. And the team needs to be purposely diverse instead of arbitrarily diverse.

In my work I focus on complex world problems, such as how to sustainably manage our natural resources and landscapes, and I’ve found that it helps to deliberately assemble diverse teams. This effort requires me to be aware of the different ways in which people can be diverse, and to reflect on my own preferences and biases. Sometimes the teams might not be as diverse as I’d like. But I’ve found that making the effort not only to encourage diversity, but also to foster better understanding between team members reaps dividends….(More)”

Be Skeptical of Thought Leaders

Book Review by Evan Selinger: “Corporations regularly advertise their commitment to “ethics.” They often profess to behave better than the law requires and sometimes may even claim to make the world a better place. Google, for example, trumpets its commitment to “responsibly” developing artificial intelligence and swears it follows lofty AI principles that include being “socially beneficial” and “accountable to people,” and that “avoid creating or reinforcing unfair bias.”

Google’s recent treatment of Timnit Gebru, the former co-leader of its ethical AI team, tells another story. After Gebru went through an antagonistic internal review process for a co-authored paper that explores social and environmental risks and expressed concern over justice issues within Google, the company didn’t congratulate her for a job well done. Instead, she and vocally supportive colleague Margaret Mitchell (the other co-leader) were “forced out.” Google’s behavior “perhaps irreversibly damaged” the company’s reputation. It was hard not to conclude that corporate values misalign with the public good.

Even as tech companies continue to display hypocrisy, there might still be good reasons to have high hopes for their behavior in the future. Suppose corporations can do better than ethics washing, virtue signaling, and making incremental improvements that don’t challenge aggressive plans for financial growth. If so, society desperately needs to know what it takes to bring about dramatic change. On paper, Susan Liautaud is the right person to turn to for help. She has impressive academic credentials (a PhD in Social Policy from the London School of Economics and a JD from Columbia University Law School), founded and manages an ethics consulting firm with an international reach, and teaches ethics courses at Stanford University.

In The Power of Ethics: How to Make Good Choices in a Complicated World, Liautaud pursues a laudable goal: democratizing the essential practical steps for making responsible decisions in a confusing and complex world. While the book is pleasantly accessible, it has glaring faults. With so much high-quality critical journalistic coverage of technologies and tech companies, we should expect more from long-form analysis.

Although ethics is more widely associated with dour finger-waving than aspirational world-building, Liautaud mostly crafts an upbeat and hopeful narrative, albeit not so cheerful that she denies the obvious pervasiveness of shortsighted mistakes and blatant misconduct. The problem is that she insists ethical values and technological development pair nicely. Big Tech might be exerting increasing control over our lives, exhibiting an oversized influence on public welfare through incursions into politics, education, social communication, space travel, national defense, policing, and currency — but this doesn’t in the least quell her enthusiasm, which remains elevated enough throughout her book to affirm the power of the people. Hyperbolically, she declares, “No matter where you stand […] you have the opportunity to prevent the monopolization of ethics by rogue actors, corporate giants, and even well-intentioned scientists and innovators.”…(More)“.

Help us identify how data can make food healthier for us and the environment

The GovLab: “To make food production, distribution, and consumption healthier for people, animals, and the environment, we need to redesign today’s food systems. Data and data science can help us develop sustainable solutions — but only if we manage to define those questions that matter.

Globally, we are witnessing the damage that unsustainable farming practices have caused to the environment. At the same time, climate change is making our food systems more fragile, while the global population continues to rapidly increase. To feed everyone, we need to become more sustainable in our approach to producing, consuming, and disposing of food.

Policymakers and stakeholders need to work together to reimagine food systems and collectively make them more resilient, healthy, and inclusive.

Data will be integral to understanding where failures and vulnerabilities exist and what methods are needed to rectify them. Yet, the insights generated from data are only as good as the questions they seek to answer. To become smarter about current and future food systems using data, we need to ask the right questions first.

That’s where The 100 Questions Initiative comes in. It starts from the premise that to leverage data in a responsible and effective manner, data initiatives should be driven by demand, not supply. Working with a global cohort of experts, The 100 Questions seeks to map the most pressing and potentially impactful questions that data and data science can answer.

Today the Barilla Foundation, the Center for European Policy Studies, and The Governance Lab at NYU Tandon School of Engineering, are announcing the launch of the Food Systems Sustainability domain of The 100 Questions. We seek to identify the 10 most important questions that need to be answered to make food systems more sustainable…(More)”.

We Need to Reimagine the Modern Think Tank

Article by Emma Vadehra: “We are in the midst of a great realignment in policymaking. After an era-defining pandemic, which itself served as backdrop to a generations-in-the-making reckoning on racial injustice, the era of policy incrementalism is giving way to broad, grassroots demands for structural change. But elected officials are not the only ones who need to evolve. As the broader policy ecosystem adjusts to a post-2020 world, think tanks that aim to provide the intellectual backbone to policy movements—through research, data analysis, and evidence-based recommendation—need to change their approach as well.

Think tanks may be slower to adapt because of long-standing biases around what qualifies someone to be a policy “expert.” Traditionally, think tanks assess qualifications based on educational attainment and advanced degrees, which has often meant prioritizing academic credentials over lived or professional experience on the ground. These hiring preferences alone leave many people out of the debates that shape their lives: if think tanks expect a master’s degree for mid-level and senior research and policy positions, their pool of candidates will be limited to the 4 percent of Latinos and 7 percent of Black people who hold those degrees, compared with 10.5 percent of white people and 17 percent of Asians/Pacific Islanders. And in specific fields like economics, from which many think tanks draw their experts, just 0.5 percent of doctoral degrees go to Black women each year.

Think tanks alone cannot change the larger cultural and societal forces that have historically limited access to certain fields. But they can change their own practices: namely, they can change how they assess expertise and who they recruit and cultivate as policy experts. In doing so, they can push the broader policy sector—including government and philanthropic donors—to do the same. Because while the next generation marches in the streets and runs for office, the public policy sector is not doing enough to diversify and support who develops, researches, enacts, and implements policy. And excluding impacted communities from the decision-making table makes our democracy less inclusive, responsive, and effective.

Two years ago, my colleagues and I at The Century Foundation, a 100-year-old think tank that has weathered many paradigm shifts in policymaking, launched an organization, Next100, to experiment with a new model for think tanks. Our mission was simple: policy by those with the most at stake, for those with the most at stake. We believed that proximity to the communities that policy looks to serve will make policy stronger, and we put muscle and resources behind the theory that those with lived experience are as much policy experts as anyone with a PhD from an Ivy League university. The pandemic and heightened calls for racial justice in the last year have only strengthened our belief in the need to thoughtfully democratize policy development. While it’s common understanding now that COVID-19 has surfaced and exacerbated profound historical inequities, not enough has been done to question why those inequities exist, or why they run so deep. How we make policy—and who makes it—is a big reason why….(More)”

Diverse Sources Database

About: “The Diverse Sources Database is NPR’s resource for journalists who believe in the value of diversity and share our goal to make public radio look and sound like America.

Originally called Source of the Week, the database launched in 2013 as a way to help journalists at NPR and member stations expand the racial/ethnic diversity of the experts they tap for stories…(More)”.

How spooks are turning to superforecasting in the Cosmic Bazaar

The Economist: “Every morning for the past year, a group of British civil servants, diplomats, police officers and spies have woken up, logged onto a slick website and offered their best guess as to whether China will invade Taiwan by a particular date. Or whether Arctic sea ice will retreat by a certain amount. Or how far covid-19 infection rates will fall. These imponderables are part of Cosmic Bazaar, a forecasting tournament created by the British government to improve its intelligence analysis.

Since the website was launched in April 2020, more than 10,000 forecasts have been made by 1,300 forecasters, from 41 government departments and several allied countries. The site has around 200 regular forecasters, who must use only publicly available information to tackle the 30-40 questions that are live at any time. Cosmic Bazaar represents the gamification of intelligence. Users are ranked by a single, brutally simple measure: the accuracy of their predictions.
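The article does not say how Cosmic Bazaar scores accuracy, but forecasting tournaments of this kind commonly use the Brier score: the mean squared difference between a forecaster's stated probabilities and what actually happened. A minimal sketch (the function name and example numbers are illustrative, not from the article):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (0..1) and
    binary outcomes (0 or 1). Lower is better: 0.0 is perfect, and
    always answering 50% earns a score of 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, well-calibrated forecaster beats a pure hedger:
sharp = brier_score([0.9, 0.1, 0.8], [1, 0, 1])    # 0.02
hedger = brier_score([0.5, 0.5, 0.5], [1, 0, 1])   # 0.25
```

One appeal of such a "brutally simple" measure is that it rewards both calibration and decisiveness: hedging every question at 50% caps your error but can never win.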

Forecasting tournaments like Cosmic Bazaar draw on a handful of basic ideas. One of them, as seen in this case, is the “wisdom of crowds”, a concept first illustrated by Francis Galton, a statistician, in 1907. Galton observed that in a contest to estimate the weight of an ox at a county fair, the median guess of nearly 800 people was accurate within 1% of the true figure.
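Galton's observation is easy to reproduce in simulation: take many individually noisy guesses and aggregate with the median. A rough sketch (the noise model is made up for illustration; only the ox's weight and crowd size echo Galton's account):

```python
import random
import statistics

random.seed(42)
TRUE_WEIGHT = 1198  # pounds: the dressed weight Galton reported

# 800 guessers, each individually noisy and erratically biased.
guesses = [TRUE_WEIGHT + random.gauss(0, 150) + random.uniform(-50, 50)
           for _ in range(800)]

crowd_estimate = statistics.median(guesses)
error_pct = abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT * 100
# Individual guesses are routinely off by 10% or more, yet the median
# of the crowd typically lands within about 1% of the true figure.
```

The median matters here: unlike the mean, it is robust to a few wildly off guesses, which is part of why crowd aggregates can beat most individual guessers.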

Crowdsourcing, as this idea is now called, has been augmented by more recent research into whether and how people make good judgments. Experiments by Philip Tetlock of the University of Pennsylvania, and others, show that experts’ predictions are often no better than chance. Yet some people, dubbed “superforecasters”, often do make accurate predictions, largely because of the way they form judgments—such as having a commitment to revising predictions in light of new data, and being aware of typical human biases. Dr Tetlock’s ideas received publicity last year when Dominic Cummings, then an adviser to Boris Johnson, Britain’s prime minister, endorsed his book and hired a controversial superforecaster to work at Mr Johnson’s office in Downing Street….(More)”.

Lawmakers’ use of scientific evidence can be improved

Paper by D. Max Crowley et al: “This study is an experimental trial that demonstrates the potential for formal outreach strategies to change congressional use of research. Our results show that collaboration between policy and research communities can change policymakers’ value of science and result in legislation that appears to be more inclusive of research evidence. The findings of this study also demonstrated changes in researchers’ knowledge and motivation to engage with policymakers as well as their actual policy engagement behavior. Together, the observed changes in both policymakers and researchers randomized to receive an intervention for supporting legislative use of research evidence (i.e., the Research-to-Policy Collaboration model) provide support for the underlying theories around the social nature of research translation and evidence use….(More)”.