Randomistas vs. Contestistas


Excerpt from an article by Beth Simone Noveck: “Social scientists who either run experiments or conduct systematic reviews tend to be fervent proponents of the value of RCTs. But that evidentiary hierarchy—what some people call the “RCT industrial complex”—may actually lead us to discount workable solutions just because there is no accompanying RCT.

A trawl of the solution space shows that successful interventions often come from more varied places: entrepreneurs in business, philanthropy, civil society, and social enterprise, as well as business schools that promote and study open innovation, often by designing competitions to source ideas. Uncovering these exciting social innovations lays bare the limitations of confining a definition of what works only to RCTs.

Many more entrepreneurial and innovative solutions are simply not tested with an RCT and are not the subject of academic study. As one public official said to me, you cannot saddle an entrepreneur with having to do a randomized controlled trial (RCT), which they do not have the time or know-how to do. They are busy helping real people, and we have to allow them “to get on with it.”

For example, MIT Solve describes itself as a marketplace for socially impactful innovation, designed to identify lasting solutions to the world’s most pressing problems. It catalogs hundreds of innovations in use around the world, like Faircap, a chemical-free water filter used in Mozambique, or WheeLog!, an application that enables individuals and local governments to share accessibility information in Tokyo.

Research funding is also too limited (and too slow) for RCTs to assess every innovation in every domain. Many effective innovators do not have the time, resources, or know-how to partner with academic researchers to conduct a study, or they evaluate projects by some other means.

There are also significant limitations to RCTs. For a start, systematic evidence reviews are quite slow, frequently taking upward of two years, and despite published standards for review, there is a lack of transparency. Faster approaches are important. In addition, many solutions that have been tested with an RCT clearly do not work. Interestingly, the first RCT in an area tends to produce an inflated effect size….(More)”.
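The “inflated effect size” pattern the excerpt mentions is often attributed to significance filtering: among early trials, the ones that happen to overshoot the true effect are the ones that clear the significance bar and get noticed. A minimal simulation can show the mechanism; all numbers below (true effect, sample sizes) are illustrative assumptions, not figures from the excerpt.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # assumed true standardized effect, for illustration
N_PER_ARM = 50
N_TRIALS = 2000

def run_trial():
    """Simulate one two-arm trial; return the observed effect and whether
    it clears a rough p < .05 significance threshold."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # approximate standard error of the difference in means
    se = (statistics.stdev(control) ** 2 / N_PER_ARM
          + statistics.stdev(treated) ** 2 / N_PER_ARM) ** 0.5
    return diff, abs(diff) / se > 1.96

results = [run_trial() for _ in range(N_TRIALS)]
all_effects = [d for d, _ in results]
significant = [d for d, sig in results if sig]  # the trials that get noticed

print(f"mean effect, all trials:       {statistics.mean(all_effects):.3f}")
print(f"mean effect, significant only: {statistics.mean(significant):.3f}")
```

Averaged over every trial, the estimate recovers the true effect; averaged over only the “significant” trials, it is roughly double, which is the inflation the first published RCT in an area tends to exhibit.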

Academic Incentives and Research Impact: Developing Reward and Recognition Systems to Better People’s Lives


Report by Jonathan Grant: “…offers new strategies to increase the societal impact that health research can have on the community and critiques the existing academic reward structure that determines the career trajectories of so many academics—including tenure, peer-reviewed publication, citations, and grant funding, among others. The new assessment illustrates how these incentives can lead researchers to produce studies as an end-goal, rather than pursuing impact by applying the work in real world settings.

Dr. Grant also outlines new system-, institution-, and person-level changes to academic incentives that, if implemented, could make societal impact an integral part of the research process. Among the changes offered by Dr. Grant are tying a percentage of grant funding to the impact the research has on the community, breaking from the tenure model to incentivize ongoing development and quality research, and encouraging academics themselves to prioritize social impact when submitting or reviewing research and grant proposals…(More)”.

Data and Society: A Critical Introduction


Book by Anne Beaulieu and Sabina Leonelli: “Data and Society: A Critical Introduction investigates the growing importance of data as a technological, social, economic and scientific resource. It explains how data practices have come to underpin all aspects of human life and explores what this means for those directly involved in handling data. The book

  • fosters informed debate over the role of data in contemporary society
  • explains the significance of data as evidence beyond the “Big Data” hype
  • spans the technical, sociological, philosophical and ethical dimensions of data
  • provides guidance on how to use data responsibly
  • includes data stories that provide concrete cases and discussion questions.

Grounded in examples spanning genetics, sport and digital innovation, this book fosters insight into the deep interrelations between technical, social and ethical aspects of data work…(More)”.

Collective innovation is key to the lasting successes of democracies


Article by Kent Walker and Jared Cohen: “Democracies across the world have been through turbulent times in recent years, as polarization and gridlock have posed significant challenges to progress. The initial spread of COVID-19 spurred chaos at the global level, and governments scrambled to respond. With uncertainty and skepticism at an all-time high, few of us would have guessed a year ago that 66 percent of Americans would have received at least one vaccine dose by now. So what made that possible?

It turns out democracies, unlike their geopolitical competitors, have a secret weapon: collective innovation. The concept of collective innovation draws on democratic values of openness and pluralism. Free expression and free association allow for cooperation and scientific inquiry. Freedom to fail leaves room for risk-taking, while institutional checks and balances protect from state overreach.

Vaccine development and distribution offers a powerful case study. Within days of the coronavirus being first sequenced by Chinese researchers, research centers across the world had exchanged viral genome data through international data-sharing initiatives. The Organization for Economic Cooperation and Development found that 75 percent of COVID-19 research published after the outbreak relied on open data. In the United States and Europe, in universities and companies, scientists drew on open information, shared research, and debated alternative approaches to develop powerful vaccines in record-setting time.

Democracies’ self- and co-regulatory frameworks have played a critical role in advancing scientific and technological progress, leading to robust capital markets, talent-attracting immigration policies, world-class research institutions, and dynamic manufacturing sectors. The resulting world-leading productivity underpins democracies’ geopolitical influence….(More)”.

Manufacturing Consensus


Essay by M. Anthony Mills: “…Yet, the achievement of consensus within science, however rare and special, rarely translates into consensus in social and political contexts. Take nuclear physics, a well-established field of natural science if ever there were one, in which there is a high degree of consensus. But agreement on the physics of nuclear fission is not sufficient for answering such complex social, political, and economic questions as whether nuclear energy is a safe and viable alternative energy source, whether and where to build nuclear power plants, or how to dispose of nuclear waste. Expertise in nuclear physics and literacy in its consensus views is obviously important for answering such questions, but inadequate. That’s because answering them also requires drawing on various other kinds of technical expertise — from statistics to risk assessment to engineering to environmental science — within which there may or may not be disciplinary consensus, not to mention grappling with practical challenges and deep value disagreements and conflicting interests.

It is in these contexts — where multiple kinds of scientific expertise are necessary but not sufficient for solving controversial political problems — that the dependence of non-experts on scientific expertise becomes fraught, as our debates over pandemic policies amply demonstrate. Here scientific experts may disagree about the meaning, implications, or limits of what they know. As a result, their authority to say what they know becomes precarious, and the public may challenge or even reject it. To make matters worse, we usually do not have the luxury of a scientific consensus in such controversial contexts anyway, because political decisions often have to be made long before a scientific consensus can be reached — or because the sciences involved are those in which a consensus is simply not available, and may never be.

To be sure, scientific experts can and do weigh in on controversial political decisions. For instance, scientific institutions, such as the National Academies of Sciences, will sometimes issue “consensus reports” or similar documents on topics of social and political significance, such as risk assessment, climate change, and pandemic policies. These usually draw on existing bodies of knowledge from widely varied disciplines and take considerable time and effort to produce. Such documents can be quite helpful and are frequently used to aid policy and regulatory decision-making, although they are not always available when needed for making a decision.

Yet the kind of consensus expressed in these documents is importantly distinct from the kind we have been discussing so far, even though they are both often labeled as such. The difference is between what philosopher of science Stephen P. Turner calls a “scientific consensus” and a “consensus of scientists.” A scientific consensus, as described earlier, is a relatively stable paradigm that structures and organizes scientific research. By contrast, a consensus of scientists is an organized, professional opinion, created in response to an explicit political or social need, often an official government request…(More)”.

Open science, data sharing and solidarity: who benefits?


Report by Ciara Staunton et al: “Research, innovation, and progress in the life sciences are increasingly contingent on access to large quantities of data. This is one of the key premises behind the “open science” movement and the global calls for fostering the sharing of personal data, datasets, and research results. This paper reports on the outcomes of discussions by the panel “Open science, data sharing and solidarity: who benefits?” held at the 2021 Biennial conference of the International Society for the History, Philosophy, and Social Studies of Biology (ISHPSSB), and hosted by Cold Spring Harbor Laboratory (CSHL)….(More)”.

Thinking Clearly with Data: A Guide to Quantitative Reasoning and Analysis


Book by Ethan Bueno de Mesquita and Anthony Fowler: “An introduction to data science or statistics shouldn’t involve proving complex theorems or memorizing obscure terms and formulas, but that is exactly what most introductory quantitative textbooks emphasize. In contrast, Thinking Clearly with Data focuses, first and foremost, on critical thinking and conceptual understanding in order to teach students how to be better consumers and analysts of the kinds of quantitative information and arguments that they will encounter throughout their lives.

Among much else, the book teaches how to assess whether an observed relationship in data reflects a genuine relationship in the world and, if so, whether it is causal; how to make the most informative comparisons for answering questions; what questions to ask others who are making arguments using quantitative evidence; which statistics are particularly informative or misleading; how quantitative evidence should and shouldn’t influence decision-making; and how to make better decisions by using moral values as well as data. Filled with real-world examples, the book shows how its thinking tools apply to problems in a wide variety of subjects, including elections, civil conflict, crime, terrorism, financial crises, health care, sports, music, and space travel.

Above all else, Thinking Clearly with Data demonstrates why, despite the many benefits of our data-driven age, data can never be a substitute for thinking.

  • An ideal textbook for introductory quantitative methods courses in data science, statistics, political science, economics, psychology, sociology, public policy, and other fields
  • Introduces the basic toolkit of data analysis—including sampling, hypothesis testing, Bayesian inference, regression, experiments, instrumental variables, differences in differences, and regression discontinuity
  • Uses real-world examples and data from a wide variety of subjects
  • Includes practice questions and data exercises…(More)”.
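One item in the toolkit listed above, differences in differences, comes down to a few lines of arithmetic: compare how a treated group changed over time against how a comparison group changed over the same period. The numbers below are invented for illustration and are not drawn from the book.

```python
# Hypothetical outcome means for two groups, before and after a policy change.
treated_before, treated_after = 10.0, 14.0   # group exposed to the policy
control_before, control_after = 9.0, 11.0    # comparison group, never exposed

# A naive before/after comparison in the treated group mixes the policy's
# effect with whatever background trend affected everyone.
naive_change = treated_after - treated_before

# Difference-in-differences subtracts the control group's change, under the
# assumption that both groups would have trended in parallel absent the policy.
did_estimate = naive_change - (control_after - control_before)

print(f"naive before/after estimate:        {naive_change:.1f}")
print(f"difference-in-differences estimate: {did_estimate:.1f}")
```

Here the naive estimate (4.0) overstates the effect because the control group also improved by 2.0; netting that out leaves an estimated policy effect of 2.0, and the whole exercise stands or falls on the parallel-trends assumption.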

AI Generates Hypotheses Human Scientists Have Not Thought Of


Robin Blades in Scientific American: “Electric vehicles have the potential to substantially reduce carbon emissions, but car companies are running out of materials to make batteries. One crucial component, nickel, is projected to cause supply shortages as early as the end of this year. Scientists recently discovered four new materials that could potentially help—and what may be even more intriguing is how they found these materials: the researchers relied on artificial intelligence to pick out useful chemicals from a list of more than 300 options. And they are not the only humans turning to A.I. for scientific inspiration.

Creating hypotheses has long been a purely human domain. Now, though, scientists are beginning to ask machine learning to produce original insights. They are designing neural networks (a type of machine-learning setup with a structure inspired by the human brain) that suggest new hypotheses based on patterns the networks find in data instead of relying on human assumptions. Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases.

In the case of new battery materials, scientists pursuing such tasks have typically relied on database search tools, modeling and their own intuition about chemicals to pick out useful compounds. Instead a team at the University of Liverpool in England used machine learning to streamline the creative process. The researchers developed a neural network that ranked chemical combinations by how likely they were to result in a useful new material. Then the scientists used these rankings to guide their experiments in the laboratory. They identified four promising candidates for battery materials without having to test everything on their list, saving them months of trial and error…(More)”.

Embrace Complexity Through Behavioral Planning


Article by Ruth Schmidt and Katelyn Stenger: “…Designing for complexity also requires questioning assumptions about how interventions work within systems. Being wary of three key assumptions about persistence, stability, and value can help behavioral designers recognize changes over time, complex system dynamics, and oversimplified definitions of success that may impact the effectiveness of interventions.

When behavioral designers overlook these assumptions, the solutions they recommend risk being short-sighted, nonstrategic, and destined to be reactive rather than proactive. Systematically confronting and planning for these projections, on the other hand, can help behavioral designers create and situate more resilient interventions within complex systems.

In a recent article, we explored why behavioral science is still learning to grapple with complexity, what it loses when it doesn’t, and what it could gain by doing so in a more strategic and systematic way. This approach—which we call “behavioral planning”—borrows from business strategy practices like scenario planning that play out assumptions about plausible future conditions to test how they might impact the business environment. The results are then used to inform “roughly right” directional decisions about how to move forward…(More)”

A Vision for the Future of Science Philanthropy


Article by Evan Michelson and Adam Falk: “If science is to accomplish all that society hopes it will in the years ahead, philanthropy will need to be an important contributor to those developments. It is therefore critical that philanthropic funders understand how to maximize science philanthropy’s contribution to the research enterprise. Given these stakes, what will science philanthropy need to get right in the coming years in order to have a positive impact on the scientific enterprise and to help move society toward greater collective well-being?

The answer, we argue, is that science philanthropies will increasingly need to serve a broader purpose. They certainly must continue to provide funding to promote new discoveries throughout the physical and social sciences. But they will also have to provide this support in a manner that takes account of the implications for society, shaping both the content of the research and the way it is pursued. To achieve this dual goal of positive scientific and societal impact, we identify four particular dimensions of the research enterprise that philanthropies will need to advance: seeding new fields of research, broadening participation in science, fostering new institutional practices, and deepening links between science and society. If funders attend assiduously to all these dimensions, we hope that when people look back 75 years from now, science philanthropy will have fully realized its extraordinary potential…(More)”.