Collective innovation is key to the lasting successes of democracies


Article by Kent Walker and Jared Cohen: “Democracies across the world have been through turbulent times in recent years, as polarization and gridlock have posed significant challenges to progress. The initial spread of COVID-19 spurred chaos at the global level, and governments scrambled to respond. With uncertainty and skepticism at an all-time high, few of us would have guessed a year ago that 66 percent of Americans would have received at least one vaccine dose by now. So what made that possible?

It turns out democracies, unlike their geopolitical competitors, have a secret weapon: collective innovation. The concept of collective innovation draws on democratic values of openness and pluralism. Free expression and free association allow for cooperation and scientific inquiry. Freedom to fail leaves room for risk-taking, while institutional checks and balances protect from state overreach.

Vaccine development and distribution offers a powerful case study. Within days of Chinese researchers first sequencing the coronavirus, research centers across the world had exchanged viral genome data through international data-sharing initiatives. The Organization for Economic Cooperation and Development found that 75 percent of COVID-19 research published after the outbreak relied on open data. In the United States and Europe, in universities and companies, scientists drew on open information, shared research, and debated alternative approaches to develop powerful vaccines in record-setting time.

Democracies’ self- and co-regulatory frameworks have played a critical role in advancing scientific and technological progress, leading to robust capital markets, talent-attracting immigration policies, world-class research institutions, and dynamic manufacturing sectors. The resulting world-leading productivity underpins democracies’ geopolitical influence….(More)”.

Manufacturing Consensus


Essay by M. Anthony Mills: “…Yet, the achievement of consensus within science, however rare and special, rarely translates into consensus in social and political contexts. Take nuclear physics, a well-established field of natural science if ever there were one, in which there is a high degree of consensus. But agreement on the physics of nuclear fission is not sufficient for answering such complex social, political, and economic questions as whether nuclear energy is a safe and viable alternative energy source, whether and where to build nuclear power plants, or how to dispose of nuclear waste. Expertise in nuclear physics and literacy in its consensus views is obviously important for answering such questions, but inadequate. That’s because answering them also requires drawing on various other kinds of technical expertise — from statistics to risk assessment to engineering to environmental science — within which there may or may not be disciplinary consensus, not to mention grappling with practical challenges and deep value disagreements and conflicting interests.

It is in these contexts — where multiple kinds of scientific expertise are necessary but not sufficient for solving controversial political problems — that the dependence of non-experts on scientific expertise becomes fraught, as our debates over pandemic policies amply demonstrate. Here scientific experts may disagree about the meaning, implications, or limits of what they know. As a result, their authority to say what they know becomes precarious, and the public may challenge or even reject it. To make matters worse, we usually do not have the luxury of a scientific consensus in such controversial contexts anyway, because political decisions often have to be made long before a scientific consensus can be reached — or because the sciences involved are those in which a consensus is simply not available, and may never be.

To be sure, scientific experts can and do weigh in on controversial political decisions. For instance, scientific institutions, such as the National Academies of Sciences, will sometimes issue “consensus reports” or similar documents on topics of social and political significance, such as risk assessment, climate change, and pandemic policies. These usually draw on existing bodies of knowledge from widely varied disciplines and take considerable time and effort to produce. Such documents can be quite helpful and are frequently used to aid policy and regulatory decision-making, although they are not always available when needed for making a decision.

Yet the kind of consensus expressed in these documents is importantly distinct from the kind we have been discussing so far, even though they are both often labeled as such. The difference is between what philosopher of science Stephen P. Turner calls a “scientific consensus” and a “consensus of scientists.” A scientific consensus, as described earlier, is a relatively stable paradigm that structures and organizes scientific research. By contrast, a consensus of scientists is an organized, professional opinion, created in response to an explicit political or social need, often an official government request…(More)”.

Open science, data sharing and solidarity: who benefits?


Report by Ciara Staunton et al: “Research, innovation, and progress in the life sciences are increasingly contingent on access to large quantities of data. This is one of the key premises behind the “open science” movement and the global calls for fostering the sharing of personal data, datasets, and research results. This paper reports on the outcomes of discussions by the panel “Open science, data sharing and solidarity: who benefits?” held at the 2021 Biennial conference of the International Society for the History, Philosophy, and Social Studies of Biology (ISHPSSB), and hosted by Cold Spring Harbor Laboratory (CSHL)….(More)”.

Thinking Clearly with Data: A Guide to Quantitative Reasoning and Analysis


Book by Ethan Bueno de Mesquita and Anthony Fowler: “An introduction to data science or statistics shouldn’t involve proving complex theorems or memorizing obscure terms and formulas, but that is exactly what most introductory quantitative textbooks emphasize. In contrast, Thinking Clearly with Data focuses, first and foremost, on critical thinking and conceptual understanding in order to teach students how to be better consumers and analysts of the kinds of quantitative information and arguments that they will encounter throughout their lives.

Among much else, the book teaches how to assess whether an observed relationship in data reflects a genuine relationship in the world and, if so, whether it is causal; how to make the most informative comparisons for answering questions; what questions to ask others who are making arguments using quantitative evidence; which statistics are particularly informative or misleading; how quantitative evidence should and shouldn’t influence decision-making; and how to make better decisions by using moral values as well as data. Filled with real-world examples, the book shows how its thinking tools apply to problems in a wide variety of subjects, including elections, civil conflict, crime, terrorism, financial crises, health care, sports, music, and space travel.

Above all else, Thinking Clearly with Data demonstrates why, despite the many benefits of our data-driven age, data can never be a substitute for thinking.

  • An ideal textbook for introductory quantitative methods courses in data science, statistics, political science, economics, psychology, sociology, public policy, and other fields
  • Introduces the basic toolkit of data analysis—including sampling, hypothesis testing, Bayesian inference, regression, experiments, instrumental variables, differences in differences, and regression discontinuity
  • Uses real-world examples and data from a wide variety of subjects
  • Includes practice questions and data exercises…(More)”.
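To make one entry in that toolkit concrete, a difference-in-differences estimate compares the before/after change in a treated group against the same change in an untreated control group. The sketch below is purely illustrative (the numbers and variable names are invented, not from the book):

```python
# Minimal difference-in-differences sketch (illustrative data, not from the book).
# The estimate is the treated group's change minus the control group's change,
# which nets out any shared trend affecting both groups.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Return the DiD estimate: (treated change) - (control change)."""
    treated_change = treated_after - treated_before
    control_change = control_after - control_before
    return treated_change - control_change

# Hypothetical turnout rates (%): the treated group rose 8 points, the control
# rose 5, so the estimated treatment effect is 3 points beyond the shared trend.
effect = diff_in_diff(treated_before=40.0, treated_after=48.0,
                      control_before=41.0, control_after=46.0)
print(effect)  # 3.0
```

The key identifying assumption, which the book's style of critical thinking would press on, is that the two groups would have followed parallel trends absent treatment.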

AI Generates Hypotheses Human Scientists Have Not Thought Of


Robin Blades in Scientific American: “Electric vehicles have the potential to substantially reduce carbon emissions, but car companies are running out of materials to make batteries. One crucial component, nickel, is projected to cause supply shortages as early as the end of this year. Scientists recently discovered four new materials that could potentially help—and what may be even more intriguing is how they found these materials: the researchers relied on artificial intelligence to pick out useful chemicals from a list of more than 300 options. And they are not the only humans turning to A.I. for scientific inspiration.

Creating hypotheses has long been a purely human domain. Now, though, scientists are beginning to ask machine learning to produce original insights. They are designing neural networks (a type of machine-learning setup with a structure inspired by the human brain) that suggest new hypotheses based on patterns the networks find in data instead of relying on human assumptions. Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases.

In the case of new battery materials, scientists pursuing such tasks have typically relied on database search tools, modeling and their own intuition about chemicals to pick out useful compounds. Instead, a team at the University of Liverpool in England used machine learning to streamline the creative process. The researchers developed a neural network that ranked chemical combinations by how likely they were to result in a useful new material. Then the scientists used these rankings to guide their experiments in the laboratory. They identified four promising candidates for battery materials without having to test everything on their list, saving them months of trial and error…(More)”.
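The rank-then-test loop described above can be sketched in a few lines. Everything here is a stand-in: the Liverpool team used a trained neural network whose inputs and architecture the article does not describe, so `toy_score` and the candidate features below are hypothetical.

```python
# Hedged sketch of model-guided candidate selection: a scoring model ranks
# chemical combinations, and only the top-ranked few go on to lab experiments.
# The scoring function is a made-up proxy, not the team's actual model.

def toy_score(candidate):
    # Stand-in for a learned model: multiply two invented proxy features.
    return candidate["predicted_stability"] * candidate["conductivity"]

def rank_candidates(candidates, score_fn, top_k):
    """Rank candidates by model score, descending, and keep the top_k."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:top_k]

candidates = [
    {"name": "A", "predicted_stability": 0.9, "conductivity": 0.2},
    {"name": "B", "predicted_stability": 0.7, "conductivity": 0.8},
    {"name": "C", "predicted_stability": 0.4, "conductivity": 0.9},
]
shortlist = rank_candidates(candidates, toy_score, top_k=2)
print([c["name"] for c in shortlist])  # ['B', 'C']
```

The payoff is exactly the one the article reports: the lab tests a short, prioritized list rather than every entry, trading cheap model evaluations for expensive experiments.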

Embrace Complexity Through Behavioral Planning


Article by Ruth Schmidt and Katelyn Stenger: “…Designing for complexity also requires questioning assumptions about how interventions work within systems. Being wary of three key assumptions about persistence, stability, and value can help behavioral designers recognize changes over time, complex system dynamics, and oversimplified definitions of success that may impact the effectiveness of interventions.

When behavioral designers overlook these assumptions, the solutions they recommend risk being short-sighted, nonstrategic, and destined to be reactive rather than proactive. Systematically confronting and planning for these projections, on the other hand, can help behavioral designers create and situate more resilient interventions within complex systems.

In a recent article, we explored why behavioral science is still learning to grapple with complexity, what it loses when it doesn’t, and what it could gain by doing so in a more strategic and systematic way. This approach—which we call “behavioral planning”—borrows from business strategy practices like scenario planning that play out assumptions about plausible future conditions to test how they might impact the business environment. The results are then used to inform “roughly right” directional decisions about how to move forward…(More)”

A Vision for the Future of Science Philanthropy


Article by Evan Michelson and Adam Falk: “If science is to accomplish all that society hopes it will in the years ahead, philanthropy will need to be an important contributor to those developments. It is therefore critical that philanthropic funders understand how to maximize science philanthropy’s contribution to the research enterprise. Given these stakes, what will science philanthropy need to get right in the coming years in order to have a positive impact on the scientific enterprise and to help move society toward greater collective well-being?

The answer, we argue, is that science philanthropies will increasingly need to serve a broader purpose. They certainly must continue to provide funding to promote new discoveries throughout the physical and social sciences. But they will also have to provide this support in a manner that takes account of the implications for society, shaping both the content of the research and the way it is pursued. To achieve this dual goal of positive scientific and societal impact, we identify four particular dimensions of the research enterprise that philanthropies will need to advance: seeding new fields of research, broadening participation in science, fostering new institutional practices, and deepening links between science and society. If funders attend assiduously to all these dimensions, we hope that when people look back 75 years from now, science philanthropy will have fully realized its extraordinary potential…(More)”.

The Cambridge Handbook of Commons Research Innovations


Book edited by Sheila R. Foster and Chrystie F. Swiney: “The commons theory, first articulated by Elinor Ostrom, is increasingly used as a framework to understand and rethink the management and governance of many kinds of shared resources. These resources can include natural and digital properties, cultural goods, knowledge and intellectual property, and housing and urban infrastructure, among many others. In a world of increasing scarcity and demand – from individuals, states, and markets – it is imperative to understand how best to induce cooperation among users of these resources in ways that advance sustainability, affordability, equity, and justice. This volume reflects this multifaceted and multidisciplinary field from a variety of perspectives, offering new applications and extensions of the commons theory, which is as diverse as the scholars who study it and is still developing in exciting ways…(More)”.

A Proposal for Researcher Access to Platform Data: The Platform Transparency and Accountability Act


Paper by Nathaniel Persily: “We should not need to wait for whistleblowers to blow their whistles, however, before we can understand what is actually happening on these extremely powerful digital platforms. Congress needs to act immediately to ensure that a steady stream of rigorous research reaches the public on the most pressing issues concerning digital technology. No one trusts the representations made by the platforms themselves, though, given their conflict of interest and understandable caution in releasing information that might spook shareholders. We need to develop an unprecedented system of corporate data-sharing, mandated by government for independent research in the public interest.

This is easier said than done. Not only do the details matter, they are the only thing that matters. It is all well and good to call for “transparency” or “data-sharing,” as an uncountable number of academics have, but the way government might set up this unprecedented regime will determine whether it can serve the grandiose purposes tech critics hope it will….(More)”.

Giant, free index to world’s research papers released online


Holly Else at Nature: “In a project that could unlock the world’s research papers for easier computerized analysis, an American technologist has released online a gigantic index of the words and short phrases contained in more than 100 million journal articles — including many paywalled papers.

The catalogue, which was released on 7 October and is free to use, holds tables of more than 355 billion words and sentence fragments listed next to the articles in which they appear. It is an effort to help scientists use software to glean insights from published work even if they have no legal access to the underlying papers, says its creator, Carl Malamud. He released the files under the auspices of Public Resource, a non-profit corporation in Sebastopol, California, that he founded.

Malamud says that because his index doesn’t contain the full text of articles, but only sentence snippets up to five words long, releasing it does not breach publishers’ copyright restrictions on the reuse of paywalled articles. However, one legal expert says that publishers might question the legality of how Malamud created the index in the first place.

Some researchers who have had early access to the index say it’s a major development in helping them to search the literature with software — a procedure known as text mining. Gitanjali Yadav, a computational biologist at the University of Cambridge, UK, who studies volatile organic compounds emitted by plants, says she aims to comb through Malamud’s index to produce analyses of the plant chemicals described in the world’s research papers. “There is no way for me — or anyone else — to experimentally analyse or measure the chemical fingerprint of each and every plant species on Earth. Much of the information we seek already exists, in published literature,” she says. But researchers are restricted by lack of access to many papers, Yadav adds….(More)”.