Manufacturing Consensus


Essay by M. Anthony Mills: “…Yet, the achievement of consensus within science, however rare and special, rarely translates into consensus in social and political contexts. Take nuclear physics, a well-established field of natural science if ever there were one, in which there is a high degree of consensus. But agreement on the physics of nuclear fission is not sufficient for answering such complex social, political, and economic questions as whether nuclear energy is a safe and viable alternative energy source, whether and where to build nuclear power plants, or how to dispose of nuclear waste. Expertise in nuclear physics and literacy in its consensus views are obviously important for answering such questions, but inadequate. That’s because answering them also requires drawing on various other kinds of technical expertise — from statistics to risk assessment to engineering to environmental science — within which there may or may not be disciplinary consensus, not to mention grappling with practical challenges and deep value disagreements and conflicting interests.

It is in these contexts — where multiple kinds of scientific expertise are necessary but not sufficient for solving controversial political problems — that the dependence of non-experts on scientific expertise becomes fraught, as our debates over pandemic policies amply demonstrate. Here scientific experts may disagree about the meaning, implications, or limits of what they know. As a result, their authority to say what they know becomes precarious, and the public may challenge or even reject it. To make matters worse, we usually do not have the luxury of a scientific consensus in such controversial contexts anyway, because political decisions often have to be made long before a scientific consensus can be reached — or because the sciences involved are those in which a consensus is simply not available, and may never be.

To be sure, scientific experts can and do weigh in on controversial political decisions. For instance, scientific institutions, such as the National Academies of Sciences, will sometimes issue “consensus reports” or similar documents on topics of social and political significance, such as risk assessment, climate change, and pandemic policies. These usually draw on existing bodies of knowledge from widely varied disciplines and take considerable time and effort to produce. Such documents can be quite helpful and are frequently used to aid policy and regulatory decision-making, although they are not always available when needed for making a decision.

Yet the kind of consensus expressed in these documents is importantly distinct from the kind we have been discussing so far, even though they are both often labeled as such. The difference is between what philosopher of science Stephen P. Turner calls a “scientific consensus” and a “consensus of scientists.” A scientific consensus, as described earlier, is a relatively stable paradigm that structures and organizes scientific research. By contrast, a consensus of scientists is an organized, professional opinion, created in response to an explicit political or social need, often an official government request…(More)”.

If We Can Report on the Problem, We Can Report on the Solution


David Bornstein and Tina Rosenberg in the New York Times: “After 11 years and roughly 600 columns, this is our last….

David Bornstein: Tina, in a decade reporting on solutions, what’s the most important thing you learned?

Tina Rosenberg: This is a strange lesson for a column about new ideas and innovation, but I learned that they’re overrated. The world (mostly) doesn’t need new inventions. It needs better distribution of what’s already out there.

Some of my favorite columns were about how to take old ideas or existing products and get them to new people. As one of our columns put it, “Ideas Help No One on a Shelf. Take Them to the World.” There are proven health strategies, for example, that never went anywhere until some folks dusted them off and decided to spread them. It’s not glamorous to copy another idea. But those copycats are making a big difference.

David: I totally agree. The opportunity to learn from other places is hugely undertapped.

I mean, in the United States alone, there are over 3,000 counties. The chance that any one of them is struggling with big problems — mental health, addiction, climate change, diabetes, Covid-19, you name it — is pretty much 100 percent. But the odds that any place is actually using one of the most effective approaches to deal with its problems is quite low.

As you know, I used to be a computer programmer, and I’m still a stats nerd. With so many issues, there are “positive deviants” — say, 2 percent or 3 percent of actors who are getting significantly better results than the norm. Finding those outliers, figuring out what they’re doing that’s different, and sharing the knowledge can really help. I saw this in my reporting on childhood trauma, chronic homelessness and hospital safety, to name a few areas….(More)”
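
The “positive deviants” idea is easy to make concrete. A minimal sketch in Python — the county names, the outcome metric, and the one-standard-deviation cutoff are all invented for illustration, not anything from the columnists’ reporting:

```python
import statistics

# Invented outcome numbers (e.g., share of a county's chronically
# homeless residents successfully housed) -- purely illustrative.
outcomes = {
    "County A": 0.22, "County B": 0.25, "County C": 0.21,
    "County D": 0.58, "County E": 0.24, "County F": 0.61,
    "County G": 0.27, "County H": 0.23,
}

mean = statistics.mean(outcomes.values())
stdev = statistics.stdev(outcomes.values())

# "Positive deviants": actors whose results sit well above the norm
# (here, more than one standard deviation above the mean -- a crude
# but serviceable cutoff for a first pass).
deviants = {name: v for name, v in outcomes.items() if v > mean + stdev}
print(deviants)  # {'County D': 0.58, 'County F': 0.61}
```

The real work, as the column notes, starts after the flagging: figuring out what the outliers do differently and spreading that knowledge.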

Open science, data sharing and solidarity: who benefits?


Report by Ciara Staunton et al: “Research, innovation, and progress in the life sciences are increasingly contingent on access to large quantities of data. This is one of the key premises behind the “open science” movement and the global calls for fostering the sharing of personal data, datasets, and research results. This paper reports on the outcomes of discussions by the panel “Open science, data sharing and solidarity: who benefits?” held at the 2021 Biennial conference of the International Society for the History, Philosophy, and Social Studies of Biology (ISHPSSB), and hosted by Cold Spring Harbor Laboratory (CSHL)….(More)”.

Articulating the Role of Artificial Intelligence in Collective Intelligence: A Transactive Systems Framework


Paper by Pranav Gupta and Anita Williams Woolley: “Human society faces increasingly complex problems that require coordinated collective action. Artificial intelligence (AI) holds the potential to bring together the knowledge and associated action needed to find solutions at scale. In order to unleash the potential of human and AI systems, we need to understand the core functions of collective intelligence. To this end, we describe a socio-cognitive architecture that conceptualizes how boundedly rational individuals coordinate their cognitive resources and diverse goals to accomplish joint action. Our transactive systems framework articulates the inter-member processes underlying the emergence of collective memory, attention, and reasoning, which are fundamental to intelligence in any system. Much like the cognitive architectures that have guided the development of artificial intelligence, our transactive systems framework holds the potential to be formalized in computational terms to deepen our understanding of collective intelligence and pinpoint roles that AI can play in enhancing it….(More)”
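
The authors note that their framework “holds the potential to be formalized in computational terms.” As a purely illustrative gesture in that direction — a toy of our own, not the paper’s formalism — one ingredient might be a transactive-memory directory that tracks who knows what, so queries can be routed to the right member:

```python
from collections import defaultdict

class TransactiveMemory:
    """Toy directory of 'who knows what' -- one ingredient of the
    collective memory process the framework describes, not the
    authors' full model."""

    def __init__(self):
        self.directory = defaultdict(set)  # topic -> members credited with it

    def credit(self, member, topic):
        """Record that a member has demonstrated knowledge of a topic."""
        self.directory[topic].add(member)

    def route(self, topic):
        """Return the members a question on this topic should go to."""
        return self.directory.get(topic, set())

tm = TransactiveMemory()
tm.credit("Ana", "statistics")
tm.credit("Raj", "statistics")
tm.credit("Mei", "logistics")
print(tm.route("statistics"))  # {'Ana', 'Raj'} (set order may vary)
```

An AI teammate could maintain such a directory for a group, which is one of the roles the paper envisions for machines in collective intelligence.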

Data protection in the context of covid-19. A short (hi)story of tracing applications


Book edited by Elise Poillot, Gabriele Lenzini, Giorgio Resta, and Vincenzo Zeno-Zencovich: “The volume presents the results of a research project (named “Legafight”) funded by the Luxembourg Fond National de la Recherche in order to verify if and how digital tracing applications could be implemented in the Grand-Duchy in order to counter and abate the Covid-19 pandemic. This inevitably led to a deep comparative overview of the various existing models, starting from that of the European Union and those put into practice by Belgium, France, Germany, and Italy, with attention also to some Anglo-Saxon approaches (the UK and Australia). Not surprisingly, the main issue which had to be tackled was that of the protection of the personal data collected through the tracing applications, their use by public health authorities and the trust laid in tracing procedures by citizens. Over the last 18 months tracing apps have registered a rise, a fall, and a sudden rebirth as media devoted not so much to collecting data, but rather to distributing real-time information which should allow informed decisions and be used as repositories of health certifications…(More)”.
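
For readers unfamiliar with how the decentralized tracing apps surveyed in such comparisons protect contact data, here is a minimal sketch, loosely modeled on the rolling-identifier idea behind protocols such as DP-3T and simplified well past any country’s actual implementation: phones broadcast short-lived pseudonymous tokens derived from a secret daily key, so exposure can be checked locally without a central register of encounters.

```python
import hashlib
import os

def daily_key() -> bytes:
    """Each phone draws a fresh random secret key per day."""
    return os.urandom(32)

def rolling_token(key: bytes, interval: int) -> bytes:
    """Short-lived pseudonymous token broadcast over Bluetooth, derived
    from the daily key and the current time slot; observers cannot link
    tokens across slots without the key."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# If a user tests positive, only their daily keys are published; other
# phones recompute tokens locally and compare against what they heard,
# so no central register of encounters is ever built.
alice_key = daily_key()
heard_by_bob = {rolling_token(alice_key, 42)}       # Bob's phone logged this
published_keys = [alice_key]                        # Alice reports positive
exposed = any(rolling_token(k, slot) in heard_by_bob
              for k in published_keys
              for slot in range(96))                # 96 fifteen-minute slots/day
print(exposed)  # True
```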

AI Generates Hypotheses Human Scientists Have Not Thought Of


Robin Blades in Scientific American: “Electric vehicles have the potential to substantially reduce carbon emissions, but car companies are running out of materials to make batteries. One crucial component, nickel, is projected to cause supply shortages as early as the end of this year. Scientists recently discovered four new materials that could potentially help—and what may be even more intriguing is how they found these materials: the researchers relied on artificial intelligence to pick out useful chemicals from a list of more than 300 options. And they are not the only humans turning to A.I. for scientific inspiration.

Creating hypotheses has long been a purely human domain. Now, though, scientists are beginning to ask machine learning to produce original insights. They are designing neural networks (a type of machine-learning setup with a structure inspired by the human brain) that suggest new hypotheses based on patterns the networks find in data instead of relying on human assumptions. Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases.

In the case of new battery materials, scientists pursuing such tasks have typically relied on database search tools, modeling and their own intuition about chemicals to pick out useful compounds. Instead a team at the University of Liverpool in England used machine learning to streamline the creative process. The researchers developed a neural network that ranked chemical combinations by how likely they were to result in a useful new material. Then the scientists used these rankings to guide their experiments in the laboratory. They identified four promising candidates for battery materials without having to test everything on their list, saving them months of trial and error…(More)”.
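
The workflow described — score candidate chemical combinations, then test only the top of the list — has a generic shape that can be sketched in a few lines. This is not the Liverpool team’s actual model; the features, labels, and network size below are stand-ins invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in data: each candidate combination is a feature vector (e.g.,
# composition descriptors); labels mark combinations that yielded
# useful materials in past experiments. The "rule" is a toy.
X_known = rng.normal(size=(200, 8))
y_known = (X_known[:, 0] + X_known[:, 3] > 0.5).astype(int)

# A small neural network learns to score "likely useful" from features.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                      random_state=0)
model.fit(X_known, y_known)

# Rank 300 untested candidates and send only the top few to the lab.
X_candidates = rng.normal(size=(300, 8))
scores = model.predict_proba(X_candidates)[:, 1]
top = np.argsort(scores)[::-1][:4]
print("Test these first:", top, scores[top].round(2))
```

The payoff in the real project was the ranking itself: the lab tested a handful of top-scoring candidates instead of everything on a 300-plus-option list.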

22 Questions to Assess Responsible Data for Children (RD4C)


An Audit Tool by The GovLab and UNICEF: “Around the world and across domains, institutions are using data to improve service delivery for children. Data for and about children can, however, pose risks of misuse, such as unauthorized access or data breaches, as well as missed use of data that could have improved children’s lives if harnessed effectively. 

The RD4C Principles — Participatory; Professionally Accountable; People-Centric; Prevention of Harms Across the Data Life Cycle; Proportional; Protective of Children’s Rights; and Purpose-Driven — were developed by the GovLab and UNICEF to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence. These principles were developed to act as a north star, guiding practitioners toward more responsible data practices.

Today, The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to launch a new tool that aims to put the principles into practice. 22 Questions to Assess Responsible Data for Children (RD4C) is an audit tool to help stakeholders involved in the administration of data systems that handle data for and about children align their practices with the RD4C Principles. 

The tool encourages users to reflect on their data handling practices and strategy by posing questions regarding: 

  • Why: the purpose and rationale for the data system;
  • What: the data handled through the system; 
  • Who: the stakeholders involved in the system’s use, including data subjects;
  • How: the presence of operations, policies, and procedures; and 
  • When and where: temporal and place-based considerations….(More)”.
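
To suggest how such an audit might be operationalized in practice, here is a minimal sketch of our own devising — the sample questions are paraphrases for illustration, not the actual 22 questions in the tool:

```python
# Illustrative only: paraphrased sample questions under each lens,
# not the actual 22 questions in the RD4C audit tool.
audit = {
    "Why":            ["Is the system's purpose clearly defined and documented?"],
    "What":           ["Is only the minimum necessary data about children handled?"],
    "Who":            ["Are children, caregivers, and other stakeholders represented?"],
    "How":            ["Are access controls, policies, and breach procedures in place?"],
    "When and where": ["Are retention periods and jurisdictional rules defined?"],
}

# The auditing team fills in True/False per question; None means unreviewed.
answers = {q: None for questions in audit.values() for q in questions}

open_items = [q for q, a in answers.items() if a is not True]
print(f"{len(open_items)} question(s) still need review")
```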

Climate Change and AI: Recommendations for Government


Press Release: “A new report, developed by the Centre for AI & Climate and Climate Change AI for the Global Partnership on AI (GPAI), calls for governments to recognise the potential for artificial intelligence (AI) to accelerate the transition to net zero, and to put in place the support needed to advance AI-for-climate solutions. The report is being presented at COP26 today.

The report, Climate Change and AI: Recommendations for Government, highlights 48 specific recommendations for how governments can both support the application of AI to climate challenges and address the climate-related risks that AI poses.

The report was commissioned by the Global Partnership on AI (GPAI), a partnership between 18 countries and the EU that brings together experts from across countries and sectors to help shape the development of AI.

AI is already being used to support climate action in a wide range of use cases, several of which the report highlights. These include:

  • National Grid ESO, which has used AI to double the accuracy of its forecasts of UK electricity demand. Radically improving forecasts of electricity demand and renewable energy generation will be critical in enabling greater proportions of renewable energy on electricity grids (a toy version of this forecasting task is sketched after this list).
  • The UN Satellite Centre (UNOSAT), which has developed the FloodAI system that delivers high-frequency flood reports. FloodAI’s reports, which use a combination of satellite data and machine learning, have improved the response to climate-related disasters in Asia and Africa.
  • Climate TRACE, a global coalition of organizations, which has radically improved the transparency and accuracy of emissions monitoring by leveraging AI algorithms and data from more than 300 satellites and 11,000 sensors.
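
To make the National Grid ESO item concrete: a demand forecaster predicts each half-hour’s load from, among other things, the same slot one day and one week earlier. The sketch below is a deliberately simple linear model on synthetic data — nothing like the operator’s actual system:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic half-hourly demand: a daily cycle plus noise (a stand-in
# for the kind of series a grid operator forecasts, not real data).
t = np.arange(48 * 60)                      # 60 days of half-hour slots
demand = 30 + 8 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 1, t.size)

# Features: demand at the same slot one day and one week earlier.
lags = [48, 48 * 7]
X = np.column_stack([demand[48 * 7 - lag : -lag] for lag in lags])
y = demand[48 * 7:]

model = LinearRegression().fit(X[:-48], y[:-48])   # hold out the last day
pred = model.predict(X[-48:])
mae = np.abs(pred - y[-48:]).mean()
print(f"mean absolute error: {mae:.2f}")
```

Doubling forecast accuracy, as National Grid ESO reports, would show up in a setup like this as roughly halving that error.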

The authors also detail critical bottlenecks that are impeding faster adoption. To address these, the report calls for governments to:

  • Improve data ecosystems in sectors critical to climate transition, including the development of digital twins in e.g. the energy sector.
  • Increase support for research, innovation, and deployment through targeted funding, infrastructure, and improved market designs.
  • Make climate change a central consideration in AI strategies to shape the responsible development of AI as a whole.
  • Support greater international collaboration and capacity building to facilitate the development and governance of AI-for-climate solutions….(More)”.

Countries’ climate pledges built on flawed data


Article by Chris Mooney, Juliet Eilperin, Desmond Butler, John Muyskens, Anu Narayanswamy, and Naema Ahmed: “Across the world, many countries underreport their greenhouse gas emissions in their reports to the United Nations, a Washington Post investigation has found. An examination of 196 country reports reveals a giant gap between what nations declare their emissions to be and the greenhouse gases they are sending into the atmosphere. The gap ranges from at least 8.5 billion to as high as 13.3 billion tons a year of underreported emissions — big enough to move the needle on how much the Earth will warm.

The plan to save the world from the worst of climate change is built on data. But the data the world is relying on is inaccurate.

“If we don’t know the state of emissions today, we don’t know whether we’re cutting emissions meaningfully and substantially,” said Rob Jackson, a professor at Stanford University and chair of the Global Carbon Project, a collaboration of hundreds of researchers. “The atmosphere ultimately is the truth. The atmosphere is what we care about. The concentration of methane and other greenhouse gases in the atmosphere is what’s affecting climate.”

At the low end, the gap is larger than the yearly emissions of the United States. At the high end, it approaches the emissions of China and comprises 23 percent of humanity’s total contribution to the planet’s warming, The Post found…
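
Those figures support a quick back-of-envelope check. In the sketch below, the gap numbers come from the article; the U.S. and China comparison totals are rough public ballpark figures of our own, not the Post’s data:

```python
gap_low, gap_high = 8.5, 13.3      # billion tons/year, from the article

# Rough annual GHG totals in billions of tons CO2-equivalent; these
# comparison figures are our own ballpark inputs, not the Post's.
us_total, china_total = 6.6, 14.0

implied_world_total = gap_high / 0.23    # "23 percent of humanity's total"
print(gap_low > us_total)                # True: low end exceeds U.S. emissions
print(round(gap_high / china_total, 2))  # ~0.95: high end "approaches" China's
print(round(implied_world_total, 1))     # ~57.8 billion tons implied world total
```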

A new generation of sophisticated satellites that can measure greenhouse gases is now orbiting Earth, and these satellites can detect massive methane leaks. Data from the International Energy Agency (IEA) lists Russia as the world’s top oil and gas methane emitter, but that’s not what Russia reports to the United Nations. Its official numbers fall millions of tons shy of what independent scientific analyses show, a Post investigation found. Many oil and gas producers in the Persian Gulf region, such as the United Arab Emirates and Qatar, also report very small levels of oil and gas methane emissions that don’t line up with other scientific data sets.

“It’s hard to imagine how policymakers are going to pursue ambitious climate actions if they’re not getting the right data from national governments on how big the problem is,” said Glenn Hurowitz, chief executive of Mighty Earth, an environmental advocacy group….(More)”.

Why Are We Failing at AI Ethics?


Article by Anja Kaspersen and Wendell Wallach: “…Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have little or no digital access or because their lack of digital literacy makes them ripe for exploitation.

Such vulnerable groups are often theoretically included in discussions, but not empowered to take a meaningful part in making decisions. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.

Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.

So, why hasn’t more been done? There are three main issues at play: 

First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.

Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to comprehend when and if an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.

Or they focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

Let’s be clear: every choice entails tradeoffs. “Ethics talk” is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight is also about addressing the considerations not accommodated by the options selected, which is essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.

The second major issue is that to date all the talk about ethics is simply that: talk. 

We’ve yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives….

A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.

There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.

A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives….(More)”.