Open science, data sharing and solidarity: who benefits?


Report by Ciara Staunton et al: “Research, innovation, and progress in the life sciences are increasingly contingent on access to large quantities of data. This is one of the key premises behind the “open science” movement and the global calls for fostering the sharing of personal data, datasets, and research results. This paper reports on the outcomes of discussions by the panel “Open science, data sharing and solidarity: who benefits?” held at the 2021 Biennial conference of the International Society for the History, Philosophy, and Social Studies of Biology (ISHPSSB), and hosted by Cold Spring Harbor Laboratory (CSHL)….(More)”.

Articulating the Role of Artificial Intelligence in Collective Intelligence: A Transactive Systems Framework


Paper by Pranav Gupta and Anita Williams Woolley: “Human society faces increasingly complex problems that require coordinated collective action. Artificial intelligence (AI) holds the potential to bring together the knowledge and associated action needed to find solutions at scale. In order to unleash the potential of human and AI systems, we need to understand the core functions of collective intelligence. To this end, we describe a socio-cognitive architecture that conceptualizes how boundedly rational individuals coordinate their cognitive resources and diverse goals to accomplish joint action. Our transactive systems framework articulates the inter-member processes underlying the emergence of collective memory, attention, and reasoning, which are fundamental to intelligence in any system. Much like the cognitive architectures that have guided the development of artificial intelligence, our transactive systems framework holds the potential to be formalized in computational terms to deepen our understanding of collective intelligence and pinpoint roles that AI can play in enhancing it….(More)”

Are we really so polarised?


Article by Dominic Packer and Jay Van Bavel: “In 2020, the match-making website OkCupid asked 5 million hopeful daters around the world: “Could you date someone who has strong political opinions that are the opposite of yours?” Sixty per cent said no, up from 53% a year before.

Scholars used to worry that societies might not be polarised enough. Without clear differences between political parties, they thought, citizens lack choices, and important issues don’t get deeply debated. Now this notion seems rather quaint as countries have fractured along political lines, reflected in everything from dating preferences to where people choose to live.


Just how stark has political polarisation become? Well, it depends on where you live and how you look at it. When social psychologists study relations between groups, they often find that whereas people like their own groups a great deal, they have fairly neutral feelings towards out-groups: “They’re fine, but we’re great!” This pattern used to describe relations between Democrats and Republicans in the US. In 1980, partisans reported feeling warm towards members of their own party and neutral towards people on the other side. However, while levels of in-party warmth have remained stable since then, feelings towards the out-party have plummeted.

The dynamics are similar in the UK, where the Brexit vote was deeply divisive. A 2019 study revealed that while UK citizens did not identify strongly with political parties, they held strong identities as remainers or leavers. Their perceptions were sharply partisan, with each side regarding its supporters as intelligent and honest, while viewing the other as selfish and close-minded. The consequences of hating political out-groups are many and varied. It can lead people to support corrupt politicians, because losing to the other side seems unbearable. It can make compromise impossible even when you have common political ground. In a pandemic, it can even lead people to disregard advice from health experts if that advice is embraced by opposing partisans.

The negativity that people feel towards political opponents is known to scientists as affective polarisation. It is emotional and identity-driven – “us” versus “them”. Importantly, this is distinct from another form of division known as ideological polarisation, which refers to differences in policy preferences. So do we disagree about the actual issues as much as our feelings about each other suggest?

Despite large differences in opinion between politicians and activists from different parties, there is often less polarisation among regular voters on matters of policy. When pushed for their thoughts about specific ideas or initiatives, citizens with different political affiliations often turn out to agree more than they disagree (or at least the differences are not as stark as they imagine).

More in Common, a research consortium that explores the drivers of social fracturing and polarisation, reports on areas of agreement between groups in societies. In the UK, for example, they have found that majorities of people across the political spectrum view hate speech as a problem, are proud of the NHS, and are concerned about climate change and inequality…(More)”.

‘Is it OK to …’: the bot that gives you an instant moral judgment


Article by Poppy Noor: “Corporal punishment, wearing fur, pineapple on pizza – moral dilemmas are, by their very nature, hard to solve. That’s why the same ethical questions constantly resurface in TV, films and literature.

But what if AI could take away the brain work and answer ethical quandaries for us? Ask Delphi is a bot that’s been fed more than 1.7m examples of people’s ethical judgments on everyday questions and scenarios. If you pose an ethical quandary, it will tell you whether something is right, wrong, or indefensible.

Anyone can use Delphi. Users just put a question to the bot on its website, and see what it comes up with.

The AI is fed a vast number of scenarios – including ones from the popular Am I The Asshole subreddit, where Reddit users post dilemmas from their personal lives and get an audience to judge who the asshole in the situation was.

Then, people are recruited from Mechanical Turk – a marketplace where researchers find paid participants for studies – to say whether they agree with the AI’s answers. Each answer is put to three arbiters, with the majority or average conclusion used to decide right from wrong. The process is selective – participants have to score well on a test to qualify as a moral arbiter, and the researchers don’t recruit people who show signs of racism or sexism.

The arbiters agree with the bot’s ethical judgments 92% of the time (although that could say as much about their ethics as it does the bot’s)…(More)”.
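The adjudication step described in the excerpt – three arbiters per answer, with the majority conclusion deciding right from wrong – amounts to simple majority voting. The sketch below is a hypothetical illustration of that voting rule, not Delphi’s actual code; the function name and labels are invented.

```python
from collections import Counter

def adjudicate(judgments):
    """Return the label chosen by the most arbiters.

    With three arbiters and two candidate labels, a strict
    majority always exists; most_common(1) picks it.
    """
    label, _votes = Counter(judgments).most_common(1)[0]
    return label

print(adjudicate(["it's wrong", "it's wrong", "it's okay"]))  # → it's wrong
```

With three-way labels (right, wrong, indefensible) a plurality rather than a majority can result, which is presumably why the researchers also allow an average conclusion.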

A Paradigm Shift in the Making: Designing a New WTO Transparency Mechanism That Fits the Current Era


Paper by Yaxuan Chen: “The rules-based multilateral trading system has been suffering from transparency challenges for decades. The theory of data technology provides a new perspective from which to assess the transparency provisions in terms of their design, historical rationale, and evolution in light of the multilateral efforts for improvement since the General Agreement on Tariffs and Trade (GATT 1947). The development of frontier digital and data technologies, including mobile devices and sensors, new cryptographic technologies, cloud computing, and artificial intelligence, has completely changed the landscape of data collection, storage, processing, and analysis. In light of the new business models of international trade, trade administration, and governance, opportunities for addressing transparency challenges in the multilateral trading system have arisen.

While providing solutions to transparency problems of the past, data technology applications could trigger new transparency challenges in trade and governance. For instance, questions arise as to whether developing countries would be able to access or provide trade information with the same quantity, understandability, and timeliness as more developed countries. This is in addition to the emerging transparency expectations of the current era, with the pandemic as an immediate challenge and the rise of the “real-time” economy in a broader context. For the multilateral trading system to stay relevant, innovations for a holistic global mechanism for supply chain transparency, the transformation of council and committee operations, a smart design for technical assistance to tackle the digital divide, automated and real-time dispute resolution options and further integration of inclusiveness and sustainability considerations into trade disciplines should be explored….(More)”.

22 Questions to Assess Responsible Data for Children (RD4C)


An Audit Tool by The GovLab and UNICEF: “Around the world and across domains, institutions are using data to improve service delivery for children. Data for and about children can, however, pose risks of misuse, such as unauthorized access or data breaches, as well as missed use of data that could have improved children’s lives if harnessed effectively. 

The RD4C Principles — Participatory; Professionally Accountable; People-Centric; Prevention of Harms Across the Data Life Cycle; Proportional; Protective of Children’s Rights; and Purpose-Driven — were developed by The GovLab and UNICEF to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence. These principles were developed to act as a north star, guiding practitioners toward more responsible data practices.

Today, The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to launch a new tool that aims to put the principles into practice. 22 Questions to Assess Responsible Data for Children (RD4C) is an audit tool to help stakeholders involved in the administration of data systems that handle data for and about children align their practices with the RD4C Principles. 

The tool encourages users to reflect on their data handling practices and strategy by posing questions regarding: 

  • Why: the purpose and rationale for the data system;
  • What: the data handled through the system; 
  • Who: the stakeholders involved in the system’s use, including data subjects;
  • How: the presence of operations, policies, and procedures; and 
  • When and where: temporal and place-based considerations….(More)”.

Climate Change and AI: Recommendations for Government


Press Release: “A new report, developed by the Centre for AI & Climate and Climate Change AI for the Global Partnership on AI (GPAI), calls for governments to recognise the potential for artificial intelligence (AI) to accelerate the transition to net zero, and to put in place the support needed to advance AI-for-climate solutions. The report is being presented at COP26 today.

The report, Climate Change and AI: Recommendations for Government, highlights 48 specific recommendations for how governments can both support the application of AI to climate challenges and address the climate-related risks that AI poses.

The report was commissioned by GPAI, a partnership between 18 countries and the EU that brings together experts from across countries and sectors to help shape the development of AI.

AI is already being used to support climate action in a wide range of use cases, several of which the report highlights. These include:

  • National Grid ESO, which has used AI to double the accuracy of its forecasts of UK electricity demand. Radically improving forecasts of electricity demand and renewable energy generation will be critical in enabling greater proportions of renewable energy on electricity grids.
  • The UN Satellite Centre (UNOSAT), which has developed the FloodAI system that delivers high-frequency flood reports. FloodAI’s reports, which use a combination of satellite data and machine learning, have improved the response to climate-related disasters in Asia and Africa.
  • Climate TRACE, a global coalition of organizations, which has radically improved the transparency and accuracy of emissions monitoring by leveraging AI algorithms and data from more than 300 satellites and 11,000 sensors.

The authors also detail critical bottlenecks that are impeding faster adoption. To address these, the report calls for governments to:

  • Improve data ecosystems in sectors critical to climate transition, including the development of digital twins in e.g. the energy sector.
  • Increase support for research, innovation, and deployment through targeted funding, infrastructure, and improved market designs.
  • Make climate change a central consideration in AI strategies to shape the responsible development of AI as a whole.
  • Support greater international collaboration and capacity building to facilitate the development and governance of AI-for-climate solutions….(More)”.

Countries’ climate pledges built on flawed data


Article by Chris Mooney, Juliet Eilperin, Desmond Butler, John Muyskens, Anu Narayanswamy, and Naema Ahmed: “Across the world, many countries underreport their greenhouse gas emissions in their reports to the United Nations, a Washington Post investigation has found. An examination of 196 country reports reveals a giant gap between what nations declare their emissions to be and the greenhouse gases they are actually sending into the atmosphere. The gap ranges from at least 8.5 billion to as high as 13.3 billion tons a year of underreported emissions — big enough to move the needle on how much the Earth will warm.

The plan to save the world from the worst of climate change is built on data. But the data the world is relying on is inaccurate.

“If we don’t know the state of emissions today, we don’t know whether we’re cutting emissions meaningfully and substantially,” said Rob Jackson, a professor at Stanford University and chair of the Global Carbon Project, a collaboration of hundreds of researchers. “The atmosphere ultimately is the truth. The atmosphere is what we care about. The concentration of methane and other greenhouse gases in the atmosphere is what’s affecting climate.”

At the low end, the gap is larger than the yearly emissions of the United States. At the high end, it approaches the emissions of China and comprises 23 percent of humanity’s total contribution to the planet’s warming, The Post found…
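As a back-of-the-envelope check on the figures quoted above (this calculation is ours, not part of the Post’s methodology), the high-end gap and its stated share of humanity’s total imply a global emissions figure of roughly 58 billion tons a year:

```python
# Figures quoted in the Post's analysis (tons of greenhouse gases per year)
gap_low, gap_high = 8.5e9, 13.3e9   # range of underreported emissions
share_of_total = 0.23               # high-end gap as a share of humanity's total

implied_global_total = gap_high / share_of_total
print(f"{implied_global_total / 1e9:.1f} billion tons/year")  # → 57.8 billion tons/year
```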

A new generation of sophisticated satellites that can measure greenhouse gases is now orbiting Earth, and it can detect massive methane leaks. Data from the International Energy Agency (IEA) lists Russia as the world’s top oil and gas methane emitter, but that’s not what Russia reports to the United Nations. Its official numbers fall millions of tons shy of what independent scientific analyses show, a Post investigation found. Many oil and gas producers in the Persian Gulf region, such as the United Arab Emirates and Qatar, also report very small levels of oil and gas methane emissions that don’t line up with other scientific data sets.

“It’s hard to imagine how policymakers are going to pursue ambitious climate actions if they’re not getting the right data from national governments on how big the problem is,” said Glenn Hurowitz, chief executive of Mighty Earth, an environmental advocacy group….(More)”.

Why Are We Failing at AI Ethics?


Article by Anja Kaspersen and Wendell Wallach: “…Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have no or restricted digital access, or because their lack of digital literacy makes them ripe for exploitation.

Such vulnerable groups are often theoretically included in discussions, but not empowered to take a meaningful part in making decisions. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.

Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.

So, why hasn’t more been done? There are three main issues at play: 

First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.

Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to assess whether an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.

Or they focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

Let’s be clear: every choice entails tradeoffs. “Ethics talk” is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight is also about addressing the considerations not accommodated by the options selected, which is essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.

The second major issue is that to date all the talk about ethics is simply that: talk. 

We’ve yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives….

A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.

There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.

A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives….(More)”.

A Vision for the Future of Science Philanthropy


Article by Evan Michelson and Adam Falk: “If science is to accomplish all that society hopes it will in the years ahead, philanthropy will need to be an important contributor to those developments. It is therefore critical that philanthropic funders understand how to maximize science philanthropy’s contribution to the research enterprise. Given these stakes, what will science philanthropy need to get right in the coming years in order to have a positive impact on the scientific enterprise and to help move society toward greater collective well-being?

The answer, we argue, is that science philanthropies will increasingly need to serve a broader purpose. They certainly must continue to provide funding to promote new discoveries throughout the physical and social sciences. But they will also have to provide this support in a manner that takes account of the implications for society, shaping both the content of the research and the way it is pursued. To achieve this dual goal of positive scientific and societal impact, we identify four particular dimensions of the research enterprise that philanthropies will need to advance: seeding new fields of research, broadening participation in science, fostering new institutional practices, and deepening links between science and society. If funders attend assiduously to all these dimensions, we hope that when people look back 75 years from now, science philanthropy will have fully realized its extraordinary potential…(More)”.