‘Is it OK to …’: the bot that gives you an instant moral judgment


Article by Poppy Noor: “Corporal punishment, wearing fur, pineapple on pizza – moral dilemmas are, by their very nature, hard to solve. That’s why the same ethical questions constantly resurface in TV, films and literature.

But what if AI could take away the brain work and answer ethical quandaries for us? Ask Delphi is a bot that’s been fed more than 1.7m examples of people’s ethical judgments on everyday questions and scenarios. If you pose an ethical quandary, it will tell you whether something is right, wrong, or indefensible.

Anyone can use Delphi. Users just put a question to the bot on its website, and see what it comes up with.

The AI is fed a vast number of scenarios – including ones from the popular Am I The Asshole subreddit, where Reddit users post dilemmas from their personal lives and get an audience to judge who the asshole in the situation was.

Then, people are recruited from Mechanical Turk – a marketplace where researchers find paid participants for studies – to say whether they agree with the AI’s answers. Each answer is put to three arbiters, with the majority or average conclusion used to decide right from wrong. The process is selective – participants have to score well on a test to qualify as a moral arbiter, and the researchers don’t recruit people who show signs of racism or sexism.
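The three-arbiter majority rule described above can be sketched in a few lines of illustrative Python. This is a hedged reconstruction of the aggregation step as the article describes it, not Delphi’s actual pipeline; the function name and labels are hypothetical:

```python
from collections import Counter

def aggregate_judgments(labels):
    """Return the majority label among arbiter judgments, or None if no majority.

    Each answer is put to a small odd panel (three arbiters in the article),
    so a strict majority normally exists.
    """
    label, n = Counter(labels).most_common(1)[0]
    return label if n > len(labels) / 2 else None

# Example: two of three arbiters judge the act wrong.
print(aggregate_judgments(["wrong", "wrong", "ok"]))  # -> wrong
```

With an odd panel size and a two-way label split, a strict majority is guaranteed, which is presumably why three arbiters are used rather than two or four.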

The arbiters agree with the bot’s ethical judgments 92% of the time (although that could say as much about their ethics as it does the bot’s)…(More)”.

A Paradigm Shift in the Making: Designing a New WTO Transparency Mechanism That Fits the Current Era


Paper by Yaxuan Chen: “The rules-based multilateral trading system has been suffering from transparency challenges for decades. The theory of data technology provides a new perspective for assessing the design, historic rationale, and evolution of the transparency provisions in light of the multilateral efforts for improvement since the General Agreement on Tariffs and Trade (GATT 1947). The development of frontier digital and data technologies, including mobile devices and sensors, new cryptographic technologies, cloud computing, and artificial intelligence, has completely changed the landscape of data collection, storage, processing, and analysis. In light of the new business models of international trade, trade administration, and governance, opportunities have arisen for addressing transparency challenges in the multilateral trading system.

While providing solutions to transparency problems of the past, data technology applications could trigger new transparency challenges in trade and governance. For instance, questions arise as to whether developing countries would be able to access or provide trade information with the same quantity, understandability, and timeliness as more developed countries. This is in addition to the emerging transparency expectations of the current era, with the pandemic as an immediate challenge and the rise of “real-time” economy in a broader context. For the multilateral trading system to stay relevant, innovations for a holistic global mechanism for supply chain transparency, the transformation of council and committee operations, a smart design for technical assistance to tackle the digital divide, automated and real-time dispute resolution options and further integration of inclusiveness and sustainability considerations into trade disciplines should be explored….(More)”.

22 Questions to Assess Responsible Data for Children (RD4C)


An Audit Tool by The GovLab and UNICEF: “Around the world and across domains, institutions are using data to improve service delivery for children. Data for and about children can, however, pose risks of misuse, such as unauthorized access or data breaches, as well as missed use of data that could have improved children’s lives if harnessed effectively. 

The RD4C Principles — Participatory; Professionally Accountable; People-Centric; Prevention of Harms Across the Data Life Cycle; Proportional; Protective of Children’s Rights; and Purpose-Driven — were developed by The GovLab and UNICEF to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence. These principles were developed to act as a north star, guiding practitioners toward more responsible data practices.

Today, The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to launch a new tool that aims to put the principles into practice. 22 Questions to Assess Responsible Data for Children (RD4C) is an audit tool to help stakeholders involved in the administration of data systems that handle data for and about children align their practices with the RD4C Principles. 

The tool encourages users to reflect on their data handling practices and strategy by posing questions regarding: 

  • Why: the purpose and rationale for the data system;
  • What: the data handled through the system; 
  • Who: the stakeholders involved in the system’s use, including data subjects;
  • How: the presence of operations, policies, and procedures; and 
  • When and where: temporal and place-based considerations….(More)”.

Climate Change and AI: Recommendations for Government


Press Release: “A new report, developed by the Centre for AI & Climate and Climate Change AI for the Global Partnership on AI (GPAI), calls for governments to recognise the potential for artificial intelligence (AI) to accelerate the transition to net zero, and to put in place the support needed to advance AI-for-climate solutions. The report is being presented at COP26 today.

The report, Climate Change and AI: Recommendations for Government, highlights 48 specific recommendations for how governments can both support the application of AI to climate challenges and address the climate-related risks that AI poses.

The report was commissioned by the Global Partnership on AI (GPAI), a partnership between 18 countries and the EU that brings together experts from across countries and sectors to help shape the development of AI.

AI is already being used to support climate action in a wide range of use cases, several of which the report highlights. These include:

  • National Grid ESO, which has used AI to double the accuracy of its forecasts of UK electricity demand. Radically improving forecasts of electricity demand and renewable energy generation will be critical in enabling greater proportions of renewable energy on electricity grids.
  • The UN Satellite Centre (UNOSAT), which has developed the FloodAI system that delivers high-frequency flood reports. FloodAI’s reports, which use a combination of satellite data and machine learning, have improved the response to climate-related disasters in Asia and Africa.
  • Climate TRACE, a global coalition of organizations, which has radically improved the transparency and accuracy of emissions monitoring by leveraging AI algorithms and data from more than 300 satellites and 11,000 sensors.

The authors also detail critical bottlenecks that are impeding faster adoption. To address these, the report calls for governments to:

  • Improve data ecosystems in sectors critical to climate transition, including the development of digital twins in e.g. the energy sector.
  • Increase support for research, innovation, and deployment through targeted funding, infrastructure, and improved market designs.
  • Make climate change a central consideration in AI strategies to shape the responsible development of AI as a whole.
  • Support greater international collaboration and capacity building to facilitate the development and governance of AI-for-climate solutions….(More)”.

Countries’ climate pledges built on flawed data


Article by Chris Mooney, Juliet Eilperin, Desmond Butler, John Muyskens, Anu Narayanswamy, and Naema Ahmed: “Across the world, many countries underreport their greenhouse gas emissions in their reports to the United Nations, a Washington Post investigation has found. An examination of 196 country reports reveals a giant gap between what nations declare their emissions to be versus the greenhouse gases they are sending into the atmosphere. The gap ranges from at least 8.5 billion to as high as 13.3 billion tons a year of underreported emissions — big enough to move the needle on how much the Earth will warm.

The plan to save the world from the worst of climate change is built on data. But the data the world is relying on is inaccurate.

“If we don’t know the state of emissions today, we don’t know whether we’re cutting emissions meaningfully and substantially,” said Rob Jackson, a professor at Stanford University and chair of the Global Carbon Project, a collaboration of hundreds of researchers. “The atmosphere ultimately is the truth. The atmosphere is what we care about. The concentration of methane and other greenhouse gases in the atmosphere is what’s affecting climate.”

At the low end, the gap is larger than the yearly emissions of the United States. At the high end, it approaches the emissions of China and comprises 23 percent of humanity’s total contribution to the planet’s warming, The Post found…
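A quick back-of-envelope check ties the quoted figures together: if the high-end gap of 13.3 billion tons is 23 percent of humanity’s total contribution, that implies a global total of roughly 58 billion tons a year. The sketch below is purely illustrative arithmetic on the article’s numbers, not part of The Post’s analysis:

```python
# Figures quoted in the article (billion tons of greenhouse gases per year)
high_end_gap = 13.3   # upper estimate of underreported emissions
share_of_total = 0.23 # fraction of humanity's total contribution

# Implied global total if the high-end gap is 23% of all emissions
implied_global_total = high_end_gap / share_of_total
print(round(implied_global_total, 1))  # -> 57.8
```

That implied total of roughly 57.8 billion tons is in the same range as independent estimates of annual global greenhouse gas emissions, so the article’s two figures are internally consistent.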

A new generation of sophisticated satellites that can measure greenhouse gases are now orbiting Earth, and they can detect massive methane leaks. Data from the International Energy Agency (IEA) lists Russia as the world’s top oil and gas methane emitter, but that’s not what Russia reports to the United Nations. Its official numbers fall millions of tons shy of what independent scientific analyses show, a Post investigation found. Many oil and gas producers in the Persian Gulf region, such as the United Arab Emirates and Qatar, also report very small levels of oil and gas methane emissions that don’t line up with other scientific data sets.

“It’s hard to imagine how policymakers are going to pursue ambitious climate actions if they’re not getting the right data from national governments on how big the problem is,” said Glenn Hurowitz, chief executive of Mighty Earth, an environmental advocacy group….(More)”.

Why Are We Failing at AI Ethics?


Article by Anja Kaspersen and Wendell Wallach: “…Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have no or restricted digital access or their lack of digital literacy makes them ripe for exploitation.

Such vulnerable groups are often theoretically included in discussions, but not empowered to take a meaningful part in making decisions. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.

Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.

So, why hasn’t more been done? There are three main issues at play: 

First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.

Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to comprehend when and if an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.

Or they focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

Let’s be clear: every choice entails tradeoffs. “Ethics talk” is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight is also about addressing the considerations not accommodated by the options selected, which is essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.

The second major issue is that to date all the talk about ethics is simply that: talk. 

We’ve yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives….

A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.

There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.

A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives….(More)”.

A Vision for the Future of Science Philanthropy


Article by Evan Michelson and Adam Falk: “If science is to accomplish all that society hopes it will in the years ahead, philanthropy will need to be an important contributor to those developments. It is therefore critical that philanthropic funders understand how to maximize science philanthropy’s contribution to the research enterprise. Given these stakes, what will science philanthropy need to get right in the coming years in order to have a positive impact on the scientific enterprise and to help move society toward greater collective well-being?

The answer, we argue, is that science philanthropies will increasingly need to serve a broader purpose. They certainly must continue to provide funding to promote new discoveries throughout the physical and social sciences. But they will also have to provide this support in a manner that takes account of the implications for society, shaping both the content of the research and the way it is pursued. To achieve this dual goal of positive scientific and societal impact, we identify four particular dimensions of the research enterprise that philanthropies will need to advance: seeding new fields of research, broadening participation in science, fostering new institutional practices, and deepening links between science and society. If funders attend assiduously to all these dimensions, we hope that when people look back 75 years from now, science philanthropy will have fully realized its extraordinary potential…(More)”.

What Collective Narcissism Does to Society


Essay by Scott Barry Kaufman: “In 2005, the psychologist Agnieszka Golec de Zavala was researching extremist groups, trying to understand what leads people to commit acts of terrorist violence. She began to notice something that looked a lot like what the 20th-century scholars Theodor Adorno and Erich Fromm had referred to as “group narcissism”: Golec de Zavala defined it to me as “a belief that the exaggerated greatness of one’s group is not sufficiently recognized by others,” in which that thirst for recognition is never satiated. At first, she thought it was a fringe phenomenon, but important nonetheless. She developed the Collective Narcissism Scale to measure the severity of group-narcissistic beliefs, asking respondents to rate their agreement with statements such as “My group deserves special treatment” and “I insist upon my group getting the respect that is due to it.”

Sixteen years later, Golec de Zavala is a professor at SWPS University, in Poland, and a lecturer at Goldsmiths, University of London, leading the study of group narcissism—and she’s realized that there’s nothing fringe about it. This thinking can happen in seemingly any kind of assemblage: a religious, political, gender, racial, or ethnic group, but also a sports team, club, or cult. Now, she said, she’s terrified at how widely she’s finding it manifested across the globe.

Collective narcissism is not simply tribalism. Humans are inherently tribal, and that’s not necessarily a bad thing. Having a healthy social identity can have an immensely positive impact on well-being. Collective narcissists, though, are often more focused on out-group prejudice than in-group loyalty. In its most extreme form, group narcissism can fuel political radicalism and potentially even violence. But in everyday settings, too, it can keep groups from listening to one another, and lead them to reduce people on the “other side” to one-dimensional characters. The best way to avoid that is by teaching people how to be proud of their group—without obsessing over recognition….(More)”.

Design for Social Innovation: Case Studies from Around the World


Book edited by Mariana Amatullo, Bryan Boyer, Jennifer May and Andrew Shea: “The United Nations, Australia Post, and governments in the UK, Finland, Taiwan, France, Brazil, and Israel are just a few of the organizations and groups utilizing design to drive social change. Grounded by a global survey in sectors as diverse as public health, urban planning, economic development, education, humanitarian response, cultural heritage, and civil rights, Design for Social Innovation captures these stories and more through 45 richly illustrated case studies from six continents.

From advocating to understanding and everything in between, these cases demonstrate how designers shape new products, services, and systems while transforming organizations and supporting individual growth.

How is this work similar or different around the world? How are designers building sustainable business practices with this work? Why are organizations investing in design capabilities? What evidence do we have of impact by design? Leading practitioners and educators, brought together in seven dynamic roundtable discussions, provide context to the case studies.

Design for Social Innovation is a must-have for professionals, organizations, and educators in design, philanthropy, social innovation, and entrepreneurship. This book marks the first attempt to define the contours of a global overview that showcases the cultural, economic, and organizational levers propelling design for social innovation forward today…(More)”

The Cambridge Handbook of Commons Research Innovations


Book edited by Sheila R. Foster and Chrystie F. Swiney: “The commons theory, first articulated by Elinor Ostrom, is increasingly used as a framework to understand and rethink the management and governance of many kinds of shared resources. These resources can include natural and digital properties, cultural goods, knowledge and intellectual property, and housing and urban infrastructure, among many others. In a world of increasing scarcity and demand – from individuals, states, and markets – it is imperative to understand how best to induce cooperation among users of these resources in ways that advance sustainability, affordability, equity, and justice. This volume reflects this multifaceted and multidisciplinary field from a variety of perspectives, offering new applications and extensions of the commons theory, which is as diverse as the scholars who study it and is still developing in exciting ways…(More)”.