Paper by Pranav Gupta and Anita Williams Woolley: “Human society faces increasingly complex problems that require coordinated collective action. Artificial intelligence (AI) holds the potential to bring together the knowledge and associated action needed to find solutions at scale. In order to unleash the potential of human and AI systems, we need to understand the core functions of collective intelligence. To this end, we describe a socio-cognitive architecture that conceptualizes how boundedly rational individuals coordinate their cognitive resources and diverse goals to accomplish joint action. Our transactive systems framework articulates the inter-member processes underlying the emergence of collective memory, attention, and reasoning, which are fundamental to intelligence in any system. Much like the cognitive architectures that have guided the development of artificial intelligence, our transactive systems framework holds the potential to be formalized in computational terms to deepen our understanding of collective intelligence and pinpoint roles that AI can play in enhancing it….(More)”
Data protection in the context of covid-19. A short (hi)story of tracing applications
Book edited by Elise Poillot, Gabriele Lenzini, Giorgio Resta, and Vincenzo Zeno-Zencovich: “The volume presents the results of a research project (named “Legafight”) funded by the Luxembourg Fond National de la Recherche in order to verify if and how digital tracing applications could be implemented in the Grand-Duchy in order to counter and abate the Covid-19 pandemic. This inevitably led to a deep comparative overview of the various existing models, starting from that of the European Union and those put into practice by Belgium, France, Germany, and Italy, with attention also to some Anglo-Saxon approaches (the UK and Australia). Not surprisingly, the main issue which had to be tackled was that of the protection of the personal data collected through the tracing applications, their use by public health authorities, and the trust placed in tracing procedures by citizens. Over the last 18 months tracing apps have registered a rise, a fall, and a sudden rebirth as media devoted not so much to collecting data as to distributing real-time information that should allow informed decisions and serve as repositories of health certifications…(More)”.
AI Generates Hypotheses Human Scientists Have Not Thought Of
Robin Blades in Scientific American: “Electric vehicles have the potential to substantially reduce carbon emissions, but car companies are running out of materials to make batteries. One crucial component, nickel, is projected to cause supply shortages as early as the end of this year. Scientists recently discovered four new materials that could potentially help—and what may be even more intriguing is how they found these materials: the researchers relied on artificial intelligence to pick out useful chemicals from a list of more than 300 options. And they are not the only humans turning to A.I. for scientific inspiration.
Creating hypotheses has long been a purely human domain. Now, though, scientists are beginning to ask machine learning to produce original insights. They are designing neural networks (a type of machine-learning setup with a structure inspired by the human brain) that suggest new hypotheses based on patterns the networks find in data instead of relying on human assumptions. Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases.
In the case of new battery materials, scientists pursuing such tasks have typically relied on database search tools, modeling and their own intuition about chemicals to pick out useful compounds. Instead, a team at the University of Liverpool in England used machine learning to streamline the creative process. The researchers developed a neural network that ranked chemical combinations by how likely they were to result in a useful new material. Then the scientists used these rankings to guide their experiments in the laboratory. They identified four promising candidates for battery materials without having to test everything on their list, saving them months of trial and error…(More)”.
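The screen-then-test loop described above can be sketched in purely illustrative form. Everything below is hypothetical: a simple linear surrogate fitted with least squares stands in for the team's neural network, and the feature vectors are random placeholders rather than real chemical descriptors.

```python
# Hypothetical sketch of a "rank candidates, test only the top few" loop.
# A least-squares linear surrogate stands in for the neural network; the
# features are random placeholders, not real chemical descriptors.
import numpy as np

rng = np.random.default_rng(0)
n_known, n_candidates, n_features = 200, 300, 8

# Materials already characterized in the lab (features + usefulness score).
X_known = rng.random((n_known, n_features))
y_known = X_known @ rng.random(n_features) + 0.1 * rng.standard_normal(n_known)

# Fit the surrogate model to the known materials.
w, *_ = np.linalg.lstsq(X_known, y_known, rcond=None)

# Score all untested candidate combinations and rank them.
X_candidates = rng.random((n_candidates, n_features))
scores = X_candidates @ w
top_k = np.argsort(scores)[::-1][:4]  # the four most promising candidates

print("Synthesize these candidates first:", top_k.tolist())
```

The point of the ranking is economy: the lab spends its months of synthesis time on the handful of highest-scoring candidates instead of exhaustively testing all 300.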
22 Questions to Assess Responsible Data for Children (RD4C)
An Audit Tool by The GovLab and UNICEF: “Around the world and across domains, institutions are using data to improve service delivery for children. Data for and about children can, however, pose risks of misuse, such as unauthorized access or data breaches, as well as missed use of data that could have improved children’s lives if harnessed effectively.
The RD4C Principles — Participatory; Professionally Accountable; People-Centric; Prevention of Harms Across the Data Life Cycle; Proportional; Protective of Children’s Rights; and Purpose-Driven — were developed by The GovLab and UNICEF to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence. These principles were developed to act as a north star, guiding practitioners toward more responsible data practices.
Today, The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to launch a new tool that aims to put the principles into practice. 22 Questions to Assess Responsible Data for Children (RD4C) is an audit tool to help stakeholders involved in the administration of data systems that handle data for and about children align their practices with the RD4C Principles.
The tool encourages users to reflect on their data handling practices and strategy by posing questions regarding:
- Why: the purpose and rationale for the data system;
- What: the data handled through the system;
- Who: the stakeholders involved in the system’s use, including data subjects;
- How: the presence of operations, policies, and procedures; and
- When and where: temporal and place-based considerations…(More)”.

Climate Change and AI: Recommendations for Government
Press Release: “A new report, developed by the Centre for AI & Climate and Climate Change AI for the Global Partnership on AI (GPAI), calls for governments to recognise the potential for artificial intelligence (AI) to accelerate the transition to net zero, and to put in place the support needed to advance AI-for-climate solutions. The report is being presented at COP26 today.
The report, Climate Change and AI: Recommendations for Government, highlights 48 specific recommendations for how governments can both support the application of AI to climate challenges and address the climate-related risks that AI poses.
The report was commissioned by GPAI, a partnership between 18 countries and the EU that brings together experts from across countries and sectors to help shape the development of AI.
AI is already being used to support climate action in a wide range of use cases, several of which the report highlights. These include:
- National Grid ESO, which has used AI to double the accuracy of its forecasts of UK electricity demand. Radically improving forecasts of electricity demand and renewable energy generation will be critical in enabling greater proportions of renewable energy on electricity grids.
- The UN Satellite Centre (UNOSAT), which has developed the FloodAI system that delivers high-frequency flood reports. FloodAI’s reports, which use a combination of satellite data and machine learning, have improved the response to climate-related disasters in Asia and Africa.
- Climate TRACE, a global coalition of organizations, which has radically improved the transparency and accuracy of emissions monitoring by leveraging AI algorithms and data from more than 300 satellites and 11,000 sensors.
The authors also detail critical bottlenecks that are impeding faster adoption. To address these, the report calls for governments to:
- Improve data ecosystems in sectors critical to the climate transition, including the development of digital twins in, for example, the energy sector.
- Increase support for research, innovation, and deployment through targeted funding, infrastructure, and improved market designs.
- Make climate change a central consideration in AI strategies to shape the responsible development of AI as a whole.
- Support greater international collaboration and capacity building to facilitate the development and governance of AI-for-climate solutions…(More)”.
Countries’ climate pledges built on flawed data
Article by Chris Mooney, Juliet Eilperin, Desmond Butler, John Muyskens, Anu Narayanswamy, and Naema Ahmed: “Across the world, many countries underreport their greenhouse gas emissions in their reports to the United Nations, a Washington Post investigation has found. An examination of 196 country reports reveals a giant gap between what nations declare their emissions to be versus the greenhouse gases they are sending into the atmosphere. The gap ranges from at least 8.5 billion to as high as 13.3 billion tons a year of underreported emissions — big enough to move the needle on how much the Earth will warm.
The plan to save the world from the worst of climate change is built on data. But the data the world is relying on is inaccurate.
“If we don’t know the state of emissions today, we don’t know whether we’re cutting emissions meaningfully and substantially,” said Rob Jackson, a professor at Stanford University and chair of the Global Carbon Project, a collaboration of hundreds of researchers. “The atmosphere ultimately is the truth. The atmosphere is what we care about. The concentration of methane and other greenhouse gases in the atmosphere is what’s affecting climate.”
At the low end, the gap is larger than the yearly emissions of the United States. At the high end, it approaches the emissions of China and comprises 23 percent of humanity’s total contribution to the planet’s warming, The Post found…
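The two high-end figures quoted above can be cross-checked with simple arithmetic; this back-of-the-envelope sketch uses only the numbers in the article:

```python
# Back-of-the-envelope check: if the 13.3-billion-ton high-end gap is
# 23 percent of humanity's total contribution, what total does that imply?
high_gap = 13.3   # billion tons per year (high end of the reporting gap)
share = 0.23      # the gap's share of the global total, per The Post

implied_total = high_gap / share
print(f"Implied global total: {implied_total:.1f} billion tons per year")
```

The implied total, roughly 58 billion tons a year, is the scale against which the U.S. and China comparisons in the article are made.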
A new generation of sophisticated satellites that can measure greenhouse gases are now orbiting Earth, and they can detect massive methane leaks. Data from the International Energy Agency (IEA) lists Russia as the world’s top oil and gas methane emitter, but that’s not what Russia reports to the United Nations. Its official numbers fall millions of tons shy of what independent scientific analyses show, a Post investigation found. Many oil and gas producers in the Persian Gulf region, such as the United Arab Emirates and Qatar, also report very small levels of oil and gas methane emissions that don’t line up with other scientific data sets.
“It’s hard to imagine how policymakers are going to pursue ambitious climate actions if they’re not getting the right data from national governments on how big the problem is,” said Glenn Hurowitz, chief executive of Mighty Earth, an environmental advocacy group….(More)”.
Why Are We Failing at AI Ethics?
Article by Anja Kaspersen and Wendell Wallach: “…Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have no or restricted digital access or their lack of digital literacy makes them ripe for exploitation.
Such vulnerable groups are often theoretically included in discussions, but not empowered to take a meaningful part in making decisions. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.
Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.
So, why hasn’t more been done? There are three main issues at play:
First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.
Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to comprehend when and if an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.
Or they focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.
Let’s be clear: every choice entails tradeoffs. “Ethics talk” is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight is also about addressing the considerations not accommodated by the options selected, which is essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.
The second major issue is that to date all the talk about ethics is simply that: talk.
We’ve yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives….
A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.
There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.
A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives….(More)”.
A Vision for the Future of Science Philanthropy
Article by Evan Michelson and Adam Falk: “If science is to accomplish all that society hopes it will in the years ahead, philanthropy will need to be an important contributor to those developments. It is therefore critical that philanthropic funders understand how to maximize science philanthropy’s contribution to the research enterprise. Given these stakes, what will science philanthropy need to get right in the coming years in order to have a positive impact on the scientific enterprise and to help move society toward greater collective well-being?
The answer, we argue, is that science philanthropies will increasingly need to serve a broader purpose. They certainly must continue to provide funding to promote new discoveries throughout the physical and social sciences. But they will also have to provide this support in a manner that takes account of the implications for society, shaping both the content of the research and the way it is pursued. To achieve this dual goal of positive scientific and societal impact, we identify four particular dimensions of the research enterprise that philanthropies will need to advance: seeding new fields of research, broadening participation in science, fostering new institutional practices, and deepening links between science and society. If funders attend assiduously to all these dimensions, we hope that when people look back 75 years from now, science philanthropy will have fully realized its extraordinary potential…(More)”.
How behavioral science could get people back into public libraries
Article by Talib Visram: “In October, New York City’s three public library systems announced they would permanently drop fines on late book returns. Composed of the Brooklyn, Queens, and New York public libraries, the City’s system is the largest in the country to remove fines. It’s a reversal of a long-held policy intended to ensure shelves stayed stacked, but an outdated one that many major cities, including Chicago, San Francisco, and Dallas, had already scrapped without any discernible downsides. Though a source of revenue—in 2013, for instance, Brooklyn Public Library (BPL) racked up $1.9 million in late fees—the fee system also created a barrier to library access that disproportionately touched the low-income communities that most need the resources.
That’s just one thing Brooklyn’s library system has done to try to make its services more equitable. In 2017, well before the move to eliminate fines, BPL on its own embarked on a partnership with Nudge, a behavioral science lab at the University of West Virginia, to find ways to reduce barriers to access and increase engagement with the book collections. In the first-of-its-kind collaboration, the two tested behavioral science interventions via three separate pilots, all of which led to the library’s long-term implementation of successful techniques. Those involved in the project say the steps can be translated to other library systems, though it takes serious investment of time and resources….(More)”.
Design for Social Innovation: Case Studies from Around the World
Book edited by Mariana Amatullo, Bryan Boyer, Jennifer May and Andrew Shea: “The United Nations, Australia Post, and governments in the UK, Finland, Taiwan, France, Brazil, and Israel are just a few of the organizations and groups utilizing design to drive social change. Grounded by a global survey in sectors as diverse as public health, urban planning, economic development, education, humanitarian response, cultural heritage, and civil rights, Design for Social Innovation captures these stories and more through 45 richly illustrated case studies from six continents.
From advocating to understanding and everything in between, these cases demonstrate how designers shape new products, services, and systems while transforming organizations and supporting individual growth.
How is this work similar or different around the world? How are designers building sustainable business practices with this work? Why are organizations investing in design capabilities? What evidence do we have of impact by design? Leading practitioners and educators, brought together in seven dynamic roundtable discussions, provide context to the case studies.
Design for Social Innovation is a must-have for professionals, organizations, and educators in design, philanthropy, social innovation, and entrepreneurship. This book marks the first attempt to define the contours of a global overview that showcases the cultural, economic, and organizational levers propelling design for social innovation forward today…(More)”