22 Questions to Assess Responsible Data for Children (RD4C)


An Audit Tool by The GovLab and UNICEF: “Around the world and across domains, institutions are using data to improve service delivery for children. Data for and about children can, however, pose risks of misuse, such as unauthorized access or data breaches, as well as missed use of data that could have improved children’s lives if harnessed effectively. 

The RD4C Principles — Participatory; Professionally Accountable; People-Centric; Prevention of Harms Across the Data Life Cycle; Proportional; Protective of Children’s Rights; and Purpose-Driven — were developed by the GovLab and UNICEF to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence. These principles were developed to act as a north star, guiding practitioners toward more responsible data practices.

Today, The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to launch a new tool that aims to put the principles into practice. 22 Questions to Assess Responsible Data for Children (RD4C) is an audit tool to help stakeholders involved in the administration of data systems that handle data for and about children align their practices with the RD4C Principles. 

The tool encourages users to reflect on their data handling practices and strategy by posing questions regarding: 

  • Why: the purpose and rationale for the data system;
  • What: the data handled through the system; 
  • Who: the stakeholders involved in the system’s use, including data subjects;
  • How: the presence of operations, policies, and procedures; and 
  • When and where: temporal and place-based considerations….(More)”.
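The five question categories above lend themselves to a simple checklist structure. The sketch below is purely illustrative — the category names and descriptions come from the list above, but the dictionary layout and the `audit_prompts` helper are hypothetical, and these are not the official 22 questions:

```python
# Hypothetical sketch of the RD4C audit tool's five question categories.
# Category names and focus areas come from the tool's description; the
# structure and helper function are illustrative only.
RD4C_CATEGORIES = {
    "Why": "the purpose and rationale for the data system",
    "What": "the data handled through the system",
    "Who": "the stakeholders involved in the system's use, including data subjects",
    "How": "the presence of operations, policies, and procedures",
    "When and where": "temporal and place-based considerations",
}

def audit_prompts(categories):
    """Turn each category into a reflection prompt for practitioners."""
    return [f"{name}: {focus}" for name, focus in categories.items()]

for prompt in audit_prompts(RD4C_CATEGORIES):
    print(prompt)
```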

Climate Change and AI: Recommendations for Government


Press Release: “A new report, developed by the Centre for AI & Climate and Climate Change AI for the Global Partnership on AI (GPAI), calls for governments to recognise the potential for artificial intelligence (AI) to accelerate the transition to net zero, and to put in place the support needed to advance AI-for-climate solutions. The report is being presented at COP26 today.

The report, Climate Change and AI: Recommendations for Government, highlights 48 specific recommendations for how governments can both support the application of AI to climate challenges and address the climate-related risks that AI poses.

The report was commissioned by the Global Partnership on AI (GPAI), a partnership between 18 countries and the EU that brings together experts from across countries and sectors to help shape the development of AI.

AI is already being used to support climate action in a wide range of use cases, several of which the report highlights. These include:

  • National Grid ESO, which has used AI to double the accuracy of its forecasts of UK electricity demand. Radically improving forecasts of electricity demand and renewable energy generation will be critical in enabling greater proportions of renewable energy on electricity grids.
  • The UN Satellite Centre (UNOSAT), which has developed the FloodAI system that delivers high-frequency flood reports. FloodAI’s reports, which use a combination of satellite data and machine learning, have improved the response to climate-related disasters in Asia and Africa.
  • Climate TRACE, a global coalition of organizations, which has radically improved the transparency and accuracy of emissions monitoring by leveraging AI algorithms and data from more than 300 satellites and 11,000 sensors.

The authors also detail critical bottlenecks that are impeding faster adoption. To address these, the report calls for governments to:

  • Improve data ecosystems in sectors critical to the climate transition, including the development of digital twins in sectors such as energy.
  • Increase support for research, innovation, and deployment through targeted funding, infrastructure, and improved market designs.
  • Make climate change a central consideration in AI strategies to shape the responsible development of AI as a whole.
  • Support greater international collaboration and capacity building to facilitate the development and governance of AI-for-climate solutions….(More)”.

Countries’ climate pledges built on flawed data


Article by Chris Mooney, Juliet Eilperin, Desmond Butler, John Muyskens, Anu Narayanswamy, and Naema Ahmed: “Across the world, many countries underreport their greenhouse gas emissions in their reports to the United Nations, a Washington Post investigation has found. An examination of 196 country reports reveals a giant gap between what nations declare their emissions to be versus the greenhouse gases they are sending into the atmosphere. The gap ranges from at least 8.5 billion to as high as 13.3 billion tons a year of underreported emissions — big enough to move the needle on how much the Earth will warm.

The plan to save the world from the worst of climate change is built on data. But the data the world is relying on is inaccurate.

“If we don’t know the state of emissions today, we don’t know whether we’re cutting emissions meaningfully and substantially,” said Rob Jackson, a professor at Stanford University and chair of the Global Carbon Project, a collaboration of hundreds of researchers. “The atmosphere ultimately is the truth. The atmosphere is what we care about. The concentration of methane and other greenhouse gases in the atmosphere is what’s affecting climate.”

At the low end, the gap is larger than the yearly emissions of the United States. At the high end, it approaches the emissions of China and comprises 23 percent of humanity’s total contribution to the planet’s warming, The Post found…
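The arithmetic behind these comparisons can be checked directly. Note that the global total below is implied by the article's own figures (13.3 billion tons is said to be 23 percent of humanity's total), while the US emissions figure of roughly 6.6 billion tons CO2-equivalent per year is an outside assumption, not a number from the article:

```python
# Back-of-envelope check of the reported emissions gap figures.
gap_low, gap_high = 8.5e9, 13.3e9   # tons/year of underreported emissions

# If the high end is 23% of humanity's total contribution, the implied
# global total is:
implied_global_total = gap_high / 0.23
print(f"Implied global total: {implied_global_total / 1e9:.1f} billion tons")  # ~57.8

# Assumed US annual GHG emissions (~6.6 billion tons CO2e) -- an outside
# figure, not stated in the article.
us_emissions = 6.6e9
assert gap_low > us_emissions  # even the low end exceeds yearly US emissions
```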

A new generation of sophisticated satellites that can measure greenhouse gases is now orbiting Earth, and these instruments can detect massive methane leaks. Data from the International Energy Agency (IEA) lists Russia as the world’s top oil and gas methane emitter, but that’s not what Russia reports to the United Nations. Its official numbers fall millions of tons shy of what independent scientific analyses show, a Post investigation found. Many oil and gas producers in the Persian Gulf region, such as the United Arab Emirates and Qatar, also report very small levels of oil and gas methane emissions that don’t line up with other scientific data sets.

“It’s hard to imagine how policymakers are going to pursue ambitious climate actions if they’re not getting the right data from national governments on how big the problem is,” said Glenn Hurowitz, chief executive of Mighty Earth, an environmental advocacy group….(More)”.

Why Are We Failing at AI Ethics?


Article by Anja Kaspersen and Wendell Wallach: “…Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have no or restricted digital access or because their lack of digital literacy makes them ripe for exploitation.

Such vulnerable groups are often theoretically included in discussions, but not empowered to take a meaningful part in making decisions. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.

Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.

So, why hasn’t more been done? There are three main issues at play: 

First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.

Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to ask whether an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.

Or they focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

Let’s be clear: every choice entails tradeoffs. “Ethics talk” is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight is also about addressing the considerations not accommodated by the options selected, which is essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.

The second major issue is that to date all the talk about ethics is simply that: talk. 

We’ve yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives….

A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.

There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.

A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives….(More)”.

A Vision for the Future of Science Philanthropy


Article by Evan Michelson and Adam Falk: “If science is to accomplish all that society hopes it will in the years ahead, philanthropy will need to be an important contributor to those developments. It is therefore critical that philanthropic funders understand how to maximize science philanthropy’s contribution to the research enterprise. Given these stakes, what will science philanthropy need to get right in the coming years in order to have a positive impact on the scientific enterprise and to help move society toward greater collective well-being?

The answer, we argue, is that science philanthropies will increasingly need to serve a broader purpose. They certainly must continue to provide funding to promote new discoveries throughout the physical and social sciences. But they will also have to provide this support in a manner that takes account of the implications for society, shaping both the content of the research and the way it is pursued. To achieve this dual goal of positive scientific and societal impact, we identify four particular dimensions of the research enterprise that philanthropies will need to advance: seeding new fields of research, broadening participation in science, fostering new institutional practices, and deepening links between science and society. If funders attend assiduously to all these dimensions, we hope that when people look back 75 years from now, science philanthropy will have fully realized its extraordinary potential…(More)”.

How behavioral science could get people back into public libraries


Article by Talib Visram: “In October, New York City’s three public library systems announced they would permanently drop fines on late book returns. Comprising the Brooklyn, Queens, and New York public libraries, the city’s system is the largest in the country to remove fines. It’s a reversal of a long-held policy intended to ensure shelves stayed stacked, but an outdated one that many major cities, including Chicago, San Francisco, and Dallas, had already scrapped without any discernible downsides. Though a source of revenue—in 2013, for instance, Brooklyn Public Library (BPL) racked up $1.9 million in late fees—the fee system also created a barrier to library access that disproportionately affected the low-income communities that most need the resources.

That’s just one thing Brooklyn’s library system has done to try to make its services more equitable. In 2017, well before the move to eliminate fines, BPL on its own embarked on a partnership with Nudge, a behavioral science lab at the University of West Virginia, to find ways to reduce barriers to access and increase engagement with the book collections. In the first-of-its-kind collaboration, the two tested behavioral science interventions via three separate pilots, all of which led to the library’s long-term implementation of successful techniques. Those involved in the project say the steps can be translated to other library systems, though it takes serious investment of time and resources….(More)”.

Not all data are created equal - Data sharing and privacy


Paper by Michiel Bijlsma, Carin van der Cruijsen and Nicole Jonker: “The COVID-19 pandemic has increased our online presence and unleashed a new discussion on sharing sensitive personal data. Upcoming European legislation will facilitate data sharing in several areas, following the lead of the revised payments directive (PSD2), which enables payments data sharing with third parties. However, little is known about what drives consumers’ preferences with different types of data, as preferences may differ according to the type of data, type of usage or type of firm using the data.

Using a discrete-choice survey approach among a representative group of Dutch consumers, we find that, alongside health data, people are most hesitant to share their financial data on payments, wealth and pensions, compared with other types of consumer data. Second, consumers are especially cautious about sharing their data when the data are not used anonymously. Third, consumers are more hesitant to share their data with BigTechs, webshops and insurers than they are with banks. Fourth, a financial reward can trigger data sharing by consumers. Last, we show that attitudes towards data usage depend on personal characteristics, consumers’ digital skills, online behaviour and their trust in the firms using the data…(More)”
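The discrete-choice method behind these findings can be illustrated with a minimal multinomial-logit sketch. This is a generic illustration of the technique, not the paper's actual model: the recipient types echo the excerpt above, but the utility coefficients are invented for illustration:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(j) = exp(V_j) / sum_k exp(V_k)."""
    m = max(utilities)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities of sharing payments data with three
# recipient types (coefficients are made up; higher trust -> higher utility).
utilities = {
    "bank": 0.5,
    "insurer": -0.3,
    "bigtech": -0.8,
}
probs = dict(zip(utilities, choice_probabilities(list(utilities.values()))))
for firm, p in probs.items():
    print(f"{firm}: {p:.2f}")
```

In an actual discrete-choice survey, these coefficients would be estimated from respondents' repeated choices between alternatives, rather than assumed.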

Design for Social Innovation: Case Studies from Around the World


Book edited by Mariana Amatullo, Bryan Boyer, Jennifer May and Andrew Shea: “The United Nations, Australia Post, and governments in the UK, Finland, Taiwan, France, Brazil, and Israel are just a few of the organizations and groups utilizing design to drive social change. Grounded by a global survey in sectors as diverse as public health, urban planning, economic development, education, humanitarian response, cultural heritage, and civil rights, Design for Social Innovation captures these stories and more through 45 richly illustrated case studies from six continents.

From advocating to understanding and everything in between, these cases demonstrate how designers shape new products, services, and systems while transforming organizations and supporting individual growth.

How is this work similar or different around the world? How are designers building sustainable business practices with this work? Why are organizations investing in design capabilities? What evidence do we have of impact by design? Leading practitioners and educators, brought together in seven dynamic roundtable discussions, provide context to the case studies.

Design for Social Innovation is a must-have for professionals, organizations, and educators in design, philanthropy, social innovation, and entrepreneurship. This book marks the first attempt to define the contours of a global overview that showcases the cultural, economic, and organizational levers propelling design for social innovation forward today…(More)”

Do Awards Incentivize Non-Winners to Work Harder on CSR?


Article by Jiangyan Li, Juelin Yin, Wei Shi, and Xiwei Yi: “As corporate lists and awards that rank and recognize firms for superior social reputation have proliferated in recent years, the field of CSR is replete with various types of awards given to firms or CEOs, such as Fortune’s “Most Admired Companies” rankings and “Best 100 Companies to Work For” lists. Such awards serve both to reward firms and to incentivize them to become more dedicated to CSR. Prior research has primarily focused on the effects of awards on award-winning firms; however, the effectiveness and implications of such awards as incentives for non-winning firms remain understudied. In our article “Keeping up with the Joneses: Role of CSR Awards in Incentivizing Non-Winners’ CSR,” published in Business & Society, we therefore ask whether such CSR awards can incentivize non-winning firms to catch up with their winning competitors.

Drawing on the awareness-motivation-capability (AMC) framework developed in the competitive dynamics literature, we use a sample of Chinese listed firms from 2009 to 2015 to investigate how competitors’ CSR award wins influence focal firms’ CSR. The empirical results show that non-winning firms indeed improve their CSR after their competitors have won CSR awards. However, non-winning firms’ improvement in CSR varies across scenarios. For instance, media exposure can play an important informational role in reducing information asymmetries and inducing competitive actions among competitors; therefore, non-winning firms’ improvement in CSR is more salient when award-winning firms are more visible in the media. Meanwhile, when CSR award winners perform better financially, non-winners are more motivated to respond to their competitors’ wins. Further, firms with a higher level of prior CSR are more capable of improving their CSR and are therefore more likely to respond to their competitors’ wins…(More)”.

We need to talk about techie tunnel vision


Article by Gillian Tett: “Last year, the powerful US data company Palantir filed documents for an initial public offering. Included was a remarkable letter to investors from Alex Karp, the CEO, that is worth remembering now.

“Our society has effectively outsourced the building of software that makes our world possible to a small group of engineers in an isolated corner of the country,” he wrote. “The question is whether we also want to outsource the adjudication of some of the most consequential moral and philosophical questions of our time.”

Karp added, “The engineering elite in Silicon Valley may know more than most about building software. But they do not know more about how society should be organized or what justice requires.” To put it more bluntly, techies might be brilliant and clever at what they do, but that doesn’t make them qualified to organise our lives. It was a striking statement from someone who is himself an ultra techie and whose company’s extensive military and intelligence links have sparked controversy.

The good news is that people in his position are finally prepared to talk about it. The even better news is that there are experiments under way to combat techie tunnel vision. In Silicon Valley, for instance, Big Tech companies are hiring social scientists. Other innovation hubs show promising signs too. In Canberra, Genevieve Bell, a former vice-president at Intel, has launched a blended social and computer science AI institute. These initiatives aim to blend AI with what I call “anthropological intelligence” — a second type of “AI” that provides a sense of social context.

The bad news is that such initiatives remain modest, and there is still extreme information asymmetry between the engineers and everyone else. What is needed is an army of cultural translators who will fight our tendency to mentally outsource the issues to engineering elites. Maybe tech innovators such as Karp and Schmidt could use some of their vast wealth to fund this….(More)”.