Responsible Data for Children Goes Polyglot: New Translations of Principles & Resources Available


Responsible Data for Children Blog: “In 2018, UNICEF and The GovLab launched the Responsible Data for Children (RD4C) initiative with the aim of supporting organisations and practitioners in ensuring that the interest of children is put at the centre of any work involving data for and about them.

Since its inception, the RD4C initiative has aimed to be field-oriented, driven by the needs of both children and practitioners across sectors and contexts. It has done so by ensuring that actors from the data responsibility sphere are informed and engaged on the RD4C work.

We want them to know what responsible data for and about children entails, why it is important, and how they can realize it in their own work.

In this spirit, the RD4C initiative has started translating its resources into different languages. We would like anyone willing to enhance their responsible data handling practices for and about children to be equipped with resources they can understand. As a global effort, we want to guarantee that anyone willing to share their expertise and contribute is given the opportunity to do so.

Importantly, we would like children around the world—including the most marginalised and vulnerable groups—to be aware of what they can expect from organisations handling data for and about them and to have the means to demand and enforce their rights.

Last month, we released the RD4C Video, which is now available in Arabic, French, and Spanish. Soon, the rest of the RD4C resources, such as our principles, tools, and case studies, will be translated as well.”

The Privacy Elasticity of Behavior: Conceptualization and Application


Paper by Inbal Dekel, Rachel Cummings, Ori Heffetz & Katrina Ligett: “We propose and initiate the study of privacy elasticity—the responsiveness of economic variables to small changes in the level of privacy given to participants in an economic system. Individuals rarely experience either full privacy or a complete lack of privacy; we propose to use differential privacy—a computer-science theory increasingly adopted by industry and government—as a standardized means of quantifying continuous privacy changes. The resulting privacy measure implies a privacy-elasticity notion that is portable and comparable across contexts. We demonstrate the feasibility of this approach by estimating the privacy elasticity of public-good contributions in a lab experiment…(More)”.
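
For readers unfamiliar with the formalism, here is a minimal sketch of the two standard definitions the abstract combines; the notation below is the textbook one and may differ from the paper's own:

```latex
% Textbook definitions (notation may differ from the paper's):
% a mechanism M is \epsilon-differentially private if, for all neighboring
% datasets D, D' and every outcome set S,
\[ \Pr[M(D) \in S] \le e^{\epsilon} \, \Pr[M(D') \in S]. \]
% Smaller \epsilon means stronger privacy. A privacy elasticity of an
% economic outcome y (e.g., public-good contributions) with respect to
% \epsilon is then the familiar ratio of relative changes:
\[ \eta_{y,\epsilon} = \frac{\partial y / y}{\partial \epsilon / \epsilon}
                     = \frac{\partial y}{\partial \epsilon} \cdot \frac{\epsilon}{y}. \]
```

Because differential privacy gives a continuous, context-independent privacy dial ε, an elasticity measured on it can be compared across experiments and settings, which is what makes the notion portable.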

Responsible by Design – Principles for the ethical use of behavioural science in government


OECD Report: “The use of behavioural insights (BI) in public policy has grown over the last decade, with the largest increase of new behavioural teams emerging in the last five years. More and more governments are turning to behavioural science – a multidisciplinary approach to policy making encompassing lessons from psychology, cognitive science, neuroscience, anthropology, economics and more. There are a wide variety of frameworks and resources currently available, such as the OECD BASIC framework, designed to help BI practitioners and government officials infuse behavioural science throughout the policy cycle.

Despite the availability of such frameworks, there are fewer resources available with the primary purpose of safeguarding the responsible use of behavioural science in government. Oftentimes, teams are left to establish their own ethical standards and practices, which has resulted in an uncoordinated mosaic of procedures guiding the international community interested in upholding ethical behavioural practices. Until now, few attempts have been made to standardize ethical principles for behavioural science in public policy, and to concisely gather and present international best practices.

In light of this, we developed the first-of-its-kind Good Practice Principles for the Ethical Use of Behavioural Science in Public Policy to advance the responsible use of BI in government…(More)”.

How Does the Public Sector Identify Problems It Tries to Solve with AI?


Article by Maia Levy Daniel: “A correct analysis of the implementation of AI in a particular field or process needs to start by identifying if there actually is a problem to be solved. For instance, in the case of job matching, the problem would be related to the levels of unemployment in the country, and presumably addressing imbalances in specific fields. Then, would AI be the best way to address this specific problem? Are there any alternatives? Is there any evidence that shows that AI would be a better tool? Building AI systems is expensive and the funds being used by the public sector come from taxpayers. Are there any alternatives that could be less expensive? 

Moreover, governments must understand from the outset that these systems could involve potential risks for civil and human rights. Thus, it should be justified in detail why the government might be choosing a more expensive or riskier option. A potential guide to follow is the one developed by the UK’s Office for Artificial Intelligence on how to use AI in the public sector. This guide includes a section specifically devoted to how to assess whether AI is the right solution to a problem.

AI is such a buzzword that it has become appealing for governments to use as a solution to any public problem, without even starting to look for available alternatives. Although automation could accelerate decision-making processes, speed should not be prioritized over quality or over human rights protection. As Daniel Susser argues in his recent paper, the speed at which automated decisions are reached has normative implications. Incorporating digital technologies into decision-making processes impacts the temporal norms and values that govern them, disrupting prior norms, re-calibrating balanced trade-offs, or displacing automation’s costs. As Susser suggests, speed is not necessarily bad; however, “using computational tools to speed up (or slow down) certain decisions is not a ‘neutral’ adjustment without further explanations.”

So, conducting a thorough diagnosis including the identification of the specific problem to address and the best way to address it is key to protecting citizens’ rights. And this is why transparency must be mandatory. As citizens, we have a right to know how these processes are being conceived and designed, the reasons governments choose to implement technologies, as well as the risks involved.

In addition, maybe a good way to ultimately approach the systemic problem and change the structure of incentives is to stop using the pretentious terms “artificial intelligence”, “AI”, and “machine learning”, as Emily Tucker, the Executive Director of the Center on Privacy & Technology at Georgetown Law Center, announced the Center would do. As Tucker explained, these terms are confusing for the average person, and the way they are typically employed makes us think it’s a machine rather than human beings making the decisions. By removing marketing terms from the equation and giving more visibility to the humans involved, these technologies may not ultimately seem so exotic…(More)”.

Moral Expansiveness Around the World: The Role of Societal Factors Across 36 Countries


Paper by Kelly Kirkland et al: “What are the things that we think matter morally, and how do societal factors influence this? To date, research has explored several individual-level and historical factors that influence the size of our ‘moral circles.’ There has, however, been less attention focused on which societal factors play a role. We present the first multi-national exploration of moral expansiveness—that is, the size of people’s moral circles across countries. We found low generalized trust, greater perceptions of a breakdown in the social fabric of society, and greater perceived economic inequality were associated with smaller moral circles. Generalized trust also helped explain the effects of perceived inequality on lower levels of moral inclusiveness. Other inequality indicators (i.e., Gini coefficients) were, however, unrelated to moral expansiveness. These findings suggest that societal factors, especially those associated with generalized trust, may influence the size of our moral circles…(More)”.
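
As a quick, self-contained sketch of the standard computation behind the Gini coefficients the paper contrasts with perceived inequality (illustrative only; the study's country-level data is not reproduced here):

```python
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient of non-negative incomes: 0 = perfect equality,
    values approaching 1 = extreme inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cumx = np.cumsum(x)
    # Closed form over the sorted cumulative sum, equivalent to the
    # mean-absolute-difference definition of the Gini index.
    return (n + 1 - 2 * np.sum(cumx) / cumx[-1]) / n

# Toy example: one very high earner pushes the coefficient toward 1.
print(round(gini([1, 1, 2, 3, 10, 40]), 3))  # ~0.652
```

The paper's point is that this objective measure did not track moral expansiveness, whereas subjective perceptions of inequality did.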

Kids Included: Enabling meaningful child participation within companies in a digital era


Report by KidsKnowBest and The LEGO Group: “As the impact of digital technology on children’s lives continues to grow, there are mounting calls for businesses that engage with children to deliver meaningful child participation throughout the design and development of their operations. Engaging children in how you take decisions and in how you design your digital products and services can, if done responsibly, create substantial value for both businesses and children. However, it also presents a broad number of challenges that businesses will need to address.

This report is a practical tool intended for businesses that are embarking on a journey towards meaningful child participation and encountering the challenges that come with it. It brings together expert voices from across sectors, including those of children and young people, to reflect on the following questions:

  1. What is meaningful child participation?
  2. Why is it important for children and businesses in relation to the digital environment?
  3. What are the key challenges to achieving this?
  4. How can businesses overcome these challenges?

While the report’s contributors passionately believe in the importance of meaningful child participation, they also recognise that nobody has all the answers. As such, this report is not an exhaustive resource, and is intended to be used alongside the many other valuable resources available to businesses.
However, we do hope it will inspire and enable businesses to move towards a future where children’s beliefs and perspectives are central to the design and development of the digital world. Children are asking to be heard. It’s time for businesses to sit up, listen, and learn…(More)”.

An anthology of warm data


Intro to anthology by Nora Bateson: “…The difficulty is that the studied living system is rarely put back into its multi contextual life-ing where it is in constant change. What would information look like that could change and shift in the field? The vitality of any living system is in the relationships between the parts. The relational vitality is constantly changing.

Warm Data is information that is alive within the transcontextual relating of a living system.

We may find it convenient to ignore this world of slippery, shifty information and choose instead that information that can be handled and pinned down. Still, the swirly stuff is underlying absolutely everything that is known as “action,” “decision,” or “learning.” Warm Data is necessary if for no other reason than a reminder that whatever information is currently available in a living process, “it is not just that and nothing more.” There are more contexts constantly shifting all the time. Think of a family, how it stays the same, and how it changes over time—or a city, pond, or a religion. To maintain any coherence, those systems must continually reshape and do so in relation to one another…(More)”.

Mapping Urban Trees Across North America with the Auto Arborist Dataset


Google Blog: “Over four billion people live in cities around the globe, and while most people interact daily with others — at the grocery store, on public transit, at work — they may take for granted their frequent interactions with the diverse plants and animals that comprise fragile urban ecosystems. Trees in cities, called urban forests, provide critical benefits for public health and wellbeing and will prove integral to urban climate adaptation. They filter air and water, capture stormwater runoff, sequester atmospheric carbon dioxide, and limit erosion and drought. Shade from urban trees reduces energy-expensive cooling costs and mitigates urban heat islands. In the US alone, urban forests cover 127M acres and produce ecosystem services valued at $18 billion. But as the climate changes, these ecosystems are increasingly under threat.

Urban forest monitoring — measuring the size, health, and species distribution of trees in cities over time — allows researchers and policymakers to (1) quantify ecosystem services, including air quality improvement, carbon sequestration, and benefits to public health; (2) track damage from extreme weather events; and (3) target planting to improve robustness to climate change, disease and infestation.

However, many cities lack even basic data about the location and species of their trees. …

Today we introduce the Auto Arborist Dataset, a multiview urban tree classification dataset that, at ~2.6 million trees and >320 genera, is two orders of magnitude larger than those in prior work. To build the dataset, we pulled from public tree censuses from 23 North American cities and merged these records with Street View and overhead RGB imagery. As the first urban forest dataset to cover multiple cities, we analyze in detail how forest models can generalize with respect to geographic distribution shifts, crucial to building systems that scale. We are releasing all 2.6M tree records publicly, along with aerial and ground-level imagery for 1M trees…(More)”
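
To make the geographic-generalization point concrete, here is a minimal sketch of the kind of city-level split such evaluations rely on. The field names and helper below are hypothetical illustrations, not the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TreeRecord:
    # Hypothetical fields sketching a census record merged with imagery;
    # the released dataset's actual schema may differ.
    city: str               # source tree census, e.g. "Seattle"
    genus: str              # classification label
    street_view_path: str   # ground-level image crop
    aerial_path: str        # overhead RGB image crop

def geographic_split(records, held_out_cities):
    """Hold out whole cities so test trees come from unseen geography."""
    train = [r for r in records if r.city not in held_out_cities]
    test = [r for r in records if r.city in held_out_cities]
    return train, test
```

Splitting by city, rather than sampling trees at random, means test accuracy measures transfer under geographic distribution shift instead of rewarding memorization of local conditions — the property a multi-city dataset makes it possible to study.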

What AI Can Tell Us About Intelligence


Essay by Yann LeCun and Jacob Browning: “If there is one constant in the field of artificial intelligence it is exaggeration: there is always breathless hype and scornful naysaying. It is helpful to occasionally take stock of where we stand.

The dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data. Since their inception, critics have prematurely argued that neural networks had run into an insurmountable wall — and every time, it proved a temporary hurdle. In the 1960s, they could not solve non-linear functions. That changed in the 1980s with backpropagation, but the new wall was how difficult it was to train the systems. The 1990s saw the rise of simplifying programs and standardized architectures which made training more reliable, but the new problem was the lack of training data and computing power.

In 2012, when contemporary graphics cards could be trained on the massive ImageNet dataset, DL went mainstream, handily besting all competitors. But then critics spied a new problem: DL required too much hand-labelled data for training. The last few years have rendered this criticism moot, as self-supervised learning has resulted in incredibly impressive systems, such as GPT-3, which do not require labeled data.
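
As a toy illustration of why self-supervised learning removes the labeling bottleneck (a simplified sketch, not GPT-3's actual training pipeline): the training targets are carved out of the raw data itself.

```python
def next_token_pairs(tokens):
    """Turn an unlabeled token sequence into (context, target) pairs."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

# No hand labels required: the corpus supervises itself.
corpus = "the cat sat on the mat".split()
for context, target in next_token_pairs(corpus):
    print(context, "->", target)
```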

Today’s seemingly insurmountable wall is symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.). Gary Marcus, author of “The Algebraic Mind” and co-author (with Ernie Davis) of “Rebooting AI,” recently argued that DL is incapable of further progress because neural networks struggle with this kind of symbol manipulation. By contrast, many DL researchers are convinced that DL is already engaging in symbolic reasoning and will continue to improve at it.
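
A worked sketch of the rule-following the essay describes, using column addition rather than multiplication for brevity; the function is ours, written to make each rule explicit:

```python
def add_by_columns(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings,
    column by column from the right, carrying leftward."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))  # write this column's digit
        carry = total // 10             # carry the extra value leftward
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_columns("478", "256"))  # -> 734
```

Each step is a discrete symbol rewrite governed by a strict rule — exactly the kind of operation the debate asks whether neural networks can learn rather than have hard-coded.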

At the heart of this debate are two different visions of the role of symbols in intelligence, both biological and mechanical: one holds that symbolic reasoning must be hard-coded from the outset and the other holds it can be learned through experience, by machines and humans alike. As such, the stakes are not just about the most practical way forward, but also how we should understand human intelligence — and, thus, how we should pursue human-level artificial intelligence…(More)”.

Non-human humanitarianism: when ‘AI for good’ can be harmful


Paper by Mirca Madianou: “Artificial intelligence (AI) applications have been introduced in humanitarian operations in order to help with the significant challenges the sector is facing. This article focuses on chatbots, which have been proposed as an efficient method to improve communication with, and accountability to, affected communities. Chatbots, together with other humanitarian AI applications such as biometrics, satellite imaging, predictive modelling and data visualisations, are often understood as part of the wider phenomenon of ‘AI for social good’. The article develops a decolonial critique of humanitarianism and critical algorithm studies which focuses on the power asymmetries underpinning both humanitarianism and AI. The article asks whether chatbots, as exemplars of ‘AI for good’, reproduce inequalities in the global context. Drawing on a mixed methods study that includes interviews with seven groups of stakeholders, the analysis observes that humanitarian chatbots do not fulfil claims such as ‘intelligence’. Yet AI applications still have powerful consequences. Apart from the risks associated with misinformation and data safeguarding, chatbots reduce communication to its barest instrumental forms which creates disconnects between affected communities and aid agencies. This disconnect is compounded by the extraction of value from data and experimentation with untested technologies. By reflecting the values of their designers and by asserting Eurocentric values in their programmed interactions, chatbots reproduce the coloniality of power. The article concludes that ‘AI for good’ is an ‘enchantment of technology’ that reworks the colonial legacies of humanitarianism whilst also occluding the power dynamics at play…(More)”.