The effects of AI on the working lives of women


Report by Clementine Collett, Gina Neff and Livia Gouvea: “Globally, studies show that women in the labor force are paid less, hold fewer senior positions and participate less in science, technology, engineering and mathematics (STEM) fields. A 2019 UNESCO report found that women represent only 29% of science R&D positions globally and are already 25% less likely than men to know how to leverage digital technology for basic uses.

As the use and development of Artificial Intelligence (AI) continues to mature, it's time to ask: What will tomorrow's labor market look like for women? Are we effectively harnessing the power of AI to narrow gender equality gaps, or are we letting these gaps persist or, even worse, widen?

This collaboration between UNESCO, the Inter-American Development Bank (IDB) and the Organisation for Economic Co-operation and Development (OECD) examines the effects of the use of AI on the working lives of women. By closely following the major stages of the workforce lifecycle, from job requirements to hiring to career progression and upskilling within the workplace, this joint report is a thorough introduction to issues related to gender and AI and hopes to foster important conversations about women's equality in the future of work…(More)"

An intro to AI, made for students


Reena Jana at Google: “Adorable, operatic blobs. A global, online guessing game. Scribbles that transform into works of art. These may not sound like they’re part of a curriculum, but learning the basics of how artificial intelligence (AI) works doesn’t have to be complicated, super-technical or boring.

To celebrate Digital Learning Day, we’re releasing a new lesson from Applied Digital Skills, Google’s free, online, video-based curriculum (and part of the larger Grow with Google initiative). “Discover AI in Daily Life” was designed with middle and high school students in mind, and dives into how AI is built, and how it helps people every day.

AI for anyone — and everyone

“Twenty or 30 years ago, students might have learned basic typing skills in school,” says Dr. Patrick Gage Kelley, a Google Trust and Safety user experience researcher who co-created (and narrates) the “Discover AI in Daily Life” lesson. “Today, ‘AI literacy’ is a key skill. It’s important that students everywhere, from all backgrounds, are given the opportunity to learn about AI.”

“Discover AI in Daily Life” begins with the basics. You’ll find simple, non-technical explanations of how a machine can “learn” from patterns in data, and why it’s important to train AI responsibly and avoid unfair bias.

First-hand experiences with AI

“By encouraging students to engage directly with everyday tools and experiment with them, they get a first-hand experience of the potential uses and limitations of AI,” says Dr. Annica Voneche, the lesson’s learning designer. “Those experiences can then be tied to a more theoretical explanation of the technology behind it, in a way that makes the often abstract concepts behind AI tangible.”…(More)”.

OECD Framework for the Classification of AI systems


OECD Digital Economy Paper: “As artificial intelligence (AI) rapidly integrates into all sectors, different AI systems bring different benefits and risks. In comparing virtual assistants, self-driving vehicles and video recommendations for children, it is easy to see that the benefits and risks of each are very different. Their specificities will require different approaches to policy making and governance. To help policy makers, regulators, legislators and others characterise AI systems deployed in specific contexts, the OECD has developed a user-friendly tool to evaluate AI systems from a policy perspective. It can be applied to the widest range of AI systems across the following dimensions: People & Planet; Economic Context; Data & Input; AI model; and Task & Output. Each of the framework’s dimensions has a subset of properties and attributes to define and assess policy implications and to guide an innovative and trustworthy approach to AI as outlined in the OECD AI Principles….(More)”.

Algorithm vs. Algorithm


Paper by Cary Coglianese and Alicia Lai: “Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias. A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making. The human brain operates algorithmically through complex neural networks. And when humans make collective decisions, they operate via algorithms too—those reflected in legislative, judicial, and administrative processes. Yet these human algorithms undeniably fail and are far from transparent.

On an individual level, human decision-making suffers from memory limitations, fatigue, cognitive biases, and racial prejudices, among other problems. On an organizational level, humans succumb to groupthink and free-riding, along with other collective dysfunctionalities. As a result, human decisions will in some cases prove far more problematic than their digital counterparts. Digital algorithms, such as machine learning, can improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent. Still, when deciding whether to deploy digital algorithms to perform tasks currently completed by humans, public officials should proceed with care on a case-by-case basis. They should consider both whether a particular use would satisfy the basic preconditions for successful machine learning and whether it would in fact lead to demonstrable improvements over the status quo. The question about the future of public administration is not whether digital algorithms are perfect. Rather, it is a question about what will work better: human algorithms or digital ones….(More)”.

Effective and Trustworthy Implementation of AI Soft Law Governance


Introduction by Carlos Ignacio Gutierrez, Gary E. Marchant and Katina Michael: “This double special issue (together with the IEEE Technology and Society Magazine, Dec 2021) is dedicated to examining the governance of artificial intelligence (AI) through soft law. This kind of law is considered “soft” as opposed to “hard” because it comes in the form of governance programs whose goal is to create substantive expectations that are not directly enforceable by government [1], [2]. Soft law materializes out of necessity to enable a technological innovation to thrive and not be hampered by disparate heterogeneous practices that may negatively impact its trajectory, causing a premature “valley of death” exit scenario [3]. Soft laws are meant to be “just in time” to grant industry fundamental guidance when dealing with complex socio-technical assemblages that may have significant socio-legal implications upon diffusion into the market. Anticipatory governance is closely connected with soft law, in that intended and unintended consequences of a new technology may well be anticipated and proactively addressed [4].

Soft law’s role in governance is to influence the implementation of new technologies whose inception into society has outpaced hard law. Its usage is not meant to diminish the need for regulations, but rather to serve as an interim solution when the roll-out of a new technology is happening rapidly, resisting the urge to create reactive and premature laws that may well take too long to enter legislation in a given state. Mutual agreement and conformance toward common goals and technical protocols through soft law among industry representatives, associated government agencies, auxiliary service providers, and other stakeholders can lead to positive gains, including the potential for societal acceptance of a new technology, especially where there are adequate provisions to safeguard the customer and the general public…(More)”.

Relational Artificial Intelligence


Paper by Virginia Dignum: “The impact of Artificial Intelligence does not depend only on fundamental research and technological developments, but for a large part on how these systems are introduced into society and used in everyday situations. Even though AI is traditionally associated with rational decision-making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective. A rational approach to AI, where computational algorithms drive decision-making independent of human intervention, insights and emotions, has been shown to result in bias and exclusion, laying bare societal vulnerabilities and insecurities. A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI. A relational approach to AI recognises that objective and rational reasoning does not always result in the ‘right’ way to proceed, because what is ‘right’ depends on the dynamics of the situation in which the decision is taken, and that rather than solving ethical problems the focus of design and use of AI must be on asking the ethical question. In this position paper, I start with a general discussion of current conceptualisations of AI followed by an overview of existing approaches to governance and responsible development and use of AI. Then, I reflect over what should be the bases of a social paradigm for AI and how this should be embedded in relational, feminist and non-Western philosophies, in particular the Ubuntu philosophy….(More)”.

Society won’t trust A.I. until business earns that trust


Article by François Candelon, Rodolphe Charme di Carlo and Steven D. Mills: “…The concept of a social license—which was born when the mining industry, and other resource extractors, faced opposition to projects worldwide—differs from the other rules governing A.I.’s use. Academics such as Leeora Black and John Morrison, in the book The Social License: How to Keep Your Organization Legitimate, define the social license as “the negotiation of equitable impacts and benefits in relation to its stakeholders over the near and longer term. It can range from the informal, such as an implicit contract, to the formal, like a community benefit agreement.”

The social license isn’t a document like a government permit; it’s a form of acceptance that companies must gain through consistent and trustworthy behavior as well as stakeholder interactions. Thus, a social license for A.I. will be a socially constructed perception that a company has secured the right to use the technology for specific purposes in the markets in which it operates. 

Companies cannot award themselves social licenses; they will have to win them by proving they can be trusted. As Morrison argued in 2014, akin to the capability to dig a mine, the fact that an A.I.-powered solution is technologically feasible doesn’t mean that society will find its use morally and ethically acceptable. And losing the social license will have dire consequences, as natural resource companies, such as Shell and BP, have learned in the past…(More)”

The Political Philosophy of AI: An Introduction


Book by Mark Coeckelbergh: “Political issues people care about such as racism, climate change, and democracy take on new urgency and meaning in the light of technological developments such as AI. How can we talk about the politics of AI while moving beyond mere warnings and easy accusations?

This is the first accessible introduction to the political challenges related to AI. Using political philosophy as a unique lens through which to explore key debates in the area, the book shows how various political issues are already impacted by emerging AI technologies: from justice and discrimination to democracy and surveillance. Revealing the inherently political nature of technology, it offers a rich conceptual toolbox that can guide efforts to deal with the challenges raised by what turns out to be not only artificial intelligence but also artificial power.

This timely and original book will appeal to students and scholars in philosophy of technology and political philosophy, as well as tech developers, innovation leaders, policy makers, and anyone interested in the impact of technology on society…(More)”.

The Use Of Digitalisation and Artificial Intelligence in Migration Management


Joint EMN-OECD inform: “…In view of the dynamic nature of the migration policy landscape and in the context of the new Pact on Migration and Asylum, this series explores existing trends, innovative methods and approaches in migration management and will be used as a basis for further policy reflection at EU level. 

This inform builds on trends identified in the EMN-OECD series of migration management informs on COVID-19 in the migration area. Its scope includes EU Member States, EMN observer countries as well as OECD countries. This inform aims to explore the role of new digital technologies in the management of migration and asylum. It focuses on a number of specific areas in migration management (acquisition of citizenship, asylum procedures and border control) where digital technologies may be used, e.g. digitalisation of application processes, use of video conferencing for remote interviews, use of artificial intelligence (AI) to assist decision-making processes, and use of blockchain technology. It also considers the implications of using these types of technologies on fundamental rights…(More)”.

Artificial Intelligence Bias and Discrimination: Will We Pull the Arc of the Moral Universe Towards Justice?


Paper by Emile Loza de Siles: “In 1968, the Reverend Martin Luther King Jr. foresaw the inevitability of society’s eventual triumph over the deep racism of his time and the stain that continues to cast its destructive, oppressive pall today. From the pulpit of the nation’s church, Dr. King said, “We shall overcome because the arc of the moral universe is long but it bends toward justice”. More than 40 years later, Eric Holder, the first African American United States Attorney General, agreed, but only if people acting with conviction exert themselves to pull that arc towards justice.

With artificial intelligence (AI) bias and discrimination rampant, the need to pull the moral arc towards algorithmic justice is urgent. This article offers empowering clarity by conceptually bifurcating AI bias problems into AI bias engineering and organisational AI governance problems, revealing proven legal development pathways to protect against the corrosive harms of AI bias and discrimination…(More)”.