The state of AI in 2021


McKinsey Global Survey on AI: “…indicate that AI adoption continues to grow and that the benefits remain significant—though in the COVID-19 pandemic’s first year, they were felt more strongly on the cost-savings front than the top line. As AI’s use in business becomes more common, the tools and best practices to make the most out of AI have also become more sophisticated. We looked at the practices of the companies seeing the biggest earnings boost from AI and found that they are not only following more of both the core and advanced practices, including machine-learning operations (MLOps), that underpin success but also spending more efficiently on AI and taking more advantage of cloud technologies. Additionally, they are more likely than other organizations to engage in a range of activities to mitigate their AI-related risks—an area that continues to be a shortcoming for many companies’ AI efforts…(More)”.

The role of artificial intelligence in disinformation


Paper by Noémi Bontridder and Yves Poullet: “Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing. Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by malicious stakeholders. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online. Such systems are not exempt from ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending (bottom-up) co-regulation, the European Union (EU) is now heading toward descending (top-down) co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits of very large online platforms’ recommender systems and content moderation. While the Commission’s proposal focusses on the regulation of content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web, which is based on advertising revenues, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem…(More)”.

Human-Centered AI


Book by Ben Shneiderman: “The remarkable progress in algorithms for machine and deep learning has opened the doors to new opportunities, and some dark possibilities. However, a bright future awaits those who build on their working methods by including HCAI (human-centered AI) strategies of design and testing. As many technology companies and thought leaders have argued, the goal is not to replace people, but to empower them by making design choices that give humans control over technology.

In Human-Centered AI, Professor Ben Shneiderman offers an optimistic realist’s guide to how artificial intelligence can be used to augment and enhance humans’ lives. This project bridges the gap between ethical considerations and practical realities to offer a road map for successful, reliable systems. Digital cameras, communications services, and navigation apps are just the beginning. Shneiderman shows how future applications will support health and wellness, improve education, accelerate business, and connect people in reliable, safe, and trustworthy ways that respect human values, rights, justice, and dignity…(More)”.

Group Backed by Top Companies Moves to Combat A.I. Bias in Hiring


Steve Lohr at The New York Times: “Artificial intelligence software is increasingly used by human resources departments to screen résumés, conduct video interviews and assess a job seeker’s mental agility.

Now, some of the largest corporations in America are joining an effort to prevent that technology from delivering biased results that could perpetuate or even worsen past discrimination.

The Data & Trust Alliance, announced on Wednesday, has signed up major employers across a variety of industries, including CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta (Facebook’s parent company), Nike and Walmart.

The corporate group is not a lobbying organization or a think tank. Instead, it has developed an evaluation and scoring system for artificial intelligence software.

The Data & Trust Alliance, tapping corporate and outside experts, has devised a 55-question evaluation, which covers 13 topics, and a scoring system. The goal is to detect and combat algorithmic bias.

“This is not just adopting principles, but actually implementing something concrete,” said Kenneth Chenault, co-chairman of the group and a former chief executive of American Express, which has agreed to adopt the anti-bias tool kit…(More)”.
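
The article outlines the instrument (a 55-question evaluation covering 13 topics, plus a scoring system) but does not publish the questions, the topic list, or the scoring formula. Purely as an illustration of how such a questionnaire might be aggregated, here is a minimal sketch; the topic names, the 0-4 scale, and the unweighted averaging are all assumptions, not the alliance's actual method:

```python
from statistics import mean

# Hypothetical answers, grouped by topic, on an assumed 0-4 scale.
# The alliance's real questions, 13 topics, and scoring rules are not
# given in the article; everything below is illustrative only.
answers: dict[str, list[int]] = {
    "training_data_provenance": [3, 4, 2],
    "bias_testing_methodology": [4, 4, 3, 2],
    "human_oversight_of_outcomes": [3, 3],
    # ... further topics, up to 13 topics and 55 questions in total
}

def topic_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average the per-question scores within each topic."""
    return {topic: mean(scores) for topic, scores in responses.items()}

def overall_score(responses: dict[str, list[int]]) -> float:
    """Unweighted mean across topics; a real rubric would likely weight them."""
    return mean(topic_scores(responses).values())

print(topic_scores(answers))
print(f"Overall: {overall_score(answers):.2f} out of 4")
```

A real instrument would also need to decide how to weight topics and how to handle unanswered questions; the sketch sidesteps both.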

Contracting and Contract Law in the Age of Artificial Intelligence


Book edited by Martin Ebers, Cristina Poncibò, and Mimi Zou: “This book provides original, diverse, and timely insights into the nature, scope, and implications of Artificial Intelligence (AI), especially machine learning and natural language processing, in relation to contracting practices and contract law. The chapters feature unique, critical, and in-depth analysis of a range of topical issues, including how the use of AI in contracting affects key principles of contract law (from formation to remedies), the implications for autonomy, consent, and information asymmetries in contracting, and how AI is shaping contracting practices and the laws relating to specific types of contracts and sectors.

The contributors represent an interdisciplinary team of lawyers, computer scientists, economists, political scientists, and linguists from academia, legal practice, policy, and the technology sector. The chapters not only engage with salient theories from different disciplines, but also examine current and potential real-world applications and implications of AI in contracting and explore feasible legal, policy, and technological responses to address the challenges presented by AI in this field.

The book covers major common and civil law jurisdictions, including the EU, Italy, Germany, the UK, the US, and China. It should be read by anyone interested in the complex and fast-evolving relationship between AI, contract law, and related areas of law such as business, commercial, consumer, competition, and data protection laws…(More)”.

UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence


Press Release: “In 2018, Audrey Azoulay, Director-General of UNESCO, launched an ambitious project: to give the world an ethical framework for the use of artificial intelligence. Three years later, thanks to the mobilization of hundreds of experts from around the world and intense international negotiations, UNESCO’s 193 member states have just officially adopted this ethical framework….

The Recommendation aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy. 

  1. Protecting data 

The Recommendation calls for action beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that individuals should all be able to access or even erase records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data, and it strengthens the ability of regulatory bodies around the world to enforce this.

  2. Banning social scoring and mass surveillance

The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These technologies are highly invasive: they infringe on human rights and fundamental freedoms, and they are used on a broad scale. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not be given legal personality themselves.

  3. Helping to monitor and evaluate

The Recommendation also sets the ground for tools that will assist in its implementation. The Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems to assess the impact of those systems on individuals, on society and on the environment. The Readiness Assessment Methodology helps Member States assess how ready they are in terms of legal and technical infrastructure. This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken in order to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts.

  4. Protecting the environment

The Recommendation emphasises that AI actors should favour data-, energy- and resource-efficient AI methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues. It asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle, including its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. It also aims at reducing the environmental impact of AI systems and data infrastructures, and it incentivizes governments to invest in green tech. Where AI systems have a disproportionate negative impact on the environment, the Recommendation instructs that they should not be used…(More)”.

UK government publishes pioneering standard for algorithmic transparency


UK Government Press Release: “The UK government has today launched one of the world’s first national standards for algorithmic transparency.

This move delivers on commitments made in the National AI Strategy and National Data Strategy, and strengthens the UK’s position as a global leader in trustworthy AI.

In its landmark review into bias in algorithmic decision-making, the Centre for Data Ethics and Innovation (CDEI) recommended that the UK government should place a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals….

The Cabinet Office’s Central Digital and Data Office (CDDO) has worked closely with the CDEI to design the standard. It also consulted experts from across civil society and academia, as well as the public. The standard is organised into two tiers. The first includes a short description of the algorithmic tool, including how and why it is being used, while the second includes more detailed information about how the tool works, the dataset(s) that have been used to train the model and the level of human oversight. The standard will help teams be meaningfully transparent about the way in which algorithmic tools are being used to support decisions, especially in cases where they might have a legal or economic impact on individuals.
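
To make the two-tier structure concrete, here is a minimal sketch of what a machine-readable transparency record along these lines might look like. The field names and example values are hypothetical; the press release describes the tiers only in outline and does not specify the standard's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Tier1Summary:
    """Tier 1: a short, public-facing description of the algorithmic tool."""
    tool_name: str
    how_it_is_used: str
    why_it_is_used: str

@dataclass
class Tier2Detail:
    """Tier 2: how the tool works, its training data, and human oversight."""
    how_the_tool_works: str
    training_datasets: list[str] = field(default_factory=list)
    human_oversight: str = ""

@dataclass
class TransparencyRecord:
    organisation: str
    tier_1: Tier1Summary
    tier_2: Tier2Detail

# A hypothetical record for an imaginary public-sector tool.
record = TransparencyRecord(
    organisation="Example public body",
    tier_1=Tier1Summary(
        tool_name="Application triage assistant",
        how_it_is_used="Ranks incoming applications for human caseworkers",
        why_it_is_used="Speeds up routine processing",
    ),
    tier_2=Tier2Detail(
        how_the_tool_works="Gradient-boosted classifier over application metadata",
        training_datasets=["Anonymised historical applications, 2015-2020"],
        human_oversight="A caseworker reviews every ranked application",
    ),
)
```

Keeping tier one deliberately short is what makes it publishable as a plain-language summary, while tier two can carry the technical detail the standard asks for.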

The standard will be piloted by several government departments and public sector bodies in the coming months. Following the piloting phase, CDDO will review the standard based on feedback gathered and seek formal endorsement from the Data Standards Authority in 2022…(More)”.

Surveillance, Companionship, and Entertainment: The Ancient History of Intelligent Machines


Essay by E.R. Truitt: “Robots have histories that extend far back into the past. Artificial servants, autonomous killing machines, surveillance systems, and sex robots all find expression from the human imagination in works and contexts beyond Ovid (43 BCE to 17 CE) and the story of Pygmalion in cultures across Eurasia and North Africa. This long history of our human-machine relationships also reminds us that our aspirations, fears, and fantasies about emergent technologies are not new, even as the circumstances in which they appear differ widely. Situating these objects, and the desires that create them, within deeper and broader contexts of time and space reveals continuities and divergences that, in turn, provide opportunities to critique and question contemporary ideas and desires about robots and artificial intelligence (AI).

As early as 3,000 years ago we encounter interest in intelligent machines and AI that perform different servile functions. In the works of Homer (c. eighth century BCE) we find Hephaestus, the Greek god of smithing and craft, using automatic bellows to execute simple, repetitive labor. Golden handmaidens, endowed with characteristics of movement, perception, judgment, and speech, assist him in his work. In his “Odyssey,” Homer recounts how the ships of the Phaeacians perfectly obey their human captains, detecting and avoiding obstacles or threats, and moving “at the speed of thought.” Several centuries later, around 400 BCE, we meet Talos, the giant bronze sentry, created by Hephaestus, that patrolled the shores of Crete. These examples from the ancient world all have in common their subservient role; they exist to serve the desires of other, more powerful beings — either gods or humans — and even if they have sentience, they lack autonomy. Thousands of years before Karel Čapek introduced the term “robot” to refer to artificial slaves, we find them in Homer…(More)”.

Conceptualizing AI literacy: An exploratory review


Paper by Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel K. W. Chu, and Maggie Qiao Shen: “Artificial Intelligence (AI) has spread across industries (e.g., business, science, art, education) to enhance user experience, improve work efficiency, and create many future job opportunities. However, public understanding of AI technologies and how to define AI literacy is under-explored. This gap poses upcoming challenges for the next generation learning about AI. On this note, an exploratory review was conducted to conceptualize the newly emerging concept of “AI literacy”, in search of a sound theoretical foundation to define, teach and evaluate AI literacy. Grounded in a review of 30 existing peer-reviewed articles, this review proposed four aspects (i.e., know and understand, use, evaluate, and ethical issues) for fostering AI literacy based on the adaptation of classic literacies. This study sheds light on a consolidated definition of AI literacy, approaches to teaching it, and the ethical concerns it raises, establishing the groundwork for future research such as competency development and assessment criteria on AI literacy…(More)”.

Automating Decision-making in Migration Policy: A Navigation Guide


Report by Astrid Ziebarth and Jessica Bither: “Algorithm-driven or automated decision-making (ADM) models and programs are increasingly used by public administrations to assist human decision-making processes in public policy—including migration and refugee policy. These systems are often presented as a neutral, technological fix to make policy and systems more efficient. However, migration policymakers and stakeholders often do not understand exactly how these systems operate. As a result, the implications of adopting ADM technology are still unclear, and sometimes not considered at all. In fact, automated decision-making systems are never neutral, nor is their employment inevitable. To make sense of their function and decide whether or how to use them in migration policy will require consideration of the specific context in which each ADM system is employed.

Three concrete use cases at core nodes of migration policy in which automated decision-making is already being developed or tested are examined: visa application processes, placement matching to improve integration outcomes, and forecasting models to assist with planning and preparedness related to human mobility or displacement. All cases raise the same categories of questions: from the data employed, to the motivation behind using a given system, to the action triggered by models. The nuances of each case demonstrate why it is crucial to understand these systems within a bigger socio-technological context and provide categories and questions that can help policymakers understand the most important implications of any new system, including both technical considerations (related to accuracy, data questions, or bias) and contextual questions (what are we optimizing for?).

Stakeholders working in the migration and refugee policy space must make more direct links to current discussions surrounding governance, the regulation of AI, and digital rights more broadly. We suggest some first points of entry toward this goal. Specifically, as next steps, stakeholders should:

  1. Bridge migration policy with developments in digital rights and tech regulation
  2. Adapt emerging policy tools on ADM to the migration space
  3. Create new spaces for exchange between migration policymakers, tech regulators, technologists, and civil society
  4. Include discussion on the use of ADM systems in international migration fora
  5. Increase the number of technologists or “bilinguals” working in migration policy
  6. Link tech and migration policy to bigger questions of foreign policy and geopolitics…(More)”.