Our Common AI Future – A Geopolitical Analysis and Road Map, for AI Driven Sustainable Development, Science and Data Diplomacy


(Open Access) Book by Francesco Lapenta: “The premise of this concise but thorough book is that the future, while uncertain and open, is not arbitrary, but the result of a complex series of competing decisions, actors, and events that began in the past, have reached a certain configuration in the present, and will continue to develop into the future. These past and present conditions constitute the basis and origin of future developments that have the potential to take shape as a variety of possible, probable, undesirable or desirable future scenarios. The realisation that these future scenarios cannot be totally arbitrary gives scope to the study of the past, indispensable to fully understand the facts, actors, and forces that contributed to the formation of the present, and how certain systems, or dominant models, came to be established (I). The relative openness of future scenarios gives scope to the study of what competing forces and models might exist, their early formation, actors, and initiatives (II), and how they may act as catalysts for alternative theories, models (III and IV), and actions that can influence our future and change its path (V)…

The analyses in the book, which are loosely divided into three phases, move from the past to the present. They begin by identifying best practices and some of the key initiatives that have attempted to achieve these global collaborative goals over the last few decades. Moving forward, they describe a road map to a possible future based on existing and developing theories, initiatives, and tools that could underpin these global collaborative efforts in the specific areas of AI and Sustainable Development. In the Road Map for AI Driven Sustainable Development, the analyses identify and stand on the shoulders of a number of past and current global initiatives that have worked for decades to lay the groundwork for this alternative evolutionary and collaborative model. The title of this book acknowledges, and encourages readers to engage with, one of these pivotal efforts: the “Our Common Future” report of the Brundtland Commission, published in 1987 by the World Commission on Environment and Development (WCED). Building on the report’s ambitious humanistic and socioeconomic vision, the analyses investigate a variety of existing and developing best practices that could lead to, or inspire, a shared scientific collaborative model for AI development. They rest on the understanding that, despite political rivalry and competition, governments should collaborate on at least two fundamental issues. One, to establish a set of global “Red Lines” that prohibit the development and use of AI in specific applications that might pose an ethical or existential threat to humanity and the planet. And two, to create a set of “Green Zones” for scientific diplomacy and cooperation, in order to capitalize on the opportunities that the impending AI era may represent in confronting major collective challenges such as the health and climate crises, the energy crisis, and the sustainable development goals identified in the report and developed by subsequent global initiatives…(More)”.

If AI Is Predicting Your Future, Are You Still Free?


Essay by Carissa Véliz: “…Today, prediction is mostly done through machine learning algorithms that use statistics to fill in the blanks of the unknown. Text algorithms use enormous language databases to predict the most plausible ending to a string of words. Game algorithms use data from past games to predict the best possible next move. And algorithms that are applied to human behavior use historical data to infer our future: what we are going to buy, whether we are planning to change jobs, whether we are going to get sick, whether we are going to commit a crime or crash our car. Under such a model, insurance is no longer about pooling risk from large sets of people. Rather, predictions have become individualized, and you are increasingly paying your own way, according to your personal risk scores—which raises a new set of ethical concerns.
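
To make the text-prediction mechanism concrete, here is a minimal sketch of the idea: a toy bigram model that ranks the most plausible next word by counting what followed each word in a corpus. The corpus and function names are illustrative assumptions, not anything from the essay; real systems draw on enormous language databases and far richer models.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """For each word, count which words follow it in the corpus."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def predict_next(model, word, k=3):
    """Return the k most plausible continuations of `word`, with probabilities."""
    counts = model[word]
    total = sum(counts.values())
    return [(w, n / total) for w, n in counts.most_common(k)]

# A toy corpus; the "prediction" is just the statistically most common continuation.
corpus = "the dog barks . the dog sleeps . the cat sleeps . the dog barks"
model = train_bigrams(corpus)
print(predict_next(model, "dog"))  # -> [('barks', 0.67), ('sleeps', 0.33)] approx.
```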

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments and biases regarding risk and values are built into it. There can be forecasts that are more or less accurate, to be sure, but the relationship between probability and actuality is much more tenuous and ethically problematic than some assume.

Institutions today, however, often try to pass off predictions as if they were a model of objective reality. And even when AI’s forecasts are merely probabilistic, they are often interpreted as deterministic in practice—partly because human beings are bad at understanding probability and partly because the incentives around avoiding risk end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with a lower risk score)…(More)”.
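
The hiring example reduces to a threshold rule: a probabilistic score is collapsed into a hard accept/reject decision. Below is a minimal sketch of that deterministic reading, with made-up candidate names, scores, and a threshold, purely for illustration:

```python
def screen(candidates, risk_threshold=0.5):
    """Collapse probabilistic risk scores into hard hire/reject decisions.

    A 75 percent risk is only a guess about the future, but once it
    crosses the threshold it is acted on as if it were a fact.
    """
    return {name: ("reject" if risk > risk_threshold else "hire")
            for name, risk in candidates.items()}

# Hypothetical scores. Note that the rejected candidate is never hired,
# so the prediction is never tested against what would actually have happened.
print(screen({"candidate_a": 0.75, "candidate_b": 0.40}))
# -> {'candidate_a': 'reject', 'candidate_b': 'hire'}
```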

The state of AI in 2021


McKinsey Global Survey on AI: “…indicate that AI adoption continues to grow and that the benefits remain significant, though in the COVID-19 pandemic’s first year they were felt more strongly on the cost-savings front than on the top line. As AI’s use in business becomes more common, the tools and best practices for making the most of AI have also become more sophisticated. We looked at the practices of the companies seeing the biggest earnings boost from AI and found that they are not only following more of both the core and advanced practices, including machine-learning operations (MLOps), that underpin success, but also spending more efficiently on AI and taking greater advantage of cloud technologies. Additionally, they are more likely than other organizations to engage in a range of activities to mitigate their AI-related risks, an area that remains a shortcoming for many companies’ AI efforts…(More)”.

The role of artificial intelligence in disinformation


Paper by Noémi Bontridder and Yves Poullet: “Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing. Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to targeted audiences at scale by malicious stakeholders. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online. Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits of very large online platforms’ recommender systems and content moderation. While with this proposal the Commission focusses on the regulation of content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web, which is based on advertising revenues, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem…(More)”.

Human-Centered AI


Book by Ben Shneiderman: “The remarkable progress in algorithms for machine and deep learning has opened the door to new opportunities, and some dark possibilities. However, a bright future awaits those who build on their working methods by including human-centered AI (HCAI) strategies of design and testing. As many technology companies and thought leaders have argued, the goal is not to replace people, but to empower them by making design choices that give humans control over technology.

In Human-Centered AI, Professor Ben Shneiderman offers an optimistic realist’s guide to how artificial intelligence can be used to augment and enhance humans’ lives. This project bridges the gap between ethical considerations and practical realities to offer a road map for successful, reliable systems. Digital cameras, communications services, and navigation apps are just the beginning. Shneiderman shows how future applications will support health and wellness, improve education, accelerate business, and connect people in reliable, safe, and trustworthy ways that respect human values, rights, justice, and dignity…(More)”.

Group Backed by Top Companies Moves to Combat A.I. Bias in Hiring


Steve Lohr at The New York Times: “Artificial intelligence software is increasingly used by human resources departments to screen résumés, conduct video interviews and assess a job seeker’s mental agility.

Now, some of the largest corporations in America are joining an effort to prevent that technology from delivering biased results that could perpetuate or even worsen past discrimination.

The Data & Trust Alliance, announced on Wednesday, has signed up major employers across a variety of industries, including CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta (Facebook’s parent company), Nike and Walmart.

The corporate group is not a lobbying organization or a think tank. Instead, it has developed an evaluation and scoring system for artificial intelligence software.

The Data & Trust Alliance, tapping corporate and outside experts, has devised a 55-question evaluation, covering 13 topics, and a scoring system. The goal is to detect and combat algorithmic bias. “This is not just adopting principles, but actually implementing something concrete,” said Kenneth Chenault, co-chairman of the group and a former chief executive of American Express, which has agreed to adopt the anti-bias tool kit…(More)”.

Contracting and Contract Law in the Age of Artificial Intelligence


Book edited by Martin Ebers, Cristina Poncibò, and Mimi Zou: “This book provides original, diverse, and timely insights into the nature, scope, and implications of Artificial Intelligence (AI), especially machine learning and natural language processing, in relation to contracting practices and contract law. The chapters feature unique, critical, and in-depth analysis of a range of topical issues, including how the use of AI in contracting affects key principles of contract law (from formation to remedies), the implications for autonomy, consent, and information asymmetries in contracting, and how AI is shaping contracting practices and the laws relating to specific types of contracts and sectors.

The contributors represent an interdisciplinary team of lawyers, computer scientists, economists, political scientists, and linguists from academia, legal practice, policy, and the technology sector. The chapters not only engage with salient theories from different disciplines, but also examine current and potential real-world applications and implications of AI in contracting and explore feasible legal, policy, and technological responses to address the challenges presented by AI in this field.

The book covers major common and civil law jurisdictions, including the EU, Italy, Germany, the UK, the US, and China. It should be read by anyone interested in the complex and fast-evolving relationship between AI, contract law, and related areas of law such as business, commercial, consumer, competition, and data protection laws…(More)”.

UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence


Press Release: “In 2018, Audrey Azoulay, Director-General of UNESCO, launched an ambitious project: to give the world an ethical framework for the use of artificial intelligence. Three years later, thanks to the mobilization of hundreds of experts from around the world and intense international negotiations, UNESCO’s 193 member states have just officially adopted this ethical framework….

The Recommendation aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy. 

  1. Protecting data 

The Recommendation calls for action beyond what tech firms and governments are currently doing to guarantee individuals more protection, by ensuring transparency, agency and control over their personal data. It states that individuals should all be able to access or even erase records of their personal data. It includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data, and it increases the ability of regulatory bodies around the world to enforce this.

  2. Banning social scoring and mass surveillance

The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These types of technologies are highly invasive; they infringe on human rights and fundamental freedoms, and they are deployed on a broad scale. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not be given legal personality themselves. 

  3. Helping to monitor and evaluate

The Recommendation also lays the ground for tools that will assist in its implementation. The Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems to assess the impact of those systems on individuals, on society and on the environment. The Readiness Assessment Methodology helps Member States assess how ready they are in terms of legal and technical infrastructure; this tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts. 

  4. Protecting the environment

The Recommendation emphasises that AI actors should favour data-, energy- and resource-efficient AI methods that will help make AI a more prominent tool in the fight against climate change and in tackling environmental issues. It asks governments to assess the direct and indirect environmental impact of a system throughout the AI life cycle, including its carbon footprint, its energy consumption, and the environmental impact of extracting the raw materials that support the manufacturing of AI technologies. It also aims to reduce the environmental impact of AI systems and data infrastructures, and it incentivizes governments to invest in green tech. Where AI systems have a disproportionately negative impact on the environment, the Recommendation instructs that they should not be used…(More)”.

UK government publishes pioneering standard for algorithmic transparency


UK Government Press Release: “The UK government has today launched one of the world’s first national standards for algorithmic transparency.

This move delivers on commitments made in the National AI Strategy and National Data Strategy, and strengthens the UK’s position as a global leader in trustworthy AI.

In its landmark review into bias in algorithmic decision-making, the Centre for Data Ethics and Innovation (CDEI) recommended that the UK government should place a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals….

The Cabinet Office’s Central Digital and Data Office (CDDO) has worked closely with the CDEI to design the standard. It also consulted experts from across civil society and academia, as well as the public.

The standard is organised into two tiers. The first includes a short description of the algorithmic tool, including how and why it is being used, while the second includes more detailed information about how the tool works, the dataset(s) used to train the model, and the level of human oversight. The standard will help teams be meaningfully transparent about the way algorithmic tools are being used to support decisions, especially in cases where they might have a legal or economic impact on individuals.
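
As a rough sketch of what a two-tier transparency record along these lines might look like in code (the field names and the example tool are our own reading of the press release, not the published standard itself):

```python
from dataclasses import dataclass

@dataclass
class Tier1:
    """Tier 1: a short, public-facing description of the algorithmic tool."""
    tool_name: str
    how_it_is_used: str
    why_it_is_used: str

@dataclass
class Tier2:
    """Tier 2: more detailed information for closer scrutiny."""
    how_the_tool_works: str
    training_datasets: list[str]
    human_oversight: str

@dataclass
class TransparencyRecord:
    tier1: Tier1
    tier2: Tier2

# A hypothetical record for an imaginary benefits-triage tool.
record = TransparencyRecord(
    tier1=Tier1(
        tool_name="Benefit claim triage model",
        how_it_is_used="ranks incoming claims for review order",
        why_it_is_used="to surface urgent cases sooner",
    ),
    tier2=Tier2(
        how_the_tool_works="gradient-boosted classifier over claim metadata",
        training_datasets=["historical claims, 2015-2020"],
        human_oversight="a caseworker makes every final decision",
    ),
)
print(record.tier1.tool_name)
```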

The standard will be piloted by several government departments and public sector bodies in the coming months. Following the piloting phase, CDDO will review the standard based on feedback gathered and seek formal endorsement from the Data Standards Authority in 2022…(More)”.

Surveillance, Companionship, and Entertainment: The Ancient History of Intelligent Machines


Essay by E.R. Truitt: “Robots have histories that extend far back into the past. Artificial servants, autonomous killing machines, surveillance systems, and sex robots all find expression from the human imagination in works and contexts beyond Ovid (43 BCE to 17 CE) and the story of Pygmalion in cultures across Eurasia and North Africa. This long history of our human-machine relationships also reminds us that our aspirations, fears, and fantasies about emergent technologies are not new, even as the circumstances in which they appear differ widely. Situating these objects, and the desires that create them, within deeper and broader contexts of time and space reveals continuities and divergences that, in turn, provide opportunities to critique and question contemporary ideas and desires about robots and artificial intelligence (AI).

As early as 3,000 years ago we encounter interest in intelligent machines and AI that perform different servile functions. In the works of Homer (c. eighth century BCE) we find Hephaestus, the Greek god of smithing and craft, using automatic bellows to execute simple, repetitive labor. Golden handmaidens, endowed with characteristics of movement, perception, judgment, and speech, assist him in his work. In his “Odyssey,” Homer recounts how the ships of the Phaeacians perfectly obey their human captains, detecting and avoiding obstacles or threats, and moving “at the speed of thought.” Several centuries later, around 400 BCE, we meet Talos, the giant bronze sentry, created by Hephaestus, that patrolled the shores of Crete. These examples from the ancient world all have in common their subservient role; they exist to serve the desires of other, more powerful beings — either gods or humans — and even if they have sentience, they lack autonomy. Thousands of years before Karel Čapek introduced the term “robot” to refer to artificial slaves, we find them in Homer….(More)”.