Contracting and Contract Law in the Age of Artificial Intelligence



Book edited by Martin Ebers, Cristina Poncibò, and Mimi Zou: “This book provides original, diverse, and timely insights into the nature, scope, and implications of Artificial Intelligence (AI), especially machine learning and natural language processing, in relation to contracting practices and contract law. The chapters feature unique, critical, and in-depth analysis of a range of topical issues, including how the use of AI in contracting affects key principles of contract law (from formation to remedies), the implications for autonomy, consent, and information asymmetries in contracting, and how AI is shaping contracting practices and the laws relating to specific types of contracts and sectors.

The contributors represent an interdisciplinary team of lawyers, computer scientists, economists, political scientists, and linguists from academia, legal practice, policy, and the technology sector. The chapters not only engage with salient theories from different disciplines, but also examine current and potential real-world applications and implications of AI in contracting and explore feasible legal, policy, and technological responses to address the challenges presented by AI in this field.

The book covers major common and civil law jurisdictions, including the EU, Italy, Germany, UK, US, and China. It should be read by anyone interested in the complex and fast-evolving relationship between AI, contract law, and related areas of law such as business, commercial, consumer, competition, and data protection laws….(More)”.

UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence


Press Release: “In 2018, Audrey Azoulay, Director-General of UNESCO, launched an ambitious project: to give the world an ethical framework for the use of artificial intelligence. Three years later, thanks to the mobilization of hundreds of experts from around the world and intense international negotiations, UNESCO’s 193 member states have just officially adopted this ethical framework….

The Recommendation aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy. 

  1. Protecting data 

The Recommendation calls for action beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that individuals should all be able to access or even erase records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data, and it strengthens the ability of regulatory bodies around the world to enforce this.

  2. Banning social scoring and mass surveillance

The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These technologies are highly invasive: they infringe on human rights and fundamental freedoms, and they are deployed on a broad scale. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not themselves be given legal personality.

  3. Helping to monitor and evaluate

The Recommendation also lays the groundwork for tools that will assist in its implementation. The Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems to assess the impact of those systems on individuals, on society and on the environment. The Readiness Assessment Methodology helps Member States to assess how ready they are in terms of legal and technical infrastructure. This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts.

  4. Protecting the environment

The Recommendation emphasises that AI actors should favour data-, energy- and resource-efficient AI methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues. The Recommendation asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle. This includes its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. It also aims at reducing the environmental impact of AI systems and data infrastructures. It incentivizes governments to invest in green tech, and where AI systems have a disproportionate negative impact on the environment, the Recommendation instructs that they should not be used….(More)”.

UK government publishes pioneering standard for algorithmic transparency


UK Government Press Release: “The UK government has today launched one of the world’s first national standards for algorithmic transparency.

This move delivers on commitments made in the National AI Strategy and National Data Strategy, and strengthens the UK’s position as a global leader in trustworthy AI.

In its landmark review into bias in algorithmic decision-making, the Centre for Data Ethics and Innovation (CDEI) recommended that the UK government should place a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals….

The Cabinet Office’s Central Digital and Data Office (CDDO) has worked closely with the CDEI to design the standard. It also consulted experts from across civil society and academia, as well as the public. The standard is organised into two tiers. The first gives a short description of the algorithmic tool, including how and why it is being used, while the second provides more detailed information about how the tool works, the datasets that have been used to train the model, and the level of human oversight. The standard will help teams be meaningfully transparent about the way in which algorithmic tools are being used to support decisions, especially in cases where they might have a legal or economic impact on individuals.

The standard will be piloted by several government departments and public sector bodies in the coming months. Following the piloting phase, CDDO will review the standard based on feedback gathered and seek formal endorsement from the Data Standards Authority in 2022…(More)”.
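
To make the two-tier structure concrete, here is a minimal sketch of what a conforming transparency record might look like, written as Python dictionaries. All field names and values are illustrative assumptions based on the description above, not the schema actually published by CDDO.

```python
# Illustrative sketch of a two-tier algorithmic transparency record.
# Field names are assumptions inferred from the press release, not the
# official schema published by CDDO.

tier_1 = {
    # Short, plain-language description aimed at the general public
    "tool_name": "Example Case-Prioritisation Tool",
    "how_it_is_used": "Suggests a review order for incoming cases; "
                      "officials make the final decision.",
    "why_it_is_used": "To reduce waiting times for urgent cases.",
}

tier_2 = {
    # More detailed information for specialists and auditors
    "how_the_tool_works": "Gradient-boosted classifier over structured case data.",
    "training_datasets": ["Historical casework records, 2015-2020 (anonymised)"],
    "human_oversight": "Every suggested priority is reviewed by a caseworker; "
                       "the tool cannot decide an outcome on its own.",
}

transparency_record = {"tier_1": tier_1, "tier_2": tier_2}
```

The split mirrors the standard’s stated intent: a plain-language summary for the public in the first tier, and auditable detail about the model, its training data and its oversight in the second.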

Surveillance, Companionship, and Entertainment: The Ancient History of Intelligent Machines


Essay by E.R. Truitt: “Robots have histories that extend far back into the past. Artificial servants, autonomous killing machines, surveillance systems, and sex robots all spring from the human imagination, finding expression in works and contexts well beyond Ovid (43 BCE to 17 CE) and the story of Pygmalion, in cultures across Eurasia and North Africa. This long history of our human-machine relationships also reminds us that our aspirations, fears, and fantasies about emergent technologies are not new, even as the circumstances in which they appear differ widely. Situating these objects, and the desires that create them, within deeper and broader contexts of time and space reveals continuities and divergences that, in turn, provide opportunities to critique and question contemporary ideas and desires about robots and artificial intelligence (AI).

As early as 3,000 years ago we encounter interest in intelligent machines and AI that perform different servile functions. In the works of Homer (c. eighth century BCE) we find Hephaestus, the Greek god of smithing and craft, using automatic bellows to execute simple, repetitive labor. Golden handmaidens, endowed with characteristics of movement, perception, judgment, and speech, assist him in his work. In his “Odyssey,” Homer recounts how the ships of the Phaeacians perfectly obey their human captains, detecting and avoiding obstacles or threats, and moving “at the speed of thought.” Several centuries later, around 400 BCE, we meet Talos, the giant bronze sentry, created by Hephaestus, that patrolled the shores of Crete. These examples from the ancient world all have in common their subservient role; they exist to serve the desires of other, more powerful beings — either gods or humans — and even if they have sentience, they lack autonomy. Thousands of years before Karel Čapek introduced the term “robot” to refer to artificial slaves, we find them in Homer….(More)”.

Conceptualizing AI literacy: An exploratory review


Paper by Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel K. W. Chu, and Maggie Qiao Shen: “Artificial Intelligence (AI) has spread across industries (e.g., business, science, art, education) to enhance user experience, improve work efficiency, and create many future job opportunities. However, public understanding of AI technologies, and how to define AI literacy, is under-explored, posing challenges for how the next generation will learn about AI. On this note, an exploratory review was conducted to conceptualize the newly emerging concept of “AI literacy,” in search of a sound theoretical foundation to define, teach and evaluate AI literacy. Grounded in a review of 30 existing peer-reviewed articles, this study proposed four aspects (i.e., know and understand, use, evaluate, and ethical issues) for fostering AI literacy based on the adaptation of classic literacies. The review sheds light on a consolidated definition of AI literacy, its teaching, and related ethical concerns, establishing the groundwork for future research such as competency development and assessment criteria on AI literacy….(More)”.

Automating Decision-making in Migration Policy: A Navigation Guide


Report by Astrid Ziebarth and Jessica Bither: “Algorithm-driven or automated decision-making models (ADM) and programs are increasingly used by public administrations to assist human decision-making processes in public policy—including migration and refugee policy. These systems are often presented as a neutral, technological fix to make policy and systems more efficient. However, migration policymakers and stakeholders often do not understand exactly how these systems operate. As a result, the implications of adopting ADM technology are still unclear, and sometimes not considered. In fact, automated decision-making systems are never neutral, nor is their employment inevitable. To make sense of their function and decide whether or how to use them in migration policy will require consideration of the specific context in which ADM systems are being employed.

Three concrete use cases at core nodes of migration policy in which automated decision-making is already either being developed or tested are examined: visa application processes, placement matching to improve integration outcomes, and forecasting models to assist with planning and preparedness related to human mobility or displacement. All cases raise the same categories of questions: from the data employed, to the motivation behind using a given system, to the action triggered by models. The nuances of each case demonstrate why it is crucial to understand these systems within a bigger socio-technological context and provide categories and questions that can help policymakers understand the most important implications of any new system, including both technical considerations (related to accuracy, data questions, or bias) and contextual questions (what are we optimizing for?).

Stakeholders working in the migration and refugee policy space must make more direct links to current discussions surrounding governance, regulation of AI, and digital rights more broadly. We suggest some first points of entry toward this goal. Specifically, as next steps, stakeholders should:

  1. Bridge migration policy with developments in digital rights and tech regulation
  2. Adapt emerging policy tools on ADM to the migration space
  3. Create new spaces for exchange between migration policymakers, tech regulators, technologists, and civil society
  4. Include discussion on the use of ADM systems in international migration fora
  5. Increase the number of technologists or bilinguals working in migration policy
  6. Link tech and migration policy to bigger questions of foreign policy and geopolitics…(More)”.

New York City passed a bill requiring ‘bias audits’ of AI hiring tech


Kate Kaye at Protocol: “Let the AI auditing vendor brigade begin. A year after it was introduced, the New York City Council earlier this week passed a bill requiring companies that sell AI technologies for hiring to obtain audits assessing the potential of those products to discriminate against job candidates. The bill requiring “bias audits” passed with overwhelming support in a 38-4 vote.

The bill is intended to weed out the use of tools that enable already unlawful employment discrimination in New York City. If signed into law, it will require providers of automated employment decision tools to have those systems evaluated each year by an audit service and provide the results to companies using those systems.

AI for recruitment can include software that uses machine learning to sift through resumes and help make hiring decisions, systems that attempt to decipher the sentiments of a job candidate, or even tech involving games to pick up on subtle clues about someone’s hiring worthiness. The NYC bill attempts to encompass the full gamut of AI by covering everything from old-school decision trees to more complex systems operating through neural networks.

The legislation calls on companies using automated decision tools for recruitment not only to tell job candidates when such tools are being used, but also to tell them what information the technology used to evaluate their suitability for a job.

The bill, however, fails to go into detail on what constitutes a bias audit other than to define one as “an impartial evaluation” that involves testing. And it already has critics who say it was rushed into passage and doesn’t address discrimination related to disability or age…(More)”.

AI-tocracy


Paper by Martin Beraja, Andrew Kao, David Y. Yang & Noam Yuchtman: “Can frontier innovation be sustained under autocracy? We argue that innovation and autocracy can be mutually reinforcing when: (i) the new technology bolsters the autocrat’s power; and (ii) the autocrat’s demand for the technology stimulates further innovation in applications beyond those benefiting it directly. We test for such a mutually reinforcing relationship in the context of facial recognition AI in China. To do so, we gather comprehensive data on AI firms and government procurement contracts, as well as on social unrest across China during the last decade. We first show that autocrats benefit from AI: local unrest leads to greater government procurement of facial recognition AI, and increased AI procurement suppresses subsequent unrest. We then show that AI innovation benefits from autocrats’ suppression of unrest: the contracted AI firms innovate more both for the government and commercial markets. Taken together, these results suggest the possibility of sustained AI innovation under the Chinese regime: AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation….(More)”.

‘Is it OK to …’: the bot that gives you an instant moral judgment


Article by Poppy Noor: “Corporal punishment, wearing fur, pineapple on pizza – moral dilemmas are, by their very nature, hard to solve. That’s why the same ethical questions constantly resurface in TV, films and literature.

But what if AI could take away the brain work and answer ethical quandaries for us? Ask Delphi is a bot that’s been fed more than 1.7m examples of people’s ethical judgments on everyday questions and scenarios. If you pose an ethical quandary, it will tell you whether it is right, wrong, or indefensible.

Anyone can use Delphi. Users just put a question to the bot on its website, and see what it comes up with.

The AI is fed a vast number of scenarios – including ones from the popular Am I the Asshole subreddit, where Reddit users post dilemmas from their personal lives and get an audience to judge who the asshole in the situation was.

Then, people are recruited from Mechanical Turk – a marketplace where researchers find paid participants for studies – to say whether they agree with the AI’s answers. Each answer is put to three arbiters, with the majority or average conclusion used to decide right from wrong. The process is selective – participants have to score well on a test to qualify as a moral arbiter, and the researchers don’t recruit people who show signs of racism or sexism.

The arbiters agree with the bot’s ethical judgments 92% of the time (although that could say as much about their ethics as it does about the bot’s)…(More)”.
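
The adjudication step described above reduces to a simple majority vote, which can be sketched in a few lines of Python. This is a minimal illustration under the assumption of a plain agree/disagree vote per arbiter; Delphi’s actual training and evaluation pipeline is more involved and is not reproduced here.

```python
from collections import Counter

def adjudicate(votes):
    """Return the majority verdict among an odd number of arbiter votes."""
    counts = Counter(votes)
    verdict, _ = counts.most_common(1)[0]
    return verdict

# Each answer is put to three arbiters; the majority decides.
print(adjudicate(["agree", "agree", "disagree"]))  # -> agree
```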

AI Generates Hypotheses Human Scientists Have Not Thought Of


Robin Blades in Scientific American: “Electric vehicles have the potential to substantially reduce carbon emissions, but car companies are running out of materials to make batteries. One crucial component, nickel, is projected to cause supply shortages as early as the end of this year. Scientists recently discovered four new materials that could potentially help—and what may be even more intriguing is how they found these materials: the researchers relied on artificial intelligence to pick out useful chemicals from a list of more than 300 options. And they are not the only humans turning to A.I. for scientific inspiration.

Creating hypotheses has long been a purely human domain. Now, though, scientists are beginning to ask machine learning to produce original insights. They are designing neural networks (a type of machine-learning setup with a structure inspired by the human brain) that suggest new hypotheses based on patterns the networks find in data instead of relying on human assumptions. Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases.

In the case of new battery materials, scientists pursuing such tasks have typically relied on database search tools, modeling and their own intuition about chemicals to pick out useful compounds. Instead, a team at the University of Liverpool in England used machine learning to streamline the creative process. The researchers developed a neural network that ranked chemical combinations by how likely they were to result in a useful new material. Then the scientists used these rankings to guide their experiments in the laboratory. They identified four promising candidates for battery materials without having to test everything on their list, saving them months of trial and error…(More)”.
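
The workflow described here – score every candidate with a learned model, then test only the top of the ranking in the lab – can be sketched briefly. In the Python sketch below, the scoring function is a placeholder for the Liverpool team’s neural network, and all names and numbers are illustrative assumptions rather than details from the study.

```python
from typing import Callable, List, Tuple

def rank_candidates(
    candidates: List[str],
    score: Callable[[str], float],
    top_k: int,
) -> List[Tuple[str, float]]:
    """Score every candidate composition and return the top_k for lab testing."""
    scored = [(candidate, score(candidate)) for candidate in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Hypothetical usage: several hundred candidate compositions in,
# a short laboratory shortlist out.
candidates = [f"composition_{i}" for i in range(300)]
placeholder_score = lambda c: (hash(c) % 1000) / 1000.0  # stand-in for a trained model
shortlist = rank_candidates(candidates, placeholder_score, top_k=4)
print(shortlist)
```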