Generative AI: Navigating Intellectual Property


Factsheet by WIPO: “Generative artificial intelligence (AI) tools are rapidly being adopted by many businesses and organizations for the purpose of content generation. Such tools represent both a substantial opportunity to assist business operations and a significant legal risk due to current uncertainties, including intellectual property (IP) questions.

Many organizations are seeking to put guidance in place to help their employees mitigate these risks. While each business situation and legal context will be unique, the following Guiding Principles and Checklist are intended to assist organizations in understanding the IP risks, asking the right questions, and considering potential safeguards…(More)”.

Commission welcomes final agreement on EU Digital Identity Wallet


Press Release: “The Commission welcomes the final agreement reached today by the European Parliament and the Council of the EU at the final trilogue on the Regulation introducing European Digital Identity Wallets. This concludes the co-legislators’ work implementing the results of the provisional political agreement reached on 29 June 2023 on a legal framework for an EU Digital Identity, the first trusted and secure digital identity framework for all Europeans.

This marks an important step towards the Digital Decade 2030 targets on the digitalisation of public services. All EU citizens will be offered the possibility to have an EU Digital Identity Wallet to access public and private online services in full security and protection of personal data all over Europe.

In addition to public services, Very Large Online Platforms designated under the Digital Services Act (including services such as Amazon, Booking.com or Facebook) and private services that are legally required to authenticate their users will have to accept the EU Digital Identity Wallet for logging into their online services. In addition, the wallets’ features and common specifications will make it attractive for all private service providers to accept them for their services, thus creating new business opportunities. The Wallet will also facilitate service providers’ compliance with various regulatory requirements.

In addition to securely storing their digital identity, the Wallet will allow users to open bank accounts, make payments and hold digital documents, such as a mobile Driving Licence, a medical prescription, a professional certificate or a travel ticket. The Wallet will offer a user-friendly and practical alternative to online identification guaranteed by EU law. The Wallet will fully respect the user’s choice whether or not to share personal data; it will offer the highest degree of security, independently certified to the same standards; and relevant parts of its code will be published open source to exclude any possibility of misuse, illegal tracking, tracing or government interception.

The legislative discussions have strengthened the ambition of the regulation in a number of areas important for citizens. The Wallet will contain a dashboard of all transactions accessible to its holder, offer the possibility to report alleged violations of data protection, and allow interaction between wallets. Moreover, citizens will be able to onboard the wallet with existing national eID schemes and benefit from free eSignatures for non-professional use…(More)”.

Future Law, Ethics, and Smart Technologies


Book edited by John-Stewart Gordon: “This interdisciplinary textbook serves as a solid introduction to the future of legal education against the background of the widespread use of AI. It is written by colleagues from different disciplines, e.g. law, philosophy/ethics, economics, and computer science, whose common interest concerns AI and its impact on legal and ethical issues. The book provides, first, a general overview of the effects of AI on major disciplines such as ethics, law, economics, political science, and healthcare. Second, it offers a comprehensive analysis of key issues concerning law: (a) AI decision-making, (b) rights, status, and responsibility, (c) regulation and standardisation, and (d) education…(More)”.

We need a much more sophisticated debate about AI


Article by Jamie Susskind: “Twentieth-century ways of thinking will not help us deal with the huge regulatory challenges the technology poses… The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise…(More)”.

The Socio-Legal Lab: An Experiential Approach to Research on Law in Action


Guide by Siddharth Peter de Souza and Lisa Hahn: “…an interactive workbook for socio-legal research projects. It employs the idea of a “lab” as a space for interactive and experiential learning. As an introductory book, it addresses researchers of all levels who are beginning to explore interdisciplinary research on law and are looking for guidance on how to do so. Likewise, the book can be used by teachers and peer groups to experiment with teaching and thinking about law in action through lab-based learning…

The book covers themes and questions that may arise during a socio-legal research project. This starts with examining what research and interdisciplinarity mean and in which forms they can be practiced. After an overview of the research process, we will discuss how research in action is often unpredictable and messy. Thus, the practical and ethical challenges of doing research will be discussed along with processes of knowledge production and assumptions that we have as researchers. 

Conducting a socio-legal research project further requires an overview of the theoretical landscape. We will introduce general debates about the nature, functions, and effects of law in society. Further, common dichotomies in socio-legal research such as “law” and “the social” or “qualitative” and “quantitative” research will be explored, along with suggestions for how to bridge them.

Turning to the application side of socio-legal research, the book delves deeper into questions of data on law and society, where to collect it and how to deal with it in a reflexive manner. It discusses different methods of qualitative socio-legal research and offers ways in which they can be experienced through exercises and simulations. In the research process, generating research results is followed by publishing and communicating them. We will explore different ways to ensure the outreach and impact of one’s research by communicating results through journals, blogs or social media. Finally, the book also discusses academia as a social space and the value of creating and using networks and peer groups for mutual support.

Overall, the workbook is designed to accompany and inspire researchers on their way through a socio-legal research project and to empower readers to think more creatively about their methods, while at the same time demystifying them…(More)”.

Legal Dynamism


Paper by Sandy Pentland and Robert Mahari: “Shortly after the start of the French Revolution, Thomas Jefferson wrote a now famous letter to James Madison. He argued that no society could make a perpetual constitution, or indeed a perpetual law, that binds future generations. Every law ought to expire after nineteen years. Jefferson’s argument rested on the view that it is fundamentally unjust for people in the present to create laws for those in the future, but his argument is also appealing from a purely pragmatic perspective. As the state of the world changes, laws become outdated, and forcing future generations to abide by outdated laws is unjust and inefficient.

Today, the law appears to be on the cusp of its own revolution. It has resisted technological transformation longer than most other disciplines. Increasingly, however, computational approaches are finding their way into the creation and implementation of law, and the field of computational law is rapidly expanding. One of the most exciting promises of computational law is the idea of legal dynamism: the concept that a law, by means of computational tools, can be expressed not as a static rule statement but rather as a dynamic object that includes system performance goals, metrics for success, and the ability to adapt the law in response to its performance…

The image of laws as algorithms goes back to at least the 1980s when the application of expert systems to legal reasoning was first explored. Whether applied by a machine learning system or a human, legal algorithms rely on inputs from society and produce outputs that affect social behavior and that are intended to produce social outcomes. As such, it appears that legal algorithms are akin to other human-machine systems and so the law may benefit from insights from the general study of these systems. Various design frameworks for human-machine systems have been proposed, many of which focus on the importance of measuring system performance and iterative redesign. In our view, these frameworks can also be applied to the design of legal systems.

A basic design framework consists of five components…(More)”.
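
To make the idea of legal dynamism described above a little more concrete, here is a minimal sketch of how a law might be represented as a dynamic object: a rule statement bundled with performance goals, outcome metrics, and an adaptation hook. This is an editorial illustration, not the authors’ framework; the DynamicRule class, its methods, and the example figures are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class DynamicRule:
    """A law expressed as a dynamic object: a rule statement bundled with
    performance goals, outcome metrics, and a record of proposed revisions."""
    rule_text: str                            # current rule statement
    goals: Dict[str, float]                   # target values for named outcomes
    metrics: Dict[str, Callable[[], float]]   # functions that measure observed outcomes
    revisions: List[str] = field(default_factory=list)

    def evaluate(self) -> Dict[str, float]:
        """Return the shortfall (goal minus observed value) for each outcome."""
        return {name: self.goals[name] - measure()
                for name, measure in self.metrics.items()}

    def adapt(self, propose: Callable[[Dict[str, float]], str],
              tolerance: float = 0.0) -> None:
        """Record a proposed revision when any outcome misses its goal.
        In practice this step would feed a human legislative or regulatory
        review rather than rewrite the rule automatically."""
        shortfalls = self.evaluate()
        if any(gap > tolerance for gap in shortfalls.values()):
            self.revisions.append(propose(shortfalls))


# Hypothetical example: a speed limit monitored against an injury-reduction goal.
speed_rule = DynamicRule(
    rule_text="Speed limit: 50 km/h in residential zones",
    goals={"injury_rate_reduction": 0.20},
    metrics={"injury_rate_reduction": lambda: 0.12},  # stand-in for real monitoring data
)
speed_rule.adapt(lambda gaps: f"Review rule; unmet performance targets: {sorted(gaps)}")
print(speed_rule.revisions)
```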

Towards an international data governance framework


Paper by Steve MacFeely et al.: “The CCSA argued that a Global Data Compact (GDC) could provide a framework to ensure that data are safeguarded as a global public good and as a resource to achieve equitable and sustainable development. This compact, by promoting common objectives, would help avoid fragmentation where each country or region adopts its own approach to data collection, storage, and use. A coordinated approach would give individuals and enterprises confidence that data relevant to them carries protections and obligations no matter where they are collected or used…

The universal principles and standards should set out the elements of responsible and ethical handling and sharing of data and data products. The compact should also move beyond simply establishing ethical principles and create a global architecture that includes standards and incentives for compliance. Such an architecture could be the foundation for rethinking the data economy, promoting open data, encouraging data exchange, fostering innovation and facilitating international trade. It should build upon the existing canon of international human rights and other conventions, laws and treaties that set out useful principles and compliance mechanisms.

Such a compact will require a new type of global architecture. Modern data ecosystems are not controlled by states alone, so any Compact, Geneva Convention, Commons, or Bretton Woods type agreement will require a multitude of stakeholders and signatories – states, civil society, and the private sector at the very least. This would be very different to any international agreement that currently exists. Therefore, to support a GDC, a new global institution or platform may be needed to bring together the many data communities and ecosystems that comprise not only national governments, private sector and civil society but also participants in specific fields, such as artificial intelligence, digital and IT services. Participants would maintain and update data standards, oversee accountability frameworks, and support mechanisms to facilitate the exchange and responsible use of data. The Global Digital Compact proposed as part of Our Common Agenda will also need to address the challenges of bringing many different constituencies together and may point the way…(More)”

AI-powered cameras to enforce bus lanes


Article by Chris Teale: “New York’s Metropolitan Transportation Authority will use an automated camera system to ensure bus lanes in New York City are free from illegally parked vehicles.

The MTA is partnering with Hayden AI to equip 300 buses with Automated Bus Lane Enforcement camera systems, which will be mounted on the interior of the windshield and powered by artificial intelligence. The agency has the option to add the cameras to 200 more buses if it chooses.

Chris Carson, Hayden AI’s CEO and co-founder, said when the cameras detect an encroachment on a bus lane, they use real-time automated license plate recognition and edge computing to compile a packet of evidence that includes the time, date and location of the offense, as well as a brief video that shows the violator’s license plate. 

That information is encrypted and sent securely to the cloud, where MTA officials can access and analyze it for violations. If there is no encroachment on a bus lane, the cameras do not record anything…

An MTA spokesperson said the agency will also use data from the system to identify the locations with the highest incidence of vehicles blocking bus lanes. New York City has 140 miles of bus lanes and plans to build 150 more miles in the next four years, but congestion and lane violations by other road users slow the buses down. The city already uses cameras and police patrols to attempt to enforce proper bus lane use…(More)”.
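
As a rough sketch of the evidence-packet workflow Carson describes (detection, plate recognition, time, date, location and a short video clip, encrypted and sent to the cloud for review), the example below assembles and encrypts such a record. This is not Hayden AI’s implementation; the EvidencePacket fields and the use of Fernet symmetric encryption as a stand-in for the real transport security are assumptions for illustration only.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

from cryptography.fernet import Fernet  # symmetric encryption, standing in for the real transport security


@dataclass
class EvidencePacket:
    """Hypothetical record of a detected bus-lane violation."""
    plate: str              # from automated license plate recognition
    timestamp: str          # time and date of the offense (ISO 8601)
    latitude: float         # location of the offense
    longitude: float
    video_clip_path: str    # brief clip showing the violator's license plate


def encrypt_packet(packet: EvidencePacket, key: bytes) -> bytes:
    """Serialize the evidence packet and encrypt it before upload for review."""
    payload = json.dumps(asdict(packet)).encode("utf-8")
    return Fernet(key).encrypt(payload)


if __name__ == "__main__":
    key = Fernet.generate_key()  # a real deployment would use provisioned keys, not ad-hoc ones
    packet = EvidencePacket(
        plate="ABC1234",
        timestamp=datetime.now(timezone.utc).isoformat(),
        latitude=40.7527,
        longitude=-73.9772,
        video_clip_path="clips/violation_0001.mp4",
    )
    ciphertext = encrypt_packet(packet, key)
    print(f"Encrypted evidence packet ready for upload ({len(ciphertext)} bytes)")
```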

The Truth in Fake News: How Disinformation Laws Are Reframing the Concepts of Truth and Accuracy on Digital Platforms


Paper by Paolo Cavaliere: “The European Union’s (EU) strategy to address the spread of disinformation, most notably the Code of Practice on Disinformation and the forthcoming Digital Services Act, tasks digital platforms with a range of actions to minimise the distribution of issue-based and political adverts that are verifiably false or misleading. This article discusses the implications of the EU’s categorical approach: specifically, what it means to conceptualise disinformation as a form of advertisement, and by what standards digital platforms are expected to assess the truthful or misleading nature of the content they distribute as a result of this categorisation. The analysis will show how the emerging EU anti-disinformation framework marks a departure from the European Court of Human Rights’ consolidated standards of review for public interest and commercial speech and the tests utilised to assess their accuracy…(More)”.

Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI


Open Access book by Alessandro Mantelero: “…focuses on the impact of Artificial Intelligence (AI) on individuals and society from a legal perspective, providing a comprehensive risk-based methodological framework to address it. Building on the limitations of data protection in dealing with the challenges of AI, the author proposes an integrated approach to risk assessment that focuses on human rights and encompasses contextual social and ethical values.

The core of the analysis concerns the assessment methodology and the role of experts in steering the design of AI products and services by business and public bodies in the direction of human rights and societal values.

Taking into account the ongoing debate on AI regulation, the proposed assessment model also bridges the gap between risk-based provisions and their real-world implementation.

The central focus of the book on human rights and societal values in AI and the proposed solutions will make it of interest to legal scholars, AI developers and providers, policy makers and regulators…(More)”.