Regulatory Insights on Artificial Intelligence


Book edited by Mark Findlay, Jolyon Ford, Josephine Seah, and Dilan Thampapillai: “This provocative book investigates the relationship between law and artificial intelligence (AI) governance, and the need for new and innovative approaches to regulating AI and big data in ways that go beyond market concerns alone and look to sustainability and social good.
 
Taking a multidisciplinary approach, the contributors demonstrate the interplay between various research methods and policy motivations to show that law-based regulation and governance of AI is vital to efforts at ensuring justice, trust in administrative and contractual processes, and inclusive social cohesion in our increasingly technologically driven societies. The book provides valuable insights into the new challenges posed by a rapid reliance on AI and big data, from data protection regimes around sensitive personal data, to blockchain and smart contracts, platform data reuse, IP rights and limitations, and many other crucial concerns for law’s interventions. The book also engages with concerns about the ‘surveillance society’, for example regarding contact-tracing technology used during the Covid-19 pandemic.
 
The analytical approach provided will make this an excellent resource for scholars and educators, legal practitioners (from constitutional law to contract law) and policymakers within regulation and governance. The empirical case studies will also be of great interest to scholars of technology law and public policy. The regulatory community will find this collection offers an influential case for law’s relevance in giving institutional enforceability to ethics and principled design…(More)”.

Artificial intelligence is breaking patent law


Article by Alexandra George & Toby Walsh: “In 2020, a machine-learning algorithm helped researchers to develop a potent antibiotic that works against many pathogens (see Nature https://doi.org/ggm2p4; 2020). Artificial intelligence (AI) is also being used to aid vaccine development, drug design, materials discovery, space technology and ship design. Within a few years, numerous inventions could involve AI. This is creating one of the biggest threats patent systems have faced.

Patent law is based on the assumption that inventors are human; it currently struggles to deal with an inventor that is a machine. Courts around the world are wrestling with this problem now, as patent applications naming an AI system as the inventor have been lodged in more than 100 countries. Several groups are conducting public consultations on AI and intellectual property (IP) law, including in the United States, United Kingdom and Europe.

If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge. Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions.

Rather than forcing old patent laws to accommodate new technology, we propose that national governments design bespoke IP law — AI-IP — that protects AI-generated inventions. Nations should also create an international treaty to ensure that these laws follow standardized principles, and that any disputes can be resolved efficiently. Researchers need to inform both steps….(More)”.

The Frontlines of Artificial Intelligence Ethics


Book edited by Andrew J. Hampton and Jeanine A. DeFalco: “This foundational text examines the intersection of AI, psychology, and ethics, laying the groundwork for the importance of ethical considerations in the design and implementation of technologically supported education, decision support, and leadership training.

AI already affects our lives profoundly, in ways both mundane and sensational, obvious and opaque. Much academic and industrial effort has gone into considering the implications of this AI revolution from technical and economic perspectives, but the more personal, humanistic impact of these changes has often been relegated to anecdotal evidence in service to a broader frame of reference. Offering a unique perspective on the emerging social relationships between people and AI agents and systems, Hampton and DeFalco present cutting-edge research from leading academics, professionals, and policy standards advocates on the psychological impact of the AI revolution. Structured into three parts, the book explores the history of data science, technology in education, and combating machine learning bias, as well as future directions for the emerging field, bringing the research into the active consideration of those in positions of authority.

Exploring how AI can support expert, creative, and ethical decision making in both people and virtual human agents, this is essential reading for students, researchers, and professionals in AI, psychology, ethics, engineering education, and leadership, particularly military leadership…(More)”.

How the Pandemic Made Algorithms Go Haywire


Article by Ravi Parikh and Amol Navathe: “Algorithms have always had some trouble getting things right—hence the fact that ads often follow you around the internet for something you’ve already purchased.

But since COVID upended our lives, more of these algorithms have misfired, harming millions of Americans and widening existing financial and health disparities facing marginalized groups. At times, this was because we humans weren’t using the algorithms correctly. More often, it was because COVID changed life in a way that made the algorithms malfunction.

Take, for instance, an algorithm used by dozens of hospitals in the U.S. to identify patients with sepsis—a life-threatening consequence of infection. It was supposed to help doctors speed up transfers to the intensive care unit. But starting in the spring of 2020, the patients who showed up at the hospital suddenly changed because of COVID. Many of the variables that went into the algorithm—oxygen levels, age, comorbid conditions—were completely different during the pandemic. So the algorithm couldn’t effectively discern sicker from healthier patients, and consequently it flagged more than twice as many patients as “sick,” even though hospital capacity was 35 percent lower than normal. The result was presumably more instances of doctors and nurses being summoned to the patient bedside. It’s possible all of these alerts were necessary—after all, more patients were sick. However, it’s also possible that many of them were false alarms because the types of patients showing up at the hospital were different. Either way, the flood of alerts threatened to overwhelm physicians and hospitals. This “alert overload” was discovered months into the pandemic and led the University of Michigan health system to shut down its use of the algorithm…(More)”.
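The Michigan episode is a textbook case of what machine-learning practitioners call distribution shift: a model keeps applying a decision rule calibrated on one patient population after that population changes underneath it. The sketch below is a minimal, purely hypothetical illustration of the mechanism (an invented linear risk score and made-up patient statistics, not the actual sepsis model): a fixed alert threshold tuned on a pre-pandemic patient mix fires far more often once average oxygen levels and comorbidity rates shift.

```python
# Hypothetical sketch of distribution shift in a clinical alerting model.
# The risk score, weights, threshold, and patient statistics are all invented
# for illustration; this is not the sepsis algorithm discussed above.
import random

random.seed(0)

def risk_score(spo2, age, comorbidities):
    # Toy linear score over the kinds of variables the article names.
    return 0.5 * (95 - spo2) + 0.02 * age + 0.8 * comorbidities

def alert_rate(mean_spo2, mean_comorbidities, threshold=4.0, n=100_000):
    # Fraction of simulated patients whose score exceeds the fixed threshold.
    alerts = 0
    for _ in range(n):
        spo2 = random.gauss(mean_spo2, 3)                    # blood-oxygen saturation (%)
        age = random.gauss(65, 15)                           # years
        com = max(0.0, random.gauss(mean_comorbidities, 1))  # comorbidity count
        if risk_score(spo2, age, com) > threshold:           # threshold calibrated pre-pandemic
            alerts += 1
    return alerts / n

# Same model, same threshold; only the incoming patient mix changes.
before = alert_rate(mean_spo2=94, mean_comorbidities=1.0)
during = alert_rate(mean_spo2=91, mean_comorbidities=1.8)
print(f"alert rate before shift: {before:.1%}, after shift: {during:.1%}")
```

The numbers here are arbitrary; the point is the mechanism. The weights and threshold stay fixed while the population they were fitted to moves, so the alert volume balloons even though the model is never retrained or changed.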

Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society


Book by Giovanni De Gregorio: “This book is about rights and powers in the digital age. It is an attempt to reframe the role of constitutional democracies in the algorithmic society. Taking the European constitutional framework as a lodestar, this book examines the rise and consolidation of digital constitutionalism as a reaction to digital capitalism. The primary goal is to examine how European digital constitutionalism can protect fundamental rights and democratic values against the charm of digital liberalism and the challenges raised by platform powers. Firstly, this book investigates the reasons leading to the development of digital constitutionalism in Europe. Secondly, it provides a normative framework analysing the extent to which European constitutionalism provides an architecture to protect rights and limit the exercise of unaccountable powers in the algorithmic society….(More)”.

To make AI fair, here’s what we must learn to do


Article by Mona Sloane: “…From New York City to California and the European Union, many artificial intelligence (AI) regulations are in the works. The intent is to promote equity, accountability and transparency, and to avoid tragedies similar to the Dutch childcare-benefits scandal.

But these won’t be enough to make AI equitable. There must also be practical know-how on building AI so that it does not exacerbate social inequality. In my view, that means setting out clear ways for social scientists, affected communities and developers to work together.

Right now, developers who design AI work in different realms from the social scientists who can anticipate what might go wrong. As a sociologist focusing on inequality and technology, I rarely get to have a productive conversation with a technologist, or with my fellow social scientists, that moves beyond flagging problems. When I look through conference proceedings, I see the same: very few projects integrate social needs with engineering innovation.

To spur fruitful collaborations, mandates and approaches need to be designed more effectively. Here are three principles that technologists, social scientists and affected communities can apply together to yield AI applications that are less likely to warp society.

Include lived experience. Vague calls for broader participation in AI systems miss the point. Nearly everyone interacting online — using Zoom or clicking reCAPTCHA boxes — is feeding into AI training data. The goal should be to get input from the most relevant participants.

Otherwise, we risk participation-washing: superficial engagement that perpetuates inequality and exclusion. One example is the EU AI Alliance: an online forum, open to anyone, designed to provide democratic feedback to the European Commission’s appointed expert group on AI. When I joined in 2018, it was an unmoderated echo chamber of mostly men exchanging opinions, not representative of the population of the EU, the AI industry or relevant experts…(More)”.

Radically Human: How New Technology Is Transforming Business and Shaping Our Future


Book by Paul Daugherty and H. James Wilson: “Technology advances are making tech more . . . human. This changes everything you thought you knew about innovation and strategy. In their groundbreaking book, “Human + Machine,” Accenture technology leaders Paul R. Daugherty and H. James Wilson showed how leading organizations use the power of human-machine collaboration to transform their processes and their bottom lines. Now, as new AI-powered technologies like the metaverse, natural language processing, and digital twins begin to rapidly impact both life and work, those companies and other pioneers across industries are tipping the balance even more strikingly toward the human side with technology-led strategy that is reshaping the very nature of innovation.

In “Radically Human,” Daugherty and Wilson show this profound shift, fast-forwarded by the pandemic, toward more human–and more humane–technology. Artificial intelligence is becoming less artificial and more intelligent. Instead of data-hungry approaches to AI, innovators are pursuing data-efficient approaches that enable machines to learn as humans do. Instead of replacing workers with machines, they’re unleashing human expertise to create human-centered AI. In place of lumbering legacy IT systems, they’re building cloud-first IT architectures able to continuously adapt to a world of billions of connected devices. And they’re pursuing strategies that will take their place alongside classic, winning business formulas like disruptive innovation.

These against-the-grain approaches to the basic building blocks of business–Intelligence, Data, Expertise, Architecture, and Strategy (IDEAS)–are transforming competition. Industrial giants and startups alike are drawing on this radically human IDEAS framework to create new business models, optimize post-pandemic approaches to work and talent, rebuild trust with their stakeholders, and show the way toward a sustainable future….(More)”.

Governing AI to Advance Shared Prosperity


Chapter by Ekaterina Klinova: “This chapter describes a governance approach to promoting AI research and development that creates jobs and advances shared prosperity. Concerns over the labor-saving focus of AI advancement are shared by a growing number of economists, technologists, and policymakers around the world. They warn about the risk of AI entrenching poverty and inequality globally. Yet translating those concerns into proactive governance interventions that would steer AI away from generating excessive levels of automation remains difficult and largely unattempted. Key causes of this difficulty arise from two types of sources: (1) insufficiently deep understanding of the full composition of factors giving AI R&D its present emphasis on labor-saving applications; and (2) lack of tools and processes that would enable AI practitioners and policymakers to anticipate and assess the impact of AI technologies on employment, wages and job quality. This chapter argues that addressing (2) will require creating worker-participatory means of differentiating between genuinely worker-benefiting AI and worker-displacing or worker-exploiting AI. To contribute to tackling (1), this chapter reviews AI practitioners’ motivations and constraints, such as relevant laws and market incentives, as well as less tangible but still highly influential constraining and motivating factors, including explicit and implicit norms in the AI field, visions of future societal order popular among the field’s members, and ways that AI practitioners define goals worth pursuing and measure success. I highlight how each of these factors contributes meaningfully to giving AI advancement its excessive labor-saving emphasis and describe opportunities for governance interventions that could correct that overemphasis….(More)”.

AI & Society


Special Issue of Daedalus edited by James Manyika: “AI is transforming our relationships with technology and with others, our senses of self, as well as our approaches to health care, banking, democracy, and the courts. But while AI in its many forms has become ubiquitous and its benefits to society and the individual have grown, its impacts are varied. Concerns about its unintended effects and misuses have become paramount in conversations about the successful integration of AI in society. This volume explores the many facets of artificial intelligence: its technology, its potential futures, its effects on labor and the economy, its relationship with inequalities, its role in law and governance, its challenges to national security, and what it says about us as humans…(More)” See also https://aiethicscourse.org/

Artificial intelligence is creating a new colonial world order


Series by Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Māori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.