Air Canada chatbot promised a discount. Now the airline has to pay it


Article by Kyle Melnick: “After his grandmother died in Ontario a few years ago, British Columbia resident Jake Moffatt visited Air Canada’s website to book a flight for the funeral. He received assistance from a chatbot, which told him the airline offered reduced rates for passengers booking last-minute travel due to tragedies.

Moffatt bought a nearly $600 ticket for a next-day flight after the chatbot said he would get some of his money back under the airline’s bereavement policy as long as he applied within 90 days, according to a recent civil-resolution tribunal decision.

But when Moffatt later attempted to receive the discount, he learned that the chatbot had been wrong. Air Canada only awarded bereavement fares if the request had been submitted before a flight. The airline later argued the chatbot was a separate legal entity “responsible for its own actions,” the decision said.

Moffatt filed a claim with British Columbia’s Civil Resolution Tribunal, which ruled Wednesday that Air Canada owed Moffatt more than $600 in damages and tribunal fees after failing to provide “reasonable care.”

As companies have added artificial intelligence-powered chatbots to their websites in hopes of providing faster service, the Air Canada dispute sheds light on issues associated with the growing technology and how courts could approach questions of accountability. The Canadian tribunal in this case came down on the side of the customer, ruling that Air Canada did not ensure its chatbot was accurate…(More)”

University of Michigan Sells Recordings of Study Groups and Office Hours to Train AI


Article by Joseph Cox: “The University of Michigan is selling hours of audio recordings of study groups, office hours, lectures, and more to outside third parties for tens of thousands of dollars for the purpose of training large language models (LLMs). 404 Media has downloaded a sample of the data, which includes a one-hour-and-20-minute audio recording of what appears to be a lecture.

The news highlights how some LLMs may ultimately be trained on data with an unclear level of consent from the source subjects…(More)”.

Could AI Speak on Behalf of Future Humans?


Article by Konstantin Scheuermann & Angela Aristidou: “An enduring societal challenge the world over is a “perspective deficit” in collective decision-making. Whether within a single business, at the local community level, or the international level, some perspectives are not (adequately) heard and may not receive fair and inclusive representation during collective decision-making discussions and procedures. Most notably, future generations of humans and aspects of the natural environment may be deeply affected by present-day collective decisions. Yet, they are often “voiceless” as they cannot advocate for their interests.

Today, as we witness the rapid integration of artificial intelligence (AI) systems into the everyday fabric of our societies, we recognize the potential in some AI systems to surface and/or amplify the perspectives of these previously voiceless stakeholders. Some classes of AI systems, notably Generative AI (e.g., ChatGPT, Llama, Gemini), are capable of acting as a proxy for the previously unheard by generating multi-modal outputs (audio, video, and text).

We refer to these outputs collectively here as “AI Voice,” signifying that the previously unheard in decision-making scenarios gain opportunities to express their interests—in other words, voice—through the human-friendly outputs of these AI systems. AI Voice, however, cannot realize its promise without first challenging how voice is given and withheld in our collective decision-making processes and how the new technology may and does unsettle the status quo. There is also an important distinction between the “right to voice” and the “right to decide” when considering the roles AI Voice may assume—ranging from a passive facilitator to an active collaborator. This is one highly promising and feasible possibility for how to leverage AI to create a more equitable collective future, but to do so responsibly will require careful strategy and much further conversation…(More)”.

Handbook of Artificial Intelligence at Work


Book edited by Martha Garcia-Murillo and Andrea Renda: “With advances in processing power and storage now enabling algorithms to expand their capabilities beyond their initial narrow applications, technology is becoming increasingly powerful. This highly topical Handbook provides a comprehensive overview of the impact of Artificial Intelligence (AI) on work, assessing its effect on an array of economic sectors, the resulting nature of work, and the subsequent policy implications of these changes.

Featuring contributions from leading experts across diverse fields, the Handbook of Artificial Intelligence at Work takes an interdisciplinary approach to understanding AI’s connections to existing economic, social, and political ecosystems. Considering a range of fields including agriculture, manufacturing, health care, education, law and government, the Handbook provides detailed sector-specific analyses of how AI is changing the nature of work, the challenges it presents and the opportunities it creates. Looking forward, it makes policy recommendations to address concerns, such as the potential displacement of some human labor by AI and growth in inequality affecting those lacking the necessary skills to interact with these technologies or without opportunities to do so.

This vital Handbook is an essential read for students and academics in the fields of business and management, information technology, AI, and public policy. It will also be highly informative from a cross-disciplinary perspective for practitioners, as well as policy makers with an interest in the development of AI technology…(More)”

Language Machinery


Essay by Richard Hughes Gibson: “… current debates about writing machines are not as fresh as they seem. As is quietly acknowledged in the footnotes of scientific papers, much of the intellectual infrastructure of today’s advances was laid decades ago. In the 1940s, the mathematician Claude Shannon demonstrated that language use could be both described by statistics and imitated with statistics, whether those statistics were in human heads or a machine’s memory. Shannon, in other words, was the first statistical language modeler, which makes ChatGPT and its ilk his distant brainchildren. Shannon never tried to build such a machine, but some astute early readers of his work recognized that computers were primed to translate his paper-and-ink experiments into a powerful new medium. In writings now discussed largely in niche scholarly and computing circles, these readers imagined—and even made preliminary sketches of—machines that would translate Shannon’s proposals into reality. These readers likewise raised questions about the meaning of such machines’ outputs and wondered what the machines revealed about our capacity to write.

The current barrage of commentary has largely neglected this backstory, and our discussions suffer for forgetting that issues that appear novel to us belong to the mid-twentieth century. Shannon and his first readers were the original residents of the headspace in which so many of us now find ourselves. Their ambitions and insights have left traces on our discourse, just as their silences and uncertainties haunt our exchanges. If writing machines constitute a “philosophical event” or a “prompt for philosophizing,” then I submit that we are already living in the event’s aftermath, which is to say, in Shannon’s aftermath. Amid the rampant speculation about a future dominated by writing machines, I propose that we turn in the other direction to listen to field reports from some of the first people to consider what it meant to read and write in Shannon’s world…(More)”.
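(An aside from us, not from Gibson’s essay: Shannon’s paper-and-ink approximations are easy to reproduce today. Below is a minimal sketch of a word-level bigram generator in the spirit of his 1948 experiments, first describing language with transition statistics and then imitating it by sampling from them; the corpus and names are illustrative, not drawn from Shannon’s paper.)

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Describe the text statistically: record which word follows which."""
    words = text.lower().split()
    transitions = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        transitions[prev].append(nxt)
    return transitions

def generate(transitions, start, length=20):
    """Imitate the text statistically: repeatedly sample an observed successor."""
    out = [start]
    for _ in range(length - 1):
        successors = transitions.get(out[-1])
        if not successors:  # no observed successor; stop early
            break
        out.append(random.choice(successors))
    return " ".join(out)

# Illustrative toy corpus; Shannon worked from printed English and published letter tables.
corpus = "the cat sat on the mat and the dog sat on the log while the cat slept"
model = train_bigram_model(corpus)
print(generate(model, start="the"))
```

Scaled up from bigrams over a toy corpus to transformers over the web, this describe-then-imitate recipe is the continuity Gibson traces between Shannon and ChatGPT.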

Copyright Policy Options for Generative Artificial Intelligence


Paper by Joshua S. Gans: “New generative artificial intelligence (AI) models, including large language models and image generators, have created new challenges for copyright policy as such models may be trained on data that includes copy-protected content. This paper examines this issue from an economics perspective and analyses how different copyright regimes for generative AI will impact the quality of content generated as well as the quality of AI training. A key factor is whether generative AI models are small (with content providers capable of negotiations with AI providers) or large (where negotiations are prohibitive). For small AI models, it is found that giving original content providers copyright protection leads to superior social welfare outcomes compared to having no copyright protection. For large AI models, this comparison is ambiguous and depends on the level of potential harm to original content providers and the importance of content for AI training quality. However, it is demonstrated that an ex-post “fair use”-type mechanism can lead to higher expected social welfare than traditional copyright regimes…(More)”.

Computing Power and the Governance of AI


Blog by Lennart Heim, Markus Anderljung, Emma Bluemke, and Robert Trager: “Computing power – compute for short – is a key driver of AI progress. Over the past thirteen years, the amount of compute used to train leading AI systems has increased by a factor of 350 million. This has enabled the major AI advances that have recently gained global attention.
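(To put that figure in perspective, here is a quick back-of-the-envelope calculation, ours rather than the authors’; the only inputs are the numbers in the sentence above.)

```python
import math

# Figures from the post: compute for leading AI systems grew
# by a factor of 350 million over thirteen years.
total_factor = 350_000_000
years = 13

# Implied compound annual growth and doubling time.
annual_factor = total_factor ** (1 / years)
doubling_months = 12 * math.log(2) / math.log(annual_factor)

print(f"~{annual_factor:.1f}x per year")                 # ~4.5x per year
print(f"doubling every ~{doubling_months:.1f} months")   # ~5.5 months
```

In other words, the post’s figure implies that frontier training compute has grown roughly 4.5-fold each year, doubling about every six months.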

Governments have taken notice. They are increasingly engaged in compute governance: using compute as a lever to pursue AI policy goals, such as limiting misuse risks, supporting domestic industries, or engaging in geopolitical competition. 

There are at least three ways compute can be used to govern AI. Governments can: 

  • Track or monitor compute to gain visibility into AI development and use
  • Subsidize or limit access to compute to shape the allocation of resources across AI projects
  • Monitor activity, limit access, or build “guardrails” into hardware to enforce rules

Compute governance is a particularly important approach to AI governance because it is feasible. Compute is detectable: training advanced AI systems requires tens of thousands of highly advanced AI chips, which cannot be acquired or used inconspicuously. It is excludable: AI chips, being physical goods, can be given to or taken away from specific actors, including for specific uses. And it is quantifiable: chips, their features, and their usage can be measured. Compute’s detectability and excludability are further enhanced by the highly concentrated structure of the AI supply chain: very few companies are capable of producing the tools needed to design advanced chips, the machines needed to make them, or the data centers that house them.

However, just because compute can be used as a tool to govern AI doesn’t mean that it should be used in all cases. Compute governance is a double-edged sword, with both potential benefits and the risk of negative consequences: it can support widely shared goals like safety, but it can also be used to infringe on civil liberties, perpetuate existing power structures, and entrench authoritarian regimes. Indeed, some things are better ungoverned. 

In our paper we argue that compute is a particularly promising node for AI governance. We also highlight the risks of compute governance and offer suggestions for how to mitigate them. This post summarizes our findings and key takeaways, while also offering some of our own commentary…(More)”

AI is too important to be monopolised


Article by Marietje Schaake: “…From the promise of medical breakthroughs to the perils of election interference, the hopes of helpful climate research to the challenge of cracking fundamental physics, AI is too important to be monopolised.

Yet the market is moving in exactly that direction, as resources and talent to develop the most advanced AI sit firmly in the hands of a very small number of companies. That is particularly true for resource-intensive data and computing power (termed “compute”), which are required to train large language models for a variety of AI applications. Researchers and small and medium-sized enterprises risk fatal dependency on Big Tech once again, or else they will miss out on the latest wave of innovation. 

On both sides of the Atlantic, feverish public investments are being made in an attempt to level the computational playing field. To ensure scientists have access to capacities comparable to those of Silicon Valley giants, the US government established the National AI Research Resource last month. This pilot project is being led by the US National Science Foundation. By working with 10 other federal agencies and 25 civil society groups, it will facilitate access to government-funded data and compute to help the research and education community build and understand AI. 

The EU set up a decentralised network of supercomputers with a similar aim back in 2018, before the recent wave of generative AI created a new sense of urgency. The EuroHPC has lived in relative obscurity and the initiative appears to have been under-exploited. As European Commission president Ursula von der Leyen said late last year, we need to put this power to use. The EU now imagines that democratised supercomputer access can also help with the creation of “AI factories,” where small businesses pool their resources to develop new cutting-edge models. 

There has long been talk of considering access to the internet a public utility, because of how important it is for education, employment and acquiring information. Yet rules to that end were never adopted. But with the unlocking of compute as a shared good, the US and the EU are showing real willingness to make investments in public digital infrastructure.

Even if the latest measures are viewed as industrial policy in a new jacket, they are part of a long overdue step to shape the digital market and offset the outsized power of big tech companies in various corners of our societies…(More)”.

Applying AI to Rebuild Middle Class Jobs


Paper by David Autor: “While the utopian vision of the current Information Age was that computerization would flatten economic hierarchies by democratizing information, the opposite has occurred. Information, it turns out, is merely an input into a more consequential economic function, decision-making, which is the province of elite experts. The unique opportunity that AI offers to the labor market is to extend the relevance, reach, and value of human expertise. Because of AI’s capacity to weave information and rules with acquired experience to support decision-making, it can be applied to enable a larger set of workers possessing complementary knowledge to perform some of the higher-stakes decision-making tasks that are currently arrogated to elite experts, e.g., medical care to doctors, document production to lawyers, software coding to computer engineers, and undergraduate education to professors. My thesis is not a forecast but an argument about what is possible: AI, if used well, can assist with restoring the middle-skill, middle-class heart of the US labor market that has been hollowed out by automation and globalization…(More)”.

AI cannot be used to deny health care coverage, feds clarify to insurers


Article by Beth Mole: “Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege…(More)”