Air Canada chatbot promised a discount. Now the airline has to pay it


Article by Kyle Melnick: “After his grandmother died in Ontario a few years ago, British Columbia resident Jake Moffatt visited Air Canada’s website to book a flight for the funeral. He received assistance from a chatbot, which told him the airline offered reduced rates for passengers booking last-minute travel due to tragedies.

Moffatt bought a nearly $600 ticket for a next-day flight after the chatbot said he would get some of his money back under the airline’s bereavement policy as long as he applied within 90 days, according to a recent civil-resolutions tribunal decision.

But when Moffatt later attempted to receive the discount, he learned that the chatbot had been wrong. Air Canada only awarded bereavement fares if the request had been submitted before a flight. The airline later argued the chatbot was a separate legal entity “responsible for its own actions,” the decision said.

Moffatt filed a claim with the Canadian tribunal, which ruled Wednesday that Air Canada owed Moffatt more than $600 in damages and tribunal fees after failing to provide “reasonable care.”

As companies have added artificial intelligence-powered chatbots to their websites in hopes of providing faster service, the Air Canada dispute sheds light on issues associated with the growing technology and how courts could approach questions of accountability. The Canadian tribunal in this case came down on the side of the customer, ruling that Air Canada did not ensure its chatbot was accurate…(More)”

Community views on the secondary use of general practice data: Findings from a mixed-methods study


Paper by Annette J. Braunack-Mayer et al: “General practice data, particularly when combined with hospital and other health service data through data linkage, are increasingly being used for quality assurance, evaluation, health service planning and research. Using general practice data is particularly important in countries where general practitioners (GPs) are the first and principal source of health care for most people.

Although there is broad public support for the secondary use of health data, there are good reasons to question whether this support extends to general practice settings. GP–patient relationships may be very personal and longstanding and the general practice health record can capture a large amount of information about patients. There is also the potential for multiple angles on patients’ lives: GPs often care for, or at least record information about, more than one generation of a family. These factors combine to amplify patients’ and GPs’ concerns about sharing patient data….

Adams et al. have developed a model of social licence, specifically in the context of sharing administrative data for health research, based on an analysis of the social licence literature and founded on two principal elements: trust and legitimacy. In this model, trust is founded on research enterprises being perceived as reliable and responsive, including in relation to privacy and security of information, and having regard to the community’s interests and well-being.

Transparency and accountability measures may be used to demonstrate trustworthiness and, as a consequence, to generate trust. Transparency involves a level of openness about the way data are handled and used as well as about the nature and outcomes of the research. Adams et al. note that lack of transparency can undermine trust. They also note that the quality of public engagement is important and that simply providing information is not sufficient. While this is one element of transparency, other elements such as accountability and collaboration are also part of the trusting, reflexive relationship necessary to establish and support social licence.

The second principal element, legitimacy, is founded on research enterprises conforming to the legal, cultural and social norms of society and, again, acting in the best interests of the community. In diverse communities with a range of views and interests, it is necessary to develop a broad consensus on what amounts to the common good through deliberative and collaborative processes.

Social licence cannot be assumed. It must be built through public discussion and engagement to avoid undermining the relationship of trust with health care providers and confidence in the confidentiality of health information…(More)”

Data, Privacy Laws and Firm Production: Evidence from the GDPR


Paper by Mert Demirer, Diego J. Jiménez Hernández, Dean Li & Sida Peng: “By regulating how firms collect, store, and use data, privacy laws may change the role of data in production and alter firm demand for information technology inputs. We study how firms respond to privacy laws in the context of the EU’s General Data Protection Regulation (GDPR) by using seven years of data from a large global cloud-computing provider. Our difference-in-differences estimates indicate that, in response to the GDPR, EU firms decreased data storage by 26% and data processing by 15% relative to comparable US firms, becoming less “data-intensive.” To estimate the costs of the GDPR for firms, we propose and estimate a production function where data and computation serve as inputs to the production of “information.” We find that data and computation are strong complements in production and that firm responses are consistent with the GDPR representing a 20% increase in the cost of data on average. Variation in the firm-level effects of the GDPR and industry-level exposure to data, however, drives significant heterogeneity in our estimates of the impact of the GDPR on production costs…(More)”
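To make the research design concrete, here is a minimal sketch of the kind of difference-in-differences comparison the abstract describes, run on synthetic firm-level panel data. The variable names, simulated data, and specification are illustrative assumptions, not the paper’s actual data or model:

```python
# Sketch of a difference-in-differences estimate of a GDPR-style effect
# on firm data storage. Synthetic data; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_periods = 200, 8                      # 4 pre- and 4 post-GDPR periods
firms = pd.DataFrame({
    "firm_id": np.arange(n_firms),
    "eu": rng.integers(0, 2, n_firms),           # 1 = EU firm, 0 = comparable US firm
})
panel = firms.merge(pd.DataFrame({"t": np.arange(n_periods)}), how="cross")
panel["post"] = (panel["t"] >= 4).astype(int)    # regulation in force from t = 4 on
# Simulate log data storage with a common trend and a -0.30 log-point
# (roughly -26%) effect on EU firms after the regulation takes effect.
panel["log_storage"] = (
    5.0 + 0.3 * panel["eu"] + 0.05 * panel["t"]
    - 0.30 * panel["eu"] * panel["post"]
    + rng.normal(0, 0.2, len(panel))
)
# The eu:post interaction is the difference-in-differences estimate.
fit = smf.ols("log_storage ~ eu * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm_id"]}
)
print(fit.params["eu:post"])                     # about -0.30, i.e. a ~26% decline
```

The identifying assumption, as in the paper’s EU-versus-US comparison, is that treated and control firms would have followed parallel trends absent the regulation; the common trend term differences out of the interaction coefficient.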

University of Michigan Sells Recordings of Study Groups and Office Hours to Train AI


Article by Joseph Cox: “The University of Michigan is selling hours of audio recordings of study groups, office hours, lectures, and more to outside third parties for tens of thousands of dollars for the purpose of training large language models (LLMs). 404 Media has downloaded a sample of the data, which includes a one-hour-and-20-minute audio recording of what appears to be a lecture.

The news highlights how some LLMs may ultimately be trained on data with an unclear level of consent from the source subjects…(More)”.

Could AI Speak on Behalf of Future Humans?


Article by Konstantin Scheuermann & Angela Aristidou: “An enduring societal challenge the world over is a “perspective deficit” in collective decision-making. Whether within a single business, at the local community level, or the international level, some perspectives are not (adequately) heard and may not receive fair and inclusive representation during collective decision-making discussions and procedures. Most notably, future generations of humans and aspects of the natural environment may be deeply affected by present-day collective decisions. Yet, they are often “voiceless” as they cannot advocate for their interests.

Today, as we witness the rapid integration of artificial intelligence (AI) systems into the everyday fabric of our societies, we recognize the potential in some AI systems to surface and/or amplify the perspectives of these previously voiceless stakeholders. Some classes of AI systems, notably Generative AI (e.g., ChatGPT, Llama, Gemini), are capable of acting as the proxy of the previously unheard by generating multi-modal outputs (audio, video, and text).

We refer to these outputs collectively here as “AI Voice,” signifying that the previously unheard in decision-making scenarios gain opportunities to express their interests—in other words, voice—through the human-friendly outputs of these AI systems. AI Voice, however, cannot realize its promise without first challenging how voice is given and withheld in our collective decision-making processes and how the new technology may and does unsettle the status quo. There is also an important distinction between the “right to voice” and the “right to decide” when considering the roles AI Voice may assume—ranging from a passive facilitator to an active collaborator. This is one highly promising and feasible possibility for how to leverage AI to create a more equitable collective future, but to do so responsibly will require careful strategy and much further conversation…(More)”.

Handbook of Artificial Intelligence at Work


Book edited by Martha Garcia-Murillo and Andrea Renda: “With the advancement in processing power and storage now enabling algorithms to expand their capabilities beyond their initial narrow applications, technology is becoming increasingly powerful. This highly topical Handbook provides a comprehensive overview of the impact of Artificial Intelligence (AI) on work, assessing its effect on an array of economic sectors, the resulting nature of work, and the subsequent policy implications of these changes.

Featuring contributions from leading experts across diverse fields, the Handbook of Artificial Intelligence at Work takes an interdisciplinary approach to understanding AI’s connections to existing economic, social, and political ecosystems. Considering a range of fields including agriculture, manufacturing, health care, education, law and government, the Handbook provides detailed sector-specific analyses of how AI is changing the nature of work, the challenges it presents and the opportunities it creates. Looking forward, it makes policy recommendations to address concerns, such as the potential displacement of some human labor by AI and growth in inequality affecting those lacking the necessary skills to interact with these technologies or without opportunities to do so.

This vital Handbook is an essential read for students and academics in the fields of business and management, information technology, AI, and public policy. It will also be highly informative from a cross-disciplinary perspective for practitioners, as well as policy makers with an interest in the development of AI technology…(More)”

Data Science, AI and Data Philanthropy in Foundations: On the Path to Maturity


Report by Filippo Candela, Sevda Kilicalp, and Daniel Spiers: “This research explores the data-related initiatives currently undertaken by a pool of foundations from across Europe. To the authors’ knowledge, this is the first study that has investigated the level of data work within philanthropic foundations, even though the rise of data and its importance has increasingly been recognised in the non-profit sector. Given that this is an inaugural piece of research, the study takes an exploratory approach, prioritising a comprehensive survey of data practices foundations are currently implementing or exploring. The goal was to obtain a snapshot of the current level of maturity and commitment of foundations regarding data-related matters…(More)”

Language Machinery


Essay by Richard Hughes Gibson: “… current debates about writing machines are not as fresh as they seem. As is quietly acknowledged in the footnotes of scientific papers, much of the intellectual infrastructure of today’s advances was laid decades ago. In the 1940s, the mathematician Claude Shannon demonstrated that language use could be both described by statistics and imitated with statistics, whether those statistics were in human heads or a machine’s memory. Shannon, in other words, was the first statistical language modeler, which makes ChatGPT and its ilk his distant brainchildren. Shannon never tried to build such a machine, but some astute early readers of his work recognized that computers were primed to translate his paper-and-ink experiments into a powerful new medium. In writings now discussed largely in niche scholarly and computing circles, these readers imagined—and even made preliminary sketches of—machines that would translate Shannon’s proposals into reality. These readers likewise raised questions about the meaning of such machines’ outputs and wondered what the machines revealed about our capacity to write.

The current barrage of commentary has largely neglected this backstory, and our discussions suffer for forgetting that issues that appear novel to us belong to the mid-twentieth century. Shannon and his first readers were the original residents of the headspace in which so many of us now find ourselves. Their ambitions and insights have left traces on our discourse, just as their silences and uncertainties haunt our exchanges. If writing machines constitute a “philosophical event” or a “prompt for philosophizing,” then I submit that we are already living in the event’s aftermath, which is to say, in Shannon’s aftermath. Amid the rampant speculation about a future dominated by writing machines, I propose that we turn in the other direction to listen to field reports from some of the first people to consider what it meant to read and write in Shannon’s world…(More)”.
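As a rough illustration of what it means to describe language with statistics and then imitate it with those same statistics, the sketch below builds word-bigram counts from a scrap of text and samples from them. Shannon ran such experiments by hand with books and frequency tables; the code, and the use of his own published “second-order word approximation” as stand-in training data, are assumptions for demonstration only:

```python
# A minimal statistical language model in Shannon's sense: estimate
# P(next word | current word) from a text, then generate by sampling.
import random
from collections import defaultdict

text = (
    "the head and in frontal attack on an english writer that the "
    "character of this point is therefore another method for the letters"
).split()  # stand-in corpus (Shannon's second-order word approximation)

# Count bigram successors: the empirical distribution of what follows each word.
successors = defaultdict(list)
for w1, w2 in zip(text, text[1:]):
    successors[w1].append(w2)

def generate(start: str, n: int, seed: int = 0) -> str:
    """Imitate the source by repeatedly sampling a next word from the
    words observed to follow the current one."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        nxt = successors.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the", 12))
```

Scaled up from bigrams over a toy corpus to learned distributions over vast ones, this sampling loop is the family resemblance between Shannon’s paper-and-ink experiments and today’s statistical language models.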

Copyright Policy Options for Generative Artificial Intelligence


Paper by Joshua S. Gans: “New generative artificial intelligence (AI) models, including large language models and image generators, have created new challenges for copyright policy as such models may be trained on data that includes copy-protected content. This paper examines this issue from an economics perspective and analyses how different copyright regimes for generative AI will impact the quality of content generated as well as the quality of AI training. A key factor is whether generative AI models are small (with content providers capable of negotiations with AI providers) or large (where negotiations are prohibitive). For small AI models, it is found that giving original content providers copyright protection leads to superior social welfare outcomes compared to having no copyright protection. For large AI models, this comparison is ambiguous and depends on the level of potential harm to original content providers and the importance of content for AI training quality. However, it is demonstrated that an ex-post “fair use” type mechanism can lead to higher expected social welfare than traditional copyright regimes…(More)”.
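A toy numerical illustration, not the paper’s formal model, conveys the flavor of the ex-post comparison for large models: suppose training on protected content yields a fixed social value and imposes a random harm on content providers. Strict copyright with prohibitive negotiation forecloses use entirely; no protection permits every use regardless of harm; an ex-post fair-use rule lets only uses whose realized value exceeds realized harm stand. All numbers below are assumptions:

```python
# Toy expected-welfare comparison across three regimes. Entirely
# illustrative parameter choices; not the model in the paper.
import numpy as np

rng = np.random.default_rng(42)
V = 1.0                                        # social value of training on the content
H = rng.exponential(scale=0.8, size=100_000)   # random realized harm to content providers

welfare_no_copyright = np.mean(V - H)                # use always occurs, harm uninternalized
welfare_strict = 0.0                                 # negotiation prohibitive: no use at all
welfare_fair_use = np.mean(np.maximum(V - H, 0.0))   # ex post, only net-beneficial uses stand

print(f"no copyright:       {welfare_no_copyright:+.3f}")
print(f"strict copyright:   {welfare_strict:+.3f}")
print(f"ex-post fair use:   {welfare_fair_use:+.3f}")
```

In this stylized setup the ex-post rule dominates both corner regimes because it screens uses on realized rather than expected harm, which is the intuition the abstract points to.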

How Health Data Integrity Can Earn Trust and Advance Health


Article by Jochen Lennerz, Nick Schneider and Karl Lauterbach: “Efforts to share health data across borders snag on legal and regulatory barriers. Before detangling the fine print, let’s agree on overarching principles.

Imagine a scenario in which Mary, an individual with a rare disease, has agreed to share her medical records for a research project aimed at finding better treatments for genetic disorders. Mary’s consent is grounded in trust that her data will be handled with the utmost care, protected from unauthorized access, and used according to her wishes. 

It may sound simple, but meeting these standards comes with myriad complications. Whose job is it to weigh the risk that Mary might be reidentified, even if her information is de-identified and stored securely? How should that assessment be done? How can data from Mary’s records be aggregated with data from patients in health systems in other countries, each with their own requirements for data protection and formats for record keeping? How can Mary’s wishes be respected, both in terms of what research is conducted and in returning relevant results to her?

From electronic medical records to genomic sequencing, health care providers and researchers now have an unprecedented wealth of information that could help tailor treatments to individual needs, revolutionize understanding of disease, and enhance the overall quality of health care. Data protection, privacy safeguards, and cybersecurity are all paramount for safeguarding sensitive medical information, but much of the potential that lies in this abundance of data is being lost because well-intentioned regulations have not been set up to allow for data sharing and collaboration. This stymies efforts to study rare diseases, map disease patterns, improve public health surveillance, and advance evidence-based policymaking (for instance, by comparing effectiveness of interventions across regions and demographics). Projects that could excel with enough data get bogged down in bureaucracy and uncertainty. For example, Germany now has strict data protection laws—with heavy punishment for violations—that should allow de-identified health insurance claims to be used for research within secure processing environments, but the legality of such use has been challenged…(More)”.