AI as a Public Good: Ensuring Democratic Control of AI in the Information Space


Report by the Forum on Information and Democracy: “…The report outlines key recommendations to governments, the industry and relevant stakeholders, notably:

  • Foster the creation of a tailored certification system for AI companies inspired by the success of the Fair Trade certification system.
  • Establish standards governing content authenticity and provenance, including for author authentication.
  • Implement a comprehensive legal framework that clearly defines the rights of individuals, including the right to be informed, to receive an explanation, to challenge a machine-generated outcome, and to non-discrimination.
  • Provide users with an easy, user-friendly way to choose alternative recommender systems that do not optimize for engagement but build on ranking in support of positive individual and societal outcomes, such as reliable information, bridging content or diversity of information (see the sketch after this list).
  • Set up a participatory process to determine the rules and criteria guiding dataset provenance and curation, human labeling for AI training, alignment, and red-teaming to build inclusive, non-discriminatory and transparent AI systems…(More)”.
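To make the recommender-system recommendation concrete, here is a minimal sketch of what ranking for positive outcomes rather than engagement could look like. It is an illustration only, not anything specified in the report: the `reliability` and `topic` fields stand in for whatever trust and diversity signals a platform actually has.

```python
# Illustrative only: a greedy re-ranker that ignores predicted engagement
# and instead favors reliable sources while spreading results across topics.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topic: str
    reliability: float  # hypothetical 0..1 trust signal (e.g., source vetting)
    engagement: float   # predicted clicks -- deliberately unused below

def rerank(items: list[Item], k: int = 5) -> list[Item]:
    """Pick k items, preferring high reliability and unseen topics."""
    ranked: list[Item] = []
    seen_topics: set[str] = set()
    pool = sorted(items, key=lambda i: i.reliability, reverse=True)
    while pool and len(ranked) < k:
        # Diversity first: take the most reliable item from a topic not yet shown.
        pick = next((i for i in pool if i.topic not in seen_topics), pool[0])
        ranked.append(pick)
        seen_topics.add(pick.topic)
        pool.remove(pick)
    return ranked
```

The point of the sketch is the objective: nothing in `rerank` consults the engagement score, which is precisely the design choice the report's recommendation calls for.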

The AI project pushing local languages to replace French in Mali’s schools


Article by Annie Risemberg and Damilare Dosunmu: “For the past six months, Alou Dembele, a 27-year-old engineer and teacher, has spent his afternoons reading storybooks with children in the courtyard of a community school in Mali’s capital city, Bamako. The books are written in Bambara — Mali’s most widely spoken language — and include colorful pictures and stories based on local culture. Dembele has over 100 Bambara books to pick from — an unimaginable educational resource just a year ago.

From 1960 to 2023, French was Mali’s official language. But in June last year, the military government dropped it in favor of 13 local languages, creating a desperate need for new educational materials.

Artificial intelligence came to the rescue: RobotsMali, a government-backed initiative, used tools like ChatGPT, Google Translate, and the free-to-use image-maker Playground to create a pool of 107 books in Bambara in less than a year. Volunteer teachers, like Dembele, distribute them through after-school classes. Within a year, the books have reached over 300 elementary school kids, according to RobotsMali’s co-founder, Michael Leventhal. They are not only helping bridge the gap created after French was dropped but could also be effective in helping children learn better, experts told Rest of World…(More)”.
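RobotsMali has not published its pipeline, so the following is only a speculative sketch of how such a workflow might be wired together: a chat model drafts a story, and a translation API renders it in Bambara. The model name, the prompt, and the availability of Bambara (language code “bm”) in the Translation API are all assumptions.

```python
# Speculative sketch of a draft-then-translate book pipeline; not RobotsMali's code.
import os

import requests
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def draft_story(theme: str) -> str:
    """Draft a short children's story around a local theme."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": f"Write a short, simple children's story set in Mali about {theme}.",
        }],
    )
    return resp.choices[0].message.content

def translate_to_bambara(text: str) -> str:
    """Translate a draft via the Google Cloud Translation v2 REST API."""
    r = requests.post(
        "https://translation.googleapis.com/language/translate/v2",
        params={"key": os.environ["GOOGLE_API_KEY"]},
        json={"q": text, "target": "bm", "format": "text"},  # assumes "bm" is supported
    )
    r.raise_for_status()
    return r.json()["data"]["translations"][0]["translatedText"]

if __name__ == "__main__":
    story = draft_story("a market day in Bamako")
    print(translate_to_bambara(story))
```

In practice the human pass matters as much as the tooling: the article describes volunteer teachers working with the books, and machine-drafted text in a lower-resource language like Bambara would need review for fluency and cultural accuracy before reaching children.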

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?


Paper by Alice Xiang: “Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property…(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

Air Canada chatbot promised a discount. Now the airline has to pay it


Article by Kyle Melnick: “After his grandmother died in Ontario a few years ago, British Columbia resident Jake Moffatt visited Air Canada’s website to book a flight for the funeral. He received assistance from a chatbot, which told him the airline offered reduced rates for passengers booking last-minute travel due to tragedies.

Moffatt bought a nearly $600 ticket for a next-day flight after the chatbot said he would get some of his money back under the airline’s bereavement policy as long as he applied within 90 days, according to a recent civil-resolutions tribunal decision.

But when Moffatt later attempted to receive the discount, he learned that the chatbot had been wrong. Air Canada only awarded bereavement fees if the request had been submitted before a flight. The airline later argued the chatbot was a separate legal entity “responsible for its own actions,” the decision said.

Moffatt filed a claim with the Canadian tribunal, which ruled Wednesday that Air Canada owed Moffatt more than $600 in damages and tribunal fees after failing to provide “reasonable care.”

As companies have added artificial intelligence-powered chatbots to their websites in hopes of providing faster service, the Air Canada dispute sheds light on issues associated with the growing technology and how courts could approach questions of accountability. The Canadian tribunal in this case came down on the side of the customer, ruling that Air Canada did not ensure its chatbot was accurate…(More)”

University of Michigan Sells Recordings of Study Groups and Office Hours to Train AI


Article by Joseph Cox: “The University of Michigan is selling hours of audio recordings of study groups, office hours, lectures, and more to outside third parties for tens of thousands of dollars for the purpose of training large language models (LLMs). 404 Media has downloaded a sample of the data, which includes a one-hour-and-20-minute audio recording of what appears to be a lecture.

The news highlights how some LLMs may ultimately be trained on data with an unclear level of consent from the source subjects…(More)”.

Could AI Speak on Behalf of Future Humans?


Article by Konstantin Scheuermann & Angela Aristidou: “An enduring societal challenge the world over is a “perspective deficit” in collective decision-making. Whether within a single business, at the local community level, or the international level, some perspectives are not (adequately) heard and may not receive fair and inclusive representation during collective decision-making discussions and procedures. Most notably, future generations of humans and aspects of the natural environment may be deeply affected by present-day collective decisions. Yet, they are often “voiceless” as they cannot advocate for their interests.

Today, as we witness the rapid integration of artificial intelligence (AI) systems into the everyday fabric of our societies, we recognize the potential in some AI systems to surface and/or amplify the perspectives of these previously voiceless stakeholders. Some classes of AI systems, notably Generative AI (e.g., ChatGPT, Llama, Gemini), are capable of acting as the proxy of the previously unheard by generating multi-modal outputs (audio, video, and text).

We refer to these outputs collectively here as “AI Voice,” signifying that the previously unheard in decision-making scenarios gain opportunities to express their interests—in other words, voice—through the human-friendly outputs of these AI systems. AI Voice, however, cannot realize its promise without first challenging how voice is given and withheld in our collective decision-making processes and how the new technology may and does unsettle the status quo. There is also an important distinction between the “right to voice” and the “right to decide” when considering the roles AI Voice may assume—ranging from a passive facilitator to an active collaborator. This is one highly promising and feasible possibility for how to leverage AI to create a more equitable collective future, but to do so responsibly will require careful strategy and much further conversation…(More)”.

Handbook of Artificial Intelligence at Work


Book edited by Martha Garcia-Murillo and Andrea Renda: “With advances in processing power and storage now enabling algorithms to expand their capabilities beyond their initial narrow applications, technology is becoming increasingly powerful. This highly topical Handbook provides a comprehensive overview of the impact of Artificial Intelligence (AI) on work, assessing its effect on an array of economic sectors, the resulting nature of work, and the subsequent policy implications of these changes.

Featuring contributions from leading experts across diverse fields, the Handbook of Artificial Intelligence at Work takes an interdisciplinary approach to understanding AI’s connections to existing economic, social, and political ecosystems. Considering a range of fields including agriculture, manufacturing, health care, education, law and government, the Handbook provides detailed sector-specific analyses of how AI is changing the nature of work, the challenges it presents and the opportunities it creates. Looking forward, it makes policy recommendations to address concerns, such as the potential displacement of some human labor by AI and growth in inequality affecting those lacking the necessary skills to interact with these technologies or without opportunities to do so.

This vital Handbook is an essential read for students and academics in the fields of business and management, information technology, AI, and public policy. It will also be highly informative from a cross-disciplinary perspective for practitioners, as well as policy makers with an interest in the development of AI technology…(More)”

Language Machinery


Essay by Richard Hughes Gibson: “… current debates about writing machines are not as fresh as they seem. As is quietly acknowledged in the footnotes of scientific papers, much of the intellectual infrastructure of today’s advances was laid decades ago. In the 1940s, the mathematician Claude Shannon demonstrated that language use could be both described by statistics and imitated with statistics, whether those statistics were in human heads or a machine’s memory. Shannon, in other words, was the first statistical language modeler, which makes ChatGPT and its ilk his distant brainchildren. Shannon never tried to build such a machine, but some astute early readers of his work recognized that computers were primed to translate his paper-and-ink experiments into a powerful new medium. In writings now discussed largely in niche scholarly and computing circles, these readers imagined—and even made preliminary sketches of—machines that would translate Shannon’s proposals into reality. These readers likewise raised questions about the meaning of such machines’ outputs and wondered what the machines revealed about our capacity to write.
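Shannon's paper-and-ink experiments are easy to reproduce today, which is itself a testament to the idea's simplicity: estimate n-gram statistics from a text, then generate by sampling from them. A minimal word-bigram version (the tiny corpus string is just a stand-in for real training text):

```python
# A Shannon-style statistical language model in miniature: count which word
# follows which, then imitate the text by sampling successors in proportion
# to their observed frequency.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
).split()

# Describe the language with statistics: successor lists per word.
bigrams: dict[str, list[str]] = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

# Imitate the language with statistics: a random walk over the table.
word = random.choice(corpus)
output = [word]
for _ in range(15):
    followers = bigrams.get(word)
    if not followers:
        break
    word = random.choice(followers)  # repeats in the list encode frequency
    output.append(word)
print(" ".join(output))
```

ChatGPT's lineage from this procedure is a matter of scale and architecture rather than kind: a large language model is, at bottom, a vastly more expressive estimate of which token plausibly comes next.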

The current barrage of commentary has largely neglected this backstory, and our discussions suffer for forgetting that issues that appear novel to us belong to the mid-twentieth century. Shannon and his first readers were the original residents of the headspace in which so many of us now find ourselves. Their ambitions and insights have left traces on our discourse, just as their silences and uncertainties haunt our exchanges. If writing machines constitute a “philosophical event” or a “prompt for philosophizing,” then I submit that we are already living in the event’s aftermath, which is to say, in Shannon’s aftermath. Amid the rampant speculation about a future dominated by writing machines, I propose that we turn in the other direction to listen to field reports from some of the first people to consider what it meant to read and write in Shannon’s world…(More)”.

Copyright Policy Options for Generative Artificial Intelligence


Paper by Joshua S. Gans: “New generative artificial intelligence (AI) models, including large language models and image generators, have created new challenges for copyright policy, as such models may be trained on data that includes copy-protected content. This paper examines this issue from an economics perspective and analyses how different copyright regimes for generative AI will impact the quality of content generated as well as the quality of AI training. A key factor is whether generative AI models are small (with content providers capable of negotiating with AI providers) or large (where negotiations are prohibitive). For small AI models, it is found that giving original content providers copyright protection leads to superior social welfare outcomes compared to having no copyright protection. For large AI models, this comparison is ambiguous and depends on the level of potential harm to original content providers and the importance of content for AI training quality. However, it is demonstrated that an ex-post ‘fair use’ type mechanism can lead to higher expected social welfare than traditional copyright regimes…(More)”.
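The trade-off in the abstract can be made concrete with a stylized expression — an illustration of the logic, not the model in the paper. Let q(d) be AI training quality as an increasing function of the share d of protected content used in training, V(·) the social value of the resulting AI output, and h the harm that full use imposes on original content providers:

```latex
% Stylized welfare comparison (illustrative; not the paper's actual model).
% W(d) = social welfare when a share d of protected content is used in training.
\[
  W(d) \;=\; V\big(q(d)\big) \;-\; h\,d, \qquad d \in [0,1].
\]
% No copyright: training uses everything, so welfare is W(1) = V(q(1)) - h.
% Strict copyright, large model (negotiation prohibitive): d = 0, so W(0) = V(q(0)).
% The comparison is ambiguous, as the abstract notes: it turns on whether
% V(q(1)) - V(q(0)) exceeds h, i.e., on how much content matters for training
% quality versus how much its use harms original providers.
```

On this reading, an ex-post ‘fair use’ mechanism aims to capture both terms: training proceeds at d = 1, with compensation assessed afterward only where realized harm is severe, which is how it can dominate both traditional regimes in expectation.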