The Adoption and Implementation of Artificial Intelligence Chatbots in Public Organizations: Evidence from U.S. State Governments


Paper by Tzuhao Chen, Mila Gascó-Hernandez, and Marc Esteve: “Although the use of artificial intelligence (AI) chatbots in public organizations has increased in recent years, three crucial gaps remain unresolved. First, little empirical evidence has been produced to examine the deployment of chatbots in government contexts. Second, existing research does not distinguish clearly between the drivers of adoption and the determinants of success and, therefore, between the stages of adoption and implementation. Third, most current research does not use a multidimensional perspective to understand the adoption and implementation of AI in government organizations. Our study addresses these gaps by exploring the following question: what determinants facilitate or impede the adoption and implementation of chatbots in the public sector? We answer this question by analyzing 22 state agencies across the U.S.A. that use chatbots. Our analysis identifies the ease of use and relative advantage of chatbots, leadership and innovative culture, external shocks, and individuals’ past experiences as the main drivers of decisions to adopt chatbots. Further, it shows that different types of determinants (such as knowledge-base creation and maintenance, technology skills and system crashes, human and financial resources, cross-agency interaction and communication, confidentiality and safety rules and regulations, citizens’ expectations, and the COVID-19 crisis) affect the adoption and implementation processes differently and, therefore, shape the success of chatbots in different ways. Future research could focus on the interaction among different types of determinants for both adoption and implementation, as well as on the role of specific stakeholders, such as IT vendors…(More)”.

Who Wrote This? How AI and the Lure of Efficiency Threaten Human Writing


Book by Naomi S. Baron: “Would you read this book if a computer wrote it? Would you even know? And why would it matter?

Today’s eerily impressive artificial intelligence writing tools present us with a crucial challenge: As writers, do we unthinkingly adopt AI’s time-saving advantages or do we stop to weigh what we gain and lose when heeding its siren call? To understand how AI is redefining what it means to write and think, linguist and educator Naomi S. Baron leads us on a journey connecting the dots between human literacy and today’s technology. From nineteenth-century lessons in composition, to mathematician Alan Turing’s work creating a machine for deciphering wartime messages, to contemporary engines like ChatGPT, Baron gives readers a spirited overview of the emergence of both literacy and AI, and a glimpse of their possible future. As the technology becomes increasingly sophisticated and fluent, it’s tempting to take the easy way out and let AI do the work for us. Baron cautions that such efficiency isn’t always in our interest. As AI plies us with suggestions or full-blown text, we risk losing not just our technical skills but the power of writing as a springboard for personal reflection and unique expression.

Funny, informed, and conversational, Who Wrote This? urges us as individuals and as communities to make conscious choices about the extent to which we collaborate with AI. The technology is here to stay. Baron shows us how to work with AI and how to spot where it risks diminishing the valuable cognitive and social benefits of being literate…(More)”.

Computing the Climate: How We Know What We Know About Climate Change


Book by Steve M. Easterbrook: “How do we know that climate change is an emergency? How did the scientific community reach this conclusion all but unanimously, and what tools did they use to do it? This book tells the story of climate models, tracing their history from nineteenth-century calculations on the effects of greenhouse gases, to modern Earth system models that integrate the atmosphere, the oceans, and the land using the full resources of today’s most powerful supercomputers. Drawing on the author’s extensive visits to the world’s top climate research labs, this accessible, non-technical book shows how computer models help to build a more complete picture of Earth’s climate system. ‘Computing the Climate’ is ideal for anyone who has wondered where the projections of future climate change come from – and why we should believe them…(More)”.

Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models


Paper by Pengfei Li, Jianyi Yang, Mohammad A. Islam, and Shaolei Ren: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has been undergoing public scrutiny. Unfortunately, however, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles), and the water consumption would have tripled if training had been done in Microsoft’s Asian data centers, but such information has been kept secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges we all share amid a rapidly growing population, depleting water resources, and aging water infrastructure. To respond to global water challenges, AI models can, and should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate the fine-grained water footprint of AI models, and we also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
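To give a sense of the accounting involved (a minimal sketch with illustrative numbers, not the paper’s actual methodology or data): operational water use is often decomposed into on-site cooling water, governed by a data center’s water usage effectiveness (WUE), and off-site water embedded in generating the electricity the facility draws. All parameter values below are assumptions for illustration.

```python
def water_footprint_liters(energy_kwh, wue_onsite=0.55, pue=1.1, ewif=3.1):
    """Rough operational water estimate for an AI workload.

    All parameter values are illustrative assumptions, not measurements:
      wue_onsite -- liters of cooling water per kWh of server energy (on-site)
      pue        -- power usage effectiveness (facility energy / server energy)
      ewif       -- liters of water per kWh embedded in grid electricity (off-site)
    """
    onsite = energy_kwh * wue_onsite    # water evaporated for cooling
    offsite = energy_kwh * pue * ewif   # water used to generate the power
    return onsite + offsite

# e.g., a hypothetical training run consuming 1 GWh of server energy:
print(f"{water_footprint_liters(1_000_000):,.0f} liters")
```

With these made-up parameters, a 1 GWh job works out to roughly four million liters; as the abstract notes, real figures vary with when and where the workload runs.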

AI could choke on its own exhaust as it fills the web


Article by Ina Fried and Scott Rosenberg: “The internet is beginning to fill up with more and more content generated by artificial intelligence rather than human beings, posing weird new dangers both to human society and to the AI programs themselves.

What’s happening: Experts estimate that AI-generated content could account for as much as 90% of information on the internet in a few years’ time, as ChatGPT, Dall-E and similar programs spill torrents of verbiage and images into online spaces.

  • That’s happening in a world that hasn’t yet figured out how to reliably label AI-generated output and differentiate it from human-created content.

The danger to human society is the now-familiar problem of information overload and degradation.

  • AI turbocharges the ability to create mountains of new content while undermining our ability to check that material for reliability, and it recycles the biases and errors in the data it was trained on.
  • There’s also widespread fear that AI could undermine the jobs of people who create content today, from artists and performers to journalists, editors and publishers. The current strike by Hollywood actors and writers underlines this risk.

The danger to AI itself is newer and stranger. A raft of recent research papers have introduced a novel lexicon of potential AI disorders that are just coming into view as the technology is more widely deployed and used.

  • “Model collapse” is researchers’ name for what happens to generative AI models, like OpenAI’s GPT-3 and GPT-4, when they’re trained using data produced by other AIs rather than human beings.
  • Feed a model enough of this “synthetic” data, and the quality of the AI’s answers can rapidly deteriorate, as the systems lock in on the most probable word choices and discard the “tail” choices that keep their output interesting (a toy simulation after this list sketches the effect).
  • “Model Autophagy Disorder,” or MAD, is what one team of researchers at Rice and Stanford universities dubbed the result of AI consuming its own products.
  • “Habsburg AI” is what another researcher earlier this year labeled the phenomenon, likening it to inbreeding: “A system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.”…(More)”.
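The collapse dynamic can be illustrated with a toy statistical analogue (our sketch, not from the article): fit a simple distribution to data, then train each new “generation” only on samples drawn from the previous fit. The fitted spread tends to shrink over generations, mirroring how retraining on synthetic output discards the distribution’s tails.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # generation 0: "human" data

for gen in range(1, 501):
    mu, sigma = data.mean(), data.std()     # "train" a model on current data
    data = rng.normal(mu, sigma, size=100)  # next generation sees only model output
    if gen % 100 == 0:
        print(f"generation {gen}: fitted sigma = {sigma:.4f}")
```

Run it and the fitted sigma drifts toward zero: each generation slightly underestimates the spread of the one before, and the errors compound, which is the statistical core of what the “Habsburg AI” metaphor describes.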

The Coming Wave


Book by Mustafa Suleyman and Michael Bhaskar: “Soon you will live surrounded by AIs. They will organise your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy.

None of us are prepared.

As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the centre of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies.

In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side and the threat of overbearing surveillance on the other…(More)”.

Regulation of Artificial Intelligence Around the World


Report by the Law Library of Congress: “…provides a list of jurisdictions in the world where legislation that specifically refers to artificial intelligence (AI) or to systems utilizing AI has been adopted or proposed. Researchers of the Law Library surveyed all jurisdictions in their research portfolios to find such legislation, and those encountered have been compiled in the annexed list with citations and brief descriptions of the relevant legislation. Only adopted or proposed instruments that have legal effect are reported for national and subnational jurisdictions and the European Union (EU); guidance or policy documents that have no legal effect are not included for these jurisdictions. Major international organizations have also been surveyed, and documents adopted or proposed by these organizations that specifically refer to AI are reported in the list…(More)”.

Integrating AI into Urban Planning Workflows: Democracy Over Authoritarianism


Essay by Tyler Hinkle: “As AI tools become integrated into urban planning, a dual narrative of promise and potential pitfalls emerges. These tools offer unprecedented efficiency, creativity, and data analysis, yet if not guided by ethical considerations, they could inadvertently lead to exclusion, manipulation, and surveillance.

While AI, exemplified by tools like NovelAI, holds the potential to aggregate and synthesize public input, there’s a risk of suppressing genuine human voices in favor of algorithmic consensus. This could create a future urban landscape devoid of cultural depth and diversity, echoing historical authoritarianism.

In a potential dystopian scenario, an AI-based planning software gains access to all smart city devices, amassing data to reshape communities without consulting their residents. This data-driven transformation, devoid of human input, risks eroding the essence of community identity, autonomy, and shared decision-making. Imagine AI altering traffic flow, adjusting public transportation routes, or even redesigning public spaces based solely on data patterns, disregarding the unique needs and desires of the people who call that community home.

However, an optimistic approach guided by ethical principles can pave the way for a brighter future. Integrating AI with democratic ideals, akin to Fishkin’s deliberative democracy, can amplify citizens’ voices rather than replacing them. AI-driven deliberation can become a powerful vehicle for community engagement, transforming Arnstein’s ladder of citizen participation into a true instrument of empowerment. In addition, echoing calls for AI alignment to be addressed holistically, alignment issues will arise as AI becomes integrated into urban planning. We must take the time to ensure AI is properly aligned so that it helps communities rather than harming them.

By treading carefully and embedding ethical considerations at the core, we can unleash AI’s potential to construct communities that are efficient, diverse, and resilient, while ensuring that democratic values remain paramount…(More)”.

Advancing Environmental Justice with AI


Article by Justina Nixon-Saintil: “Given its capacity to innovate climate solutions, the technology sector could provide the tools we need to understand, mitigate, and even reverse the damaging effects of global warming. In fact, addressing longstanding environmental injustices requires these companies to put the newest and most effective technologies into the hands of those on the front lines of the climate crisis.

Tools that harness the power of artificial intelligence, in particular, could offer unprecedented access to accurate information and prediction, enabling communities to learn from and adapt to climate challenges in real time. The IBM Sustainability Accelerator, which we launched in 2022, is at the forefront of this effort, supporting the development and scaling of projects such as the Deltares Aquality App, an AI-powered tool that helps farmers assess and improve water quality. As a result, farmers can grow crops more sustainably, prevent runoff pollution, and protect biodiversity.

Consider also the challenges that smallholder farmers face, such as rising costs, the difficulty of competing with larger producers that have better tools and technology, and, of course, the devastating effects of climate change on biodiversity and weather patterns. Accurate information, especially about soil conditions and water availability, can help them address these issues, but has historically been hard to obtain…(More)”.

AI and new standards promise to make scientific data more useful by making it reusable and accessible


Article by Bradley Wade Bishop: “…AI makes it highly desirable for any data to be machine-actionable – that is, usable by machines without human intervention. Now, scholars can consider machines not only as tools but also as potential autonomous data reusers and collaborators.

The key to machine-actionable data is metadata. Metadata are the descriptions scientists set for their data and may include elements such as creator, date, coverage and subject. Minimal metadata is minimally useful, but correct and complete standardized metadata makes data more useful for both people and machines.
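As a concrete illustration of what a machine-actionable record can look like (a hedged sketch using Dublin Core terms, with hypothetical dataset and field values not drawn from the article):

```python
# A minimal metadata record expressed with Dublin Core terms, a widely
# used standard vocabulary. Every value below is a made-up example.
record = {
    "@context": {"dcterms": "http://purl.org/dc/terms/"},
    "dcterms:title": "Stream water-quality measurements, 2020-2023",
    "dcterms:creator": "Example Hydrology Lab",        # hypothetical creator
    "dcterms:date": "2023-11-01",
    "dcterms:coverage": "Tennessee River basin, USA",  # hypothetical coverage
    "dcterms:subject": ["water quality", "hydrology"],
    "dcterms:license": "https://creativecommons.org/licenses/by/4.0/",
}
```

Because each field name resolves to a term in a shared, published vocabulary, a harvesting system can interpret the record without asking a human what “coverage” means, which is exactly what machine-actionability requires.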

It takes a cadre of research data managers and librarians to make machine-actionable data a reality. These information professionals work to facilitate communication between scientists and systems by ensuring the quality, completeness and consistency of shared data.

The FAIR data principles, created by a group of researchers called FORCE11 in 2016 and used across the world, provide guidance on how to enable data reuse by machines and humans. FAIR data is findable, accessible, interoperable and reusable – meaning it has robust and complete metadata.

In the past, I’ve studied how scientists discover and reuse data. I found that scientists tend to use mental shortcuts when they’re looking for data – for example, they may go back to familiar and trusted sources or search for certain key terms they’ve used before. Ideally, my team could build this expert decision-making process into AI while removing as many biases as possible. The automation of these mental shortcuts should reduce the time-consuming chore of locating the right data…(More)”.