EU leadership in trustworthy AI: Guardrails, Innovation & Governance


Article by Thierry Breton: “As mentioned in President von der Leyen’s State of the Union letter of intent, Europe should lead global efforts on artificial intelligence, guiding innovation, setting guardrails and developing global governance.

First, on innovation: we will launch the EU AI Start-Up Initiative, leveraging one of Europe’s biggest assets: its public high-performance computing infrastructure. We will identify the most promising European start-ups in AI and give them access to our supercomputing capacity.

I have said it before: AI is a combination of data, computing and algorithms. To train and fine-tune the most advanced foundation models, developers need large amounts of computing power.

Europe is a world leader in supercomputing through its European High-Performance Computing Joint Undertaking (EuroHPC). Soon, Europe will have its first exascale supercomputers, JUPITER in Germany and JULES VERNE in France, each able to perform a quintillion (a billion billion) calculations per second, in addition to various existing supercomputers (such as LEONARDO in Italy and LUMI in Finland).

Access to Europe’s supercomputing infrastructure will help start-ups bring down the training time for their newest AI models from months or years to days or weeks. And it will help them lead the development and scale-up of AI responsibly and in line with European values.

This goes together with our broader efforts to support AI innovation across the value chain – from AI start-ups to all those businesses using AI technologies in their industrial ecosystems. This includes our Testing and Experimentation Facilities for AI (launched in January 2023), our Digital Innovation Hubs, the development of regulatory sandboxes under the AI Act, our support for the European Partnership on AI, Data and Robotics, and the cutting-edge research supported by Horizon Europe.

Second, guardrails for AI: Europe has pioneered clear rules for AI systems through the EU AI Act, the world’s first comprehensive regulatory framework for AI. My teams are working closely with the Parliament and Council to support the swift adoption of the EU AI Act. This will give citizens and businesses confidence in AI developed in Europe, knowing that it is safe and respects fundamental rights and European values. And it serves as an inspiration for global rules and principles for trustworthy AI.

As reiterated by President von der Leyen, we are developing an AI Pact that will convene AI companies, help them prepare for the implementation of the EU AI Act and encourage them to commit voluntarily to applying the principles of the Act before its date of applicability.

Third, governance: with the AI Act and the Coordinated Plan on AI, we are working towards a governance framework for AI, which can be a centre of expertise, in particular on large foundation models, and promote cooperation, not only between Member States, but also internationally…(More)”

AI often mangles African languages. Local scientists and volunteers are taking it back to school


Article by Sandeep Ravindran: “Imagine joyfully announcing to your Facebook friends that your wife gave birth, and having Facebook automatically translate your words to “my prostitute gave birth.” Shamsuddeen Hassan Muhammad, a computer science Ph.D. student at the University of Porto, says that’s what happened to a friend when Facebook’s English translation mangled the nativity news he shared in his native language, Hausa.

Such errors in artificial intelligence (AI) translation are common with African languages. AI may be increasingly ubiquitous, but if you’re from the Global South, it probably doesn’t speak your language.

That means Google Translate isn’t much help, and speech recognition tools such as Siri or Alexa can’t understand you. All of these services rely on a field of AI known as natural language processing (NLP), which allows AI to “understand” a language. The overwhelming majority of the world’s 7000 or so languages lack data, tools, or techniques for NLP, making them “low-resourced,” in contrast with a handful of “high-resourced” languages such as English, French, German, Spanish, and Chinese.

Hausa is the second most spoken African language, with an estimated 60 million to 80 million speakers, and it’s just one of more than 2000 African languages that are mostly absent from AI research and products. The few products available don’t work as well as those for English, notes Graham Neubig, an NLP researcher at Carnegie Mellon University. “It’s not the people who speak the languages making the technology.” More often the technology simply doesn’t exist. “For example, now you cannot talk to Siri in Hausa, because there is no data set to train Siri,” Muhammad says.

He is trying to fill that gap with a project he co-founded called HausaNLP, one of several launched within the past few years to develop AI tools for African languages…(More)”.

The Adoption and Implementation of Artificial Intelligence Chatbots in Public Organizations: Evidence from U.S. State Governments


Paper by Tzuhao Chen, Mila Gascó-Hernandez, and Marc Esteve: “Although the use of artificial intelligence (AI) chatbots in public organizations has increased in recent years, three crucial gaps remain unresolved. First, little empirical evidence has been produced to examine the deployment of chatbots in government contexts. Second, existing research does not distinguish clearly between the drivers of adoption and the determinants of success and, therefore, between the stages of adoption and implementation. Third, most current research does not use a multidimensional perspective to understand the adoption and implementation of AI in government organizations. Our study addresses these gaps by exploring the following question: what determinants facilitate or impede the adoption and implementation of chatbots in the public sector? We answer this question by analyzing 22 state agencies across the U.S.A. that use chatbots. Our analysis identifies ease of use and relative advantage of chatbots, leadership and innovative culture, external shock, and individual past experiences as the main drivers of the decisions to adopt chatbots. Further, it shows that different types of determinants (such as knowledge-base creation and maintenance, technology skills and system crashes, human and financial resources, cross-agency interaction and communication, confidentiality and safety rules and regulations, citizens’ expectations, and the COVID-19 crisis) affect the adoption and implementation processes differently and, therefore, determine the success of chatbots in different ways. Future research could focus on the interaction among different types of determinants for both adoption and implementation, as well as on the role of specific stakeholders, such as IT vendors…(More)”.

Who Wrote This? How AI and the Lure of Efficiency Threaten Human Writing


Book by Naomi S. Baron: “Would you read this book if a computer wrote it? Would you even know? And why would it matter?

Today’s eerily impressive artificial intelligence writing tools present us with a crucial challenge: As writers, do we unthinkingly adopt AI’s time-saving advantages or do we stop to weigh what we gain and lose when heeding its siren call? To understand how AI is redefining what it means to write and think, linguist and educator Naomi S. Baron leads us on a journey connecting the dots between human literacy and today’s technology. From nineteenth-century lessons in composition, to mathematician Alan Turing’s work creating a machine for deciphering war-time messages, to contemporary engines like ChatGPT, Baron gives readers a spirited overview of the emergence of both literacy and AI, and a glimpse of their possible future. As the technology becomes increasingly sophisticated and fluent, it’s tempting to take the easy way out and let AI do the work for us. Baron cautions that such efficiency isn’t always in our interest. As AI plies us with suggestions or full-blown text, we risk losing not just our technical skills but the power of writing as a springboard for personal reflection and unique expression.

Funny, informed, and conversational, Who Wrote This? urges us as individuals and as communities to make conscious choices about the extent to which we collaborate with AI. The technology is here to stay. Baron shows us how to work with AI and how to spot where it risks diminishing the valuable cognitive and social benefits of being literate…(More)”.

Computing the Climate: How We Know What We Know About Climate Change


Book by Steve M. Easterbrook: “How do we know that climate change is an emergency? How did the scientific community reach this conclusion all but unanimously, and what tools did they use to do it? This book tells the story of climate models, tracing their history from nineteenth-century calculations on the effects of greenhouse gases, to modern Earth system models that integrate the atmosphere, the oceans, and the land using the full resources of today’s most powerful supercomputers. Drawing on the author’s extensive visits to the world’s top climate research labs, this accessible, non-technical book shows how computer models help to build a more complete picture of Earth’s climate system. ‘Computing the Climate’ is ideal for anyone who has wondered where the projections of future climate change come from – and why we should believe them…(More)”.
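
To give a flavor of where such models started, here is a minimal sketch, an editor’s illustration rather than material from the book, of the zero-dimensional energy-balance calculation that descends from those nineteenth-century greenhouse estimates: the planet’s equilibrium temperature is set by balancing absorbed sunlight against radiated heat.

```python
# Zero-dimensional energy-balance model (editor's sketch, standard
# textbook values; the effective emissivity is a tuned illustration).
S = 1361.0        # solar constant (W/m^2)
albedo = 0.30     # fraction of sunlight reflected back to space
sigma = 5.67e-8   # Stefan-Boltzmann constant (W/m^2/K^4)
epsilon = 0.61    # effective emissivity, below 1 due to greenhouse gases

absorbed = S * (1 - albedo) / 4                   # averaged over the sphere
T_bare = (absorbed / sigma) ** 0.25               # no greenhouse: ~255 K
T_green = (absorbed / (sigma * epsilon)) ** 0.25  # ~288 K, near observed

print(f"without greenhouse: {T_bare:.0f} K; with: {T_green:.0f} K")
```

Modern Earth system models refine this single balance into millions of interacting grid cells for the atmosphere, oceans, and land, which is why they need the supercomputers the book describes.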

Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models


Paper by Pengfei Li, Jianyi Yang, Mohammad A. Islam, Shaolei Ren: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has been undergoing public scrutiny. Unfortunately, however, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles) and the water consumption would have been tripled if training were done in Microsoft’s Asian data centers, but such information has been kept a secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges shared by all of us in the wake of the rapidly growing population, depleting water resources, and aging water infrastructures. To respond to the global water challenges, AI models can, and also should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate the fine-grained water footprint of AI models, and also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
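
As a rough illustration of the kind of accounting the paper formalizes, the sketch below, an editor’s back-of-the-envelope estimate with assumed values rather than figures from the paper, splits a training run’s water footprint into on-site cooling water (energy times water usage effectiveness) and off-site water consumed in generating the electricity.

```python
# Editor's sketch: every number below is an illustrative assumption.
energy_mwh = 1_287   # assumed training energy for a GPT-3-scale run (MWh)
wue_onsite = 0.55    # assumed on-site water usage effectiveness (L/kWh)
pue = 1.2            # assumed power usage effectiveness of the data center
ewif = 3.1           # assumed off-site electricity water intensity (L/kWh)

energy_kwh = energy_mwh * 1_000
onsite_l = energy_kwh * wue_onsite    # water evaporated for cooling
offsite_l = energy_kwh * pue * ewif   # water consumed to generate power
print(f"on-site:  {onsite_l:,.0f} L")   # roughly 0.7 million liters
print(f"off-site: {offsite_l:,.0f} L")
print(f"total:    {onsite_l + offsite_l:,.0f} L")
```

With these assumptions the on-site figure alone lands near the 700,000 liters the abstract cites, and the paper’s point about spatial-temporal diversity follows directly: the same run at a different time or place sees different WUE and EWIF values, and thus a very different footprint.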

AI could choke on its own exhaust as it fills the web


Article by Ina Fried and Scott Rosenberg: “The internet is beginning to fill up with more and more content generated by artificial intelligence rather than human beings, posing weird new dangers both to human society and to the AI programs themselves.

What’s happening: Experts estimate that AI-generated content could account for as much as 90% of information on the internet in a few years’ time, as ChatGPT, Dall-E and similar programs spill torrents of verbiage and images into online spaces.

  • That’s happening in a world that hasn’t yet figured out how to reliably label AI-generated output and differentiate it from human-created content.

The danger to human society is the now-familiar problem of information overload and degradation.

  • AI turbocharges the ability to create mountains of new content while it undermines the ability to check that material for reliability and recycles biases and errors in the data that was used to train it.
  • There’s also widespread fear that AI could undermine the jobs of people who create content today, from artists and performers to journalists, editors and publishers. The current strike by Hollywood actors and writers underlines this risk.

The danger to AI itself is newer and stranger. A raft of recent research papers have introduced a novel lexicon of potential AI disorders that are just coming into view as the technology is more widely deployed and used.

  • “Model collapse” is researchers’ name for what happens to generative AI models, like OpenAI’s GPT-3 and GPT-4, when they’re trained using data produced by other AIs rather than human beings.
  • Feed a model enough of this “synthetic” data, and the quality of the AI’s answers can rapidly deteriorate, as the systems lock in on the most probable word choices and discard the “tail” choices that keep their output interesting (a toy simulation after this list illustrates the effect).
  • “Model Autophagy Disorder,” or MAD, is what one set of researchers at Rice and Stanford universities dubbed the result of AI consuming its own products.
  • “Habsburg AI” is what another researcher earlier this year labeled the phenomenon, likening it to inbreeding: “A system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.”…(More)”.
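
The mechanism behind these terms can be seen in a toy simulation, an editor’s sketch rather than an example from the article or the papers it cites. A “model” that simply fits a normal distribution is retrained for several generations on its own output, each time keeping only its most probable samples and discarding the rarest 10%, the distribution’s “tail.” The spread of the data shrinks generation after generation:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(0.0, 1.0, size=5_000)  # generation 0: "human" data

for gen in range(1, 6):
    mu, sigma = data.mean(), data.std()
    # the next model is trained only on the previous model's output
    samples = rng.normal(mu, sigma, size=5_000)
    # favour likely outputs: discard the rarest 10% of samples
    cutoff = np.quantile(np.abs(samples - mu), 0.90)
    data = samples[np.abs(samples - mu) <= cutoff]
    print(f"generation {gen}: std = {data.std():.3f}")
```

Real generative models work over vastly richer distributions, but the dynamic is the same one the researchers describe: each generation learns from a slightly narrowed version of the last, and the tails disappear first.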

The Coming Wave


Book by Mustafa Suleyman and Michael Bhaskar: “Soon you will live surrounded by AIs. They will organise your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy.

None of us are prepared.

As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the centre of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies.

In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side and the threat of overbearing surveillance on the other…(More)”.

Regulation of Artificial Intelligence Around the World


Report by the Law Library of Congress: “…provides a list of jurisdictions in the world where legislation that specifically refers to artificial intelligence (AI) or systems utilizing AI has been adopted or proposed. Researchers of the Law Library surveyed all jurisdictions in their research portfolios to find such legislation, and those encountered have been compiled in the annexed list with citations and brief descriptions of the relevant legislation. Only adopted or proposed instruments that have legal effect are reported for national and subnational jurisdictions and the European Union (EU); guidance or policy documents that have no legal effect are not included for these jurisdictions. Major international organizations have also been surveyed and documents adopted or proposed by these organizations that specifically refer to AI are reported in the list…(More)”.

Integrating AI into Urban Planning Workflows: Democracy Over Authoritarianism


Essay by Tyler Hinkle: “As AI tools become integrated into urban planning, a dual narrative of promise and potential pitfalls emerges. These tools offer unprecedented efficiency, creativity, and data analysis, yet if not guided by ethical considerations, they could inadvertently lead to exclusion, manipulation, and surveillance.

While AI, exemplified by tools like NovelAI, holds the potential to aggregate and synthesize public input, there’s a risk of suppressing genuine human voices in favor of algorithmic consensus. This could create a future urban landscape devoid of cultural depth and diversity, echoing historical authoritarianism.

In a potential dystopian scenario, an AI-based planning software gains access to all smart city devices, amassing data to reshape communities without consulting their residents. This data-driven transformation, devoid of human input, risks eroding the essence of community identity, autonomy, and shared decision-making. Imagine AI altering traffic flow, adjusting public transportation routes, or even redesigning public spaces based solely on data patterns, disregarding the unique needs and desires of the people who call that community home.

However, an optimistic approach guided by ethical principles can pave the way for a brighter future. Integrating AI with democratic ideals, akin to Fishkin’s deliberative democracy, can amplify citizens’ voices rather than replacing them. AI-driven deliberation can become a powerful vehicle for community engagement, transforming Arnstein’s ladder of citizen participation into a true instrument of empowerment. In addition, echoing the calls for AI alignment to be addressed holistically, alignment issues will arise as AI becomes integrated into urban planning. We must take the time to ensure AI is properly aligned so it is a tool that helps communities rather than hurts them.

By treading carefully and embedding ethical considerations at the core, we can unleash AI’s potential to construct communities that are efficient, diverse, and resilient, while ensuring that democratic values remain paramount…(More)”.