Connecting the dots: AI is eating the web that enabled it


Article by Tom Wheeler: “The large language models (LLMs) of generative AI that scraped their training data from websites are now using that data to eliminate the need to go to many of those same websites. Respected digital commentator Casey Newton concluded, ‘the web is entering a state of managed decline.’ The Washington Post headline was more dire: ‘Web publishers brace for carnage as Google adds AI answers.’…

Created by Sir Tim Berners-Lee in 1989, the World Wide Web transformed the internet into a user-friendly linkage of diverse information repositories. “The first decade of the web…was decentralized with a long-tail of content and options,” Berners-Lee wrote this year on the occasion of its 35th anniversary. Over the intervening decades, that vision of distributed sources of information has faced multiple challenges. The dilution of decentralization began with powerful centralized hubs such as Facebook and Google that directed user traffic. Now comes the ultimate disintegration of Berners-Lee’s vision, as generative AI reduces traffic to websites by recasting their information.

The web’s open access to the world’s information trained the large language models of generative AI. Now those same models are coming for their progenitor.

The web allowed users to discover diverse sources of information from which to draw conclusions. AI cuts out the intellectual middleman to go directly to conclusions from a centralized source.

The AI paradigm of cutting out the middleman appears to have advanced further with Apple’s recent announcement that it will incorporate OpenAI’s technology to enable its Siri assistant to provide ChatGPT-like answers. With this new deal, Apple becomes an AI-based disintermediator, not only eliminating the need to visit websites, but also potentially eliminating the need for the Google search engine, for which Apple has been paying $20 billion annually.

The Atlantic, University of Toronto, and Gartner studies suggest the Pew research on website mortality could be just the beginning. Generative AI’s ability to deliver conclusions cannibalizes traffic to individual websites, threatening the raison d’être of all websites, especially those that are commercially supported…(More)”

Using AI to Inform Policymaking


Paper for the AI4Democracy series at The Center for the Governance of Change at IE University: “Good policymaking requires a multifaceted approach, incorporating diverse tools and processes to address the varied needs and expectations of constituents. The paper by Turan and McKenzie focuses on an LLM-based tool, “Talk to the City” (TttC), developed to facilitate collective decision-making by soliciting, analyzing, and organizing public opinion. This tool has been tested in three distinct applications:

1. Finding Shared Principles within Constituencies: Through large-scale citizen consultations, TttC helps identify common values and priorities.

2. Compiling Shared Experiences in Community Organizing: The tool aggregates and synthesizes the experiences of community members, providing a cohesive overview.

3. Action-Oriented Decision Making in Decentralized Governance: TttC supports decision-making processes in decentralized governance structures by providing actionable insights from diverse inputs.

CAPABILITIES AND BENEFITS OF LLM TOOLS

LLMs, when applied to democratic decision-making, offer significant advantages:

  • Processing Large Volumes of Qualitative Inputs: LLMs can handle extensive qualitative data, summarizing discussions and identifying overarching themes with high accuracy.
  • Producing Aggregate Descriptions in Natural Language: The ability to generate clear, comprehensible summaries from complex data makes these tools invaluable for communicating nuanced topics.
  • Facilitating Understanding of Constituents’ Needs: By organizing public input, LLM tools help leaders gain a better understanding of their constituents’ needs and priorities.

CASE STUDIES AND TOOL EFFICACY

The paper presents case studies using TttC, demonstrating its effectiveness in improving collective deliberation and decision-making. Key functionalities include:

  • Aggregating Responses and Clustering Ideas: TttC identifies common themes and divergences within a population’s opinions (a minimal sketch of this step follows the entry).
  • Interactive Interface for Exploration: The tool provides an interactive platform for exploring the diversity of opinions at both individual and group scales, revealing complexity, common ground, and polarization…(More)”
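The aggregation-and-clustering step is the technical core of a tool like TttC. As a minimal sketch of the general technique (assuming an off-the-shelf sentence embedder and k-means, not TttC’s actual pipeline, which the paper excerpt does not specify), free-text responses can be embedded as vectors and grouped into candidate themes. The model name, sample responses, and cluster count below are all invented for illustration:

```python
# Minimal sketch of response aggregation and clustering, in the spirit of
# tools like Talk to the City. The embedding model and cluster count are
# illustrative assumptions, not details taken from the paper.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [
    "We need safer bike lanes downtown.",
    "Cycling near the market feels dangerous.",
    "Property taxes are too high for young families.",
    "Housing costs are pushing families out of the city.",
]

# Embed each free-text response as a dense vector.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# Group nearby responses into candidate themes.
kmeans = KMeans(n_clusters=2, n_init="auto", random_state=0).fit(embeddings)
for label, text in sorted(zip(kmeans.labels_, responses)):
    print(label, text)
```

In a deployed system, an LLM would then summarize each cluster in natural language, which is the “aggregate descriptions in natural language” capability described above.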

The use of AI for improving energy security


Rand Report: “Electricity systems around the world are under pressure due to aging infrastructure, rising demand for electricity and the need to decarbonise energy supplies at pace. Artificial intelligence (AI) applications have the potential to help address these pressures and increase overall energy security. For example, AI applications can reduce peak demand through demand response, improve the efficiency of wind farms and facilitate the integration of large numbers of electric vehicles into the power grid. However, the widespread deployment of AI applications could also bring heightened cybersecurity risks, unexplained or unexpected system actions, and supplier dependency and vendor lock-in. The speed at which AI is developing means many of these opportunities and risks are not yet well understood.

The aim of this study was to provide insight into the state of AI applications for the power grid and the associated risks and opportunities. Researchers conducted a focused scan of the scientific literature to find examples of relevant AI applications in the United States, the European Union, China and the United Kingdom…(More)”.
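To make the demand-response example concrete: the report itself contains no code, but the core idea can be sketched as forecasting system load and shifting flexible demand (for example, smart EV charging) out of the peak hour. Every figure below is invented for illustration:

```python
# Illustrative demand-response sketch: shift deferrable load away from the
# forecast peak hour. All numbers are invented; this is not drawn from the
# RAND report, which contains no code.
hourly_forecast_mw = [620, 580, 560, 600, 710, 890, 1040, 980]  # hypothetical load
flexible_mw = 120  # deferrable load, e.g. smart EV charging

peak_hour = max(range(len(hourly_forecast_mw)), key=hourly_forecast_mw.__getitem__)
trough_hour = min(range(len(hourly_forecast_mw)), key=hourly_forecast_mw.__getitem__)

# Move the flexible load from the peak hour to the lowest-load hour.
shifted = list(hourly_forecast_mw)
shifted[peak_hour] -= flexible_mw
shifted[trough_hour] += flexible_mw

print(f"Peak reduced from {max(hourly_forecast_mw)} MW to {max(shifted)} MW")
```

In practice the forecasting step is where machine learning earns its keep; the dispatch logic that acts on the forecast can remain this simple.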

Framework for Governance of Indigenous Data (GID)


Framework by The National Indigenous Australians Agency (NIAA): “Australian Public Service agencies now have a single Framework for working with Indigenous data.

The National Indigenous Australians Agency will collaborate across the Australian Public Service to implement the Framework for Governance of Indigenous Data in 2024.

Commonwealth agencies are expected to develop a seven-year implementation plan, guided by four principles:

  1. Partner with Aboriginal and Torres Strait Islander people
  2. Build data-related capabilities
  3. Provide knowledge of data assets
  4. Build an inclusive data system

The Framework represents the culmination of over 18 months of co-design effort between the Australian Government and Aboriginal and Torres Strait Islander partners. While we know we have some way to go, the Framework serves as a significant step forward to improve the collection, use and disclosure of data, to better serve Aboriginal and Torres Strait Islander priorities.

The Framework places Aboriginal and Torres Strait Islander peoples at its core. Recognising the importance of authentic engagement, it emphasises the need for First Nations communities to have a say in decisions affecting them, including the use of data in government policy-making.

Acknowledging data’s significance in self-determination, the Framework provides a stepping stone towards greater awareness and acceptance by Australian Government agencies of the principles of Indigenous Data Sovereignty.

It offers practical guidance on implementing key aspects of data governance aligned with both Indigenous Data Sovereignty principles and the objectives of the Australian Government…(More)”.

Can Artificial Intelligence Bring Deliberation to the Masses?


Chapter by Hélène Landemore: “A core problem in deliberative democracy is the tension between two seemingly equally important conditions of democratic legitimacy: deliberation, on the one hand, and mass participation, on the other. Might artificial intelligence help bring quality deliberation to the masses? The answer is a qualified yes. The chapter first examines the conundrum in deliberative democracy around the trade-off between deliberation and mass participation by returning to the seminal debate between Joshua Cohen and Jürgen Habermas. It then turns to an analysis of the 2019 French Great National Debate, a low-tech attempt to involve millions of French citizens in a two-month-long structured exercise of collective deliberation. Building on the shortcomings of this process, the chapter then considers two different visions for an algorithm-powered form of mass deliberation—Mass Online Deliberation (MOD), on the one hand, and Many Rotating Mini-publics (MRMs), on the other—theorizing various ways artificial intelligence could play a role in them. To the extent that artificial intelligence makes the possibility of either vision more likely to come to fruition, it carries with it the promise of deliberation at the very large scale…(More)”

Artificial Intelligence Opportunities for State and Local Departments of Transportation


Report by the National Academies of Sciences, Engineering, and Medicine: “Artificial intelligence (AI) has revolutionized various areas in departments of transportation (DOTs), such as traffic management and optimization. Through predictive analytics and real-time data processing, AI systems show promise in alleviating congestion, reducing travel times, and enhancing overall safety by alerting drivers to potential hazards. AI-driven simulations are also used for testing and improving transportation systems, saving time and resources that would otherwise be needed for physical tests…(More)”.

A Generation of AI Guinea Pigs


Article by Caroline Mimbs Nyce: “This spring, the Los Angeles Unified School District—the second-largest public school district in the United States—introduced students and parents to a new “educational friend” named Ed. A learning platform that includes a chatbot represented by a small illustration of a smiling sun, Ed is being tested in 100 schools within the district and is accessible at all hours through a website. It can answer questions about a child’s courses, grades, and attendance, and point users to optional activities.

As Superintendent Alberto M. Carvalho put it to me, “AI is here to stay. If you don’t master it, it will master you.” Carvalho says he wants to empower teachers and students to learn to use AI safely. Rather than “keep these assets permanently locked away,” the district has opted to “sensitize our students and the adults around them to the benefits, but also the challenges, the risks.” Ed is just one manifestation of that philosophy; the school district also has a mandatory Digital Citizenship in the Age of AI course for students ages 13 and up.

Ed is, according to three first graders I spoke with this week at Alta Loma Elementary School, very good. They especially like it when Ed awards them gold stars for completing exercises. But even as they use the program, they don’t quite understand it. When I asked them if they know what AI is, they demurred. One asked me if it was a supersmart robot…(More)”.

Cryptographers Discover a New Foundation for Quantum Secrecy


Article by Ben Brubaker: “…Say you want to send a private message, cast a secret vote or sign a document securely. If you do any of these tasks on a computer, you’re relying on encryption to keep your data safe. That encryption needs to withstand attacks from codebreakers with their own computers, so modern encryption methods rely on assumptions about what mathematical problems are hard for computers to solve.

But as cryptographers laid the mathematical foundations for this approach to information security in the 1980s, a few researchers discovered that computational hardness wasn’t the only way to safeguard secrets. Quantum theory, originally developed to understand the physics of atoms, turned out to have deep connections to information and cryptography. Researchers found ways to base the security of a few specific cryptographic tasks directly on the laws of physics. But these tasks were strange outliers — for all others, there seemed to be no alternative to the classical computational approach.

By the end of the millennium, quantum cryptography researchers thought that was the end of the story. But in just the past few years, the field has undergone another seismic shift.

“There’s been this rearrangement of what we believe is possible with quantum cryptography,” said Henry Yuen, a quantum information theorist at Columbia University.

In a string of recent papers, researchers have shown that most cryptographic tasks could still be accomplished securely even in hypothetical worlds where practically all computation is easy. All that matters is the difficulty of a special computational problem about quantum theory itself.

“The assumptions you need can be way, way, way weaker,” said Fermi Ma, a quantum cryptographer at the Simons Institute for the Theory of Computing in Berkeley, California. “This is giving us new insights into computational hardness itself.”…(More)”.
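The “classical computational approach” mentioned above can be made concrete with a toy example (not taken from the article): Diffie-Hellman key exchange, whose security rests entirely on the assumed hardness of the discrete logarithm problem. The parameters below are deliberately tiny and insecure, purely to show the structure:

```python
import secrets

# Toy Diffie-Hellman key exchange: classical cryptography whose security
# rests on a computational-hardness assumption (the discrete logarithm
# problem). Parameters are deliberately tiny and NOT secure.
p, g = 2_147_483_647, 5  # public prime modulus and base (toy values)

a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

A = pow(g, a, p)  # Alice publishes A; recovering a from A is the hard problem
B = pow(g, b, p)  # Bob publishes B

# Both parties derive the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)
```

The recent results described above ask what remains possible when such hardness assumptions fail, grounding security instead in a computational problem about quantum theory itself.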

Governing with Artificial Intelligence


OECD Report: “OECD countries are increasingly investing in better understanding the potential value of using Artificial Intelligence (AI) to improve public governance. The use of AI by the public sector can increase the productivity and responsiveness of public services and strengthen the accountability of governments. However, governments must also mitigate potential risks, building an enabling environment for trustworthy AI. This policy paper outlines the key trends and policy challenges in the development, use, and deployment of AI in and by the public sector. First, it discusses the potential benefits and specific risks associated with AI use in the public sector. Second, it looks at how AI in the public sector can be used to improve productivity, responsiveness, and accountability. Third, it provides an overview of the key policy issues and presents examples of how countries are addressing them across the OECD…(More)”.

Handbook on Public Policy and Artificial Intelligence


Book edited by Regine Paul, Emma Carmel and Jennifer Cobbe: “…explores the relationship between public policy and artificial intelligence (AI) technologies across a broad range of geographical, technical, political and policy contexts. It contributes to critical AI studies, focusing on the intersection of the norms, discourses, policies, practices and regulation that shape AI in the public sector.

Expert authors in the field discuss the creation and use of AI technologies, and how public authorities respond to their development, by bringing together emerging scholarly debates about AI technologies with longer-standing insights on public administration, policy, regulation and governance. Contributions in the Handbook mobilize diverse perspectives to critically examine techno-solutionist approaches to public policy and AI, dissect the politico-economic interests underlying AI promotion and analyse implications for sustainable development, fairness and equality. Ultimately, this Handbook questions whether regulatory concepts such as ethical, trustworthy or accountable AI safeguard a democratic future or contribute to a problematic de-politicization of the public sector…(More)”.