Paper by Yun-Shiuan Chuang et al: “Human groups are able to converge to more accurate beliefs through deliberation, even in the presence of polarization and partisan bias – a phenomenon known as the “wisdom of partisan crowds.” Large Language Model (LLM) agents are increasingly being used to simulate human collective behavior, yet few benchmarks exist for evaluating their dynamics against the behavior of human groups. In this paper, we examine the extent to which the wisdom of partisan crowds emerges in groups of LLM-based agents that are prompted to role-play as partisan personas (e.g., Democrat or Republican). We find that they not only display human-like partisan biases, but also converge to more accurate beliefs through deliberation, as humans do. We then identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas. Conversely, fine-tuning on human data appears to enhance convergence. These findings show the potential and limitations of LLM-based agents as a model of human collective intelligence…(More)”
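For readers curious how such a setup looks in practice, here is a minimal illustrative sketch of one deliberation round among persona-prompted LLM agents. It is not the authors' code; the two-persona setup, the model name, and the example question are assumptions for illustration, using the OpenAI Python SDK.

```python
# A minimal sketch of one deliberation round among persona-prompted LLM agents.
# Personas, model name, and question are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = ["a Democrat voter", "a Republican voter"]
QUESTION = "Estimate the current U.S. unemployment rate as a single number (percent)."

def ask(persona: str, context: str = "") -> str:
    """Query one persona-prompted agent, optionally showing the group's prior answers."""
    messages = [
        {"role": "system", "content": f"You are {persona}. Answer with a single number."},
        {"role": "user", "content": f"{QUESTION}\n{context}".strip()},
    ]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return reply.choices[0].message.content.strip()

# Round 1: independent estimates.
estimates = {p: ask(p) for p in PERSONAS}

# Round 2: each agent sees the group's estimates and may revise (the deliberation step).
summary = "Other participants estimated: " + ", ".join(estimates.values())
revised = {p: ask(p, summary) for p in PERSONAS}
print(estimates, revised)
```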
Data Disquiet: Concerns about the Governance of Data for Generative AI
Paper by Susan Aaronson: “The growing popularity of large language models (LLMs) has raised concerns about their accuracy. These chatbots can be used to provide information, but it may be tainted by errors or made-up or false information (hallucinations) caused by problematic data sets or incorrect assumptions made by the model. The questionable results produced by chatbots have led to growing disquiet among users, developers and policy makers. The author argues that policy makers need to develop a systemic approach to address these concerns. The current piecemeal approach does not reflect the complexity of LLMs or the magnitude of the data upon which they are based; therefore, the author recommends incentivizing greater transparency and accountability around data-set development…(More)”.
God-like: A 500-Year History of Artificial Intelligence in Myths, Machines, Monsters
Book by Kester Brewin: “In the year 1600 a monk is burned at the stake for claiming to have built a device that will allow him to know all things.
350 years later, having witnessed ‘Trinity’ – the first test of the atomic bomb – America’s leading scientist outlines a memory machine that will help end war on earth.
25 years in the making, an ex-soldier finally unveils this ‘machine for augmenting human intellect’, dazzling as he stands ‘Zeus-like, dealing lightning with both hands.’
AI is both stunningly new and rooted in ancient desires. As we finally welcome this ‘god-like’ technology amongst us, what can we learn from the myths and monsters of the past about how to survive alongside our greatest ever invention?…(More)”.
How artificial intelligence can facilitate investigative journalism
Article by Luiz Fernando Toledo: “A few years ago, I worked on a project for a large Brazilian television channel whose objective was to analyze the profiles of more than 250 guardianship counselors in the city of São Paulo. These elected professionals have the mission of protecting the rights of children and adolescents in Brazil.
Critics had pointed out that some counselors did not have any expertise or prior experience working with young people and were only elected with the support of religious communities. The investigation sought to verify whether these elected counselors had professional training in working with children and adolescents or had any relationships with churches.
After requesting the counselors’ resumes through Brazil’s access to information law, a small team combed through each resume in depth—a laborious and time-consuming task. But today, this project might have required far less time and labor. Rapid developments in generative AI hold potential to significantly scale access and analysis of data needed for investigative journalism.
Many articles address the potential risks of generative AI for journalism and democracy, such as threats AI poses to the business model for journalism and its ability to facilitate the creation and spread of mis- and disinformation. No doubt there is cause for concern. But technology will continue to evolve, and it is up to journalists and researchers to understand how to use it in favor of the public interest.
I wanted to test how generative AI can help journalists, especially those who work with public documents and data. I tested several tools, including Ask Your PDF (ask questions about any document on your computer), Chatbase (create your own chatbot), and Document Cloud (upload documents and ask GPT-like questions of hundreds of documents simultaneously).
These tools make use of the same mechanism that powers OpenAI’s famous ChatGPT—large language models that create human-like text. But they analyze the user’s own documents rather than information on the internet, ensuring more accurate answers by using specific, user-provided sources…(More)”.
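The shared pattern behind these tools is simple: extract the text of a document and supply it to a language model as context for a question. The sketch below illustrates that general approach under stated assumptions (the pypdf and openai Python packages, an OPENAI_API_KEY in the environment, and a hypothetical file, model, and question); it is not the code of any of the tools named above.

```python
# A minimal sketch of document question-answering: feed a PDF's text to an LLM
# as context and ask a question about it. File name, model, and question are
# illustrative assumptions.
from openai import OpenAI
from pypdf import PdfReader

# Extract plain text from a local PDF (e.g., a counselor's resume).
reader = PdfReader("resume.pdf")
document_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer only from the supplied document. Say 'not stated' if the answer is absent."},
        {"role": "user",
         "content": f"Document:\n{document_text}\n\nQuestion: Does this resume mention professional training in child protection?"},
    ],
)
print(response.choices[0].message.content)
```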
AI-enhanced Collective Intelligence: The State of the Art and Prospects
Paper by Hao Cui and Taha Yasseri: “The current societal challenges exceed the capacity of human individual or collective effort alone. As AI evolves, its role within human collectives is poised to vary from an assistive tool to a participatory member. Humans and AI possess complementary capabilities that, when synergized, can achieve a level of collective intelligence that surpasses the collective capabilities of either humans or AI in isolation. However, the interactions in human-AI systems are inherently complex, involving intricate processes and interdependencies. This review incorporates perspectives from network science to conceptualize a multilayer representation of human-AI collective intelligence, comprising a cognition layer, a physical layer, and an information layer. Within this multilayer network, humans and AI agents exhibit varying characteristics; humans differ in diversity from surface-level to deep-level attributes, while AI agents range in degrees of functionality and anthropomorphism. The interplay among these agents shapes the overall structure and dynamics of the system. We explore how agents’ diversity and interactions influence the system’s collective intelligence. Furthermore, we present an analysis of real-world instances of AI-enhanced collective intelligence. We conclude by addressing the potential challenges in AI-enhanced collective intelligence and offer perspectives on future developments in this field…(More)”.
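To make the multilayer idea concrete, a toy representation can be built with an ordinary graph library: each agent gets a copy in the cognition, physical, and information layers, with intra-layer edges for interactions and inter-layer edges coupling an agent's copies. The sketch below is an illustrative assumption using Python and networkx, not the authors' model or data.

```python
# A toy multilayer representation of a human-AI collective, loosely following the
# cognition / physical / information layering described above. Nodes, edges, and
# agent names are illustrative assumptions.
import networkx as nx

G = nx.Graph()

# Humans and AI agents appear in each layer as (name, layer) nodes.
agents = [("alice", "human"), ("bob", "human"), ("assistant", "ai")]
layers = ["cognition", "physical", "information"]

for name, kind in agents:
    for layer in layers:
        G.add_node((name, layer), kind=kind)

# Intra-layer interaction: who exchanges information with whom.
G.add_edge(("alice", "information"), ("assistant", "information"))
G.add_edge(("bob", "information"), ("assistant", "information"))

# Inter-layer coupling: each agent's copies across layers are linked.
for name, _ in agents:
    G.add_edge((name, "cognition"), (name, "information"))
    G.add_edge((name, "physical"), (name, "information"))

print(G.number_of_nodes(), G.number_of_edges())
```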
The New Fire: War, Peace, and Democracy in the Age of AI
Book by Ben Buchanan and Andrew Imbrie: “Artificial intelligence is revolutionizing the modern world. It is ubiquitous—in our homes and offices, in the present and most certainly in the future. Today, we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good, lighting the way to many transformative inventions. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. As AI policy experts Ben Buchanan and Andrew Imbrie show in The New Fire, few choices are more urgent—or more fascinating—than how we harness this technology and for what purpose.
The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny…(More)”.
Algorithmic attention rents: A theory of digital platform market power
Paper by Tim O’Reilly, Ilan Strauss and Mariana Mazzucato: “We outline a theory of algorithmic attention rents in digital aggregator platforms. We explore the way that, as platforms grow, they become increasingly capable of extracting rents from a variety of actors in their ecosystems—users, suppliers, and advertisers—through their algorithmic control over user attention. We focus our analysis on advertising business models, in which attention harvested from users is monetized by reselling the attention to suppliers or other advertisers, though we believe the theory has relevance to other online business models as well. We argue that regulations should mandate the disclosure of the operating metrics that platforms use to allocate user attention and shape the “free” side of their marketplace, as well as details on how that attention is monetized…(More)”.
Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging
Article by Peter Swire and Samm Sacks: “A new executive order issued today contains multiple provisions, most notably limiting bulk sales of personal data to “countries of concern.” The order has admirable national security goals but quite possibly would be ineffective and may be counterproductive. There are serious questions about both the substance and the messaging of the order.
The new order combines two attractive targets for policy action. First, in this era of bipartisan concern about China, the new order would regulate transactions specifically with “countries of concern,” notably China, but also others such as Iran and North Korea. A key rationale for the order is to prevent China from amassing sensitive information about Americans, for use in tracking and potentially manipulating military personnel, government officials, or anyone else of interest to the Chinese regime.
Second, the order targets bulk sales, to countries of concern, of sensitive personal information by data brokers, such as genomic, biometric, and precise geolocation data. The large and growing data broker industry has come under well-deserved bipartisan scrutiny for privacy risks. Congress has held hearings and considered bills to regulate such brokers. California has created a data broker registry and last fall passed the Delete Act to enable individuals to require deletion of their personal data. In January, the Federal Trade Commission issued an order prohibiting data broker Outlogic from sharing or selling sensitive geolocation data, finding that the company had acted without customer consent, in an unfair and deceptive manner. In light of these bipartisan concerns, a new order targeting both China and data brokers has a nearly irresistible political logic.
Accurate assessment of the new order, however, requires an understanding of this order as part of a much bigger departure from the traditional U.S. support for free and open flows of data across borders. Recently, in part for national security reasons, the U.S. has withdrawn its traditional support in the World Trade Organization (WTO) for free and open data flows, and the Department of Commerce has announced a proposed rule, in the name of national security, that would regulate U.S.-based cloud providers when selling to foreign countries, including for purposes of training artificial intelligence (AI) models. We are concerned that these initiatives may not sufficiently account for the national security advantages of the long-standing U.S. position and may have negative effects on the U.S. economy.
Despite the attractiveness of the regulatory targets—data brokers and countries of concern—U.S. policymakers should be cautious as they implement this order and the other current policy changes. As discussed below, there are some possible privacy advances as data brokers have to become more careful in their sales of data, but a better path would be to ensure broader privacy and cybersecurity safeguards to better protect data and critical infrastructure systems from sophisticated cyberattacks from China and elsewhere…(More)”.
Breaking the Gridlock
UNDP Human Development Report 2024: “We can do better than this. Better than runaway climate change and pandemics. Better than a spate of unconstitutional transfers of power amid a rising, globalizing tide of populism. Better than cascading human rights violations and unconscionable massacres of people in their homes and civic venues, in hospitals, schools and shelters.
We must do better than a world always on the brink, a socioecological house of cards. We owe it to ourselves, to each other, to our children and their children.
We have so much going for us.
We know what the global challenges are and who will be most affected by them. And we know there will surely be more that we cannot anticipate today.
We know which choices offer better opportunities for peace, shared prosperity and sustainability, better ways to navigate interacting layers of uncertainty and interlinked planetary surprises.
We enjoy unprecedented wealth, know-how and technology—unimaginable to our ancestors—that with more equitable distribution and use could power bold and necessary choices for peace and for sustainable, inclusive human development on which peace depends…
In short, why are we so stuck? And how do we get unstuck without resorting myopically to violence or isolationism? These questions motivate the 2023–2024 Human Development Report.
Sharp questions belie their complexity; issues with power disparities at their core often defy easy explanation. Magic bullets entice but mislead—siren songs peddled by sloganeering that exploits group-based grievances. Slick solutions and simple recipes poison our willingness to do the hard work of overcoming polarization.
Geopolitical quagmires abound, driven by shifting power dynamics among states and by national gazes yanked inward by inequalities, insecurity and polarization, all recurring themes in this and recent Human Development Reports. Yet we need not sit on our hands simply because great power competition is heating up while countries underrepresented in global governance seek a greater say in matters of global import. Recall that global cooperation on smallpox eradication and protection of the ozone layer, among other important issues such as nuclear nonproliferation, happened over the course of the Cold War…(More)”.
New Horizons
An Introduction to the 2nd Edition of the State of Open Data by Renata Avila and Tim Davies: “The struggle to deliver on the vision that data, this critical resource of modern societies, should be widely available, well structured, and shared for all to use, has been a long one. It has been a struggle involving thousands upon thousands of individuals, organisations, and communities. Without their efforts, public procurement would be opaque, smart cities even more corporate-controlled, transport systems less integrated, and pandemic responses less rapid. Across numerous initiatives, open data has become more embedded as a way to support accountability, enable collaboration, and to better unlock the value of data.
However, much like the climber reaching the top of the foothills, and for the first time seeing the hard climb of the whole mountain coming into view, open data advocates, architects, and community builders have not reached the end of their journey. As we move into the middle of the 2020s, action on open data faces new and significant challenges if we are to see a future in which open and enabling data infrastructures and ecosystems are the norm rather than a sparse patchwork of exceptions. Building open infrastructures to power social change for the next century is no small task, and to meet the challenges ahead, we will need all the lessons we can gather from more than 15 years of open data action to date…Across the collection, we can find two main pathways to broader participation explored. On the one hand are discussions of widening public engagement and data literacy, creating a more diverse constituency of people interested and able to engage with data projects in a voluntary capacity. On the other are calls for more formalisation of data governance, embedding citizen voices within increasingly structured data collaborations and ensuring that affected stakeholders are consulted on, or given a role in, key data decisions. Mariel García-Montes (Data Literacy) underscores the case for an equity-first approach to the first pathway, highlighting how generalist data literacy can be used for or against the public good, and calling for approaches to data literacy building that centre on an understanding of inequality and power. In writing on urban development, Stefaan G. Verhulst and Sampriti Saxena (Urban Development) point to a number of examples of the latter approach in which cities are experimenting with various forms of deliberative conversations and processes…(More)”.