Paper by David Leslie: “In the current hype-laden climate surrounding the rapid proliferation of foundation models and generative AI systems like ChatGPT, it is becoming increasingly important for societal stakeholders to reach sound understandings of their limitations and potential transformative effects. This is especially true in the natural and applied sciences, where magical thinking among some scientists about the take-off of “artificial general intelligence” has arisen simultaneously as the growing use of these technologies is putting longstanding norms, policies, and standards of good research practice under pressure. In this analysis, I argue that a deflationary understanding of foundation models and generative AI systems can help us sense check our expectations of what role they can play in processes of scientific exploration, sense-making, and discovery. I claim that a more sober, tool-based understanding of generative AI systems as computational instruments embedded in warm-blooded research processes can serve several salutary functions. It can play a crucial bubble-bursting role that mitigates some of the most serious threats to the ethos of modern science posed by an unreflective overreliance on these technologies. It can also strengthen the epistemic and normative footing of contemporary science by helping researchers circumscribe the part to be played by machine-led prediction in communicative contexts of scientific discovery while concurrently prodding them to recognise that such contexts are principal sites for human empowerment, democratic agency, and creativity. Finally, it can help spur ever richer approaches to collaborative experimental design, theory-construction, and scientific world-making by encouraging researchers to deploy these kinds of computational tools to heuristically probe unbounded search spaces and patterns in high-dimensional biophysical data that would otherwise be inaccessible to human-scale examination and inference…(More)”.
The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis
Article by David Gilbert: “…The application of artificial intelligence technologies to conflict situations has been around since at least 1996, with machine learning being used to predict where conflicts may occur. The use of AI in this area has expanded in the intervening years, being used to improve logistics, training, and other aspects of peacekeeping missions. Lane and Shults believe they could use artificial intelligence to dig deeper and find the root causes of conflicts.
Their idea for an AI program that models the belief systems that drive human behavior first began when Lane moved to Northern Ireland a decade ago to study whether computation modeling and cognition could be used to understand issues around religious violence.
In Belfast, Lane figured out that by modeling aspects of identity and social cohesion, and identifying the factors that make people motivated to fight and die for a particular cause, he could accurately predict what was going to happen next.
“We set out to try and come up with something that could help us better understand what it is about human nature that sometimes results in conflict, and then how can we use that tool to try and get a better handle or understanding on these deeper, more psychological issues at really large scales,” Lane says.
The result of their work was a study published in 2018 in The Journal for Artificial Societies and Social Simulation, which found that people are typically peaceful but will engage in violence when an outside group threatens the core principles of their religious identity.
A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union that included the introduction of a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.
The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDELT, a project that monitors “the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research…(More)”.
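The mechanism described above — agents who are peaceful by default but mobilize when an out-group threatens a core identity — can be illustrated with a deliberately simple toy simulation. This is not the Lane–Shults model (their work involves far richer cognitive and social variables, plus GDELT-derived data); every parameter below, including the `identity_salience` and `perceived_threat` attributes and the 0.6 threshold, is an assumption chosen purely for illustration.

```python
import random

random.seed(42)

class Agent:
    """A toy agent with two illustrative attributes (not from the paper)."""

    def __init__(self):
        self.identity_salience = random.random()  # how central the group identity is (0..1)
        self.perceived_threat = 0.0               # set later from an external event

    def mobilized(self, threshold=0.6):
        # Violence only when a salient identity feels strongly threatened;
        # with low threat, no agent can cross the threshold.
        return self.identity_salience * self.perceived_threat > threshold

def simulate(n_agents=1000, external_threat=0.0):
    """Return the fraction of agents mobilized under a given external threat."""
    agents = [Agent() for _ in range(n_agents)]
    for a in agents:
        a.perceived_threat = min(1.0, external_threat + random.uniform(0, 0.2))
    return sum(a.mobilized() for a in agents) / n_agents

# With no external threat, the salience-threat product never exceeds 0.2,
# so nobody mobilizes; a strong out-group threat pushes a visible minority
# over the threshold, echoing the 2018 finding at a cartoon level.
print(simulate(external_threat=0.0))  # → 0.0
print(simulate(external_threat=0.9))
```

The point of the sketch is structural: violence emerges from an interaction between identity and threat, not from either alone, which is why calibrating such models against real-world data (as the researchers did with GDELT and fieldwork) matters.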
The Government Analytics Handbook
(Open Access) Book edited by Daniel Rogger and Christian Schuster: “Governments across the world make thousands of personnel management decisions, procure millions of goods and services, and execute billions of processes each day. They are data rich. And yet, there is little systematic practice to date that capitalizes on this data to make public administrations work better. This means that governments are missing out on data insights to save billions in procurement expenditures, recruit better talent into government, and identify sources of corruption, to name just a few.
The Government Analytics Handbook seeks to change that. It presents frontier evidence and practitioner insights on how to leverage data to make governments work better. Covering a range of microdata sources—such as administrative data and public servant surveys—as well as tools and resources for undertaking the analytics, it transforms the ability of governments to take a data-informed approach to diagnose and improve how public organizations work…(More)”.
The Tragedy of AI Governance
Paper by Simon Chesterman: “Despite hundreds of guides, frameworks, and principles intended to make AI “ethical” or “responsible”, ever more powerful applications continue to be released ever more quickly. Safety and security teams are being downsized or sidelined to bring AI products to market. And a significant portion of AI developers apparently believe there is a real risk that their work poses an existential threat to humanity.
This contradiction between statements and action can be attributed to three factors that undermine the prospects for meaningful governance of AI. The first is the shift of power from public to private hands, not only in deployment of AI products but in fundamental research. The second is the wariness of most states about regulating the sector too aggressively, for fear that it might drive innovation elsewhere. The third is the dysfunction of global processes to manage collective action problems, epitomized by the climate crisis and now frustrating efforts to govern a technology that does not respect borders. The tragedy of AI governance is that those with the greatest leverage to regulate AI have the least interest in doing so, while those with the greatest interest have the least leverage.
Resolving these challenges requires either rethinking the incentive structures or waiting for a crisis that brings the need for regulation and coordination into sharper focus…(More)”
Open-access reformers launch next bold publishing plan
Article by Layal Liverpool: “The group behind the radical open-access initiative Plan S has announced its next big plan to shake up research publishing — and this one could be bolder than the first. It wants all versions of an article and its associated peer-review reports to be published openly from the outset, without authors paying any fees, and for authors, rather than publishers, to decide when and where to first publish their work.
The group of influential funding agencies, called cOAlition S, has over the past five years already caused upheaval in the scholarly publishing world by pressuring more journals to allow immediate open-access publishing. Its new proposal, prepared by a working group of publishing specialists and released on 31 October, puts forward an even broader transformation in the dissemination of research.
It outlines a future “community-based” and “scholar-led” open-research communication system (see go.nature.com/45zyjh) in which publishers are no longer gatekeepers that reject submitted work or determine first publication dates. Instead, authors would decide when and where to publish the initial accounts of their findings, both before and after peer review. Publishers would become service providers, paid to conduct processes such as copy-editing, typesetting and handling manuscript submissions…(More)”.
Choosing AI’s Impact on the Future of Work
Article by Daron Acemoglu & Simon Johnson …“Too many commentators see the path of technology as inevitable. But the historical record is clear: technologies develop according to the vision and choices of those in positions of power. As we document in Power and Progress: Our 1,000-Year Struggle over Technology and Prosperity, when these choices are left entirely in the hands of a small elite, you should expect that group to receive most of the benefits, while everyone else bears the costs—potentially for a long time.
Rapid advances in AI threaten to eliminate many jobs, and not just those of writers and actors. Jobs with routine elements, such as in regulatory compliance or clerical work, and those that involve simple data collection, data summary, and writing tasks are likely to disappear.
But there are still two distinct paths that this AI revolution could take. One is the path of automation, based on the idea that AI’s role is to perform tasks as well as or better than people. Currently, this vision dominates in the US tech sector, where Microsoft and Google (and their ecosystems) are cranking hard to create new AI applications that can take over as many human tasks as possible.
The negative impact on people along the “just automate” path is easy to predict from prior waves of digital technologies and robotics. It was these earlier forms of automation that contributed to the decline of American manufacturing employment and the huge increase in inequality over the last four decades. If AI intensifies automation, we are very likely to get more of the same—a gap between capital and labor, more inequality between the professional class and the rest of the workers, and fewer good jobs in the economy….(More)”
Automating Empathy
Open Access Book by Andrew McStay: “We live in a world where artificial intelligence and the intensive use of personal data have become normalized. Companies across the world are developing and launching technologies to infer and interact with emotions, mental states, and human conditions. However, the methods and means of mediating information about people and their emotional states are incomplete and problematic.
Automating Empathy offers a critical exploration of technologies that sense intimate dimensions of human life and the modern ethical questions raised by attempts to perform and simulate empathy. It traces the ascendance of empathic technologies from their origins in physiognomy and pathognomy to the modern day and explores technologies in nations with non-Western ethical histories and approaches to emotion, such as Japan. The book examines applications of empathic technologies across sectors such as education, policing, and transportation, and considers key questions of everyday use such as the integration of human-state sensing in mixed reality, the use of neurotechnologies, and the moral limits of using data gleaned through automated empathy. Ultimately, Automating Empathy outlines the key principles necessary to usher in a future where automated empathy can serve and do good…(More)”
Data Equity: Foundational Concepts for Generative AI
WEF Report: “This briefing paper focuses on data equity within foundation models, both in terms of the impact of Generative AI (genAI) on society and on the further development of genAI tools.
GenAI promises immense potential to drive digital and social innovation, such as improving efficiency, enhancing creativity and augmenting existing data. GenAI has the potential to democratize access to and usage of technologies. However, left unchecked, it could deepen inequities. With the advent of genAI significantly increasing the rate at which AI is deployed and developed, exploring frameworks for data equity is more urgent than ever.
The goals of the briefing paper are threefold: to establish a shared vocabulary to facilitate collaboration and dialogue; to scope initial concerns to establish a framework for inquiry on which stakeholders can focus; and to shape future development of promising technologies.
The paper represents a first step in exploring and promoting data equity in the context of genAI. The proposed definitions, framework and recommendations are intended to proactively shape the development of promising genAI technologies…(More)”.
AI is Like… A Literature Review of AI Metaphors and Why They Matter for Policy
Paper by Matthijs M. Maas: “As AI systems have become increasingly capable and impactful, there has been significant public and policymaker debate over this technology’s impacts—and the appropriate legal or regulatory responses. Within these debates many have deployed—and contested—a dazzling range of analogies, metaphors, and comparisons for AI systems, their impact, or their regulation.
This report reviews why and how metaphors matter to both the study and practice of AI governance, in order to contribute to more productive dialogue and more reflective policymaking. It first reviews five stages at which different foundational metaphors play a role in shaping: the processes of technological innovation; the academic study of their impacts; the regulatory agenda; the terms of the policymaking process; and legislative and judicial responses to new technology. It then surveys a series of cases where the choice of analogy materially influenced the regulation of internet issues, as well as (recent) AI law issues. The report then provides a non-exhaustive survey of 55 analogies that have been given for AI technology, and some of their policy implications. Finally, it discusses the risks of utilizing unreflexive analogies in AI law and regulation.
By disentangling the role of metaphors and frames in these debates, and the space of analogies for AI, this survey does not aim to argue against the use or role of analogies in AI regulation—but rather to facilitate more reflective and productive conversations on these timely challenges…(More)”.
Urban Development and the State of Open Data
Chapter by Stefaan G. Verhulst and Sampriti Saxena: “Nearly 4.4 billion people, or about 55% of the world’s population, lived in cities in 2018. By 2045, this number is anticipated to grow to 6 billion. Such a level of growth requires innovative and targeted urban solutions. By more effectively leveraging open data, cities can meet the needs of an ever-growing population in an effective and sustainable manner. This paper updates the previous contribution by Jean-Noé Landry, titled “Open Data and Urban Development” in the 2019 edition of The State of Open Data. It also aims to contribute to a further deepening of the Third Wave of Open Data, which highlights the significance of open data at the subnational level as a more direct and immediate response to the on-the-ground needs of citizens. It considers recent developments in how the use of, and approach to, open data has evolved within an urban development context. It seeks to discuss emerging applications of open data in cities, recent developments in open data infrastructure, governance and policies related to open data, and the future outlook of the role of open data in urbanization…(More)”.