Paper by S. Lee et al.: “Large language models (LLMs) have demonstrated their potential in social science research by emulating human perceptions and behaviors, a concept referred to as algorithmic fidelity. This study assesses the algorithmic fidelity and bias of LLMs by utilizing two nationally representative climate change surveys. The LLMs were conditioned on demographics and/or psychological covariates to simulate survey responses. The findings indicate that LLMs can effectively capture presidential voting behaviors but encounter challenges in accurately representing global warming perspectives when relevant covariates are not included. GPT-4 exhibits improved performance when conditioned on both demographics and covariates. However, disparities emerge in LLM estimations of the views of certain groups, with LLMs tending to underestimate worry about global warming among Black Americans. While highlighting the potential of LLMs to aid social science research, these results underscore the importance of meticulous conditioning, model selection, survey question format, and bias assessment when employing LLMs for survey simulation. Further investigation into prompt engineering and algorithm auditing is essential to harness the power of LLMs while addressing their inherent limitations…(More)”.
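As a rough illustration of the conditioning step the abstract describes, the sketch below builds a persona prompt from demographics plus a psychological covariate and asks a chat model to answer a closed-form survey question. It assumes the openai Python client (v1+); the persona fields, question wording, and model name are illustrative and not the authors’ actual prompts.

```python
# Minimal sketch of demographic/covariate conditioning for survey simulation.
# Assumes the openai Python client (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def simulate_response(persona: dict, question: str, options: list[str]) -> str:
    """Ask the model to answer a survey question as a respondent with the given profile."""
    profile = "; ".join(f"{k}: {v}" for k, v in persona.items())
    prompt = (
        f"You are a survey respondent with the following profile: {profile}.\n"
        f"Question: {question}\n"
        f"Answer with exactly one of: {', '.join(options)}."
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sample, so repeated calls approximate a response distribution
    )
    return resp.choices[0].message.content.strip()

# Example: condition on demographics plus a psychological covariate (hypothetical values).
persona = {
    "age": 46,
    "race/ethnicity": "Black",
    "party identification": "Democrat",
    "belief that global warming is happening": "yes",  # psychological covariate
}
answer = simulate_response(
    persona,
    "How worried are you about global warming?",
    ["Very worried", "Somewhat worried", "Not very worried", "Not at all worried"],
)
print(answer)
```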
AI and Democracy’s Digital Identity Crisis
Essay by Shrey Jain, Connor Spelliscy, Samuel Vance-Law and Scott Moore: “AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China’s social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems…(More)”
The Bletchley Declaration
Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023: “In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:
- identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
- building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.
In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.
In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024…(More)”.
Enterprise Value and the Value of Data
Paper by Dan Ciuriak: “Data is often said to be the most valuable commodity of our age. It is a curiosity, therefore, that it remains largely invisible on the balance sheets of companies and largely unmeasured in our national economic accounts. This paper comments on the problems of using cost-based or transactions-based methods to establish value for a nation’s data in the system of national accounts and suggests that this should be complemented with the value of economic rents attributable to data. This rent is part of enterprise value; accordingly, an indicator is required as an instrumental variable for the use of data for value creation within firms. The paper argues that traditional accounting looks through the firm to its tangible (and certain intangible) assets; that may no longer be feasible in measuring and understanding the data-driven economy…(More)”
Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI
Paper by David Leslie: “In the current hype-laden climate surrounding the rapid proliferation of foundation models and generative AI systems like ChatGPT, it is becoming increasingly important for societal stakeholders to reach sound understandings of their limitations and potential transformative effects. This is especially true in the natural and applied sciences, where magical thinking among some scientists about the take-off of “artificial general intelligence” has arisen simultaneously as the growing use of these technologies is putting longstanding norms, policies, and standards of good research practice under pressure. In this analysis, I argue that a deflationary understanding of foundation models and generative AI systems can help us sense check our expectations of what role they can play in processes of scientific exploration, sense-making, and discovery. I claim that a more sober, tool-based understanding of generative AI systems as computational instruments embedded in warm-blooded research processes can serve several salutary functions. It can play a crucial bubble-bursting role that mitigates some of the most serious threats to the ethos of modern science posed by an unreflective overreliance on these technologies. It can also strengthen the epistemic and normative footing of contemporary science by helping researchers circumscribe the part to be played by machine-led prediction in communicative contexts of scientific discovery while concurrently prodding them to recognise that such contexts are principal sites for human empowerment, democratic agency, and creativity. Finally, it can help spur ever richer approaches to collaborative experimental design, theory-construction, and scientific world-making by encouraging researchers to deploy these kinds of computational tools to heuristically probe unbounded search spaces and patterns in high-dimensional biophysical data that would otherwise be inaccessible to human-scale examination and inference…(More)”.
The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis
Article by David Gilbert: “…The application of artificial intelligence technologies to conflict situations has been around since at least 1996, with machine learning being used to predict where conflicts may occur. The use of AI in this area has expanded in the intervening years, being used to improve logistics, training, and other aspects of peacekeeping missions. Lane and Shults believe they could use artificial intelligence to dig deeper and find the root causes of conflicts.
Their idea for an AI program that models the belief systems that drive human behavior first began when Lane moved to Northern Ireland a decade ago to study whether computation modeling and cognition could be used to understand issues around religious violence.
In Belfast, Lane figured out that by modeling aspects of identity and social cohesion, and identifying the factors that make people motivated to fight and die for a particular cause, he could accurately predict what was going to happen next.
“We set out to try and come up with something that could help us better understand what it is about human nature that sometimes results in conflict, and then how can we use that tool to try and get a better handle or understanding on these deeper, more psychological issues at really large scales,” Lane says.
The result of their work was a study published in 2018 in The Journal for Artificial Societies and Social Simulation, which found that people are typically peaceful but will engage in violence when an outside group threatens the core principles of their religious identity.
A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union that included the introduction of a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.
The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDelt, a project that monitors “the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research…(More)”.
AI Globalism and AI Localism: Governing AI at the Local Level for Global Benefit
Article by Stefaan G. Verhulst: “With the UK Summit in full swing, 2023 will likely be seen as a pivotal year for AI governance, with governments promoting a global governance model: AI Globalism. For it to be relevant, flexible, and effective, any global approach will need to be informed by and complemented with local experimentation and leadership, ensuring local responsiveness: AI Localism.
Even as consumers and businesses extend their use of AI (generative AI in particular), governments are also taking notice. Determined not to be caught on the back foot, as they were with social media, regulators and policymakers around the world are exploring frameworks and institutional structures that could help maximize the benefits while minimizing the potential harms of AI. This week, the UK is hosting a high-profile AI Safety Summit, attended by political and business leaders from around the world, including Kamala Harris and Elon Musk. Similarly, US President Biden recently signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which he hailed as a “landmark executive order” to ensure “safety, security, trust, openness, and American leadership.”
Amid the various policy and regulatory proposals swirling around, there has been a notable emphasis on what we might call AI globalism. The UK summit has explicitly endorsed a global approach to AI safety, with coordination between the US, EU, and China core to its vision of more responsible and safe AI. This global perspective follows similar recent calls for “an AI equivalent of the IPCC” or the International Atomic Energy Agency (IAEA). Notably, such calls are emerging both from the private sector and from civil society leaders.
In many ways, a global approach makes sense. Like most technology, AI is transnational in scope, and its governance will require cross-jurisdictional coordination and harmonization. At the same time, we believe that AI globalism should be accompanied by a recognition that some of the most innovative AI initiatives are taking place in cities and municipalities and being regulated at those levels too.
We call it AI localism. In what follows, I outline a vision of a more decentralized approach to AI governance, one that would allow cities and local jurisdictions — including states — to develop and iterate governance frameworks tailored to their specific needs and challenges. This decentralized, local approach would need to take place alongside global efforts. The two would not be mutually exclusive but instead necessarily complementary…(More)”.
The Government Analytics Handbook
(Open Access) Book edited by Daniel Rogger and Christian Schuster: “Governments across the world make thousands of personnel management decisions, procure millions of goods and services, and execute billions of processes each day. They are data rich. And yet, there is little systematic practice to date that capitalizes on this data to make public administrations work better. This means that governments are missing out on data insights to save billions in procurement expenditures, recruit better talent into government, and identify sources of corruption, to name just a few.
The Government Analytics Handbook seeks to change that. It presents frontier evidence and practitioner insights on how to leverage data to make governments work better. Covering a range of microdata sources—such as administrative data and public servant surveys—as well as tools and resources for undertaking the analytics, it transforms the ability of governments to take a data-informed approach to diagnose and improve how public organizations work…(More)”.
AI in public services will require empathy, accountability
Article by Yogesh Hirdaramani: “The Australian Government Department of the Prime Minister and Cabinet has released the first of its Long Term Insights Briefings, which focuses on how the Government can integrate artificial intelligence (AI) into public services while maintaining the trustworthiness of public service delivery.
Public servants need to remain accountable and transparent with their use of AI, continue to demonstrate empathy for the people they serve, use AI to better meet people’s needs, and build AI literacy amongst the Australian public, the report stated.
The report also cited a forthcoming study that found that Australian residents with a deeper understanding of AI are more likely to trust the Government’s use of AI in service delivery. However, more than half of survey respondents reported having little knowledge of AI.
Key takeaways
The report aims to supplement current policy work on how AI can be best governed in the public service to realise its benefits while maintaining public trust.
In the longer term, the Australian Government aims to use AI to deliver personalised services to its citizens, deliver services more efficiently and conveniently, and achieve a higher standard of care for its ageing population.
AI can help public servants achieve these goals through automating processes, improving service processing and response time, and providing AI-enabled interfaces which users can engage with, such as chatbots and virtual assistants.
However, AI can also lead to unfair or unintended outcomes due to bias in training data or hallucinations, the report noted.
According to the report, the trustworthy use of AI will require public servants to:
- Demonstrate integrity by remaining accountable for AI outcomes and transparent about AI use
- Demonstrate empathy by offering face-to-face services for those with greater vulnerabilities
- Use AI in ways that improve service delivery for end-users
- Build internal skills and systems to implement AI, while educating the public on the impact of AI
The Australian Taxation Office currently uses AI to identify high-risk business activity statements to determine whether refunds can be issued or if further review is required, noted the report. Taxpayers can appeal the decision if staff decide to deny refunds…(More)”
The Tragedy of AI Governance
Paper by Simon Chesterman: “Despite hundreds of guides, frameworks, and principles intended to make AI “ethical” or “responsible”, ever more powerful applications continue to be released ever more quickly. Safety and security teams are being downsized or sidelined to bring AI products to market. And a significant portion of AI developers apparently believe there is a real risk that their work poses an existential threat to humanity.
This contradiction between statements and action can be attributed to three factors that undermine the prospects for meaningful governance of AI. The first is the shift of power from public to private hands, not only in deployment of AI products but in fundamental research. The second is the wariness of most states about regulating the sector too aggressively, for fear that it might drive innovation elsewhere. The third is the dysfunction of global processes to manage collective action problems, epitomized by the climate crisis and now frustrating efforts to govern a technology that does not respect borders. The tragedy of AI governance is that those with the greatest leverage to regulate AI have the least interest in doing so, while those with the greatest interest have the least leverage.
Resolving these challenges requires either rethinking the incentive structures or waiting for a crisis that brings the need for regulation and coordination into sharper focus…(More)”