Open Access Book edited by Stephen Boucher, Carina Antonia Hallin, and Lex Paulson: “…explores the concepts, methodologies, and implications of collective intelligence for democratic governance, in the first comprehensive survey of this field.
Illustrated by a collection of inspiring case studies and edited by three pioneers in collective intelligence, this handbook serves as a unique primer on the science of collective intelligence applied to public challenges and will inspire public actors, academics, students, and activists across the world to apply collective intelligence in policymaking and administration to explore its potential, both to foster policy innovations and reinvent democracy…(More)”.
Open Access Book edited by Francis Fukuyama and Marietje Schaake: “…While there has been a tremendous upsurge in scholarly research into the political and social impacts of digital technologies, the vast majority of this work has tended to focus on rich countries in North America and Europe. Both regions had high levels of internet penetration and the state capacity to take on—potentially, at any rate—regulatory issues raised by digitization…. The current volume is an initial effort to rectify the imbalance in the way that centers and programs such as ours look at the world, by focusing on what might broadly be labeled the “global south,” which we term “emerging countries” (ECs). Countries and regions outside of North America and Europe face opportunities and challenges similar to those of developed regions, but also problems that are unique to themselves…(More)”.
Article by Kaushik Basu: “Technology is changing the world faster than policymakers can devise new ways to cope with it. As a result, societies are becoming polarized, inequality is rising, and authoritarian regimes and corporations are doctoring reality and undermining democracy.
For ordinary people, there is ample reason to be “a little bit scared,” as OpenAI CEO Sam Altman recently put it. Major advances in artificial intelligence raise concerns about education, work, warfare, and other risks that could destabilize civilization long before climate change does. To his credit, Altman is urging lawmakers to regulate his industry.
In confronting this challenge, we must keep two concerns in mind. The first is the need for speed. If we take too long, we may find ourselves closing the barn door after the horse has bolted. That is what happened with the 1968 Nuclear Non-Proliferation Treaty: It came 23 years too late. If we had managed to establish some minimal rules after World War II, the NPT’s ultimate goal of nuclear disarmament might have been achievable.
The other concern involves deep uncertainty. This is such a new world that even those working on AI do not know where their inventions will ultimately take us. A law enacted with the best intentions can still backfire. When America’s founders drafted the Second Amendment conferring the “right to keep and bear arms,” they could not have known how firearms technology would change in the future, thereby changing the very meaning of the word “arms.” Nor did they foresee how their descendants would fail to realize this even after seeing the change.
But uncertainty does not justify fatalism. Policymakers can still effectively govern the unknown as long as they keep certain broad considerations in mind. For example, one idea that came up during a recent Senate hearing was to create a licensing system whereby only select corporations would be permitted to work on AI.
This approach comes with some obvious risks of its own. Licensing can often be a step toward cronyism, so we would also need new laws to deter politicians from abusing the system. Moreover, slowing your country’s AI development with additional checks does not mean that others will adopt similar measures. In the worst case, you may find yourself facing adversaries wielding precisely the kind of malevolent tools that you eschewed. That is why AI is best regulated multilaterally, even if that is a tall order in today’s world…(More)”.
Paper by Toby Shevlane et al: “Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through “dangerous capability evaluations”) and the propensity of models to apply their capabilities for harm (through “alignment evaluations”). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.
Figure 1 | The theory of change for model evaluations for extreme risk. Evaluations for dangerous capabilities and alignment inform risk assessments, and are in turn embedded into important governance processes…(More)”.
Blog by Mykola Blyzniuk: “In modern warfare, new technologies are increasingly being used to manipulate information and perceptions on the battlefield. This includes the use of deep fakes, or the malicious use of ICT (Information and Communication Technologies).
The dual use of new technologies in modern warfare highlights the need for further investigation. Here are two examples of how they can be used to advance political analysis and situational awareness…
The world of Natural Language Processing (NLP) technology took a leap with a recent study on the Russia-Ukraine conflict by Uddagiri Sirisha and Bolem Sai Chandana of the School of Computer Science and Engineering at Vellore Institute of Technology Andhra Pradesh (VIT-AP) University in Amaravathi, Andhra Pradesh, India.
The researchers developed a novel artificial intelligence model to analyze whether a piece of text is positive, negative, or neutral in tone. This new model, referred to as “ABSA-based RoBERTa-LSTM”, looks at not just the overall sentiment of a piece of text but also the sentiment towards specific aspects or entities mentioned in the text. The study applied the model to a pre-processed dataset of 484,221 tweets related to the Russia-Ukraine conflict, collected during April–May 2022, achieving a sentiment analysis accuracy of 94.7% and outperforming current techniques….(More)”.
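To make the idea of aspect-based sentiment analysis (ABSA) concrete: unlike whole-document sentiment, ABSA assigns a label per aspect mentioned in the text. The sketch below is a deliberately minimal, lexicon-based toy for illustration only; it is not the paper’s RoBERTa-LSTM model, and the sentiment word lists and `window` parameter are invented for the example.

```python
# Toy aspect-based sentiment analysis (ABSA): score sentiment words
# found near each aspect mention, rather than one label per text.
# Word lists and window size are illustrative assumptions.
POSITIVE = {"strong", "effective", "support", "resilient", "peace"}
NEGATIVE = {"attack", "crisis", "losses", "destroyed", "sanctions"}

def aspect_sentiment(text, aspects, window=3):
    """Label each aspect 'positive'/'negative'/'neutral' based on
    sentiment words within `window` tokens of each mention."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    results = {}
    for aspect in aspects:
        score = 0
        for i, tok in enumerate(tokens):
            if tok == aspect.lower():
                context = tokens[max(0, i - window): i + window + 1]
                score += sum(w in POSITIVE for w in context)
                score -= sum(w in NEGATIVE for w in context)
        results[aspect] = ("positive" if score > 0
                           else "negative" if score < 0 else "neutral")
    return results

tweet = "Sanctions hit the economy hard, but the army remains strong and effective."
print(aspect_sentiment(tweet, ["economy", "army"]))
# → {'economy': 'negative', 'army': 'positive'}
```

The point the toy makes is the same one motivating the ABSA-based RoBERTa-LSTM: a single tweet can carry opposite sentiments about different entities, which an overall-sentiment classifier would flatten into one label.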
Book edited by Stephen Cave and Kanta Dihal: “AI is now a global phenomenon. Yet Hollywood narratives dominate perceptions of AI in the English-speaking West and beyond, and much of the technology itself is shaped by a disproportionately white, male, US-based elite. However, different cultures have been imagining intelligent machines since long before we could build them, in visions that vary greatly across religious, philosophical, literary and cinematic traditions. This book aims to spotlight these alternative visions.
Imagining AI draws attention to the range and variety of visions of a future with intelligent machines and their potential significance for the research, regulation, and implementation of AI. The book is structured geographically, with each chapter presenting insights into how a specific region or culture imagines intelligent machines. The contributors, leading experts from academia and the arts, explore how the encounters between local narratives, digital technologies, and mainstream Western narratives create new imaginaries and insights in different contexts across the globe. The narratives they analyse range from ancient philosophy to contemporary science fiction, and visual art to policy discourse.
The book sheds new light on some of the most important themes in AI ethics, from the differences between Chinese and American visions of AI, to digital neo-colonialism. It is an essential work for anyone wishing to understand how different cultural contexts interplay with the most significant technology of our time…(More)”.
Article by Martin Wolf: “…our democratic processes do not work very well. Adding referendums to elections does not solve the problem. But adding citizens’ assemblies might.
In his farewell address, George Washington warned against the spirit of faction. He argued that the “alternate domination of one faction over another . . . is itself a frightful despotism. But . . . The disorders and miseries which result gradually incline the minds of men to seek security and repose in the absolute power of an individual”. If one looks at the US today, that peril is evident. In current electoral politics, manipulation of the emotions of a rationally ill-informed electorate is the path to power. The outcome is likely to be rule by those with the greatest talent for demagogy.
Elections are necessary. But unbridled majoritarianism is a disaster. A successful liberal democracy requires constraining institutions: independent oversight over elections, an independent judiciary and an independent bureaucracy. But are they enough? No. In my book, The Crisis of Democratic Capitalism, I follow the Australian economist Nicholas Gruen in arguing for the addition of citizens’ assemblies or citizens’ juries. These would insert an important element of ancient Greek democracy into the parliamentary tradition.
There are two arguments for introducing sortition (lottery) into the political process. First, these assemblies would be more representative than professional politicians can ever be. Second, it would temper the impact of political campaigning, nowadays made more distorting by the arts of advertising and the algorithms of social media…(More)”.
Paper by Diego Gómez-Zará, Peter Schiffer & Dashun Wang: “The future of the metaverse remains uncertain and continues to evolve, as was the case for many technological advances of the past. Now is the time for scientists, policymakers and research institutions to start considering actions to capture the potential of the metaverse and take concrete steps to avoid its pitfalls. Proactive investments in the form of competitive grants, internal agency efforts and infrastructure building should be considered, supporting innovation and adaptation to the future in which the metaverse may be more pervasive in society.
Government agencies and other research funders could also have a critical role in funding and promoting interoperability and shared protocols among different metaverse technologies and environments. These aspects will help the scientific research community to ensure broad adoption and reproducibility. For example, government research agencies may create an open and publicly accessible metaverse platform with open-source code and standard protocols that can be translated to commercial platforms as needed. In the USA, an agency such as the National Institute of Standards and Technology could set standards for protocols that are suitable for the research enterprise or, alternatively, an international convention could set global standards. Similarly, an agency such as the National Institutes of Health could leverage its extensive portfolio of behavioural research and build and maintain a metaverse for human subjects studies. Within such an ecosystem, researchers could develop and implement their own research protocols with appropriate protections, standardized and reproducible conditions, and secure data management. A publicly sponsored research-focused metaverse — which could be cross-compatible with commercial platforms — may create and capture substantial value for science, from augmenting scientific productivity to protecting research integrity.
There are important precedents for this sort of action in that governments and universities have built open repositories for data in fields such as astronomy and crystallography, and both the US National Science Foundation and the US Department of Energy have built and maintained high-performance computing environments that are available to the broader research community. Such efforts could be replicated and adapted for emerging metaverse technologies, which would be especially beneficial for under-resourced institutions to access and leverage common resources. Critically, the encouragement of private sector innovation and the development of public–private alliances must be balanced with the need for interoperability, openness and accessibility to the broader research community…(More)”.
Article by Shania Kennedy: “The World Health Organization (WHO) launched the International Pathogen Surveillance Network (IPSN), a public health network to prevent and detect infectious disease threats before they become epidemics or pandemics.
IPSN will rely on insights generated from pathogen genomics, which helps analyze the genetic material of viruses, bacteria, and other disease-causing micro-organisms to determine how they spread and how infectious or deadly they may be.
Using these data, researchers can identify and track diseases to improve outbreak prevention, response, and treatments.
“The goal of this new network is ambitious, but it can also play a vital role in health security: to give every country access to pathogen genomic sequencing and analytics as part of its public health system,” said WHO Director-General Tedros Adhanom Ghebreyesus, PhD, in the press release. “As was so clearly demonstrated to us during the COVID-19 pandemic, the world is stronger when it stands together to fight shared health threats.”
Genomics capacity worldwide was scaled up during the pandemic, but the press release indicates that many countries still lack effective tools and systems for public health data collection and analysis. This lack of resources and funding could slow the development of a strong global health surveillance infrastructure, which IPSN aims to help address.
The network will bring together experts in genomics and data analytics to optimize routine disease surveillance, including for COVID-19. According to the press release, pathogen genomics-based analyses of the SARS-CoV-2 virus helped speed the development of effective vaccines and the identification of more transmissible virus variants…(More)”.
Article by Mark Shope: “This article is intended to be a best practices guide for disclosing the use of artificial intelligence tools in legal writing. The article focuses on using artificial intelligence tools that aid in drafting textual material, specifically in law review articles and law school courses. The article’s approach to disclosure and citation is intended to be a starting point for authors, institutions, and academic communities to tailor based on their own established norms and philosophies. Throughout the entire article, the author has used ChatGPT to provide examples of how artificial intelligence tools can be used in writing and how the output of artificial intelligence tools can be expressed in text, including examples of how that use and text should be disclosed and cited. The article will also include policies for professors to use in their classrooms and journals to use in their submission guidelines…(More)”.