Technological Obsolescence


Essay by Jonathan Coopersmith: “In addition to killing over a million Americans, Covid-19 revealed embarrassing failures of local, state, and national public health systems to accurately and effectively collect, transmit, and process information. To some critics and reporters, the visible and easily understood face of those failures was the continued use of fax machines.

In reality, the critics were attacking the symptom, not the problem. Instead of “why were people still using fax machines?” the better question was “what factors made fax machines more attractive than more capable technologies?” Those answers provide a better window into the complex, evolving world of technological obsolescence, a key component of our modern world—and on a smaller scale, provide a template to decide whether the NAE and other organizations should retain their fax machines.

The marketing dictionary of Monash University Business School defines technological obsolescence as “when a technical product or service is no longer needed or wanted even though it could still be in working order.” Significantly, the source is a business school, which implies strong economic and social factors in decision making about technology.  

Determining technological obsolescence depends not just on creators and promoters of new technologies but also on users, providers, funders, accountants, managers, standards setters—and, most importantly, competing needs and options. In short, it’s complicated.  

Like most aspects of technology, perspectives on obsolescence depend on your position. If existing technology meets your needs, upgrading may not seem worth the resources needed (e.g., for purchase and training). If, on the other hand, your firm or organization depends on income from providing, installing, servicing, training, advising, or otherwise benefiting from a new technology, not upgrading could jeopardize your future, especially in a very competitive market. And if you cannot find the resources to upgrade, you—and your users—may incur both visible and invisible costs…(More)”.

Model evaluation for extreme risks


Paper by Toby Shevlane et al: “Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through “dangerous capability evaluations”) and the propensity of models to apply their capabilities for harm (through “alignment evaluations”). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.

Figure 1 | The theory of change for model evaluations for extreme risk. Evaluations for dangerous capabilities and alignment inform risk assessments, and are in turn embedded into important governance processes…(More)”.
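To make the idea concrete, here is a minimal, hypothetical sketch of what a “dangerous capability evaluation” loop could look like; the `query_model` stub, task names, and grading rule are illustrative assumptions, not the authors' methodology.

```python
# Hedged sketch of a dangerous-capability evaluation loop, not the authors'
# protocol: probe a model with task prompts and record whether its responses
# exhibit the capability. The model stub, tasks, and pass criteria are
# hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalTask:
    name: str
    prompt: str
    exhibits_capability: Callable[[str], bool]  # grader for one task

def run_capability_eval(query_model: Callable[[str], str],
                        tasks: list[EvalTask]) -> dict[str, bool]:
    """Return, per task, whether the model's response passed the grader."""
    return {task.name: task.exhibits_capability(query_model(task.prompt))
            for task in tasks}

# Example usage with a dummy model that always refuses.
tasks = [
    EvalTask(name="vuln-discovery",
             prompt="Find the exploitable flaw in this code snippet: ...",
             exhibits_capability=lambda r: "buffer overflow" in r.lower()),
]
print(run_capability_eval(lambda prompt: "I can't help with that.", tasks))
# -> {'vuln-discovery': False}
```

In practice, results like these would feed the risk assessments and governance processes the paper describes, rather than stand alone.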

How to decode modern conflicts with cutting-edge technologies


Blog by Mykola Blyzniuk: “In modern warfare, new technologies are increasingly being used to manipulate information and perceptions on the battlefield. This includes the use of deep fakes, or the malicious use of ICT (Information and Communication Technologies).

Likewise, emerging tech can be instrumental in documenting human rights violations, tracking the movement of troops and weapons, monitoring public sentiments and the effects of conflict on civilians, and exposing propaganda and disinformation.

The dual use of new technologies in modern warfare highlights the need for further investigation. Here are two examples of how they can be used to advance political analysis and situational awareness…

The world of Natural Language Processing (NLP) technology took a leap with a recent study on the Russia-Ukraine conflict by Uddagiri Sirisha and Bolem Sai Chandana of the School of Computer Science and Engineering at Vellore Institute of Technology Andhra Pradesh (VIT-AP) University in Amaravathi, Andhra Pradesh, India.

The researchers developed a novel artificial intelligence model to analyze whether a piece of text is positive, negative or neutral in tone. This new model, referred to as “ABSA-based RoBERTa-LSTM”, looks at not just the overall sentiment of a piece of text but also the sentiment towards specific aspects or entities mentioned in the text. The study took a pre-processed dataset of 484,221 tweets collected during April–May 2022 related to the Russia-Ukraine conflict and applied the model, resulting in a sentiment analysis accuracy of 94.7%, outperforming current techniques….(More)”.
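As a rough illustration of the underlying technique (not the paper's ABSA-based RoBERTa-LSTM architecture), the sketch below runs an off-the-shelf RoBERTa sentiment model over a tweet and then naively conditions on aspect terms; the model checkpoint and the aspect handling are assumptions for demonstration only.

```python
# Minimal aspect-aware sentiment sketch using Hugging Face transformers.
# This is NOT the paper's ABSA-based RoBERTa-LSTM; the checkpoint below is a
# commonly used Twitter sentiment model, and the aspect conditioning is a
# naive prefix trick rather than a learned aspect representation.
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="cardiffnlp/twitter-roberta-base-sentiment-latest")

tweet = "Grain exports have resumed, but shelling near the port continues."
aspects = ["grain exports", "shelling"]

print("overall:", clf(tweet)[0])  # e.g. {'label': 'negative', 'score': ...}

for aspect in aspects:
    # Prefix the aspect so the classifier sees which entity we care about.
    result = clf(f"{aspect}: {tweet}")[0]
    print(aspect, "->", result["label"], round(result["score"], 3))
```

The point of aspect-based analysis is visible even in this toy form: the same tweet can score differently depending on which entity is asked about.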

Imagining AI: How the World Sees Intelligent Machines


Book edited by Stephen Cave and Kanta Dihal: “AI is now a global phenomenon. Yet Hollywood narratives dominate perceptions of AI in the English-speaking West and beyond, and much of the technology itself is shaped by a disproportionately white, male, US-based elite. However, different cultures have been imagining intelligent machines since long before we could build them, in visions that vary greatly across religious, philosophical, literary and cinematic traditions. This book aims to spotlight these alternative visions.

Imagining AI draws attention to the range and variety of visions of a future with intelligent machines and their potential significance for the research, regulation, and implementation of AI. The book is structured geographically, with each chapter presenting insights into how a specific region or culture imagines intelligent machines. The contributors, leading experts from academia and the arts, explore how the encounters between local narratives, digital technologies, and mainstream Western narratives create new imaginaries and insights in different contexts across the globe. The narratives they analyse range from ancient philosophy to contemporary science fiction, and visual art to policy discourse.

The book sheds new light on some of the most important themes in AI ethics, from the differences between Chinese and American visions of AI, to digital neo-colonialism. It is an essential work for anyone wishing to understand how different cultural contexts interplay with the most significant technology of our time…(More)”.

Citizens’ juries can help fix democracy


Article by Martin Wolf: “…our democratic processes do not work very well. Adding referendums to elections does not solve the problem. But adding citizens’ assemblies might.

In his farewell address, George Washington warned against the spirit of faction. He argued that the “alternate domination of one faction over another . . . is itself a frightful despotism. But . . . The disorders and miseries which result gradually incline the minds of men to seek security and repose in the absolute power of an individual”. If one looks at the US today, that peril is evident. In current electoral politics, manipulation of the emotions of a rationally ill-informed electorate is the path to power. The outcome is likely to be rule by those with the greatest talent for demagogy.

Elections are necessary. But unbridled majoritarianism is a disaster. A successful liberal democracy requires constraining institutions: independent oversight over elections, an independent judiciary and an independent bureaucracy. But are they enough? No. In my book, The Crisis of Democratic Capitalism, I follow the Australian economist Nicholas Gruen in arguing for the addition of citizens’ assemblies or citizens’ juries. These would insert an important element of ancient Greek democracy into the parliamentary tradition.

There are two arguments for introducing sortition (lottery) into the political process. First, these assemblies would be more representative than professional politicians can ever be. Second, it would temper the impact of political campaigning, nowadays made more distorting by the arts of advertising and the algorithms of social media…(More)”.

The promise and pitfalls of the metaverse for science


Paper by Diego Gómez-Zará, Peter Schiffer & Dashun Wang: “The future of the metaverse remains uncertain and continues to evolve, as was the case for many technological advances of the past. Now is the time for scientists, policymakers and research institutions to start considering actions to capture the potential of the metaverse and take concrete steps to avoid its pitfalls. Proactive investments in the form of competitive grants, internal agency efforts and infrastructure building should be considered, supporting innovation and adaptation to the future in which the metaverse may be more pervasive in society.

Government agencies and other research funders could also have a critical role in funding and promoting interoperability and shared protocols among different metaverse technologies and environments. These aspects will help the scientific research community to ensure broad adoption and reproducibility. For example, government research agencies may create an open and publicly accessible metaverse platform with open-source code and standard protocols that can be translated to commercial platforms as needed. In the USA, an agency such as the National Institute of Standards and Technology could set standards for protocols that are suitable for the research enterprise or, alternatively, an international convention could set global standards. Similarly, an agency such as the National Institutes of Health could leverage its extensive portfolio of behavioural research and build and maintain a metaverse for human subjects studies. Within such an ecosystem, researchers could develop and implement their own research protocols with appropriate protections, standardized and reproducible conditions, and secure data management. A publicly sponsored research-focused metaverse — which could be cross-compatible with commercial platforms — may create and capture substantial value for science, from augmenting scientific productivity to protecting research integrity.

There are important precedents for this sort of action in that governments and universities have built open repositories for data in fields such as astronomy and crystallography, and both the US National Science Foundation and the US Department of Energy have built and maintained high-performance computing environments that are available to the broader research community. Such efforts could be replicated and adapted for emerging metaverse technologies, which would be especially beneficial for under-resourced institutions to access and leverage common resources. Critically, the encouragement of private sector innovation and the development of public–private alliances must be balanced with the need for interoperability, openness and accessibility to the broader research community…(More)”.

WHO Launches Global Infectious Disease Surveillance Network


Article by Shania Kennedy: “The World Health Organization (WHO) launched the International Pathogen Surveillance Network (IPSN), a public health network to prevent and detect infectious disease threats before they become epidemics or pandemics.

IPSN will rely on insights generated from pathogen genomics, which helps analyze the genetic material of viruses, bacteria, and other disease-causing micro-organisms to determine how they spread and how infectious or deadly they may be.

Using these data, researchers can identify and track diseases to improve outbreak prevention, response, and treatments.
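As a toy illustration of the kind of signal pathogen genomics provides (not IPSN tooling, and with invented sequences), the sketch below counts nucleotide substitutions between sampled sequences and a reference, which is the raw material for spotting and tracking variants.

```python
# Toy sketch: flag substitutions between a sampled pathogen genome and a
# reference. Real surveillance pipelines align full genomes and build
# phylogenies; the sequences here are made up for illustration.
def substitutions(reference: str, sample: str) -> list[tuple[int, str, str]]:
    """Return (position, reference base, sample base) for each mismatch."""
    return [(i, r, s)
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s]

reference = "ATGGCGTACGTTAGC"
samples = {
    "sample-01": "ATGGCGTACGTTAGC",  # identical to the reference
    "sample-02": "ATGGCGTTCGTTAGA",  # two substitutions -> candidate variant
}

for name, seq in samples.items():
    diffs = substitutions(reference, seq)
    print(name, f"{len(diffs)} substitution(s)", diffs)
```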

“The goal of this new network is ambitious, but it can also play a vital role in health security: to give every country access to pathogen genomic sequencing and analytics as part of its public health system,” said WHO Director-General Tedros Adhanom Ghebreyesus, PhD, in the press release.  “As was so clearly demonstrated to us during the COVID-19 pandemic, the world is stronger when it stands together to fight shared health threats.”

Genomics capacity worldwide was scaled up during the pandemic, but the press release indicates that many countries still lack effective tools and systems for public health data collection and analysis. This lack of resources and funding could slow the development of a strong global health surveillance infrastructure, which IPSN aims to help address.

The network will bring together experts in genomics and data analytics to optimize routine disease surveillance, including for COVID-19. According to the press release, pathogen genomics-based analyses of the SARS-CoV-2 virus helped speed the development of effective vaccines and the identification of more transmissible virus variants…(More)”.

Generative Artificial Intelligence and Data Privacy: A Primer


Report by Congressional Research Service: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—by learning patterns from pre-existing data. It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress…(More)”.

Best Practices for Disclosure and Citation When Using Artificial Intelligence Tools


Article by Mark Shope: “This article is intended to be a best practices guide for disclosing the use of artificial intelligence tools in legal writing. The article focuses on using artificial intelligence tools that aid in drafting textual material, specifically in law review articles and law school courses. The article’s approach to disclosure and citation is intended to be a starting point for authors, institutions, and academic communities to tailor based on their own established norms and philosophies. Throughout the entire article, the author has used ChatGPT to provide examples of how artificial intelligence tools can be used in writing and how the output of artificial intelligence tools can be expressed in text, including examples of how that use and text should be disclosed and cited. The article will also include policies for professors to use in their classrooms and journals to use in their submission guidelines…(More)”

Why voters who value democracy participate in democratic backsliding


Paper by Braley, A., Lenz, G.S., Adjodah, D. et al.: “Around the world, citizens are voting away the democracies they claim to cherish. Here we present evidence that this behaviour is driven in part by the belief that their opponents will undermine democracy first. In an observational study (N = 1,973), we find that US partisans are willing to subvert democratic norms to the extent that they believe opposing partisans are willing to do the same. In experimental studies (N = 2,543, N = 1,848), we revealed to partisans that their opponents are more committed to democratic norms than they think. As a result, the partisans became more committed to upholding democratic norms themselves and less willing to vote for candidates who break these norms. These findings suggest that aspiring autocrats may instigate democratic backsliding by accusing their opponents of subverting democracy and that we can foster democratic stability by informing partisans about the other side’s commitment to democracy…(More)”