Model evaluation for extreme risks


Paper by Toby Shevlane et al.: “Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through “dangerous capability evaluations”) and the propensity of models to apply their capabilities for harm (through “alignment evaluations”). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.

Figure 1 | The theory of change for model evaluations for extreme risk. Evaluations for dangerous capabilities and alignment inform risk assessments, and are in turn embedded into important governance processes…(More)”.

How to decode modern conflicts with cutting-edge technologies


Blog by Mykola Blyzniuk: “In modern warfare, new technologies are increasingly being used to manipulate information and perceptions on the battlefield. This includes the use of deepfakes and other malicious uses of ICT (Information and Communication Technologies).

Likewise, emerging tech can be instrumental in documenting human rights violations, tracking the movement of troops and weapons, monitoring public sentiments and the effects of conflict on civilians, and exposing propaganda and disinformation.

The dual use of new technologies in modern warfare highlights the need for further investigation. Here are two examples of how they can be used to advance political analysis and situational awareness…

The world of Natural Language Processing (NLP) technology took a leap with a recent study on the Russia-Ukraine conflict by Uddagiri Sirisha and Bolem Sai Chandana of the School of Computer Science and Engineering at Vellore Institute of Technology Andhra Pradesh (VIT-AP) University in Amaravati, Andhra Pradesh, India.

The researchers developed a novel artificial intelligence model to analyze whether a piece of text is positive, negative or neutral in tone. This new model, referred to as “ABSA-based ROBERTa-LSTM”, looks at not just the overall sentiment of a piece of text but also the sentiment towards specific aspects or entities mentioned in the text. The study took a pre-processed dataset of 484,221 tweets collected during April–May 2022 related to the Russia-Ukraine conflict and applied the model, resulting in a sentiment analysis accuracy of 94.7%, outperforming current techniques….(More)”.
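
The paper itself does not publish code, but the architecture it describes (a RoBERTa encoder feeding an LSTM classification head, with the aspect supplied as a paired input) might be sketched as follows. The layer sizes, pooling strategy, and example inputs are illustrative assumptions, not the authors' configuration:

```python
# Rough sketch of an ABSA-style RoBERTa + LSTM classifier. Layer sizes,
# pooling, and inputs are assumptions for illustration, not the study's setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class AbsaRobertaLstm(nn.Module):
    def __init__(self, encoder_name="roberta-base", hidden=256, n_classes=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # positive / negative / neutral

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from RoBERTa
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Run an LSTM over the token sequence and pool its final states
        _, (h_n, _) = self.lstm(tokens)
        pooled = torch.cat([h_n[-2], h_n[-1]], dim=-1)  # forward + backward states
        return self.head(pooled)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Aspect-based framing: the tweet and the aspect are encoded as a sentence pair
batch = tokenizer(["Fuel prices doubled after the invasion"], ["economy"],
                  padding=True, truncation=True, return_tensors="pt")
model = AbsaRobertaLstm()
logits = model(batch["input_ids"], batch["attention_mask"])  # one score per class
```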

Imagining AI: How the World Sees Intelligent Machines


Book edited by Stephen Cave and Kanta Dihal: “AI is now a global phenomenon. Yet Hollywood narratives dominate perceptions of AI in the English-speaking West and beyond, and much of the technology itself is shaped by a disproportionately white, male, US-based elite. However, different cultures have been imagining intelligent machines since long before we could build them, in visions that vary greatly across religious, philosophical, literary and cinematic traditions. This book aims to spotlight these alternative visions.

Imagining AI draws attention to the range and variety of visions of a future with intelligent machines and their potential significance for the research, regulation, and implementation of AI. The book is structured geographically, with each chapter presenting insights into how a specific region or culture imagines intelligent machines. The contributors, leading experts from academia and the arts, explore how the encounters between local narratives, digital technologies, and mainstream Western narratives create new imaginaries and insights in different contexts across the globe. The narratives they analyse range from ancient philosophy to contemporary science fiction, and visual art to policy discourse.

The book sheds new light on some of the most important themes in AI ethics, from the differences between Chinese and American visions of AI, to digital neo-colonialism. It is an essential work for anyone wishing to understand how different cultural contexts interplay with the most significant technology of our time…(More)”.

WHO Launches Global Infectious Disease Surveillance Network


Article by Shania Kennedy: “The World Health Organization (WHO) launched the International Pathogen Surveillance Network (IPSN), a public health network to prevent and detect infectious disease threats before they become epidemics or pandemics.

IPSN will rely on insights generated from pathogen genomics, which helps analyze the genetic material of viruses, bacteria, and other disease-causing micro-organisms to determine how they spread and how infectious or deadly they may be.

Using these data, researchers can identify and track diseases to improve outbreak prevention, response, and treatments.

“The goal of this new network is ambitious, but it can also play a vital role in health security: to give every country access to pathogen genomic sequencing and analytics as part of its public health system,” said WHO Director-General Tedros Adhanom Ghebreyesus, PhD, in the press release. “As was so clearly demonstrated to us during the COVID-19 pandemic, the world is stronger when it stands together to fight shared health threats.”

Genomics capacity worldwide was scaled up during the pandemic, but the press release indicates that many countries still lack effective tools and systems for public health data collection and analysis. This lack of resources and funding could slow the development of a strong global health surveillance infrastructure, which IPSN aims to help address.

The network will bring together experts in genomics and data analytics to optimize routine disease surveillance, including for COVID-19. According to the press release, pathogen genomics-based analyses of the SARS-CoV-2 virus helped speed the development of effective vaccines and the identification of more transmissible virus variants…(More)”.

Generative Artificial Intelligence and Data Privacy: A Primer


Report by Congressional Research Service: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—through learning patterns from pre-existing data.
It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress…(More)”.

Actualizing Digital Self Determination: From Theory to Practice


Blog by Stefaan G. Verhulst: “The world is undergoing a rapid process of datafication, providing immense potential for addressing various challenges in society and the environment through responsible data reuse. However, datafication also results in imbalances, asymmetries, and silos that hinder the full realization of this potential and pose significant public policy challenges. In a recent paper, I suggest a key way to address these asymmetries: operationalizing digital self-determination (DSD). The paper, published open access in the journal Data and Policy (Cambridge University Press), is built around four key themes:…

Operationalizing DSD requires translating theoretical concepts into practical implementation. The paper proposes a four-pronged framework covering processes; people and organizations; policies; and products and technologies:

  • Processes, including citizen engagement programs, public deliberations, and participatory impact assessments, can inform responsible data use.
  • People and organizations, including data stewards and intermediaries, play a vital role in fostering a culture of data agency and responsible data reuse.
  • Effective governance and policies, such as charters, social licenses, and codes of conduct, are key for implementing DSD.
  • Finally, technological tools and products need to focus on trusted data spaces, data portability, privacy-enhancing technologies, transparency, consent management, algorithmic accountability, and ethical AI….(More)” See also: International Network on Digital Self Determination.
Image: Four ways to actualize digital self-determination (Stefaan G. Verhulst).

Crime, inequality and public health: a survey of emerging trends in urban data science


Paper by Massimiliano Luca, Gian Maria Campedelli, Simone Centellegher, Michele Tizzoni, and Bruno Lepri: “Urban agglomerations are constantly and rapidly evolving ecosystems, with globalization and increasing urbanization posing new challenges in sustainable urban development, well summarized in the United Nations’ Sustainable Development Goals (SDGs). The advent of the digital age, and the data generated by modern alternative data sources, provides new tools to tackle these challenges at spatio-temporal scales that were previously unavailable with census statistics. In this review, we present how new digital data sources are employed to provide data-driven insights to study and track (i) urban crime and public safety; (ii) socioeconomic inequalities and segregation; and (iii) public health, with a particular focus on the city scale…(More)”.

A Hiring Law Blazes a Path for A.I. Regulation


Article by Steve Lohr: “European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.
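
The rules leave audit methodology to the auditors, but the core computation in a bias check of this kind, comparing the rates at which an automated tool selects candidates from different demographic groups, is simple to sketch. The data, column names, and the four-fifths threshold below are illustrative assumptions rather than the text of the city's rules:

```python
# Illustrative sketch of a disparate-impact check of the kind a bias audit
# might include. The data, column names, and the 0.8 cutoff (the classic
# "four-fifths rule") are assumptions for illustration, not NYC's rules.
import pandas as pd

candidates = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Selection rate per group: share of candidates the tool advanced
rates = candidates.groupby("group")["selected"].mean()

# Impact ratio: each group's rate relative to the highest-rate group
impact_ratios = rates / rates.max()
print(impact_ratios)

flagged = impact_ratios[impact_ratios < 0.8]  # groups below the threshold
if not flagged.empty:
    print("Potential adverse impact for:", list(flagged.index))
```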

New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”…(More)” – See also AI Localism: Governing AI at the Local Level

Boston Isn’t Afraid of Generative AI


Article by Beth Simone Noveck: “After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. New York City, Los Angeles Unified, Seattle, and Baltimore School Districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also turned on use of Google Bard as part of the City of Boston’s enterprise-wide use of Google Workspace so that all public servants have access.

The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good…(More)”.

Evidence-Based Policymaking: A Path to Data Culture


Article by Sajana Maharjan Amatya and Pranaya Sthapit: “…The first requirement of evidence-based planning is access to a supply of timely and reliable data. In Nepal, local governments produce lots of data, but it is too often locked away in multiple information systems operated by each municipal department. Gaining access to the data in these systems can be difficult because different departments often use different, proprietary formats. These information siloes block a 360-degree view of the available data—to say nothing of issues like redundancy, duplication, and inefficiency—and they frustrate public participation in an age when citizens expect streamlined digital access.

As a first step towards solving this artificial problem of data supply, D4D helps local governments gather their data onto one unified platform to release its full potential. We think of this as creating a “data lake” in each municipality for decentralized, democratic access. Freeing access to this already-existing evidence can open the door to fundamental changes in government procedures and the development and implementation of local policies, plans, and strategies.
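
As a loose illustration of the “data lake” idea, consolidating departmental exports that arrive in different proprietary formats into a single queryable store might look like the sketch below; the file names, formats, and schemas are hypothetical:

```python
# Minimal sketch of consolidating siloed departmental exports into one
# queryable store. File names, formats, and schemas are hypothetical.
import sqlite3
import pandas as pd

sources = {
    "health":    pd.read_csv("health_dept.csv"),
    "education": pd.read_json("education_dept.json"),
    "finance":   pd.read_excel("finance_dept.xlsx"),
}

con = sqlite3.connect("municipal_data_lake.db")
for dept, frame in sources.items():
    # Normalize column names so tables can be joined across departments
    frame.columns = [c.strip().lower().replace(" ", "_") for c in frame.columns]
    frame["source_department"] = dept  # keep provenance for auditability
    frame.to_sql(dept, con, if_exists="replace", index=False)
```

A provenance column like the one above keeps the consolidated store auditable while each department retains ownership of its records.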

Among the most telling shortcomings of Nepal’s legacy data policies has been the way that political interests have held sway in the local planning process, as exemplified by the political decision to distribute equal funds to all wards regardless of their unequal needs. In a more rational system, information about population size and other socioeconomic data about relative need would be a much more important factor in the allocation of funds. The National Planning Commission, a federal agency, has even distributed guidelines to Nepal’s local governments indicating that budgets should not simply be equal from ward to ward. But in practice, municipalities tend to allocate the same budget to each of their wards because elected leaders fear they will lose votes if they don’t get an equal share. Inevitably, ignoring evidence of relative need leads to the ad hoc allocation of funds to small, fragmented initiatives that mainly focus on infrastructure while overlooking other issues.
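
The arithmetic at stake is straightforward. A toy comparison of an equal split against a need-weighted allocation (ward names, populations, and need indices are invented for illustration) makes the difference concrete:

```python
# Toy comparison of equal vs. need-weighted ward budgets. Ward names,
# populations, and the need index are invented for illustration.
wards = {"Ward 1": {"population": 12000, "need_index": 0.9},
         "Ward 2": {"population": 4000,  "need_index": 0.3},
         "Ward 3": {"population": 8000,  "need_index": 0.6}}
budget = 1_000_000

equal_share = budget / len(wards)

# Weight each ward by population times relative need
weights = {w: d["population"] * d["need_index"] for w, d in wards.items()}
total = sum(weights.values())
for ward, weight in weights.items():
    print(f"{ward}: equal={equal_share:,.0f}  "
          f"need-weighted={budget * weight / total:,.0f}")
```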

The application of available data to the planning cycle is what evidence-based planning is all about. The key is to codify the use of data throughout the planning process. So, D4D developed a framework and guidelines for evidence-based budgeting and planning for elected officials, committee members, and concerned citizens…(More)”.