OECD Report: “The swift evolution of AI technologies calls for policymakers to consider and proactively manage AI-driven change. The OECD’s Expert Group on AI Futures was established to help meet this need and anticipate AI developments and their potential impacts. Informed by insights from the Expert Group, this report distils research and expert insights on prospective AI benefits, risks and policy imperatives. It identifies ten priority benefits, such as accelerated scientific progress, productivity gains and better sense-making and forecasting. It discusses ten priority risks, such as facilitation of increasingly sophisticated cyberattacks; manipulation, disinformation, fraud and resulting harms to democracy; concentration of power; incidents in critical systems; and exacerbated inequality and poverty. Finally, it points to ten policy priorities, including establishing clearer liability rules, drawing AI “red lines”, investing in AI safety and ensuring adequate risk management procedures. The report reviews existing public policy and governance efforts and remaining gaps…(More)”.
Human-AI coevolution
Paper by Dino Pedreschi et al: “Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users’ choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often “unintended” systemic outcomes. This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: (i) outline the pros and cons of existing methodologies and highlight shortcomings and potential ways for capturing feedback loop mechanisms; (ii) propose a reflection at the intersection between complexity science, AI and society; (iii) provide real-world examples for different human-AI ecosystems; and (iv) illustrate challenges to the creation of such a field of study, conceptualising them at increasing levels of abstraction, i.e., scientific, legal and socio-political…(More)”.
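The self-reinforcing loop the authors describe, in which user choices train the model and the model's recommendations shift future choices, can be caricatured in a few lines of code. The following is a toy sketch only; the dynamics, the `nudge` parameter, and the two-category setup are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(42)

# Toy model: a user's preference over two item categories, expressed as the
# probability of choosing category A. The "recommender" simply over-serves
# whatever the user chose last, and each exposure nudges preference toward it.
preference_a = 0.5   # initially indifferent
nudge = 0.05         # illustrative drift per interaction

history = [preference_a]
for _ in range(100):
    # The user's choice generates training data...
    chose_a = random.random() < preference_a
    # ...the model amplifies that choice in the next recommendation...
    recommend_a = chose_a
    # ...and exposure shifts the user's future preference (the loop closes).
    if recommend_a:
        preference_a = min(1.0, preference_a + nudge)
    else:
        preference_a = max(0.0, preference_a - nudge)
    history.append(preference_a)

print(history[-1])  # preference tends to drift toward an extreme
```

Even this caricature exhibits the "unintended systemic outcome" the abstract flags: small per-interaction nudges compound into polarised preferences that no single actor chose.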
What is ‘sovereign AI’ and why is the concept so appealing (and fraught)?
Article by John Letzing: “Denmark unveiled its own artificial intelligence supercomputer last month, funded by the proceeds of wildly popular Danish weight-loss drugs like Ozempic. It’s now one of several sovereign AI initiatives underway, which one CEO believes can “codify” a country’s culture, history, and collective intelligence – and become “the bedrock of modern economies.”
That particular CEO, Jensen Huang, happens to run a company selling the sort of chips needed to pursue sovereign AI – that is, to construct a domestic vintage of the technology, informed by troves of homegrown data and powered by the computing infrastructure necessary to turn that data into a strategic reserve of intellect…
It’s not surprising that countries are forging expansive plans to put their own stamp on AI. But big-ticket supercomputers and other costly resources aren’t feasible everywhere.
Training a large language model has gotten a lot more expensive lately; the funds required for the necessary hardware, energy, and staff may soon top $1 billion. Meanwhile, geopolitical friction over access to the advanced chips necessary for powerful AI systems could further warp the global playing field.
Even for countries with abundant resources and access, there are “sovereignty traps” to consider. Governments pushing ahead on sovereign AI could risk undermining global cooperation meant to ensure the technology is put to use in transparent and equitable ways. That might make it a lot less safe for everyone.
An example: a place using AI systems trained on a local set of values for its security may readily flag behaviour out of sync with those values as a threat…(More)”.
Code and Craft: How Generative AI Tools Facilitate Job Crafting in Software Development
Paper by Leonie Rebecca Freise et al: “The rapid evolution of the software development industry challenges developers to manage their diverse tasks effectively. Traditional assistant tools in software development often fall short of supporting developers efficiently. This paper explores how generative artificial intelligence (GAI) tools, such as GitHub Copilot or ChatGPT, facilitate job crafting—a process where employees reshape their jobs to meet evolving demands. By integrating GAI tools into workflows, software developers can focus more on creative problem-solving, enhancing job satisfaction, and fostering a more innovative work environment. This study investigates how GAI tools influence task, cognitive, and relational job crafting behaviors among software developers, examining their implications for professional growth and adaptability within the industry. The paper provides insights into the transformative impacts of GAI tools on software development job crafting practices, emphasizing their role in enabling developers to redefine their job functions…(More)”.
How public-private partnerships can ensure ethical, sustainable and inclusive AI development
Article by Rohan Sharma: “Artificial intelligence (AI) has the potential to solve some of today’s most pressing societal challenges – from climate change to healthcare disparities – but it could also exacerbate existing inequalities if not developed and deployed responsibly.
The rapid pace of AI development, growing awareness of AI’s societal impact and the urgent need to harness AI for positive change make bridging the ‘AI divide’ essential now. Public-private partnerships (PPPs) can play a crucial role in ensuring AI is developed ethically, sustainably and inclusively by leveraging the strengths of multiple stakeholders across sectors and regions…
To bridge the AI divide effectively, collaboration among governments, private companies, civil society and other stakeholders is crucial. PPPs unite these stakeholders’ strengths to ensure AI is developed ethically, sustainably, and inclusively.
1. Bridging the resource and expertise gap
By combining public oversight and private innovation, PPPs bridge resource and expertise gaps. Governments offer funding, regulations and access to public data; companies contribute technical expertise, creativity and market solutions. This collaboration accelerates AI technologies for social good.
Singapore’s National AI Strategy 2.0, for instance, exemplifies how PPPs drive ethical AI development. By bringing together over one hundred experts from academia, industry and government, Singapore is building a trusted AI ecosystem focused on global challenges like health and climate change. Empowering citizens and businesses to use AI responsibly, Singapore demonstrates how PPPs create inclusive AI systems, serving as a model for others.
2. Fostering cross-border collaboration
AI development is a global endeavour, but countries vary in expertise and resources. PPPs facilitate international knowledge sharing, technology transfer and common ethical standards, ensuring AI benefits are distributed globally, rather than concentrated in a few regions or companies.
3. Ensuring multi-stakeholder engagement
Inclusive AI development requires involving not just public and private sectors, but also civil society organizations and local communities. Engaging these groups in PPPs brings diverse perspectives to AI design and deployment, integrating ethical, social and cultural considerations from the start.
These approaches underscore the value of PPPs in driving AI development through diverse expertise, shared resources and international collaboration…(More)”.
How to evaluate statistical claims
Blog by Sean Trott: “…The goal of this post is to distill what I take to be the most important, immediately applicable, and generalizable insights from these classes. That means that readers should be able to apply those insights without a background in math or knowing how to, say, build a linear model in R. In that way, it’ll be similar to my previous post about “useful cognitive lenses to see through”, but with a greater focus on evaluating claims specifically.
Lesson #1: Consider the whole distribution, not just the central tendency.
If you spend much time reading news articles or social media posts, the odds are good you’ll encounter some descriptive statistics: numbers summarizing or describing a distribution (a set of numbers or values in a dataset). One of the most commonly used descriptive statistics is the arithmetic mean: the sum of every value in a distribution, divided by the number of values overall. The arithmetic mean is a measure of “central tendency”, which just means it’s a way to characterize the typical or expected value in that distribution.
The arithmetic mean is a really useful measure. But as many readers might already know, it’s not perfect. It’s strongly affected by outliers—values that are really different from the rest of the distribution—and things like the skew of a distribution (see the image below for examples of skewed distributions).
In particular, the mean is pulled in the direction of outliers or distribution skew. That’s the logic behind the joke about the average salary of people at a bar jumping up as soon as a billionaire walks in. It’s also why other measures of central tendency, such as the median, are often presented alongside (or instead of) the mean—especially for distributions that happen to be very skewed, such as income or wealth.
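The bar joke is easy to verify numerically. Here is a minimal Python sketch using the standard library's `statistics` module; the salary figures are made up for illustration:

```python
import statistics

# Hypothetical salaries of ten bar patrons, in dollars
salaries = [32_000, 35_000, 38_000, 40_000, 41_000,
            45_000, 48_000, 52_000, 55_000, 60_000]

print(statistics.mean(salaries))    # 44600 -- a fair summary of the room
print(statistics.median(salaries))  # 43000.0

# A billionaire walks in: one extreme value joins the distribution
salaries.append(1_000_000_000)

print(statistics.mean(salaries))    # jumps past 90 million
print(statistics.median(salaries))  # 45000 -- barely moves
```

One extreme value drags the mean by several orders of magnitude while shifting the median by a single rank, which is exactly why the median is preferred for heavily skewed distributions like income.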
It’s not that one of these measures is more “correct”. As Stephen Jay Gould wrote in his article The Median Is Not the Message, they’re just different perspectives on the same distribution:
A politician in power might say with pride, “The mean income of our citizens is $15,000 per year.” The leader of the opposition might retort, “But half our citizens make less than $10,000 per year.” Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean, but can balance only one mendicant in calculating a median.)…(More)”.
AI Analysis of Body Camera Videos Offers a Data-Driven Approach to Police Reform
Article by Ingrid Wickelgren: “But unless something tragic happens, body camera footage generally goes unseen. “We spend so much money collecting and storing this data, but it’s almost never used for anything,” says Benjamin Graham, a political scientist at the University of Southern California.
Graham is among a small number of scientists who are reimagining this footage as data rather than just evidence. Their work leverages advances in natural language processing, which relies on artificial intelligence, to automate the analysis of video transcripts of citizen-police interactions. The findings have enabled police departments to spot policing problems, find ways to fix them and determine whether the fixes improve behavior.
Only a small number of police agencies have opened their databases to researchers so far. But if this footage were analyzed routinely, it would be a “real game changer,” says Jennifer Eberhardt, a Stanford University psychologist, who pioneered this line of research. “We can see beat-by-beat, moment-by-moment how an interaction unfolds.”
In papers published over the past seven years, Eberhardt and her colleagues have examined body camera footage to reveal how police speak to white and Black people differently and what type of talk is likely to either gain a person’s trust or portend an undesirable outcome, such as handcuffing or arrest. The findings have refined and enhanced police training. In a study published in PNAS Nexus in September, the researchers showed that the new training changed officers’ behavior…(More)”.
Engaging publics in science: a practical typology
Paper by Heather Douglas et al: “Public engagement with science has become a prominent area of research and effort for democratizing science. In the fall of 2020, we held an online conference, Public Engagement with Science: Defining and Measuring Success, to address questions of how to do public engagement well. The conference was organized around conceptualizations of the publics engaged, with attendant epistemic, ethical, and political valences. We present here the typology of publics we used (volunteer, representative sample, stakeholder, and community publics), discuss the differences among those publics and what those differences mean for practice, and situate this typology within the existing work on public engagement with science. We then provide an overview of the essays published in this journal arising from the conference which provides a window into the rich work presented at the event…(More)”.
Access, Signal, Action: Data Stewardship Lessons from Valencia’s Floods
Article by Marta Poblet, Stefaan Verhulst, and Anna Colom: “Valencia has a rich history in water management, a legacy shaped by both triumphs and tragedies. This connection to water is embedded in the city’s identity, yet modern floods test its resilience in new ways.
During the recent floods, Valencians experienced a troubling paradox. In today’s connected world, digital information flows through traditional and social media, weather apps, and government alert systems designed to warn us of danger and guide rapid responses. Despite this abundance of data, a tragedy unfolded last month in Valencia. This raises a crucial question: how can we ensure access to the right data, filter it for critical signals, and transform those signals into timely, effective action?
Data stewardship becomes essential in this process.
In particular, the devastating floods in Valencia underscore the importance of:
- having access to data to strengthen the signal (first mile challenges)
- separating signal from noise
- translating signal into action (last mile challenges)…(More)”.
The Motivational State: A strengths-based approach to improving public sector productivity
Paper by Alex Fox and Chris Fox: “…argues that traditional approaches to improving public sector productivity, such as adopting private sector practices, technology-driven reforms, and tighter management, have failed to address the complex and evolving needs of public service users. It proposes a shift towards a strengths-based, person-led model, where public services are co-produced with individuals, families, and communities…(More)”.