Paper by Geoff Keeling et al: “Moral imagination” is the capacity to register that one’s perspective on a decision-making situation is limited, and to imagine alternative perspectives that reveal new considerations or approaches. We have developed a Moral Imagination approach that aims to drive a culture of responsible innovation, ethical awareness, deliberation, decision-making, and commitment in organizations developing new technologies. We here present a case study that illustrates one key aspect of our approach – the technomoral scenario – as we have applied it in our work with product and engineering teams. Technomoral scenarios are fictional narratives that raise ethical issues surrounding the interaction between emerging technologies and society. Through facilitated roleplaying and discussion, participants are prompted to examine their own intentions, articulate justifications for actions, and consider the impact of decisions on various stakeholders. This process helps developers to reenvision their choices and responsibilities, ultimately contributing to a culture of responsible innovation…(More)”.
Assessing potential future artificial intelligence risks, benefits and policy imperatives
OECD Report: “The swift evolution of AI technologies calls for policymakers to consider and proactively manage AI-driven change. The OECD’s Expert Group on AI Futures was established to help meet this need and anticipate AI developments and their potential impacts. Informed by insights from the Expert Group, this report distils research and expert insights on prospective AI benefits, risks and policy imperatives. It identifies ten priority benefits, such as accelerated scientific progress, productivity gains and better sense-making and forecasting. It discusses ten priority risks, such as facilitation of increasingly sophisticated cyberattacks; manipulation, disinformation, fraud and resulting harms to democracy; concentration of power; incidents in critical systems; and exacerbated inequality and poverty. Finally, it points to ten policy priorities, including establishing clearer liability rules, drawing AI “red lines”, investing in AI safety and ensuring adequate risk management procedures. The report reviews existing public policy and governance efforts and remaining gaps…(More)”.
Human-AI coevolution
Paper by Dino Pedreschi et al: “Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users’ choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often “unintended” systemic outcomes. This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: (i) outline the pros and cons of existing methodologies and highlight shortcomings and potential ways for capturing feedback loop mechanisms; (ii) propose a reflection at the intersection between complexity science, AI and society; (iii) provide real-world examples for different human-AI ecosystems; and (iv) illustrate challenges to the creation of such a field of study, conceptualising them at increasing levels of abstraction, i.e., scientific, legal and socio-political…(More)”.
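The feedback loop the paper describes can be made concrete with a toy simulation. The sketch below is hypothetical and not from the paper: the “model” is just a click-count table that recommends the most-clicked items, the simulated user mostly picks from those recommendations, and every pick becomes new training data. The `explore` and `top_k` parameters are illustrative assumptions.

```python
import random

def simulate_feedback_loop(n_items=10, steps=500, top_k=3, explore=0.1, seed=42):
    """Toy model of the user-recommender feedback loop.

    Each round: rank items by accumulated clicks ("training"), have the
    user pick from the top_k recommendations with probability 1 - explore
    ("choice"), and feed that click back into the data.
    """
    rng = random.Random(seed)
    clicks = [1] * n_items  # uniform prior: every item starts with one click
    for _ in range(steps):
        # "Training": the recommender ranks items by past clicks.
        recommended = sorted(range(n_items), key=lambda i: -clicks[i])[:top_k]
        # "User choice": mostly follow recommendations, occasionally explore.
        if rng.random() < explore:
            choice = rng.randrange(n_items)
        else:
            choice = rng.choice(recommended)
        clicks[choice] += 1  # the choice becomes new training data
    top_share = max(clicks) / sum(clicks)
    return clicks, top_share

clicks, top_share = simulate_feedback_loop()
print(f"share of clicks on the most-recommended item: {top_share:.2f}")
```

Even in this minimal setting, low exploration concentrates attention on whichever items were favoured early, an “unintended” systemic outcome of the loop rather than of any single choice.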
What is ‘sovereign AI’ and why is the concept so appealing (and fraught)?
Article by John Letzing: “Denmark unveiled its own artificial intelligence supercomputer last month, funded by the proceeds of wildly popular Danish weight-loss drugs like Ozempic. It’s now one of several sovereign AI initiatives underway, which one CEO believes can “codify” a country’s culture, history, and collective intelligence – and become “the bedrock of modern economies.”
That particular CEO, Jensen Huang, happens to run a company selling the sort of chips needed to pursue sovereign AI – that is, to construct a domestic vintage of the technology, informed by troves of homegrown data and powered by the computing infrastructure necessary to turn that data into a strategic reserve of intellect…
It’s not surprising that countries are forging expansive plans to put their own stamp on AI. But big-ticket supercomputers and other costly resources aren’t feasible everywhere.
Training a large language model has gotten a lot more expensive lately; the funds required for the necessary hardware, energy, and staff may soon top $1 billion. Meanwhile, geopolitical friction over access to the advanced chips necessary for powerful AI systems could further warp the global playing field.
Even for countries with abundant resources and access, there are “sovereignty traps” to consider. Governments pushing ahead on sovereign AI could risk undermining global cooperation meant to ensure the technology is put to use in transparent and equitable ways. That might make it a lot less safe for everyone.
An example: a place using AI systems trained on a local set of values for its security may readily flag behaviour out of sync with those values as a threat…(More)”.
Code and Craft: How Generative AI Tools Facilitate Job Crafting in Software Development
Paper by Leonie Rebecca Freise et al: “The rapid evolution of the software development industry challenges developers to manage their diverse tasks effectively. Traditional assistant tools in software development often fall short of supporting developers efficiently. This paper explores how generative artificial intelligence (GAI) tools, such as GitHub Copilot or ChatGPT, facilitate job crafting—a process where employees reshape their jobs to meet evolving demands. By integrating GAI tools into workflows, software developers can focus more on creative problem-solving, enhancing job satisfaction, and fostering a more innovative work environment. This study investigates how GAI tools influence task, cognitive, and relational job crafting behaviors among software developers, examining their implications for professional growth and adaptability within the industry. The paper provides insights into the transformative impacts of GAI tools on software development job crafting practices, emphasizing their role in enabling developers to redefine their job functions…(More)”.
How public-private partnerships can ensure ethical, sustainable and inclusive AI development
Article by Rohan Sharma: “Artificial intelligence (AI) has the potential to solve some of today’s most pressing societal challenges – from climate change to healthcare disparities – but it could also exacerbate existing inequalities if not developed and deployed responsibly.
The rapid pace of AI development, growing awareness of AI’s societal impact and the urgent need to harness AI for positive change make bridging the ‘AI divide’ essential now. Public-private partnerships (PPPs) can play a crucial role in ensuring AI is developed ethically, sustainably and inclusively by leveraging the strengths of multiple stakeholders across sectors and regions…
To bridge the AI divide effectively, collaboration among governments, private companies, civil society and other stakeholders is crucial. PPPs unite these stakeholders’ strengths to ensure AI is developed ethically, sustainably, and inclusively.
1. Bridging the resource and expertise gap
By combining public oversight and private innovation, PPPs bridge resource and expertise gaps. Governments offer funding, regulations and access to public data; companies contribute technical expertise, creativity and market solutions. This collaboration accelerates AI technologies for social good.
Singapore’s National AI Strategy 2.0, for instance, exemplifies how PPPs drive ethical AI development. By bringing together over one hundred experts from academia, industry and government, Singapore is building a trusted AI ecosystem focused on global challenges like health and climate change. Empowering citizens and businesses to use AI responsibly, Singapore demonstrates how PPPs create inclusive AI systems, serving as a model for others.
2. Fostering cross-border collaboration
AI development is a global endeavour, but countries vary in expertise and resources. PPPs facilitate international knowledge sharing, technology transfer and common ethical standards, ensuring AI benefits are distributed globally, rather than concentrated in a few regions or companies.
3. Ensuring multi-stakeholder engagement
Inclusive AI development requires involving not just public and private sectors, but also civil society organizations and local communities. Engaging these groups in PPPs brings diverse perspectives to AI design and deployment, integrating ethical, social and cultural considerations from the start.
These approaches underscore the value of PPPs in driving AI development through diverse expertise, shared resources and international collaboration…(More)”.
Quantitative Urban Economics
Paper by Stephen J. Redding: “This paper reviews recent quantitative urban models. These models are sufficiently rich to capture observed features of the data, such as many asymmetric locations and a rich geography of the transport network. Yet these models remain sufficiently tractable as to permit an analytical characterization of their theoretical properties. With only a small number of structural parameters (elasticities) to be estimated, they lend themselves to transparent identification. As they rationalize the observed spatial distribution of economic activity within cities, they can be used to undertake counterfactuals for the impact of empirically realistic public-policy interventions on this observed distribution. Empirical applications include estimating the strength of agglomeration economies and evaluating the impact of transport infrastructure improvements (e.g., railroads, roads, bus rapid transit systems), zoning and land use regulations, place-based policies, and new technologies such as remote working…(More)”.
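To see how a single estimated elasticity supports a transport counterfactual, consider a stylised gravity sketch. This is a hypothetical illustration of the general model class, not an equation from the paper; the wage levels, commuting costs and elasticity value are made-up parameters.

```python
def commuting_shares(wages, costs, theta=4.0):
    """Stylised gravity model of commuting: the share of workers living in
    location i who work in location j is proportional to
    (wage_j / commute_cost_ij) ** theta, where theta is the single
    structural elasticity to be estimated from observed flows."""
    shares = []
    for row in costs:
        weights = [(wages[j] / row[j]) ** theta for j in range(len(wages))]
        total = sum(weights)
        shares.append([w / total for w in weights])
    return shares

wages = [1.0, 1.2]            # hypothetical wages in locations A and B
costs = [[1.0, 1.5],          # iceberg commuting costs within and between A, B
         [1.5, 1.0]]

baseline = commuting_shares(wages, costs)

# Counterfactual: a transport improvement cuts the A-to-B commute cost to 1.2.
costs_cf = [[1.0, 1.2],
            [1.5, 1.0]]
counterfactual = commuting_shares(wages, costs_cf)
print(baseline[0][1], counterfactual[0][1])
```

Holding the estimated `theta` fixed while changing only the cost matrix is what makes such counterfactuals transparent: the predicted rise in cross-commuting from A to B follows directly from the model structure rather than from refitting.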
Mini-publics and the public: challenges and opportunities
Conversation between Sarah Castell and Stephen Elstub: “…there’s a real problem here: the public are only going to get to know about a mini-public if it gets media coverage, but the media will only cover it if it makes an impact. But it’s more likely to make an impact if the public are aware of it. That’s a tension that mini-publics need to overcome, because it’s important that they reach out to the public. Ultimately it doesn’t matter how inclusive the recruitment is and how well it’s done. It doesn’t matter how well designed the process is. It is still a small number of people involved, so we want mini-publics to be able to influence public opinion and stimulate public debate. And if they can do that, then it’s more likely to affect elite opinion and debate as well, and possibly policy.
One more thing is that people in power aren’t in the habit of sharing power. And that’s why it’s very difficult. I think the politicians are mainly motivated around this because they hope it’s going to look good to the electorate and get them some votes, but they are also worried about low levels of trust in society and what the ramifications of that might be. But in general, people in power don’t give it away very easily…
Part of the problem is that a lot of the research around public views on deliberative processes was done through experiments. It is useful, but it doesn’t quite tell us what will happen when mini-publics are communicated to the public in the messy real public sphere. Previously, there just weren’t that many well-known cases that we could actually do field research on. But that is starting to change.
There’s also more interdisciplinary work needed in this area. We need to improve how communication strategies around citizens’ assemblies are done – there must be work that’s relevant in political communication studies and other fields that have this kind of insight…(More)”.
Ignorance: A Global History
Book by Peter Burke: “Throughout history, every age has thought of itself as more knowledgeable than the last. Renaissance humanists viewed the Middle Ages as an era of darkness, Enlightenment thinkers tried to sweep superstition away with reason, the modern welfare state sought to slay the “giant” of ignorance, and in today’s hyperconnected world seemingly limitless information is available on demand. But what about the knowledge lost over the centuries? Are we really any less ignorant than our ancestors?
In this highly original account, Peter Burke examines the long history of humanity’s ignorance across religion and science, war and politics, business and catastrophes. Burke reveals remarkable stories of the many forms of ignorance—genuine or feigned, conscious and unconscious—from the willful politicians who redrew Europe’s borders in 1919 to the politics of whistleblowing and climate change denial. The result is a lively exploration of human knowledge across the ages, and the importance of recognizing its limits…(More)”.
Operational Learning
International Red Cross: “Operational learning in emergencies comprises the lessons learned from managing and responding to crises – refining protocols for resource allocation, decision-making, communication strategies and more. The summaries are generated using AI and large language models, based on data from Final DREF Reports, Emergency Appeal reports and other sources…(More)”.