The Bletchley Declaration


Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023: “In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:

  • identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.

In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024…(More)”.

Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI


Paper by David Leslie: “In the current hype-laden climate surrounding the rapid proliferation of foundation models and generative AI systems like ChatGPT, it is becoming increasingly important for societal stakeholders to reach sound understandings of their limitations and potential transformative effects. This is especially true in the natural and applied sciences, where magical thinking among some scientists about the take-off of “artificial general intelligence” has arisen simultaneously as the growing use of these technologies is putting longstanding norms, policies, and standards of good research practice under pressure. In this analysis, I argue that a deflationary understanding of foundation models and generative AI systems can help us sense check our expectations of what role they can play in processes of scientific exploration, sense-making, and discovery. I claim that a more sober, tool-based understanding of generative AI systems as computational instruments embedded in warm-blooded research processes can serve several salutary functions. It can play a crucial bubble-bursting role that mitigates some of the most serious threats to the ethos of modern science posed by an unreflective overreliance on these technologies. It can also strengthen the epistemic and normative footing of contemporary science by helping researchers circumscribe the part to be played by machine-led prediction in communicative contexts of scientific discovery while concurrently prodding them to recognise that such contexts are principal sites for human empowerment, democratic agency, and creativity. Finally, it can help spur ever richer approaches to collaborative experimental design, theory-construction, and scientific world-making by encouraging researchers to deploy these kinds of computational tools to heuristically probe unbounded search spaces and patterns in high-dimensional biophysical data that would otherwise be inaccessible to human-scale examination and inference…(More)”.

The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis


Article by David Gilbert: “…The application of artificial intelligence technologies to conflict situations has been around since at least 1996, with machine learning being used to predict where conflicts may occur. The use of AI in this area has expanded in the intervening years, being used to improve logistics, training, and other aspects of peacekeeping missions. Lane and Shults believe they could use artificial intelligence to dig deeper and find the root causes of conflicts.

Their idea for an AI program that models the belief systems that drive human behavior first began when Lane moved to Northern Ireland a decade ago to study whether computational modeling and cognition could be used to understand issues around religious violence.

In Belfast, Lane figured out that by modeling aspects of identity and social cohesion, and identifying the factors that make people motivated to fight and die for a particular cause, he could accurately predict what was going to happen next.

“We set out to try and come up with something that could help us better understand what it is about human nature that sometimes results in conflict, and then how can we use that tool to try and get a better handle or understanding on these deeper, more psychological issues at really large scales,” Lane says.

The result of their work was a study published in 2018 in the Journal of Artificial Societies and Social Simulation, which found that people are typically peaceful but will engage in violence when an outside group threatens the core principles of their religious identity.
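
As a rough illustrative sketch of the kind of mechanism described here (the agent rule, threshold values, and threat levels below are invented for the example, not taken from the authors' published model), an agent-based simulation of this finding might look like:

```python
# Toy agent-based sketch: agents stay peaceful unless a perceived out-group threat
# to their identity's core values crosses a personal threshold.
# Parameters and update rule are invented for illustration only.
import random

class PersonAgent:
    def __init__(self, group: str):
        self.group = group
        self.threshold = random.uniform(0.6, 0.9)  # tolerance before turning violent
        self.violent = False

    def step(self, perceived_threat: float) -> None:
        self.violent = perceived_threat > self.threshold

def run(n_agents: int = 1000, threat_level: float = 0.5) -> float:
    """Return the share of agents in a violent state for a given threat level."""
    population = [PersonAgent("A" if i % 2 else "B") for i in range(n_agents)]
    for agent in population:
        agent.step(threat_level)
    return sum(a.violent for a in population) / n_agents

for threat in (0.2, 0.5, 0.75, 0.95):
    print(f"threat={threat:.2f} -> share violent: {run(threat_level=threat):.2%}")
```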

A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union that included the introduction of a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.

The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDELT, a project that monitors “the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research…(More)”.
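
For readers who want a feel for the data side, GDELT exposes its article index through a free DOC 2.0 API; the sketch below (query terms, timespan, and record limit are illustrative assumptions, not the researchers' actual pipeline) pulls recent matching coverage:

```python
# Minimal sketch: fetch recent coverage from the GDELT DOC 2.0 API.
# Query terms, timespan, and record limit are illustrative assumptions.
import requests

GDELT_DOC_API = "https://api.gdeltproject.org/api/v2/doc/doc"

def fetch_articles(query: str, timespan: str = "1week", max_records: int = 75) -> list:
    """Return a list of article metadata dicts matching the query."""
    params = {
        "query": query,
        "mode": "artlist",       # return a list of matching articles
        "format": "json",
        "maxrecords": max_records,
        "timespan": timespan,
    }
    resp = requests.get(GDELT_DOC_API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("articles", [])

if __name__ == "__main__":
    for art in fetch_articles('"Northern Ireland" protest')[:5]:
        print(art["seendate"], art["title"], art["url"])
```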

The Open Sky


Essay by Lars Erik Schönander: “Any time you walk outside, satellites may be watching you from space. There are currently more than 8,000 active satellites in orbit, including over a thousand designed to observe the Earth.

Satellite technology has come a long way since its secretive inception during the Cold War, when a country’s ability to successfully operate satellites meant not only that it was capable of launching rockets into Earth orbit but that it had eyes in the sky. Today not only governments across the world but private enterprises too launch satellites, collect and analyze satellite imagery, and sell it to a range of customers, from government agencies to the person on the street. SpaceX’s Starlink satellites bring the Internet to places where conventional coverage is spotty or compromised. Satellite data allows the United States to track rogue ships and North Korean missile launches, while scientists track wildfires, floods, and changes in forest cover.

The industry’s biggest technical challenge, aside from acquiring the satellite imagery itself, has always been to analyze and interpret it. This is why new AI tools are set to drastically change how satellite imagery is used — and who uses it. For instance, Meta’s Segment Anything Model, a machine-learning tool designed to “cut out” discrete objects from images, is proving highly effective at identifying objects in satellite images.
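
As an illustrative sketch (the checkpoint path and input image are placeholders), the open-source segment-anything package can automatically generate a mask for every object it finds in a satellite scene:

```python
# Minimal sketch: automatic mask generation with Meta's Segment Anything Model (SAM).
# Requires `pip install segment-anything opencv-python` and a downloaded SAM checkpoint;
# the checkpoint path and input image below are placeholders.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# A satellite scene exported as an ordinary RGB image (e.g. an imagery tile preview).
image = cv2.cvtColor(cv2.imread("satellite_tile.png"), cv2.COLOR_BGR2RGB)

masks = mask_generator.generate(image)  # one dict per detected segment
print(f"Found {len(masks)} segments")
for m in masks[:5]:
    print(m["bbox"], m["area"], round(m["predicted_iou"], 3))
```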

But the biggest breakthrough will likely come from large language models — tools like OpenAI’s ChatGPT — that may soon allow ordinary people to query the Earth’s surface the way data scientists query databases. Achieving this goal is the ambition of companies like Planet Labs, which has launched hundreds of satellites into space and is working with Microsoft to build what it calls a “queryable Earth.” At this point, it is still easy to dismiss their early attempt as a mere toy. But as the computer scientist Paul Graham once noted, if people like a new invention that others dismiss as a toy, this is probably a good sign of its future success.

This means that satellite intelligence capabilities that were once restricted to classified government agencies, and even now belong only to those with bountiful money or expertise, are about to be open to anyone with an Internet connection…(More)”.

The Tragedy of AI Governance


Paper by Simon Chesterman: “Despite hundreds of guides, frameworks, and principles intended to make AI “ethical” or “responsible”, ever more powerful applications continue to be released ever more quickly. Safety and security teams are being downsized or sidelined to bring AI products to market. And a significant portion of AI developers apparently believe there is a real risk that their work poses an existential threat to humanity.

This contradiction between statements and action can be attributed to three factors that undermine the prospects for meaningful governance of AI. The first is the shift of power from public to private hands, not only in deployment of AI products but in fundamental research. The second is the wariness of most states about regulating the sector too aggressively, for fear that it might drive innovation elsewhere. The third is the dysfunction of global processes to manage collective action problems, epitomized by the climate crisis and now frustrating efforts to govern a technology that does not respect borders. The tragedy of AI governance is that those with the greatest leverage to regulate AI have the least interest in doing so, while those with the greatest interest have the least leverage.

Resolving these challenges requires either rethinking the incentive structures or waiting for a crisis that brings the need for regulation and coordination into sharper focus…(More)”.

Enhancing the European Administrative Space (ComPAct)


European Commission: “Efficient national public administrations are critical to transform EU and national policies into reality, to implement reforms to the benefit of people and business alike, and to channel investments towards the achievement of the green and digital transition, and greater competitiveness. At the same time, national public administrations are also under increasing pressure to deal with the polycrisis and with many competing priorities.

For the first time, with the ComPAct, the Commission is proposing a strategic set of actions not only to support the public administrations in the Member States to become more resilient, innovative and skilled, but also to strengthen the administrative cooperation between them, thereby helping to close existing gaps in policies and services at European level.

With the ComPAct, the Commission aims to enhance the European Administrative Space by promoting a common set of overarching principles underpinning the quality of public administration and reinforcing its support for the administrative modernisation of the Member States. The ComPAct will help Member States address the EU Skills Agenda and the actions under the European Year of Skills, deliver on the targets of the Digital Decade to have 100% of key public services accessible online by 2030, and shape the conditions for the economies and societies to deliver on the ambitious 2030 climate and energy targets. The ComPAct will also help EU enlargement countries on their path to building better public administrations…(More)”.

Shifting policy systems – a framework for what to do and how to do it


Blog by UK Policy Lab: “Systems change is hard work, and it takes time. The reality is that no single system map or tool is enough to get you from point A to point B, from system now to system next. Over the last year, we have explored the latest in systems change theory and applied it to policymaking. In this four-part blog series, we share our reflections on the wealth of knowledge we’ve gained working on intractable issues surrounding how support is delivered for people experiencing multiple disadvantage. Along the way, we realised that we need to make new tools to support policy teams to do this deep work in the future, and to see afresh the limitations of existing mental models for change and transformation.

Policy Lab has previously written about systems mapping as a useful process for understanding the interconnected nature of factors and actors that make up policy ecosystems. Here, we share our latest experimentation on how we can generate practical ideas for long-lasting and systemic change.
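
At its simplest, a systems map is a directed graph of factors and actors. The toy sketch below (the nodes and links are invented for illustration, not drawn from the project) shows how such a map can be queried for downstream effects and feedback loops:

```python
# Toy illustration of a systems map as a directed graph.
# Factors and links are invented for the example, not taken from the Policy Lab project.
import networkx as nx

system = nx.DiGraph()
system.add_edges_from([
    ("insecure housing", "missed appointments"),
    ("missed appointments", "loss of benefits"),
    ("loss of benefits", "insecure housing"),             # reinforcing loop
    ("single point of contact", "missed appointments"),   # candidate intervention
])

# Which factors sit downstream of a proposed intervention?
print(sorted(nx.descendants(system, "single point of contact")))

# Which factors are caught in feedback loops (often where change is hardest)?
print([cycle for cycle in nx.simple_cycles(system) if len(cycle) > 1])
```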

This blog includes:

  • An overview of what we did on our latest project – including the policy context, systems change frameworks we experimented with, and the bespoke project framework we created;
  • Our reflections on how we carried out the project;
  • A matrix which provides a practical guide for you to use this approach in your own work…(More)”.

Future Law, Ethics, and Smart Technologies


Book edited by John-Stewart Gordon: “This interdisciplinary textbook serves as a solid introduction to the future of legal education against the background of the widespread use of AI. It is written by colleagues from different disciplines (e.g. law, philosophy/ethics, economics, and computer science) whose common interest concerns AI and its impact on legal and ethical issues. The book provides, first, a general overview of the effects of AI on major disciplines such as ethics, law, economics, political science, and healthcare. Second, it offers a comprehensive analysis of key issues concerning law: (a) AI decision-making, (b) rights, status, and responsibility, (c) regulation and standardisation, and (d) education…(More)”.

Towards a Taxonomy of Anticipatory Methods: Integrating Traditional and Innovative Methods for Migration Policy


Blog by Sara Marcucci and Stefaan Verhulst: “…In this week’s blog post, we delineate a taxonomy of anticipatory methods, categorizing them into three distinct sub-categories: Experience-based, Exploration-based, and Expertise-based methods. Our focus will be on what the practical applications of these methods are and how both traditional and non-traditional data sources play a pivotal role within each of these categories. …Experience-based methods in the realm of migration policy focus on gaining insights from the lived experiences of individuals and communities involved in migration processes. These methods allow policymakers to tap into the lived experiences, challenges, and aspirations of individuals and communities, fostering a more empathetic and holistic approach to policy development.

Through the lens of people’s experiences and viewpoints, it is possible to create and explore a multitude of scenarios. This in-depth exploration provides policy makers with a comprehensive understanding of these potential pathways, which, in turn, inform their decision-making process…(More)”.

Data collaboration to enable the EU Green Deal


Article by Justine Gangneux: “In the fight against climate change, local authorities are increasingly turning to cross-sectoral data sharing as a game-changing strategy.

This collaborative approach empowers cities and communities to harness a wealth of data from diverse sources, enabling them to pinpoint emission hotspots, tailor policies for maximum impact, and allocate resources wisely.

Data can also strengthen climate resilience by engaging local communities and facilitating real-time progress tracking…

In recent years, more and more local data initiatives aimed at tackling climate change have emerged, spanning from urban planning to mobility, adaptation and energy management.

Such is the case of Porto’s CityCatalyst – the project put five demonstrators in place to showcase smart city infrastructure and develop data standards and models, contributing to the efficient and integrated management of urban flows…

In Latvia, Riga is also exploring data solutions such as visualisations, aggregation or analytics, as part of the Positive Energy District strategy. Driven by the national Energy Efficiency Law, the city is developing a project to monitor energy consumption based on building utility use data (heat, electricity, gas, or water), customer and billing data, and Internet of Things smart meter data from individual buildings…
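
As a rough sketch of the kind of aggregation such a project involves (the column names and readings are invented for illustration, not Riga's actual data model), hourly smart-meter readings can be rolled up into daily per-building consumption:

```python
# Rough sketch: aggregate hourly smart-meter readings to daily per-building consumption.
# Column names and values are invented for illustration only.
import pandas as pd

readings = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-11-01 00:00", "2023-11-01 01:00",
                                 "2023-11-01 00:00", "2023-11-02 00:00"]),
    "building_id": ["B1", "B1", "B2", "B1"],
    "kwh": [1.2, 0.9, 2.4, 1.1],
})

daily = (
    readings
    .set_index("timestamp")
    .groupby("building_id")["kwh"]
    .resample("D")     # roll hourly readings up to calendar days
    .sum()
    .reset_index()
)
print(daily)
```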

As these examples show, it is not just public data that holds the key; private sector data, from utilities such as energy or water to telecoms, offers cities valuable insights in their efforts to tackle climate change…(More)”.