Two Paths for A.I.


Essay by Joshua Rothman: “Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He’d become convinced that the company wasn’t prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in “alignment,” he told me—the suite of techniques used to insure that A.I. acts in accordance with human commands and values—were lagging behind gains in intelligence. Researchers, he said, were hurtling toward the creation of powerful systems they couldn’t control.

Kokotajlo, who had transitioned from a graduate program in philosophy to a career in A.I., explained how he’d educated himself so that he could understand the field. While at OpenAI, part of his job had been to track progress in A.I. so that he could construct timelines predicting when various thresholds of intelligence might be crossed. At one point, after the technology advanced unexpectedly, he’d had to shift his timelines up by decades. In 2021, he’d written a scenario about A.I. titled “What 2026 Looks Like.” Much of what he’d predicted had come to pass before the titular year. He’d concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner. He sounded scared.

Around the same time that Kokotajlo left OpenAI, two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” In it, Kapoor and Narayanan, who study technology’s integration with society, advanced views that were diametrically opposed to Kokotajlo’s. They argued that many timelines of A.I.’s future were wildly optimistic; that claims about its usefulness were often exaggerated or outright fraudulent; and that, because of the world’s inherent complexity, even powerful A.I. would change it only slowly. They cited many cases in which A.I. systems had been called upon to deliver important judgments—about medical diagnoses, or hiring—and had made rookie mistakes that indicated a fundamental disconnect from reality. The newest systems, they maintained, suffered from the same flaw.

Recently, all three researchers have sharpened their views, releasing reports that take their analyses further. The nonprofit AI Futures Project, of which Kokotajlo is the executive director, has published “AI 2027,” a heavily footnoted document, written by Kokotajlo and four other researchers, which works out a chilling scenario in which “superintelligent” A.I. systems either dominate or exterminate the human race by 2030. It’s meant to be taken seriously, as a warning about what might really happen. Meanwhile, Kapoor and Narayanan, in a new paper titled “AI as Normal Technology,” insist that practical obstacles of all kinds—from regulations and professional standards to the simple difficulty of doing physical things in the real world—will slow A.I.’s deployment and limit its transformational potential. While conceding that A.I. may eventually turn out to be a revolutionary technology, on the scale of electricity or the internet, they maintain that it will remain “normal”—that is, controllable through familiar safety measures, such as fail-safes, kill switches, and human supervision—for the foreseeable future. “AI is often analogized to nuclear weapons,” they argue. But “the right analogy is nuclear power,” which has remained mostly manageable and, if anything, may be underutilized for safety reasons.

Making the case for collaborative digital infrastructure to scale regenerative food supply networks


Briefing paper from the Food Data Collaboration: “…a call to action to collaborate and invest in data infrastructure that will enable shorter, relational, regenerative food supply networks to scale.

These food supply networks play a vital role in achieving a truly sustainable and resilient food system. By embracing data technology that fosters commons ownership models, collaboration, and interdependence, we can build a more inclusive and dynamic food ecosystem in which collaborative efforts, as opposed to competitive businesses operating in silos, can achieve transformative scale.

Since 2022, the Food Data Collaboration has been exploring the potential for open data standards to enable shorter, relational, regenerative food supply networks to scale and pave the way towards a healthier, more equitable, and more resilient food future. This paper explores the high-level rationale for our approach and is essential reading for anyone keen to know more about the project’s aims, achievements and future development…(More)”.

The Agentic State: How Agentic AI Will Revamp 10 Functional Layers of Public Administration


Whitepaper by the Global Government Technology Centre Berlin: “…explores how agentic AI will transform ten functional layers of government and public administration. The Agentic State signifies a fundamental shift in governance, where AI systems can perceive, reason, and act with minimal human intervention to deliver public value. Its impact on key functional layers of government will be as follows…(More)”.

Unlock Your City’s Hidden Solutions


Article by Andreas Pawelke, Basma Albanna and Damiano Cerrone: “Cities around the world face urgent challenges — from climate change impacts to rapid urbanization and infrastructure strain. Municipal leaders struggle with limited budgets, competing priorities, and pressure to show quick results, making traditional approaches to urban transformation increasingly difficult to implement.

Every city, however, has hidden success stories — neighborhoods, initiatives, or communities that are achieving remarkable results despite facing similar challenges as their peers.

These “positive deviants” often remain unrecognized and underutilized, yet they contain the seeds of solutions that are already adapted to local contexts and constraints.

Data-Powered Positive Deviance (DPPD) combines urban data, advanced analytics, and community engagement to systematically uncover these bright spots and amplify their impact. This new approach offers a pathway to urban transformation that is not only evidence-based but also cost-effective and deeply rooted in local realities.

DPPD is particularly valuable in resource-constrained environments, where expensive external solutions often fail to take hold. By starting with what’s already working, cities can make strategic investments that build on existing strengths rather than starting from scratch. Paired with AI tools that improve community engagement, the approach becomes even more powerful — enabling cities to envision potential futures and engage citizens in meaningful co-creation…(More)”
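The core analytical step in positive-deviance work can be sketched in a few lines: model the outcome you would expect a unit to achieve given its resources, then flag units that do far better than that expectation. The snippet below is a minimal illustration with hypothetical neighborhood data and field names, not the DPPD team's actual pipeline:

```python
from statistics import mean, stdev

# Hypothetical data: (neighborhood, budget_per_capita, outcome_score).
# A positive deviant scores far above what its resources would predict.
data = [
    ("A", 100, 52), ("B", 105, 55), ("C", 98, 80),
    ("D", 200, 70), ("E", 210, 72), ("F", 195, 69),
]

def positive_deviants(records, z_threshold=1.5):
    """Fit outcome ~ resources by least squares; flag large positive residuals."""
    xs = [r[1] for r in records]
    ys = [r[2] for r in records]
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    # Residual: how much better (or worse) a unit does than peers with
    # similar resources would suggest.
    resid = [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]
    sigma = stdev(resid)
    return [r[0] for r, e in zip(records, resid) if e / sigma > z_threshold]

print(positive_deviants(data))  # → ['C']
```

In practice a real DPPD study would use richer covariates, robust models, and community validation of the flagged cases; the sketch only shows why "bright spots" are residual outliers rather than raw top performers.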

Data as Policy


Paper by Janet Freilich and W. Nicholson Price II: “A large literature on regulation highlights the many different methods of policy-making: command-and-control rulemaking, informational disclosures, tort liability, taxes, and more. But the literature overlooks a powerful method to achieve policy objectives: data. The state can provide (or suppress) data as a regulatory tool to solve policy problems. For administrations with expansive views of government’s purpose, government-provided data can serve as infrastructure for innovation and push innovation in socially desirable directions; for administrations with deregulatory ambitions, suppressing or choosing not to collect data can reduce regulatory power or serve as a back-door mechanism to subvert statutory or common law rules. Government-provided data is particularly powerful for data-driven technologies such as AI, where it is sometimes more effective than traditional methods of regulation. But government-provided data is a policy tool beyond AI and can influence policy in any field. We illustrate why government-provided data is a compelling tool both for positive regulation and deregulation in contexts ranging from healthcare discrimination to automated legal practice to smart power generation. We then consider objections and limitations to the role of government-provided data as a policy instrument, with substantial focus on privacy concerns and the possibility of autocratic abuse.

We build on the broad literature on regulation by introducing data as a regulatory tool. We also join—and diverge from—the growing literature on data by showing that while data can be privately produced purely for private gain, they do not need to be. Rather, government can be deeply involved in the generation and sharing of data, taking a much more publicly oriented view. Ultimately, while government-provided data are not a panacea for either regulatory or data problems, governments should view data provision as an understudied but useful tool in the innovation and governance toolbox…(More)”

How Being Watched Changes How You Think


Article by Simon Makin: “In 1785 English philosopher Jeremy Bentham designed the perfect prison: Cells circle a tower from which an unseen guard can observe any inmate at will. As far as a prisoner knows, at any given time, the guard may be watching—or may not be. Inmates have to assume they’re constantly observed and behave accordingly. Welcome to the Panopticon.

Many of us will recognize this feeling of relentless surveillance. Information about who we are, what we do and buy and where we go is increasingly available to completely anonymous third parties. We’re expected to present much of our lives to online audiences and, in some social circles, to share our location with friends. Millions of effectively invisible closed-circuit television (CCTV) cameras and smart doorbells watch us in public, and we know facial recognition with artificial intelligence can put names to faces.

So how does being watched affect us? “It’s one of the first topics to have been studied in psychology,” says Clément Belletier, a psychologist at University of Clermont Auvergne in France. In 1898 psychologist Norman Triplett showed that cyclists raced harder in the presence of others. From the 1970s onward, studies showed how we change our overt behavior when we are watched to manage our reputation and social consequences.

But being watched doesn’t just change our behavior; decades of research show it also infiltrates our mind to impact how we think. And now a new study reveals how being watched affects unconscious processing in our brain. In this era of surveillance, researchers say, the findings raise concerns about our collective mental health…(More)”.

Measuring the Shade Coverage of Trees and Buildings in Cambridge, Massachusetts


Paper by Amirhosein Shabrang, Mehdi Pourpeikari Heris, and Travis Flohr: “We investigated the spatial shade patterns of trees and buildings on sidewalks and bike lanes in Cambridge, Massachusetts. We used Lidar data and 3D modeling to analyze the spatial and temporal shade distribution across the City. Our analysis shows significant shade variations throughout the City: western areas receive more shade from trees, while eastern areas receive more shade from buildings. The City’s northern areas lack shade, but integrating natural and built sources of shade can improve coverage. This study’s findings help identify shade coverage gaps, which have implications for urban planning and design for more heat-resilient cities…(More)”
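The temporal side of shade mapping ultimately reduces to solar geometry. As a back-of-envelope illustration (a simplification, not the authors' Lidar-based pipeline), the shadow cast by a vertical obstruction shortens as the sun climbs:

```python
import math

def shadow_length(height_m, solar_altitude_deg):
    """Length (in metres) of the shadow cast by a vertical obstruction
    of the given height when the sun sits at the given altitude angle."""
    return height_m / math.tan(math.radians(solar_altitude_deg))

# e.g. a 10 m tree at 30° solar altitude casts a ~17.3 m shadow,
# but at 60° the shadow shrinks to ~5.8 m.
```

This is why shade coverage must be assessed across the day and the seasons: the same tree canopy that shades a sidewalk at noon in June may leave it fully exposed in the low-sun hours when heat stress on pedestrians can still be severe.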

AI in Urban Life


Book by Patricia McKenna: “In exploring artificial intelligence (AI) in urban life, this book brings together and extends thinking on how human-AI interactions are continuously evolving. Through such interactions, people are aided on the one hand, while becoming more aware of their own capabilities and potentials on the other hand, pertaining, for example, to creativity, human sensing, and collaboration.

It is the particular focus of research questions developed in relation to awareness, smart cities, autonomy, privacy, transparency, theory, methods, practices, and collective intelligence, along with the wide range of perspectives and opportunities offered, that set this work apart from others. Conceptual frameworks are formulated for each of these areas to guide explorations and understandings in this work and going forward. A synthesis is provided in the final chapter for perspectives, challenges and opportunities, and conceptual frameworks for urban life in an era of AI, opening the way for evolving research and practice directions…(More)”.

Smart Cities to Smart Societies: Moving Beyond Technology


Book edited by Esmat Zaidan, Imad Antoine Ibrahim, and Elie Azar: “…explores the governance of smart cities from a holistic approach, arguing that the creation of smart cities must consider the specific circumstances of each country to improve the preservation, revitalisation, liveability, and sustainability of urban areas. The recent push for smart cities is part of an effort to reshape urban development through megaprojects, centralised master planning, and approaches that convey modernism and global affluence. However, moving towards a citywide smart transition is a major undertaking, and complexities are expected to grow exponentially. This book argues that a comprehensive approach is necessary to consider all relevant aspects. The chapters seek to identify the potential and pitfalls of the smart transformation of urban communities and its role in sustainability goals; share state-of-the-art practices concerning technology, policy, and social science dimensions in smart cities and communities; and develop opportunities for cooperation and partnership in wider and larger research and development programmes. The book is divided into three parts: the first highlights the significance of various societal elements and factors in facilitating a successful smart transition, with a particular emphasis on the role of human capital; the second delves into the challenges associated with technology and its integration into smart city initiatives; and the third examines the current state of regulations and policies governing smart cities. The book will be an important asset for students and researchers studying law, engineering, political science, international relations, geopolitics, and economics…(More)”.

How the UK could monetise ‘citizen data’ and turn it into a national asset


Article by Ashley Braganza and S. Asieh H. Tabaghdehi: “Across all sectors, UK citizens produce vast amounts of data. This data is increasingly needed to train AI systems. But it is also of enormous value to private companies, which use it to target adverts to consumers based on their behaviour or to personalise content to keep people on their site.

Yet the economic and social value of this citizen-generated data is rarely returned to the public, highlighting the need for more equitable and transparent models of data stewardship.

AI companies have demonstrated that datasets hold immense economic, social and strategic value. And the UK’s AI Opportunities Action Plan notes that access to new and high-quality datasets can confer a competitive edge in developing AI models. This in turn unlocks the potential for innovative products and services.

However, there’s a catch. Most citizens have signed over their data to companies by accepting standard terms and conditions. Once citizen data is “owned” by companies, this leaves others unable to access it or forced to pay to do so.

Commercial approaches to data tend to prioritise short-term profit, often at the expense of the public interest. The debate over the use of artistic and creative materials to train AI models without recompense to the creator exemplifies the broader trade-off between commercial use of data and the public interest.

Countries around the world are recognising the strategic value of public data. The UK government could lead in making public data into a strategic asset. What this might mean in practice is the government owning citizen data and monetising this through sale or licensing agreements with commercial companies.

In our evidence, we proposed a UK sovereign data fund to manage the monetisation of public datasets curated within the NDL. This fund could invest directly in UK companies, fund scale-ups and create joint ventures with local and international partners.

The fund would have powers to license anonymised, ethically governed data to companies for commercial use. It would also be in a position to fast-track projects that benefit the UK or have been deemed to be national priorities. (These priorities are drones and other autonomous technologies as well as engineering biology, space and AI in healthcare.)…(More)”.