Data as Policy


Paper by Janet Freilich and W. Nicholson Price II: “A large literature on regulation highlights the many different methods of policy-making: command-and-control rulemaking, informational disclosures, tort liability, taxes, and more. But the literature overlooks a powerful method to achieve policy objectives: data. The state can provide (or suppress) data as a regulatory tool to solve policy problems. For administrations with expansive views of government’s purpose, government-provided data can serve as infrastructure for innovation and push innovation in socially desirable directions; for administrations with deregulatory ambitions, suppressing or choosing not to collect data can reduce regulatory power or serve as a back-door mechanism to subvert statutory or common law rules. Government-provided data is particularly powerful for data-driven technologies such as AI, where it is sometimes more effective than traditional methods of regulation. But government-provided data is a policy tool beyond AI and can influence policy in any field. We illustrate why government-provided data is a compelling tool for both positive regulation and deregulation in contexts ranging from healthcare discrimination to the automation of legal practice to smart power generation. We then consider objections and limitations to the role of government-provided data as a policy instrument, with substantial focus on privacy concerns and the possibility of autocratic abuse.

We build on the broad literature on regulation by introducing data as a regulatory tool. We also join—and diverge from—the growing literature on data by showing that while data can be privately produced purely for private gain, they do not need to be. Rather, government can be deeply involved in the generation and sharing of data, taking a much more publicly oriented view. Ultimately, while government-provided data are not a panacea for either regulatory or data problems, governments should view data provision as an understudied but useful tool in the innovation and governance toolbox…(More)”

How Being Watched Changes How You Think


Article by Simon Makin: “In 1785 English philosopher Jeremy Bentham designed the perfect prison: Cells circle a tower from which an unseen guard can observe any inmate at will. As far as a prisoner knows, at any given time, the guard may be watching—or may not be. Inmates have to assume they’re constantly observed and behave accordingly. Welcome to the Panopticon.

Many of us will recognize this feeling of relentless surveillance. Information about who we are, what we do and buy and where we go is increasingly available to completely anonymous third parties. We’re expected to present much of our lives to online audiences and, in some social circles, to share our location with friends. Millions of effectively invisible closed-circuit television (CCTV) cameras and smart doorbells watch us in public, and we know facial recognition with artificial intelligence can put names to faces.

So how does being watched affect us? “It’s one of the first topics to have been studied in psychology,” says Clément Belletier, a psychologist at University of Clermont Auvergne in France. In 1898 psychologist Norman Triplett showed that cyclists raced harder in the presence of others. From the 1970s onward, studies showed how we change our overt behavior when we are watched to manage our reputation and social consequences.

But being watched doesn’t just change our behavior; decades of research show it also infiltrates our mind to impact how we think. And now a new study reveals how being watched affects unconscious processing in our brain. In this era of surveillance, researchers say, the findings raise concerns about our collective mental health…(More)”.

Measuring the Shade Coverage of Trees and Buildings in Cambridge, Massachusetts


Paper by Amirhosein Shabrang, Mehdi Pourpeikari Heris, and Travis Flohr: “We investigated the spatial shade patterns of trees and buildings on sidewalks and bike lanes in Cambridge, Massachusetts. We used Lidar data and 3D modeling to analyze the spatial and temporal shade distribution across the City. Our analysis shows significant shade variations throughout the City: western areas receive more shade from trees, while eastern areas receive more shade from buildings. The City’s northern areas lack shade, but integrating natural and built sources of shade could improve coverage. This study’s findings help identify gaps in shade coverage, which have implications for urban planning and design aimed at more heat-resilient cities…(More)”
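The core computation behind such an analysis is easy to illustrate. Below is a minimal sketch, assuming a Lidar-derived digital surface model (DSM) on a regular grid: a cell counts as shaded when any surface along the sun's azimuth rises above the sun ray passing over it. The function, grid setup, and parameters are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' pipeline) of raster shadow casting
# over a Lidar-derived digital surface model (DSM): a cell is shaded if
# any surface along the sun's azimuth rises above the sun ray over it.
import numpy as np

def shadow_mask(dsm, cell_size, sun_azimuth_deg, sun_altitude_deg, max_dist=500.0):
    """Boolean mask: True where a DSM cell is shaded at the given sun position."""
    az = np.radians(sun_azimuth_deg)          # azimuth clockwise from north
    tan_alt = np.tan(np.radians(sun_altitude_deg))
    dx, dy = np.sin(az), np.cos(az)           # unit step toward the sun (east, north)
    rows, cols = dsm.shape
    shaded = np.zeros_like(dsm, dtype=bool)
    n_steps = int(max_dist / cell_size)
    for r in range(rows):
        for c in range(cols):
            z0 = dsm[r, c]
            for i in range(1, n_steps + 1):
                d = i * cell_size
                rr = int(round(r - dy * d / cell_size))  # north = decreasing row index
                cc = int(round(c + dx * d / cell_size))
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break
                if dsm[rr, cc] > z0 + d * tan_alt:       # blocker above the sun ray
                    shaded[r, c] = True
                    break
    return shaded

# Toy example: one 10 m "building" on flat ground, low sun from the west.
dsm = np.zeros((50, 50))
dsm[25, 25] = 10.0
mask = shadow_mask(dsm, cell_size=1.0, sun_azimuth_deg=270, sun_altitude_deg=20)
print(mask.sum(), "cells shaded")  # shadow stretches east of the building
```

Repeating the mask computation for sun positions across the day yields the kind of temporal shade distribution the paper describes.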

AI in Urban Life


Book by Patricia McKenna: “In exploring artificial intelligence (AI) in urban life, this book brings together and extends thinking on how human-AI interactions are continuously evolving. Through such interactions, people are aided on the one hand while, on the other, becoming more aware of their own capabilities and potential in areas such as creativity, human sensing, and collaboration.

It is the particular focus of research questions developed in relation to awareness, smart cities, autonomy, privacy, transparency, theory, methods, practices, and collective intelligence, along with the wide range of perspectives and opportunities offered, that sets this work apart from others. Conceptual frameworks are formulated for each of these areas to guide exploration and understanding, both in this work and going forward. A synthesis is provided in the final chapter of perspectives, challenges and opportunities, and conceptual frameworks for urban life in an era of AI, opening the way for evolving research and practice directions…(More)”.

Smart Cities to Smart Societies: Moving Beyond Technology


Book edited by Esmat Zaidan, Imad Antoine Ibrahim, and Elie Azar: “…explores the governance of smart cities through a holistic approach, arguing that the creation of smart cities must consider the specific circumstances of each country to improve the preservation, revitalisation, liveability, and sustainability of urban areas. The recent push for smart cities is part of an effort to reshape urban development through megaprojects, centralised master planning, and approaches that convey modernism and global affluence. However, moving towards a citywide smart transition is a major undertaking, and complexities are expected to grow exponentially. This book argues that a comprehensive approach is necessary to consider all relevant aspects. The chapters seek to identify the potential and pitfalls of the smart transformation of urban communities and its role in sustainability goals; share state-of-the-art practices concerning technology, policy, and social science dimensions in smart cities and communities; and develop opportunities for cooperation and partnership in wider and larger research and development programmes. The book is divided into three parts. The first highlights the significance of various societal elements and factors in facilitating a successful smart transition, with a particular emphasis on the role of human capital. The second delves into the challenges associated with technology and its integration into smart city initiatives. The final part examines the current state of regulations and policies governing smart cities. The book will be an important asset for students and researchers studying law, engineering, political science, international relations, geopolitics, and economics…(More)”.

How the UK could monetise ‘citizen data’ and turn it into a national asset


Article by Ashley Braganza and S. Asieh H. Tabaghdehi: “Across all sectors, UK citizens produce vast amounts of data. This data is increasingly needed to train AI systems. But it is also of enormous value to private companies, which use it to target adverts to consumers based on their behaviour or to personalise content to keep people on their site.

Yet the economic and social value of this citizen-generated data is rarely returned to the public, highlighting the need for more equitable and transparent models of data stewardship.

AI companies have demonstrated that datasets hold immense economic, social and strategic value. And the UK’s AI Opportunities Action Plan notes that access to new and high-quality datasets can confer a competitive edge in developing AI models. This in turn unlocks the potential for innovative products and services.

However, there’s a catch. Most citizens have signed over their data to companies by accepting standard terms and conditions. Once citizen data is “owned” by companies, this leaves others unable to access it or forced to pay to do so.

Commercial approaches to data tend to prioritise short-term profit, often at the expense of the public interest. The debate over the use of artistic and creative materials to train AI models without recompense to the creator exemplifies the broader trade-off between commercial use of data and the public interest.

Countries around the world are recognising the strategic value of public data. The UK government could lead in making public data into a strategic asset. What this might mean in practice is the government owning citizen data and monetising this through sale or licensing agreements with commercial companies.

In our evidence, we proposed a UK sovereign data fund to manage the monetisation of public datasets curated within the proposed National Data Library (NDL). This fund could invest directly in UK companies, fund scale-ups and create joint ventures with local and international partners.

The fund would have powers to license anonymised, ethically governed data to companies for commercial use. It would also be in a position to fast-track projects that benefit the UK or have been deemed to be national priorities. (These priorities are drones and other autonomous technologies as well as engineering biology, space and AI in healthcare.)…(More)”.

WorkflowHub: a registry for computational workflows


Paper by Ove Johan Ragnar Gustafsson et al: “The rising popularity of computational workflows is driven by the need for repetitive and scalable data processing, sharing of processing know-how, and transparent methods. As both combined records of analysis and descriptions of processing steps, workflows should be reproducible, reusable, adaptable, and available. Workflow sharing presents opportunities to reduce unnecessary reinvention, promote reuse, increase access to best practice analyses for non-experts, and increase productivity. In reality, workflows are scattered and difficult to find, in part due to the diversity of available workflow engines and ecosystems, and because workflow sharing is not yet part of research practice. WorkflowHub provides a unified registry for all computational workflows that links to community repositories, and supports both the workflow lifecycle and making workflows findable, accessible, interoperable, and reusable (FAIR). By interoperating with diverse platforms, services, and external registries, WorkflowHub adds value by supporting workflow sharing, explicitly assigning credit, enhancing FAIRness, and promoting workflows as scholarly artefacts. The registry has a global reach, with hundreds of research organisations involved, and more than 800 workflows registered…(More)”
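To see what findability and accessibility mean in practice, here is a hedged sketch of querying the registry programmatically. WorkflowHub exposes a GA4GH Tool Registry Service (TRS) interface; the endpoint path, parameters, and response fields below follow the TRS v2 specification and should be checked against WorkflowHub's live API documentation before use.

```python
# Hedged sketch: listing registered workflows through WorkflowHub's
# GA4GH Tool Registry Service (TRS) interface. Endpoint path, parameters,
# and response fields follow the TRS v2 specification; verify them against
# WorkflowHub's API documentation before relying on this.
import requests

BASE = "https://workflowhub.eu/ga4gh/trs/v2"

resp = requests.get(f"{BASE}/tools", params={"limit": 5}, timeout=30)
resp.raise_for_status()

for tool in resp.json():
    # TRS tools carry an id, a name, and one or more versions whose
    # descriptor types identify the workflow language (CWL, Nextflow, ...).
    versions = ", ".join(v.get("name", "?") for v in tool.get("versions", []))
    print(f'{tool.get("id")}: {tool.get("name")} (versions: {versions})')
```

Because the same TRS interface is implemented by other registries, a client written this way can, in principle, interoperate across them, which is part of the FAIR argument the paper makes.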

Where Cloud Meets Cement


Report by Hanna Barakat, Chris Cameron, Alix Dunn, Prathm Juneja, and Emma Prest: “This report examines the global expansion of data centers driven by AI and cloud computing, highlighting both their economic promises and the often-overlooked social and environmental costs. Through case studies across five countries, it investigates how governments and tech companies influence development, how communities resist harmful effects, and what support is needed for effective advocacy…(More)”.

Designing Shared Data Futures: Engaging young people on how to re-use data responsibly for health and well-being


Report by Hannah Chafetz, Sampriti Saxena, Tracy Jo Ingram, Andrew J. Zahuranec, Jennifer Requejo and Stefaan Verhulst: “When young people are engaged in data decisions for or about them, they not only become more informed about this data, but can also contribute to new policies and programs that improve their health and well-being. However, oftentimes young people are left out of these discussions and are unaware of the data that organizations collect.

In October 2023, The Second Lancet Commission on Adolescent Health and Well-being, the United Nations Children’s Fund (UNICEF), and The GovLab at New York University hosted six Youth Solutions Labs (or co-design workshops) with over 120 young people from 36 countries around the world. In addition to co-designing solutions to five key issues impacting their health and well-being, we sought to understand current sentiments around the re-use of data on those issues. The Labs provided several insights about young people’s preferences regarding: 1) the purposes for which data should be re-used to improve health and well-being, 2) the types and sources of data that should and should not be re-used, 3) who should have access to previously collected data, and 4) under what circumstances data re-use should take place. Additionally, participants offered suggestions about what ethical and responsible data re-use looks like to them and how young people can participate in decision-making processes. In this paper, we elaborate on these findings and provide a series of recommendations to accelerate responsible data re-use for the health and well-being of young people…(More)”.

Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It


Article by Tiago C. Peixoto: “A few weeks ago, I reached out to a handful of seasoned digital services practitioners, NGOs, and philanthropies with a simple question: Where are the compelling generative AI (GenAI) use cases in public-sector workflows? I wasn’t looking for better search or smarter chatbots. I wanted examples of automation of real public workflows – something genuinely interesting and working. The responses, though numerous, were underwhelming.

That question has gained importance amid a growing number of reports forecasting AI’s transformative impact on government. The Alan Turing Institute, for instance, published a rigorous study estimating the potential of AI to help automate over 140 million government transactions in the UK. The Tony Blair Institute also weighed in, suggesting that a substantial portion of public-sector work could be automated. While the report helped bring welcome attention to the issue, its use of GPT-4 to assess task automatability has sparked a healthy discussion about how best to evaluate feasibility. Like other studies in this area, both reports highlight potential – but stop short of demonstrating real service automation.
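To make the methodological point concrete: the contested approach boils down to asking a model to rate task descriptions rather than observing real services. Below is a hedged sketch of that pattern; the model name and rubric are illustrative assumptions, not the report's actual protocol.

```python
# Hedged sketch of LLM-based "task automatability" scoring, the pattern
# at issue in the debate above: the model rates a task *description*; it
# never observes the real workflow, its incentives, or its constraints.
# Model name and rubric are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

RUBRIC = (
    "Rate how automatable this government task is with current AI, "
    "from 0 (not at all) to 10 (fully). Answer with the number only."
)

def automatability_score(task_description: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in model name
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": task_description},
        ],
    )
    return int(resp.choices[0].message.content.strip())

print(automatability_score("Process routine change-of-address requests."))
```

Whatever number comes back, it reflects the model's reading of a sentence, not a measurement of a service, which is exactly the gap the critics point to.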

Without testing technologies in real service environments – where workflows, incentives, and institutional constraints shape outcomes – and grounding each pilot in clear efficiency or well-being metrics, estimates risk becoming abstractions that overstate feasibility.

This pattern aligns with what Arvind Narayanan and Sayash Kapoor argue in “AI as Normal Technology”: the impact of AI is realized only when methods translate into applications and diffuse through real-world systems. My own review, admittedly non-representative, confirms their call for more empirical work on the innovation-diffusion lag.

In the public sector, the gap between capability and impact is not only wide but also structural…(More)”