Stefaan Verhulst
Book by Patricia McKenna: “In exploring artificial intelligence (AI) in urban life, this book brings together and extends thinking on how human-AI interactions are continuously evolving. Through such interactions, people are aided on the one hand while, on the other, becoming more aware of their own capabilities and potential in areas such as creativity, human sensing, and collaboration.
It is the particular focus of research questions developed in relation to awareness, smart cities, autonomy, privacy, transparency, theory, methods, practices, and collective intelligence, along with the wide range of perspectives and opportunities offered, that sets this work apart from others. Conceptual frameworks are formulated for each of these areas to guide explorations and understandings in this work and going forward. The final chapter provides a synthesis of perspectives, challenges and opportunities, and conceptual frameworks for urban life in an era of AI, opening the way for evolving research and practice directions…(More)”.
Book edited by Esmat Zaidan, Imad Antoine Ibrahim, and Elie Azar: “…explores the governance of smart cities from a holistic approach, arguing that the creation of smart cities must consider the specific circumstances of each country to improve the preservation, revitalisation, liveability, and sustainability of urban areas. The recent push for smart cities is part of an effort to reshape urban development through megaprojects, centralised master planning, and approaches that convey modernism and global affluence. However, moving towards a citywide smart transition is a major undertaking, and complexities are expected to grow exponentially. This book argues that a comprehensive approach is necessary to consider all relevant aspects. The chapters seek to identify the potential and pitfalls of the smart transformation of urban communities and its role in sustainability goals; share state-of-the-art practices concerning technology, policy, and social science dimensions in smart cities and communities; and develop opportunities for cooperation and partnership in wider and larger research and development programmes. The book is divided into three parts. The first highlights the significance of various societal elements and factors in facilitating a successful smart transition, with a particular emphasis on the role of human capital. The second delves into the challenges associated with technology and its integration into smart city initiatives. The final part examines the current state of regulations and policies governing smart cities. The book will be an important asset for students and researchers studying law, engineering, political science, international relations, geopolitics, and economics…(More)”.
Book by Naci Karkin and Volkan Göçoğlu: “The book explores and positions citizen centricity within conventional public administration and public policy analysis theories and approaches. It seeks to define an appropriate perspective while utilizing popular, independent, and standalone concepts from the literature that support citizen centricity. Additionally, it illustrates the implementation side with practical cases. It ultimately presents a novel descriptive approach that offers insights into how citizen centricity can be applied in practice. This approach has three essential components: a foundation and two pillars. The foundation comprises new-age public policy making approaches and complexity theory. The first pillar reflects the conceptual dimension, drawing on supporting concepts from the literature on citizen centricity. The second pillar represents the practical dimension, a structure supported by academic research that provides practical cases and inspiration for future applications. The approach treats citizen centricity as a fundamental principle in public policy making and aims to create new awareness of the subject in the academic community. Additionally, the book provides refreshed conceptual and theoretical backgrounds, along with tangible participatory models and frameworks, benefiting academics, professionals, and graduate students…(More)”.
Article by Ashley Braganza and S. Asieh H. Tabaghdehi: “Across all sectors, UK citizens produce vast amounts of data. This data is increasingly needed to train AI systems. But it is also of enormous value to private companies, which use it to target adverts to consumers based on their behaviour or to personalise content to keep people on their site.
Yet the economic and social value of this citizen-generated data is rarely returned to the public, highlighting the need for more equitable and transparent models of data stewardship.
AI companies have demonstrated that datasets hold immense economic, social and strategic value. And the UK’s AI Opportunities Action Plan notes that access to new and high-quality datasets can confer a competitive edge in developing AI models. This in turn unlocks the potential for innovative products and services.
However, there’s a catch. Most citizens have signed over their data to companies by accepting standard terms and conditions. Once citizen data is “owned” by companies, others are left unable to access it or forced to pay to do so.
Commercial approaches to data tend to prioritise short-term profit, often at the expense of the public interest. The debate over the use of artistic and creative materials to train AI models without recompense to the creator exemplifies the broader trade-off between commercial use of data and the public interest.
Countries around the world are recognising the strategic value of public data. The UK government could take the lead in turning public data into a strategic asset. In practice, this might mean the government owning citizen data and monetising it through sale or licensing agreements with commercial companies.
In our evidence, we proposed a UK sovereign data fund to manage the monetisation of public datasets curated within the National Data Library (NDL). This fund could invest directly in UK companies, fund scale-ups and create joint ventures with local and international partners.
The fund would have powers to license anonymised, ethically governed data to companies for commercial use. It would also be in a position to fast-track projects that benefit the UK or have been deemed to be national priorities. (These priorities are drones and other autonomous technologies as well as engineering biology, space and AI in healthcare.)…(More)”.
Article by Bruce Katz and Julie Wagner: “A next wave of innovation districts is gaining momentum given the structural changes underway in the global economy. The examples cited above telegraph where existing innovation districts are headed and explain why new districts are forming. The districts highlighted and many others are responding to fast-changing and highly volatile macro forces and the need to de-risk, decarbonize, and diversify talent.
The next wave of innovation districts is distinctive for multiple reasons.
- The sectors leveraging this innovation geography extend well beyond the traditional focus on life sciences to include advanced manufacturing for both military and civilian purposes.
- The deeper emphasis on decarbonization is driving the use of basic and applied R&D to invent new clean technology products and solutions, as well as the organization of energy generation and distribution within the districts themselves to meet crucial carbon targets.
- The stronger emphasis on the diversification of talent includes the upskilling of workers for new production activities and a broader set of systems to drive inclusive innovation to address long-standing inequities.
- The districts are attracting a broader group of stakeholders, including manufacturing companies, utilities, university industrial design and engineering departments, and hard tech startups.
- The districts ultimately are looking to engage a wider base of investors given the disparate resources and traditions of capitalization that support defense tech, clean tech, med tech and other favored forms of innovation.
Some regions or states are also seeking ways to connect a constellation of districts and other economic hubs to harness the imperative to innovate that these and other macro forces have accentuated. The state of South Australia is one such example. It has prioritized several innovation hubs across the state to foster South Australia’s knowledge and innovation ecosystem, as well as to identify emerging economic clusters in industry sectors of global competitiveness to advance the broader economy…(More)”.
Paper by Ove Johan Ragnar Gustafsson et al: “The rising popularity of computational workflows is driven by the need for repetitive and scalable data processing, sharing of processing know-how, and transparent methods. As both combined records of analysis and descriptions of processing steps, workflows should be reproducible, reusable, adaptable, and available. Workflow sharing presents opportunities to reduce unnecessary reinvention, promote reuse, increase access to best practice analyses for non-experts, and increase productivity. In reality, workflows are scattered and difficult to find, in part due to the diversity of available workflow engines and ecosystems, and because workflow sharing is not yet part of research practice. WorkflowHub provides a unified registry for all computational workflows that links to community repositories, and supports both the workflow lifecycle and making workflows findable, accessible, interoperable, and reusable (FAIR). By interoperating with diverse platforms, services, and external registries, WorkflowHub adds value by supporting workflow sharing, explicitly assigning credit, enhancing FAIRness, and promoting workflows as scholarly artefacts. The registry has a global reach, with hundreds of research organisations involved, and more than 800 workflows registered…(More)”
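As a purely illustrative companion to the paper, the sketch below shows how a researcher might programmatically list workflows from a registry such as WorkflowHub. The endpoint path, response structure, and field names are assumptions for illustration only, not confirmed details of the WorkflowHub API; consult its documentation for the actual interface.

```python
# Minimal sketch: listing registered workflows from a FAIR workflow registry.
# Assumptions: the registry exposes a JSON listing at /workflows.json and wraps
# records in a top-level "data" key; verify against the real API before use.
import requests

REGISTRY_URL = "https://workflowhub.eu/workflows.json"  # assumed endpoint

def list_workflows(url: str = REGISTRY_URL) -> list:
    """Fetch the registry listing and return the workflow records."""
    response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    response.raise_for_status()
    return response.json().get("data", [])

if __name__ == "__main__":
    for record in list_workflows()[:10]:
        # "id" and "attributes.title" are assumed field names for illustration.
        print(record.get("id"), record.get("attributes", {}).get("title"))
```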
Report by Hanna Barakat, Chris Cameron, Alix Dunn, Prathm Juneja, and Emma Prest: “This report examines the global expansion of data centers driven by AI and cloud computing, highlighting both their economic promises and the often-overlooked social and environmental costs. Through case studies across five countries, it investigates how governments and tech companies influence development, how communities resist harmful effects, and what support is needed for effective advocacy…(More)”.
Report by Hannah Chafetz, Sampriti Saxena, Tracy Jo Ingram, Andrew J. Zahuranec, Jennifer Requejo and Stefaan Verhulst: “When young people are engaged in data decisions for or about them, they not only become more informed about this data, but can also contribute to new policies and programs that improve their health and well-being. However, young people are often left out of these discussions and unaware of the data that organizations collect.
In October 2023, The Second Lancet Commission on Adolescent Health and Well-being, the United Nations Children’s Fund (UNICEF), and The GovLab at New York University hosted six Youth Solutions Labs (or co-design workshops) with over 120 young people from 36 countries around the world. In addition to co-designing solutions to five key issues impacting their health and well-being, we sought to understand current sentiments around the re-use of data on those issues. The Labs provided several insights about young people’s preferences regarding: 1) the purposes for which data should be re-used to improve health and well-being, 2) the types and sources of data that should and should not be re-used, 3) who should have access to previously collected data, and 4) under what circumstances data re-use should take place. Additionally, participants provided suggestions about what ethical and responsible data re-use looks like to them and how young people can participate in decision-making processes. In this paper, we elaborate on these findings and provide a series of recommendations to accelerate responsible data re-use for the health and well-being of young people…(More)”.
Article by Steven M. Bellovin: “There were three U.S. technical/legal developments occurring in approximately 1993 that had a profound effect on the technology industry and on many technologists. More such developments are occurring with increasing frequency.
The three developments were, in fact, technically unrelated. One was a bill before the U.S. Congress for a standardized wiretap interface in phone switches, a concept that spread around the world under the generic name of “lawful intercept.” The second was an update to the copyright statute to adapt to the digital age. While there were some useful changes—caching proxies and ISPs transmitting copyrighted material were no longer to be held liable for making illegal copies of protected content—it also provided an easy way for careless or unscrupulous actors—including bots—to request takedown of perfectly legal material. The third was the infamous Clipper chip, an encryption device that provided a backdoor for the U.S.—and only the U.S.—government.
All three of these developments could be and were debated on purely legal or policy grounds. But there were also technical issues. Thus, one could argue on legal grounds that the Clipper chip granted the government unprecedented powers, powers arguably in violation of the Fourth Amendment to the U.S. Constitution. That, of course, is a U.S. issue—but technologists, including me, pointed out the technical risks of deploying a complex cryptographic protocol, anywhere in the world (and many other countries have since expressed similar desires). Sure enough, Matt Blaze showed how to abuse the Clipper chip to let it do backdoor-free encryption, and at least two other mechanisms for adding backdoors to encryption protocols were shown to have flaws that allowed malefactors to read data that others had encrypted.
These posed a problem: debating some issues intelligently required not just a knowledge of law or of technology, but of both. That is, some problems cannot be discussed purely on technical grounds or purely on legal grounds; the crux of the matter lies in the intersection.
Consider, for example, the difference between content and metadata in a communication. Metadata alone is extremely powerful; indeed, Michael Hayden, former director of both the CIA and the NSA, once said, “We kill people based on metadata.” The combination of content and metadata is of course even more powerful. However, under U.S. law (and the legal reasoning is complex and controversial), the content of a phone call is much more strongly protected than the metadata: who called whom, when, and for how long they spoke. But how does this doctrine apply to the Internet, a network that provides far more powerful abilities to the endpoints in a conversation? (Metadata analysis is not an Internet-specific phenomenon. The militaries of the world have likely been using it for more than a century.) You cannot begin to answer that question without knowing not just how the Internet actually works, but also the legal reasoning behind the difference. It took more than 100 pages for some colleagues and me, three computer scientists and a former Federal prosecutor, to show how the line between content and metadata can be drawn in some cases (and that the Department of Justice’s manuals and some Federal judges got the line wrong), but that in other cases, there is no possible line.
Newer technologies pose the same sorts of risks…(More)”.
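To make the content/metadata distinction in the excerpt above concrete, here is a small, purely hypothetical sketch (the records and field names are invented for illustration, not taken from the article): even with no call content at all, the metadata fields alone are enough to reveal who talks to whom, when, and how often.

```python
# Illustrative sketch: analysing call metadata without any call content.
# All records and field names are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CallRecord:
    caller: str      # metadata: who called
    callee: str      # metadata: whom they called
    start: str       # metadata: when (ISO 8601 timestamp)
    duration_s: int  # metadata: for how long they spoke
    # Deliberately no audio or transcript field: content is absent.

records = [
    CallRecord("alice", "clinic", "2025-03-01T09:00:00", 300),
    CallRecord("alice", "clinic", "2025-03-08T09:05:00", 420),
    CallRecord("alice", "bob", "2025-03-09T20:00:00", 60),
]

# Even without content, the calling pattern itself is revealing.
for (caller, callee), n in Counter((r.caller, r.callee) for r in records).most_common():
    print(f"{caller} -> {callee}: {n} call(s)")
```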
Paper by Marij Swinkels, Olivier de Vette & Victor Toom: “Long-term public issues face the intergenerational problem: current policy decisions place a disproportionate burden on future generations while primarily benefitting those in the present. The interests of present generations trump those of future generations, as the latter play no explicit part as stakeholders in policymaking processes. How can the interests of future generations be voiced in the present? In this paper, we explore an innovative method to incorporate the interests of future generations in the process of policymaking: future design. First, we situate future design in the policy process and relate it to other intergenerational policymaking initiatives that aim to redress the intergenerational problem. Second, we show how we applied future design and provide insights into three pilots that we organized on two long-term public issues in the Netherlands: housing shortages and water management. We conclude that future design can effectively contribute to representing the interests of future generations, but that adopting future design in different contexts also requires adapting the method. The findings increase our understanding of the value of future design as an innovative practice to strengthen intergenerational policymaking. As such, it provides policymakers with insights into how to use this method…(More)”.