Stefaan Verhulst
Book by Alfredo M. Ronchi: “…This book explores a society currently being transformed by the influence of advanced information…
Today, various parameters actively influence e-Services’ success or failure: cultural aspects,…
Book edited by Sarah B. Macfarlane and Carla AbouZahr: “This handbook compiles methods for gathering, organizing and disseminating data to inform policy and manage health systems worldwide. Contributing authors describe national and international structures for generating data and explain the relevance of ethics, policy, epidemiology, health economics, demography, statistics, geography and qualitative methods to describing population health. The reader, whether a student of global health, public health practitioner, programme manager, data analyst or policymaker, will appreciate the methods, context…
Book by Leigh Phillips and Michal Rozworski: “For the left and the right, major multinational companies are held up as the ultimate expressions of free-market capitalism. Their remarkable success appears to vindicate the old idea that modern society is too complex to be subjected to a plan. And yet, as Leigh Phillips and Michal Rozworski argue, much of the economy of the West is centrally planned at present. Not only is planning on vast scales possible, we already have it and it works. The real question is whether planning can be democratic. Can it be transformed to work for us?
An engaging, polemical romp through economic theory, computational complexity, and the history of planning, The People’s Republic of Walmart revives the conversation about how society can extend democratic decision-making to all economic matters. With the advances in information technology in recent decades and the emergence of globe-straddling collective enterprises, democratic planning in the interest of all humanity is more important and closer to attainment than ever before….(More)”.
(Open Access) Book by Roxana Radu: “… provides an incisive analysis of the emergence and evolution of global Internet governance, revealing its mechanisms, key actors and dominant community practices. Based on extensive empirical analysis covering more than four decades, it presents the evolution of Internet regulation from the early days of networking to more recent debates on algorithms and artificial intelligence, putting into perspective its politically mediated system of rules built on technical features and power differentials.
For anyone interested in understanding contemporary global developments, this book is a primer on how norms of…
Paper for the European Parliamentary Research Service: “This study examines the consequences of the increasingly prevalent use of artificial intelligence (AI) in disinformation initiatives upon freedom of expression, pluralism and the functioning of a democratic polity. The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy.
Chapter 1 introduces the background to the study and presents the definitions used. Chapter 2 scopes the policy boundaries of disinformation from economic, societal and technological perspectives, focusing on the media context,…
Hasan S. Merali, Li-Yi Lin, Qingfeng Li, and Kavi Bhalla in Injury Prevention: “The majority of Thailand’s road traffic deaths occur on…
Using Google Maps, 3000 intersections in Bangkok were selected at random. At each intersection, hyperlinks to four images, 90° apart, were extracted. These 12 000 images were processed in Amazon Mechanical Turk using crowdsourcing to identify images containing motorcycles. The remaining images were sorted manually to determine helmet use…
After processing, 462 unique motorcycle drivers were…
This novel method of estimating helmet use has produced results similar to traditional methods. Applying this technology can reduce…
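The sampling and task-generation step described above (random intersections, four images 90° apart per point, uploaded for crowdsourced labelling) could be sketched roughly as follows. This is an illustrative reconstruction rather than the authors' pipeline: the Bangkok bounding box, the street-view URL template, the API key placeholder, and the output file name are all assumptions.

```python
import csv
import random

# Hypothetical sketch: sample coordinates within a rough bounding box for
# Bangkok and build four street-level image URLs per point (headings 90°
# apart), roughly mirroring the sampling step described above. The
# coordinates, URL template, and file names are illustrative assumptions.

BANGKOK_BBOX = (13.50, 100.33, 13.95, 100.94)  # (min_lat, min_lng, max_lat, max_lng), approximate
STREET_VIEW_URL = (
    "https://maps.googleapis.com/maps/api/streetview"
    "?size=640x640&location={lat:.6f},{lng:.6f}&heading={heading}&key=YOUR_API_KEY"
)

def sample_points(n, bbox, seed=42):
    """Draw n random points inside the bounding box (a stand-in for
    selecting intersections from a map)."""
    rng = random.Random(seed)
    min_lat, min_lng, max_lat, max_lng = bbox
    return [
        (rng.uniform(min_lat, max_lat), rng.uniform(min_lng, max_lng))
        for _ in range(n)
    ]

def build_image_tasks(points):
    """For each point, emit four image URLs at headings 0°, 90°, 180°, 270°,
    one row per image, ready to upload as a crowdsourcing task batch."""
    for i, (lat, lng) in enumerate(points):
        for heading in (0, 90, 180, 270):
            yield {
                "point_id": i,
                "heading": heading,
                "image_url": STREET_VIEW_URL.format(lat=lat, lng=lng, heading=heading),
            }

if __name__ == "__main__":
    points = sample_points(3000, BANGKOK_BBOX)           # 3000 sampled locations
    with open("image_tasks.csv", "w", newline="") as f:  # 12 000 image rows
        writer = csv.DictWriter(f, fieldnames=["point_id", "heading", "image_url"])
        writer.writeheader()
        writer.writerows(build_image_tasks(points))
```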
Paper by Elizabeth J. Traut and Aaron Steinfeld: “Public transit is an important contributor to sustainable transportation as well as a public service that makes necessary travel possible for many. Poor transit transfers can lead to both a real and perceived reduction in convenience and safety, especially for people with disabilities. Poor transfers can expose riders to inclement weather and crime, and they can reduce transit ridership by motivating riders who have the option of driving or using paratransit to elect a more expensive and inefficient travel mode. Unfortunately, knowledge about inconvenient, missed, and unsafe transit transfers is sparse and incomplete.
We show that crowdsourced public transit ridership data, which is more scalable to collect than traditional surveys, can be used to analyze transit transfers. The Tiramisu Transit app merges open transit data with information contributed by users about which trips they take. We use Tiramisu data to perform origin-destination analysis and identify connecting trips, creating a better understanding of where and when poor transfers occur in the Pittsburgh region. We merge the results with data from other open public data sources, including crime data, to create a data resource that can be used for planning and for identifying locations where bus shelters and other infrastructure improvements may facilitate safer, more comfortable waits and more accessible transfers. We use generalizable methods to ensure broader value to both science and practitioners.
We present a case study of the Pittsburgh region, in which we identified and characterized 338 transfers from 142 users. We found that 66.6% of transfers were within 0.4 km (0.25 mi.) and 44.1% of transfers took less than 10 min. We identified the geographical distribution of transfers and found several highly utilized transfer locations that were not identified by the Port Authority of Allegheny County as recommended transfer points, and so might need more planning attention. We cross-referenced transfer location and wait time data with crime levels to provide additional planning insight….(More)”.
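A minimal sketch of the transfer-identification logic described in the abstract (pairing a rider's consecutive trips when the wait is short and the alighting and boarding stops are close together) might look like the following. The record fields and the wait and walking-distance thresholds are illustrative assumptions, not the Tiramisu schema or the authors' actual parameters.

```python
from dataclasses import dataclass
from datetime import datetime
from itertools import groupby
from math import radians, sin, cos, asin, sqrt

# Hypothetical sketch: given user-contributed trip records, flag consecutive
# trips by the same user as a candidate "transfer" when the wait is short and
# the alighting and boarding stops are close together.

@dataclass
class TripRecord:
    user_id: str
    route: str
    board_time: datetime
    alight_time: datetime
    board_lat: float
    board_lng: float
    alight_lat: float
    alight_lng: float

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def find_transfers(trips, max_wait_min=60, max_walk_km=0.4):
    """Pair consecutive trips per user into candidate transfers."""
    transfers = []
    trips = sorted(trips, key=lambda t: (t.user_id, t.board_time))
    for _, user_trips in groupby(trips, key=lambda t: t.user_id):
        user_trips = list(user_trips)
        for prev, nxt in zip(user_trips, user_trips[1:]):
            wait_min = (nxt.board_time - prev.alight_time).total_seconds() / 60.0
            walk_km = haversine_km(prev.alight_lat, prev.alight_lng,
                                   nxt.board_lat, nxt.board_lng)
            if prev.route != nxt.route and 0 <= wait_min <= max_wait_min and walk_km <= max_walk_km:
                transfers.append({"user": prev.user_id, "from": prev.route,
                                  "to": nxt.route, "wait_min": wait_min,
                                  "walk_km": walk_km})
    return transfers
```

The resulting transfer list can then be joined with other open datasets (crime reports, shelter locations) by transfer location, as the paper describes.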
Working Paper by Ajjit Narayanan and Graham MacDonald: “Data is a critical resource for government decisionmaking, and in recent years, local governments, in a bid for transparency, community engagement, and innovation, have released many municipal datasets on publicly accessible open data portals. At the same time, advocates, reporters, and others have voiced concerns about the bias of the algorithms used to guide public decisions and of the data that power them.
Although significant progress is being made in developing tools for assessing algorithmic bias and improving transparency, we could not find any standardized tools for assessing bias in open data itself. In other words, how can policymakers, analysts, and advocates systematically measure the level of bias in the data that power city decisionmaking, whether an algorithm is used or not?
To fill this gap, we present a prototype of an automated bias assessment tool for geographic data. This new tool will allow city officials, concerned residents, and other stakeholders to quickly assess the bias and representativeness of their data. The tool allows users to upload a file with latitude and longitude coordinates and receive simple metrics of spatial and demographic bias across their city.
The tool is built on geographic and demographic data from the Census and assumes that the population distribution in a city represents the “ground truth” against which the uploaded data are compared. To provide an illustrative example of the tool’s use and output, we test our bias assessment on three datasets—bikeshare station locations, 311 service request locations, and Low Income Housing Tax Credit (LIHTC) building locations—across a few hand-selected example cities….(More)”
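As a rough illustration of the comparison the prototype performs (shares of uploaded points versus Census population shares), here is a minimal sketch that assumes each uploaded point has already been assigned to a census tract; the tool itself starts from raw latitude/longitude coordinates and Census geographies, and the file and column names below are hypothetical.

```python
import csv
from collections import Counter

# Minimal sketch of the core comparison, under the simplifying assumption
# that each uploaded point has already been geocoded to a census tract.
# File names and column names are illustrative, not the tool's actual inputs.

def load_tract_population(path):
    """tract_id,population CSV -> {tract_id: share_of_population}."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    total = sum(float(r["population"]) for r in rows)
    return {r["tract_id"]: float(r["population"]) / total for r in rows}

def load_point_shares(path):
    """CSV with one tract_id per uploaded point -> {tract_id: share_of_points}."""
    with open(path, newline="") as f:
        counts = Counter(r["tract_id"] for r in csv.DictReader(f))
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def spatial_bias(point_shares, pop_shares):
    """One simple summary (a dissimilarity-style index): 0 means the points
    mirror the population exactly, 1 means complete separation."""
    tracts = set(point_shares) | set(pop_shares)
    return 0.5 * sum(abs(point_shares.get(t, 0.0) - pop_shares.get(t, 0.0)) for t in tracts)

if __name__ == "__main__":
    pop = load_tract_population("tract_population.csv")
    pts = load_point_shares("uploaded_points_with_tracts.csv")
    print(f"Spatial bias index: {spatial_bias(pts, pop):.3f}")
```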
Gideon Rachman at the Financial Times: “The 19th century popularised the idea of the “…
It is an idea that is gaining ground in states as diverse as China, India, Russia, Turkey and even the US. The notion of the…
One reason that the idea of the…
What is more surprising is that rightwing thinkers in the US are also retreating from the idea of “universal values” — in…

First Volume of Circular City, A Research Journal by New Lab edited by André Corrêa d’Almeida: “…Circular City Data is the topic being explored in the first iteration of New Lab’s The Circular City program, which looks at data and knowledge as the energy, flow, and medium of collaboration. Circular data refers to the collection, production, and exchange of data, and business insights, between a series of collaborators around a shared set of inquiries. In some scenarios, data may be produced by start-ups and be of high value to the city; in other cases, data may be produced by the city and be of potential value to the public, start-ups, or enterprise companies. The conditions that need to be in place to safely, ethically, and efficiently extract the highest potential value from data are what this program aims to uncover.
Similar to living systems, urban systems can be enhanced if the total pool of data available, i.e., energy, can be democratized and decentralized and data analytics used widely to positively impact quality of life. The abundance of data available, the vast differences in capacity across organizations to handle it, and the growing complexity of urban challenges provide an opportunity to test how principles of circular city data can help establish new forms of public and private partnerships that make cities more economically prosperous, livable, and resilient. Though we talk of an overabundance of data, it is often still not visible or tactically wielded at the local level in a way that benefits people.
Circular City Data is an effort to build a safe environment whereby start-ups, city agencies, and larger firms can collect, produce, access and exchange data, as well as business insights, through transaction mechanisms that do not necessarily require currency, i.e., through reciprocity. Circular data is data that travels across a number of stakeholders, helping to deliver insights and make clearer the opportunities where such stakeholders can work together to improve outcomes. It includes cases where a set of “circular” relationships needs to be in place in order to produce such data and business insights. For example, if an AI company lacks access to raw data from the city, it won’t be able to provide valuable insights to the city. Similarly, Numina required an established relationship with the DBP in order to access the infrastructure necessary to install its product and begin generating data that could be shared back with them. …
Next, the case study documents and explains how The Circular City program was conceived, designed, and implemented, with the goal of offering lessons for scalability at New Lab and replicability in other cities around the world. The three papers that follow investigate and methodologically test the value of circular data applied to three different, but related, urban challenges: economic growth, mobility, and resilience.
Contents
- Introduction to The Circular City Research Program (André Corrêa d’Almeida)
- The Circular City Program: The Case Study (André Corrêa d’Almeida and Caroline McHeffey)
- Circular Data for a Circular City: Value Propositions for Economic Development (Stefaan G. Verhulst, Andrew Young, and Andrew J. Zahuranec)
- Circular Data for a Circular City: Value Propositions for Mobility (Arnaud Sahuguet)
- Circular Data for a Circular City: Value Propositions for Resilience and Sustainability (Nilda Mesa)
- Conclusion (André Corrêa d’Almeida)