
Stefaan Verhulst

Blog by Aleem Walji: “Crises do not create inequity and fault lines in society, they expose them. The systems and structures that give rise to inequality and inequity are deep-rooted and powerful. In recent months, we have seen the coronavirus bring into high relief many social and economic vulnerabilities across the world. It is now clear that Hispanics and Blacks are even more vulnerable to Covid-19 because of underlying health conditions, more frequent exposure to the virus, and broken social safety nets. This trend will only accelerate as the virus gains a foothold in Africa, parts of Asia, and Latin America.

The impact of the virus in places where health systems are weak, poverty is high, and large numbers of people are immunocompromised could be devastating. How do we mitigate the medium-term and second-order effects of a pandemic that will shrink economic growth and exacerbate inequality? This year alone, more than 500 million people are expected to fall into poverty, mostly in Africa and Asia. To defeat a virus that does not respect geographic boundaries, it is urgent for public and private actors, philanthropies, and global development institutions to use every tool available to alleviate a global humanitarian emergency and attendant economic collapse.

Technology, data science, and digital readiness are crucial elements for an effective emergency response and foundational to sustaining a long-term recovery. Already, scientists and researchers across the world are leveraging data and digital platforms to accelerate the development of a vaccine, fast-track clinical trials, and conduct contact tracing using mobile-enabled tools. Sensors are collecting huge amounts of data, and machine learning algorithms are helping policymakers decide when to relax physical distancing, where to open the economy, and for how long.

Access to reliable information for decision-making, however, is not evenly spread. High-frequency, granular, and anonymized datasets are essential for public-health officials and community health workers to target interventions and reach vulnerable populations faster and at a lower cost. Equipped with reliable data, civic technologists can leverage tools like artificial intelligence and machine learning to flatten the curve of Covid-19 and also the curve of inequity and unequal access to services and support.

This will not happen on its own. Preventing a much deeper digital divide will require forward-leaning policymakers, far-sighted investors and grant makers, civic-minded tech innovators and businesses, and a robust, digitally savvy civil society to work collaboratively for social and economic inclusion. It will require political will and improved data governance to deploy digital platforms to serve populations furthest behind. It is in our collective interest to ensure the health and well-being of every segment of society. Digital inclusion is part of the solution.

There are certain pathways public, private and social actors can follow to leverage data science, digital tools, and platforms today….(More)”.

Digital in the Time of the Coronavirus: Data Science and Technology as a Force for Inclusion

Urban Futures Studio: “In July 2020, we published our new essay ‘What, How and Who? Designing inclusive interactions in the energy transition’ (Bronsvoort, Hoffman and Hajer, 2020). In this essay, we argue that how the interactions between citizens and governments are shaped and enacted has a large influence on who gets involved and to what extent people feel heard. To apply this approach to cases, we distinguish between three dimensions of interaction:

  • What (the defined object or issue at hand)
  • How (the setting and staging of the interaction)
  • Who (the target groups and protagonists of the process)

Focusing on the issue of form, we argue that processes for interaction between citizens and governments should be designed in a way that is more future oriented, organized over the long term, in closer proximity to citizens and with attention to the powerful role of ‘in-betweeners’ and ‘in-between’ places such as community houses, where people can meet to deliberate on the wide range of possible futures for their neighbourhood. 

Towards a multiplicity of future visions for sustainable cities
The energy transition has major consequences for the way we live, work, move and consume. For such complex transitions, governments need to engage and collaborate with citizens and other stakeholders. Their engagement enriches existing visions of future neighbourhoods, informs local policies and stimulates change. But how do you shape and organize such a participatory process? While governments use a wide range of public participation methods, researchers have emphasized the limitations of many of these conventional methods with regard to including diverse groups of citizens and bridging discrepancies between government approaches and people’s lived experiences.

Rethinking citizen engagement for an inclusive energy transition
To help rethink citizen engagement, the Urban Futures Studio investigates existing and new approaches to citizen engagement and how they are practised by governments and societal actors. Building on the essay, our next project on citizen engagement includes a study of its relation to experimentation as a novel mode of governance. The goal of this research is to provide insights into how citizen engagement manifests itself in the context of experimental governance at the neighbourhood level. By investigating the interactions between citizens, governments and other stakeholders in different types of participatory projects, we aim to gain a better understanding of how citizens are engaged and included in energy transition experiments and how their inclusion can be improved.

We take a relational approach to citizen engagement, viewing participatory processes as collective practices that both shape and are shaped by their ‘matter of concern’, their public, and their setting and staging. This view places emphasis on the form and conditions under which the interaction takes place. For example, the initiative of Places of Hope showed that engagement can be organised in diverse ways and can create new collectives….(More)”.

Rethinking citizen engagement for an inclusive energy transition

Electronic Frontier Foundation: “Law enforcement surveillance isn’t always secret. These technologies can be discovered in news articles and government meeting agendas, in company press releases and social media posts. It just hasn’t been aggregated before.

That’s the starting point for the Atlas of Surveillance, a collaborative effort between the Electronic Frontier Foundation and the University of Nevada, Reno Reynolds School of Journalism. Through a combination of crowdsourcing and data journalism, we are creating the largest-ever repository of information on which law enforcement agencies are using what surveillance technologies. The aim is to generate a resource for journalists, academics, and, most importantly, members of the public to check what’s been purchased locally and how technologies are spreading across the country.

We specifically focused on the most pervasive technologies, including drones, body-worn cameras, face recognition, cell-site simulators, automated license plate readers, predictive policing, camera registries, and gunshot detection. Although we have amassed more than 5,000 datapoints in 3,000 jurisdictions, our research only reveals the tip of the iceberg and underlines the need for journalists and members of the public to continue demanding transparency from criminal justice agencies….(More)”.

The Atlas of Surveillance

Oxford Commission on AI and Good Governance: “Many governments, public agencies and institutions already employ AI in providing public services, the distribution of resources and the delivery of governance goods. In the public sector, AI-enabled governance may afford new efficiencies that have the potential to transform a wide array of public service tasks.
But short-sighted design and use of AI can create new problems, entrench existing inequalities, and calcify and ultimately undermine government organizations.

Frameworks for the procurement and implementation of AI in public service have remained largely undeveloped. Frequently, existing regulations and national laws are no longer fit for purpose to ensure good behaviour (of either AI or private suppliers) and are ill-equipped to provide guidance on the democratic use of AI. As technology evolves rapidly, we need rules to guide the use of AI in ways that safeguard democratic values. Under what conditions can AI be put into service for good governance?

We offer a framework for integrating AI with good governance. We believe that with dedicated attention and evidence-based policy research, it should be possible to overcome the combined technical and organizational challenges of successfully integrating AI with good governance. Doing so requires working towards:


  • Inclusive Design: issues around discrimination and bias of AI in relation to inadequate data sets, exclusion of minorities and under-represented groups, and the lack of diversity in design.
  • Informed Procurement: issues around acquisition and development in relation to due diligence, design and usability specifications, and the assessment of risks and benefits.
  • Purposeful Implementation: issues around the use of AI in relation to interoperability, training needs for public servants, and integration with decision-making processes.
  • Persistent Accountability: issues around the accountability and transparency of AI in relation to ‘black box’ algorithms, the interpretability and explainability of systems, and monitoring and auditing…(More)”

Four Principles for Integrating AI & Good Governance

MIT Open Learning: “Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be….

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources….(More)”.

Tackling the misinformation epidemic with “In Event of Moon Disaster”

Essay by Scott E. Page: “The total impact of the coronavirus pandemic—the loss of life and the economic, social, and psychological costs arising from both the pandemic itself and the policies implemented to prevent its spread—defies any characterization. Though the pandemic continues to unsettle, disrupt, and challenge communities, we might take a moment to appreciate and applaud the diversity, breadth, and scope of our responses—from individual actions to national policies—and even more important, to reflect on how they will produce a post–Covid-19 world far better than the world that preceded it.

In this brief essay, I describe how our adaptive responses to the coronavirus will lead to beneficial policy innovations. I do so from the perspective of a many-model thinker. By that I mean that I will use several formal models to theoretically elucidate the potential pathways to creating a better world. I offer this with the intent that it instills optimism that our current efforts to confront this tragic and difficult challenge will do more than combat the virus now and teach us how to combat future viruses. They will, in the long run, result in an enormous number of innovations in policy, business practices, and our daily lives….(More)”.

The Coronavirus and Innovation

Federica Cocco and Alan Smith at the Financial Times: “… To understand the historical roots of black data activism, we have to return to October 1899. Back then, Thomas Calloway, a clerk in the War Department, wrote to the educator Booker T Washington about his pitch for an “American Negro Exhibit” at the 1900 Exposition Universelle in Paris. It was right in the middle of the scramble for Africa and Europeans had developed a morbid fascination with the people they were trying to subjugate.

To Calloway, the Paris exhibition offered a unique venue to sway the global elite to acknowledge “the possibilities of the Negro” and to influence cultural change in the US from an international platform.

It is hard to overstate the importance of international fairs at the time. They were a platform to bolster the prestige of nations. In Delivering Views: Distant Cultures in Early Postcards, Robert Rydell writes that fairs had become “a vehicle that, perhaps next to the church, had the greatest capacity to influence a mass audience”….

For the Paris World Fair, Du Bois and a team of Atlanta University students and alumni designed and drew by hand more than 60 bold data portraits. A first set used Georgia as a case study to illustrate the progress made by African Americans since the Civil War.

A second set showed how “the descendants of former African slaves now in residence in the United States of America” had become lawyers, doctors, inventors and musicians. For the first time, the growth of literacy and employment rates, the value of assets and land owned by African Americans and their growing consumer power were there for everyone to see. At the 1900 World Fair, the “Exhibit of American Negroes” took up a prominent spot in the Palace of Social Economy. “As soon as they entered the building, visitors were inundated by examples of black excellence,” says Whitney Battle-Baptiste, director of the WEB Du Bois Center at the University of Massachusetts Amherst and co-author of WEB Du Bois’s Data Portraits: Visualizing Black America….(More)”

Working with students and alumni from Atlanta University, Du Bois created 60 bold data portraits for the ‘Exhibit of American Negroes’ © Library of Congress, Prints & Photographs Division

Race and America: why data matters

Courtney Linder at Popular Mechanics: “Several prominent academic mathematicians want to sever ties with police departments across the U.S., according to a letter submitted to Notices of the American Mathematical Society on June 15. The letter arrived weeks after widespread protests against police brutality, and has inspired over 1,500 other researchers to join the boycott.

These mathematicians are urging fellow researchers to stop all work related to predictive policing software, which broadly includes any data analytics tools that use historical data to help forecast future crime, potential offenders, and victims. The technology is supposed to use probability to help police departments tailor their neighborhood coverage so it puts officers in the right place at the right time….

Figure: a flow chart showing how predictive policing works (source: RAND)

According to a 2013 research briefing from the RAND Corporation, a nonprofit think tank in Santa Monica, California, predictive policing is made up of a four-part cycle (shown above). In the first two steps, researchers collect and analyze data on crimes, incidents, and offenders to come up with predictions. From there, police intervene based on the predictions, usually taking the form of an increase in resources at certain sites at certain times. The fourth step is, ideally, reducing crime.

“Law enforcement agencies should assess the immediate effects of the intervention to ensure that there are no immediately visible problems,” the authors note. “Agencies should also track longer-term changes by examining collected data, performing additional analysis, and modifying operations as needed.”

In many cases, predictive policing software was meant to be a tool to augment police departments facing budget crises, with fewer officers to cover a region. If cops can target certain geographical areas at certain times, then they can get ahead of the 911 calls and maybe even reduce the rate of crime.

But in practice, the accuracy of the technology has been contested—and it’s even been called racist….(More)”.

Why Hundreds of Mathematicians Are Boycotting Predictive Policing

Introduction to a Special Blog Series by NIST: “…How can we use data to learn about a population, without learning about specific individuals within the population? Consider these two questions:

  1. “How many people live in Vermont?”
  2. “How many people named Joe Near live in Vermont?”

The first reveals a property of the whole population, while the second reveals information about one person. We need to be able to learn about trends in the population while preventing the ability to learn anything new about a particular individual. This is the goal of many statistical analyses of data, such as the statistics published by the U.S. Census Bureau, and machine learning more broadly. In each of these settings, models are intended to reveal trends in populations, not reflect information about any single individual.

But how can we answer the first question — “How many people live in Vermont?” — which we’ll refer to as a query, while preventing the second question — “How many people named Joe Near live in Vermont?” — from being answered? The most widely used solution is called de-identification (or anonymization), which removes identifying information from the dataset. (We’ll generally assume a dataset contains information collected from many individuals.) Another option is to allow only aggregate queries, such as an average over the data. Unfortunately, we now understand that neither approach actually provides strong privacy protection. De-identified datasets are subject to database-linkage attacks. Aggregation only protects privacy if the groups being aggregated are sufficiently large, and even then, privacy attacks are still possible [1, 2, 3, 4].
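The fragility of de-identification is easy to demonstrate. The sketch below (not from the NIST series; all records, names, and ZIP codes are made-up toy data) shows how a dataset stripped of names can be re-linked to identities by joining against a public dataset on quasi-identifiers such as ZIP code, birth date, and sex:

```python
# "De-identified" medical records: names removed, but quasi-identifiers remain.
deidentified_medical = [
    {"zip": "05401", "birth_date": "1970-02-14", "sex": "F", "diagnosis": "flu"},
    {"zip": "05602", "birth_date": "1985-07-03", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (e.g. a voter roll) containing the same quasi-identifiers.
public_voter_roll = [
    {"name": "Alice Smith", "zip": "05401", "birth_date": "1970-02-14", "sex": "F"},
    {"name": "Bob Jones", "zip": "05602", "birth_date": "1985-07-03", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(records, roll):
    """Join the two datasets on the quasi-identifier columns,
    re-attaching names to supposedly anonymous records."""
    reidentified = []
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        for voter in roll:
            if tuple(voter[q] for q in QUASI_IDENTIFIERS) == key:
                reidentified.append({"name": voter["name"], "diagnosis": rec["diagnosis"]})
    return reidentified

print(link(deidentified_medical, public_voter_roll))
```

Because each quasi-identifier combination here is unique to one person, every “anonymous” record re-acquires a name; real-world linkage attacks on health and voter data have worked in essentially this way.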

Differential Privacy

Differential privacy [5, 6] is a mathematical definition of what it means to have privacy. It is not a specific process like de-identification, but a property that a process can have. For example, it is possible to prove that a specific algorithm “satisfies” differential privacy.

Informally, differential privacy guarantees the following for each individual who contributes data for analysis: the output of a differentially private analysis will be roughly the same, whether or not you contribute your data. A differentially private analysis is often called a mechanism, and we denote it ℳ.

Figure 1: Informal Definition of Differential Privacy

Figure 1 illustrates this principle. Answer “A” is computed without Joe’s data, while answer “B” is computed with Joe’s data. Differential privacy says that the two answers should be indistinguishable. This implies that whoever sees the output won’t be able to tell whether or not Joe’s data was used, or what Joe’s data contained.

We control the strength of the privacy guarantee by tuning the privacy parameter ε, also called a privacy loss or privacy budget. The lower the value of the ε parameter, the more indistinguishable the results, and therefore the more each individual’s data is protected.

Figure 2: Formal Definition of Differential Privacy
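The figure itself did not survive extraction, but the standard formal statement of the definition [5, 6] is: a randomized mechanism ℳ satisfies ε-differential privacy if, for any two datasets D and D′ that differ in one individual’s data, and for any set S of possible outputs,

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S]
```

Because the bound holds in both directions (swapping D and D′), the two output distributions are within a multiplicative factor e^ε of each other, which is the formal sense in which answers “A” and “B” in Figure 1 are indistinguishable.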

We can often answer a query with differential privacy by adding some random noise to the query’s answer. The challenge lies in determining where to add the noise and how much to add. One of the most commonly used mechanisms for adding noise is the Laplace mechanism [5, 7]. 

Queries with higher sensitivity require adding more noise in order to satisfy a given privacy parameter ε, and this extra noise has the potential to make results less useful. We will describe sensitivity and this tradeoff between privacy and usefulness in more detail in future blog posts….(More)”.
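As a concrete sketch (toy data; the function names are our own, not from the NIST series), here is the Laplace mechanism applied to the running counting query. A counting query has sensitivity 1, since adding or removing one person changes the true count by at most 1, so the noise scale is 1/ε:

```python
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale): the difference of two
    independent exponential draws with mean `scale`."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy
    using the Laplace mechanism.

    Sensitivity of a count is 1, so the noise scale is
    sensitivity / epsilon = 1 / epsilon.
    """
    true_answer = sum(1 for record in records if predicate(record))
    return true_answer + laplace_noise(1.0 / epsilon)

# Toy population for the query "How many people live in Vermont?"
people = [{"name": f"person{i}", "state": "VT"} for i in range(624)]
noisy = private_count(people, lambda r: r["state"] == "VT", epsilon=0.5)
# `noisy` is close to the true count of 624; a smaller epsilon
# (stronger privacy) spreads the noise more widely.
```

Each release of a noisy answer consumes some of the privacy budget ε, which is why repeated queries against the same data must share a budget rather than each getting a fresh one.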

Differential Privacy for Privacy-Preserving Data Analysis

Paper by Robert M Gonzalez, Matthew Harvey and Foteini Tzachrista: “Empirical evidence on the effectiveness of grassroots monitoring is mixed. This paper proposes a previously unexplored mechanism that may explain this result. We argue that the presence of credible and effective top-down monitoring alternatives can undermine citizen participation in grassroots monitoring efforts. Building on Olken’s (2009) road-building field experiment in Indonesia, we find a large and robust effect of the participation interventions on missing expenditures in villages without an audit in place. However, this effect vanishes as soon as an audit is simultaneously implemented in the village. We find evidence of crowding-out effects: in government audit villages, individuals are less likely to attend, talk, and actively participate in accountability meetings. They are also significantly less likely to voice general problems, corruption-related problems, and to take serious actions to address these problems. Despite policies promoting joint implementation of top-down and bottom-up interventions, this paper shows that top-down monitoring can undermine rather than complement grassroots efforts….(More)”.

Monitoring Corruption: Can Top-down Monitoring Crowd-Out Grassroots Participation?
