
Stefaan Verhulst

Electronic Frontier Foundation: “Law enforcement surveillance isn’t always secret. These technologies can be discovered in news articles and government meeting agendas, in company press releases and social media posts. It just hasn’t been aggregated before.

That’s the starting point for the Atlas of Surveillance, a collaborative effort between the Electronic Frontier Foundation and the University of Nevada, Reno Reynolds School of Journalism. Through a combination of crowdsourcing and data journalism, we are creating the largest-ever repository of information on which law enforcement agencies are using what surveillance technologies. The aim is to generate a resource for journalists, academics, and, most importantly, members of the public to check what’s been purchased locally and how technologies are spreading across the country.

We specifically focused on the most pervasive technologies, including drones, body-worn cameras, face recognition, cell-site simulators, automated license plate readers, predictive policing, camera registries, and gunshot detection. Although we have amassed more than 5,000 datapoints in 3,000 jurisdictions, our research only reveals the tip of the iceberg and underlines the need for journalists and members of the public to continue demanding transparency from criminal justice agencies….(More)”.

The Atlas of Surveillance

Oxford Commission on AI and Good Governance: “Many governments, public agencies and institutions already employ AI in providing public services, the distribution of resources and the delivery of governance goods. In the public sector, AI-enabled governance may afford new efficiencies that have the potential to transform a wide array of public service tasks.
But short-sighted design and use of AI can create new problems, entrench existing inequalities, and calcify and ultimately undermine government organizations.

Frameworks for the procurement and implementation of AI in public service have widely remained undeveloped. Frequently, existing regulations and national laws are no longer fit for purpose to ensure good behaviour (of either AI or private suppliers) and are ill-equipped to provide guidance on the democratic use of AI.
As technology evolves rapidly, we need rules to guide the use of AI in ways that safeguard democratic values. Under what conditions can AI be put into service for good governance?

We offer a framework for integrating AI with good governance. We believe that with dedicated attention and evidence-based policy research, it should be possible to overcome the combined technical and organizational challenges of successfully integrating AI with good governance. Doing so requires working towards:


Inclusive Design: issues around discrimination and bias of AI in relation to inadequate data sets, exclusion of minorities and under-represented groups, and the lack of diversity in design.
Informed Procurement: issues around the acquisition and development in relation to due diligence, design and usability specifications and the assessment of risks and benefits.
Purposeful Implementation: issues around the use of AI in relation to interoperability, training needs for public servants, and integration with decision-making processes.
Persistent Accountability: issues around the accountability and transparency of AI in relation to ‘black box’ algorithms, the interpretability and explainability of systems, monitoring and auditing…(More)”

Four Principles for Integrating AI & Good Governance

MIT Open Learning: “Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be….

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources….(More)”.

Tackling the misinformation epidemic with “In Event of Moon Disaster”

Essay by Scott E. Page: “The total impact of the coronavirus pandemic—the loss of life and the economic, social, and psychological costs arising from both the pandemic itself and the policies implemented to prevent its spread—defies any characterization. Though the pandemic continues to unsettle, disrupt, and challenge communities, we might take a moment to appreciate and applaud the diversity, breadth, and scope of our responses—from individual actions to national policies—and even more important, to reflect on how they will produce a post–Covid-19 world far better than the world that preceded it.

In this brief essay, I describe how our adaptive responses to the coronavirus will lead to beneficial policy innovations. I do so from the perspective of a many-model thinker. By that I mean that I will use several formal models to theoretically elucidate the potential pathways to creating a better world. I offer this with the intent that it instills optimism that our current efforts to confront this tragic and difficult challenge will do more than combat the virus now and teach us how to combat future viruses. They will, in the long run, result in an enormous number of innovations in policy, business practices, and our daily lives….(More)”.

The Coronavirus and Innovation

Federica Cocco and Alan Smith at the Financial Times: “… To understand the historical roots of black data activism, we have to return to October 1899. Back then, Thomas Calloway, a clerk in the War Department, wrote to the educator Booker T Washington about his pitch for an “American Negro Exhibit” at the 1900 Exposition Universelle in Paris. It was right in the middle of the scramble for Africa and Europeans had developed a morbid fascination with the people they were trying to subjugate.

To Calloway, the Paris exhibition offered a unique venue to sway the global elite to acknowledge “the possibilities of the Negro” and to influence cultural change in the US from an international platform.

It is hard to overstate the importance of international fairs at the time. They were a platform to bolster the prestige of nations. In Delivering Views: Distant Cultures in Early Postcards, Robert Rydell writes that fairs had become “a vehicle that, perhaps next to the church, had the greatest capacity to influence a mass audience”….

For the Paris World Fair, Du Bois and a team of Atlanta University students and alumni designed and drew by hand more than 60 bold data portraits. A first set used Georgia as a case study to illustrate the progress made by African Americans since the Civil War.

A second set showed how “the descendants of former African slaves now in residence in the United States of America” had become lawyers, doctors, inventors and musicians. For the first time, the growth of literacy and employment rates, the value of assets and land owned by African Americans and their growing consumer power were there for everyone to see. At the 1900 World Fair, the “Exhibit of American Negroes” took up a prominent spot in the Palace of Social Economy. “As soon as they entered the building, visitors were inundated by examples of black excellence,” says Whitney Battle-Baptiste, director of the WEB Du Bois Center at the University of Massachusetts Amherst and co-author of WEB Du Bois’s Data Portraits: Visualizing Black America….(More)”

Working with students and alumni from Atlanta University, Du Bois created 60 bold data portraits for the ‘Exhibit of American Negroes’ © Library of Congress, Prints & Photographs Division

Race and America: why data matters

Courtney Linder at Popular Mechanics: “Several prominent academic mathematicians want to sever ties with police departments across the U.S., according to a letter submitted to Notices of the American Mathematical Society on June 15. The letter arrived weeks after widespread protests against police brutality, and has inspired over 1,500 other researchers to join the boycott.

These mathematicians are urging fellow researchers to stop all work related to predictive policing software, which broadly includes any data analytics tools that use historical data to help forecast future crime, potential offenders, and victims. The technology is supposed to use probability to help police departments tailor their neighborhood coverage so it puts officers in the right place at the right time….

[Figure: a flow chart showing how predictive policing works. Source: RAND]

According to a 2013 research briefing from the RAND Corporation, a nonprofit think tank in Santa Monica, California, predictive policing is made up of a four-part cycle (shown above). In the first two steps, researchers collect and analyze data on crimes, incidents, and offenders to come up with predictions. From there, police intervene based on the predictions, usually taking the form of an increase in resources at certain sites at certain times. The fourth step is, ideally, reducing crime.

“Law enforcement agencies should assess the immediate effects of the intervention to ensure that there are no immediately visible problems,” the authors note. “Agencies should also track longer-term changes by examining collected data, performing additional analysis, and modifying operations as needed.”

In many cases, predictive policing software was meant to be a tool to augment police departments facing budget crises, with fewer officers to cover a region. If cops can target certain geographical areas at certain times, then they can get ahead of the 911 calls and maybe even reduce the rate of crime.

But in practice, the accuracy of the technology has been contested—and it’s even been called racist….(More)”.

Why Hundreds of Mathematicians Are Boycotting Predictive Policing

Introduction to a Special Blog Series by NIST: “…How can we use data to learn about a population, without learning about specific individuals within the population? Consider these two questions:

1. “How many people live in Vermont?”
2. “How many people named Joe Near live in Vermont?”

The first reveals a property of the whole population, while the second reveals information about one person. We need to be able to learn about trends in the population while preventing the ability to learn anything new about a particular individual. This is the goal of many statistical analyses of data, such as the statistics published by the U.S. Census Bureau, and machine learning more broadly. In each of these settings, models are intended to reveal trends in populations, not reflect information about any single individual.

But how can we answer the first question, “How many people live in Vermont?” (which we’ll refer to as a query), while preventing the second question, “How many people named Joe Near live in Vermont?”, from being answered? The most widely used solution is called de-identification (or anonymization), which removes identifying information from the dataset. (We’ll generally assume a dataset contains information collected from many individuals.) Another option is to allow only aggregate queries, such as an average over the data. Unfortunately, we now understand that neither approach actually provides strong privacy protection. De-identified datasets are subject to database-linkage attacks. Aggregation only protects privacy if the groups being aggregated are sufficiently large, and even then, privacy attacks are still possible [1, 2, 3, 4].

Differential Privacy

Differential privacy [5, 6] is a mathematical definition of what it means to have privacy. It is not a specific process like de-identification, but a property that a process can have. For example, it is possible to prove that a specific algorithm “satisfies” differential privacy.

Informally, differential privacy guarantees the following for each individual who contributes data for analysis: the output of a differentially private analysis will be roughly the same, whether or not you contribute your data. A differentially private analysis is often called a mechanism, and we denote it ℳ.

Figure 1: Informal Definition of Differential Privacy

Figure 1 illustrates this principle. Answer “A” is computed without Joe’s data, while answer “B” is computed with Joe’s data. Differential privacy says that the two answers should be indistinguishable. This implies that whoever sees the output won’t be able to tell whether or not Joe’s data was used, or what Joe’s data contained.

We control the strength of the privacy guarantee by tuning the privacy parameter ε, also called a privacy loss or privacy budget. The lower the value of the ε parameter, the more indistinguishable the results, and therefore the more each individual’s data is protected.

Figure 2: Formal Definition of Differential Privacy
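Although the figure itself is not reproduced in this excerpt, the standard formal definition from the differential privacy literature can be stated in the notation used above: a mechanism ℳ satisfies ε-differential privacy if, for all pairs of datasets D and D′ differing in the data of a single individual, and for all sets S of possible outputs,

Pr[ℳ(D) ∈ S] ≤ e^ε · Pr[ℳ(D′) ∈ S]

In other words, the probability of any outcome changes by at most a multiplicative factor of e^ε when one person’s data is added or removed, which is the formal counterpart of the indistinguishability described informally above.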

We can often answer a query with differential privacy by adding some random noise to the query’s answer. The challenge lies in determining where to add the noise and how much to add. One of the most commonly used mechanisms for adding noise is the Laplace mechanism [5, 7]. 

Queries with higher sensitivity require adding more noise in order to satisfy a particular ε quantity of differential privacy, and this extra noise has the potential to make results less useful. We will describe sensitivity and this tradeoff between privacy and usefulness in more detail in future blog posts….(More)”.
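Though the post defers the details to later entries in the series, the Laplace mechanism it names can be sketched in a few lines. This is a minimal illustration, not the NIST authors’ implementation; the function name and the example numbers are hypothetical:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Return a differentially private version of a query answer.

    The noise scale grows with the query's sensitivity (how much one
    person's data can change the answer) and shrinks as epsilon grows
    (weaker privacy guarantee means less noise).
    """
    scale = sensitivity / epsilon
    return true_answer + np.random.laplace(loc=0.0, scale=scale)

# A counting query like "How many people live in Vermont?" has
# sensitivity 1: adding or removing one person's record changes the
# true count by at most 1. The count below is illustrative.
noisy_count = laplace_mechanism(623_989, sensitivity=1.0, epsilon=0.1)
```

With ε = 0.1 the noise has scale 10, so the released count is typically within a few tens of the true value: useful for population-level statistics, but enough to hide any one individual’s presence.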

Differential Privacy for Privacy-Preserving Data Analysis

Paper by Robert M Gonzalez, Matthew Harvey and Foteini Tzachrista: “Empirical evidence on the effectiveness of grassroots monitoring is mixed. This paper proposes a previously unexplored mechanism that may explain this result. We argue that the presence of credible and effective top-down monitoring alternatives can undermine citizen participation in grassroots monitoring efforts. Building on Olken’s (2009) road-building field experiment in Indonesia, we find a large and robust effect of the participation interventions on missing expenditures in villages without an audit in place. However, this effect vanishes as soon as an audit is simultaneously implemented in the village. We find evidence of crowding-out effects: in government audit villages, individuals are less likely to attend, talk, and actively participate in accountability meetings. They are also significantly less likely to voice general problems, corruption-related problems, and to take serious actions to address these problems. Despite policies promoting joint implementation of top-down and bottom-up interventions, this paper shows that top-down monitoring can undermine rather than complement grassroots efforts….(More)”.

Monitoring Corruption: Can Top-down Monitoring Crowd-Out Grassroots Participation?

Essay by Benjamin Kumpf: “…Here are some of the relevant trade-offs I identified. 

Rigour vs. Speed

How to best balance high-quality rigorous research and the need to gain actionable insights rapidly?  

Responding to a pandemic requires working at pace, while investing in ongoing research and the cross-fertilization of disciplines. In our response, we have witnessed the importance of strong networks with academia and DFID’s focus on high-quality research. In parallel, we invest in supporting partners with rapid data collection through methods such as phone surveys, field visits and onsite interviews where possible, as well as big data analysis and more. For example, through the International Growth Centre, DFID has supported a Sierra Leone COVID-19 dashboard, providing real-time data on current economic conditions and trends from phone-based surveys from 195 towns and villages across Sierra Leone. ….

Breadth vs. depth

How to best balance providing services to large proportions of populations in need, while addressing challenges of specific communities?  

We are seeing emerging evidence that the virus, and measures to prevent its spread, are disproportionately impacting marginalized communities and minorities. For example, indigenous people are disproportionately affected by the virus in Brazil, and Dalits are among the worst affected in India. In development and humanitarian contexts, it is paramount to guide innovation efforts with explicit values, including on the trade-off between scale and addressing last-mile challenges to leave no one behind. For example, to facilitate behaviour change and embed insights from behavioural science and adaptive practices, DFID is supporting the Hygiene Hub, hosted at the London School of Hygiene & Tropical Medicine. The Hub provides free-of-charge advisory services to governments and non-governmental organizations working on COVID-19 related challenges in low- and middle-income countries, balancing the need to reach large audiences and to design bespoke interventions for specific communities.

Exploration vs. adaptation

How to best diversify innovation efforts and investments between searching for local solutions and adapting proven approaches?

Adaptive vs. locally-led

How to best learn and adapt, while providing ownership to local players?

Single-point solutions vs. systems-practices

How to advance specific tech and non-tech innovations that address urgent needs, while further improving existing systems? 

Supporting domestic innovators vs. strengthening local solutions and ecosystems

We need explicit conversations to ensure better transparency about this trade-off in innovation investments generally.…(More)”.

Trade-offs and considerations for the future: Innovation and the COVID-19 response

Chas Kissick, Elliot Setzer, and Jacob Schulz at Lawfare: “In May of this year, Prime Minister Boris Johnson pledged the United Kingdom would develop a “world beating” track and trace system by June 1 to stop the spread of the novel coronavirus. But on June 18, the government quietly abandoned its coronavirus contact-tracing app, a key piece of the “world beating” strategy, and instead promised to switch to a model designed by Apple and Google. The delayed app will not be ready until winter, and the U.K.’s Junior Health Minister told reporters that “it isn’t a priority for us at the moment.” When Johnson came under fire in Parliament for the abrupt U-turn, he replied: “I wonder whether the right honorable and learned Gentleman can name a single country in the world that has a functional contact tracing app—there isn’t one.”

Johnson’s rebuttal is perhaps a bit reductive, but he’s not that far off.

You probably remember the idea of contact-tracing apps: the technological intervention that seemed to have the potential to save lives while enabling a hamstrung economy to safely inch back open; it was a fixation of many public health and privacy advocates; it was the thing that was going to help us get out of this mess if we could manage the risks.

Yet nearly three months after Google and Apple announced with great fanfare their partnership to build a contact-tracing API, contact-tracing apps have made an unceremonious exit from the front pages of American newspapers. Countries, states and localities continue to try to develop effective digital tracing strategies. But as Jonathan Zittrain puts it, the “bigger picture momentum appears to have waned.”

What’s behind contact-tracing apps’ departure from the spotlight? For one, there’s the onset of a larger pandemic apathy in the U.S.; many politicians and Americans seem to have thrown up their hands or put all their hopes in the speedy development of a vaccine. Yet the apps haven’t even made much of a splash in countries that have taken the pandemic more seriously. Anxieties about privacy persist. But technical shortcomings in the apps deserve the lion’s share of the blame. Countries have struggled to get bespoke apps developed by government technicians to work on Apple phones. The functionality of some Bluetooth-enabled models varies widely depending on small changes in phone positioning. And most countries have only convinced a small fraction of their populace to use national tracing apps.

Maybe it’s still possible that contact-tracing apps will make a miraculous comeback and approach the level of efficacy observers once anticipated.

But even if technical issues implausibly subside, the apps are operating in a world of unknowns.

Most centrally, researchers still have no real idea what level of adoption is required for the apps to actually serve their function. Some estimates suggest that 80 percent of current smartphone owners in a given area would need to use an app and follow its recommendations for digital contact tracing to be effective. But other researchers have noted that the apps could slow the rate of infections even if little more than 10 percent of a population used a tracing app. It will be an uphill battle even to hit the 10 percent mark in America, though. Survey data show that fewer than three in 10 Americans intend to use contact-tracing apps if they become available…(More).

What Ever Happened to Digital Contact Tracing?
