Stefaan Verhulst

Book by Ruha Benjamin: “From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to understand how emerging technologies can reinforce White supremacy and deepen social inequity.

Benjamin argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the “New Jim Code,” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. Moreover, she makes a compelling case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.

This illuminating guide provides conceptual tools for decoding tech promises with sociologically informed skepticism. In doing so, it challenges us to question not only the technologies we are sold but also the ones we ourselves manufacture….(More)”.

Race After Technology: Abolitionist Tools for the New Jim Code

Axios: “The ways Americans capture and share records of racist violence and police misconduct keep changing, but the pain of the underlying injustices they chronicle remains a stubborn constant.

Driving the news: After George Floyd’s death at the hands of Minneapolis police sparked wide protests, Minnesota Gov. Tim Walz said, “Thank God a young person had a camera to video it.”

Why it matters: From news photography to TV broadcasts to camcorders to smartphones, improvements in the technology of witness over the past century mean we’re more instantly and viscerally aware of each new injustice.

  • But unless our growing power to collect and distribute evidence of injustice can drive actual social change, the awareness these technologies provide just ends up fueling frustration and despair.

For decades, still news photography was the primary channel through which the public became aware of incidents of racial injustice.

  • A horrific 1930 photo of the lynching of J. Thomas Shipp and Abraham S. Smith, two black men in Marion, Indiana, brought the incident to national attention and inspired the song “Strange Fruit.” But the killers were never brought to justice.
  • Photos of the mutilated body of Emmett Till catalyzed a nationwide reaction to his 1955 lynching in Mississippi.

In the 1960s, television news footage brought scenes of police turning dogs and water cannons on peaceful civil rights protesters in Birmingham and Selma, Alabama, into viewers’ living rooms.

  • The TV coverage was moving in both senses of the word.

In 1991, a camcorder tape shot by a Los Angeles plumber named George Holliday captured images of cops brutally beating Rodney King.

  • In the pre-internet era, it was only after the King tape was broadcast on TV that Americans could see it for themselves.

Over the past decade, smartphones have enabled witnesses and protesters to capture and distribute photos and videos of injustice quickly — sometimes as it’s happening.

  • This power helped catalyze the Black Lives Matter movement beginning in 2013 and has played a growing role in broader public awareness of police brutality.

Between the lines: For a brief moment mid-decade, some hoped that the combination of a public well-supplied with video recording devices and requirements that police wear bodycams would introduce a new level of accountability to law enforcement.

The bottom line: Smartphones and social media deliver direct accounts of grief- and rage-inducing stories…(More)”.

The technology of witnessing brutality

Toolkit by AISP: “Societal “progress” is often marked by the construction of new infrastructure that fuels change and innovation. Just as railroads and interstate highways were the defining infrastructure projects of the 1800s and 1900s, the development of data infrastructure is a critical innovation of our century. Railroads and highways were drivers of development and prosperity for some investors and sites. Yet other individuals and communities were harmed, displaced, bypassed, ignored, and forgotten by those efforts.

At this moment in our history, we can co-create data infrastructure to promote racial equity and the public good, or we can invest in data infrastructure that disregards the historical, social, and political context—reinforcing racial inequity that continues to harm communities. Building data infrastructure without a racial equity lens and understanding of historical context will exacerbate existing inequalities along the lines of race, gender, class, and ability. Instead, we commit to contextualize our work in the historical and structural oppression that shapes it, and organize stakeholders across geography, sector, and experience to center racial equity throughout data integration….(More)”.

Centering Racial Equity Throughout Data Integration

Andrew Curry at the New York Times: “With people around the globe sheltering at home amid the pandemic, an archive of records documenting Nazi atrocities asked for help indexing them. Thousands joined the effort….

As the virus prompted lockdowns across Europe, the director of the Arolsen Archives — the world’s largest devoted to the victims of Nazi persecution — joined millions of others working remotely from home and spending lots more time in front of her computer.

“We thought, ‘Here’s an opportunity,’” said the director, Floriane Azoulay.

Two months later, the archive’s “Every Name Counts” project has attracted thousands of online volunteers to work as amateur archivists, indexing names from the archive’s enormous collection of papers. To date, they have added over 120,000 names, birth dates and prisoner numbers to the database.

“There’s been much more interest than we expected,” Ms. Azoulay said. “The fact that people were locked at home and so many cultural offerings have moved online has played a big role.”

It’s a big job: The Arolsen Archives are the largest collection of their kind in the world, with more than 30 million original documents. They contain information on the wartime experiences of as many as 40 million people, including Jews executed in extermination camps and forced laborers conscripted from across Nazi-occupied Europe.

The documents, which take up 16 miles of shelving, include things like train manifests, delousing records, work detail assignments and execution records…(More)”.

How Crowdsourcing Aided a Push to Preserve the Histories of Nazi Victims

Book by Christoph Bartneck, Christoph Lütge, Alan Wagner and Sean Welsh: “This open access book introduces the reader to the foundations of AI and ethics. It discusses issues of trust, responsibility, liability, privacy and risk. It focuses on the interaction between people and the AI systems and Robotics they use. Designed to be accessible for a broad audience, reading this book does not require prerequisite technical, legal or philosophical expertise. Throughout, the authors use examples to illustrate the issues at hand and conclude the book with a discussion on the application areas of AI and Robotics, in particular autonomous vehicles, automatic weapon systems and biased algorithms. A list of questions and further readings is also included for students willing to explore the topic further….(More)”.

An Introduction to Ethics in Robotics and AI

Kayte Spector-Bagdady et al. at the New England Journal of Medicine: “The advent of standardized electronic health records, sustainable biobanks, consumer-wellness applications, and advanced diagnostics has resulted in new health information repositories. As highlighted by the Covid-19 pandemic, these repositories create an opportunity for advancing health research by means of secondary use of data and biospecimens. Current regulations in this space give substantial discretion to individual organizations when it comes to sharing deidentified data and specimens. But some recent examples of health care institutions sharing individual-level data and specimens with companies have generated controversy. Academic medical centers are therefore both practically and ethically compelled to establish best practices for governing the sharing of such contributions with outside entities.1 We believe that the approach we have taken at Michigan Medicine could help inform the national conversation on this issue.

The Federal Policy for the Protection of Human Subjects offers some safeguards for research participants from whom data and specimens have been collected. For example, researchers must notify participants if commercial use of their specimens is a possibility. These regulations generally cover only federally funded work, however, and they don’t apply to deidentified data or specimens. Because participants value transparency regarding industry access to their data and biospecimens, our institution set out to create standards that would better reflect participants’ expectations and honor their trust. Using a principlist approach that balances beneficence and nonmaleficence, respect for persons, and justice, buttressed by recent analyses and findings regarding contributors’ preferences, Michigan Medicine established a formal process to guide our approach….(More)”.

Sharing Health Data and Biospecimens with Industry — A Principle-Driven, Practical Approach

Report on General and Child-specific Ethical Issues by Gabrielle Berman, Karen Carter, Manuel García-Herranz and Vedran Sekara: “The last few years have seen a proliferation of means and approaches being used to collect sensitive or identifiable data on children. Technologies such as facial recognition and other biometrics, increased processing capacity for ‘big data’ analysis and data linkage, and the roll-out of mobile and internet services and access have substantially changed the nature of data collection, analysis, and use.

Real-time data are essential to support decision-makers in government, development and humanitarian agencies such as UNICEF to better understand the issues facing children, plan appropriate action, monitor progress and ensure that no one is left behind. But the collation and use of personally identifiable data may also pose significant risks to children’s rights.

UNICEF has undertaken substantial work to provide a foundation to understand and balance the potential benefits and risks to children of data collection. This work includes the Industry Toolkit on Children’s Online Privacy and Freedom of Expression and a partnership with GovLab on Responsible Data for Children (RD4C) – which promotes good practice principles and has developed practical tools to assist field offices, partners and governments to make responsible data management decisions.

Balancing the need to collect data to support good decision-making versus the need to protect children from harm created through the collection of the data has never been more challenging than in the context of the global COVID-19 pandemic. The response to the pandemic has seen an unprecedented rapid scaling up of technologies to support digital contact tracing and surveillance. The initial approach has included:

  • tracking using mobile phones and other digital devices (tablet computers, the Internet of Things, etc.)
  • surveillance to support movement restrictions, including through the use of location monitoring and facial recognition
  • a shift from in-person service provision and routine data collection to the use of remote or online platforms (including new processes for identity verification)
  • an increased focus on big data analysis and predictive modelling to fill data gaps…(More)”.

Digital contact tracing and surveillance during COVID-19

Andrew Jack at the Financial Times: “When Mozambique was hit by two cyclones in rapid succession last year — causing death and destruction from a natural disaster on a scale not seen in Africa for a generation — government officials added an unusual recruit to their relief efforts. Apart from the usual humanitarian and health agencies, the National Health Institute also turned to Zenysis, a Silicon Valley start-up.

As the UN and non-governmental organisations helped to rebuild lives and tackle outbreaks of disease including cholera, Zenysis began gathering and analysing large volumes of disparate data. “When we arrived, there were 400 new cases of cholera a day and they were doubling every 24 hours,” says Jonathan Stambolis, the company’s chief executive. “None of the data was shared [between agencies]. Our software harmonised and integrated fragmented sources to produce a coherent picture of the outbreak, the health system’s ability to respond and the resources available.

“Three and a half weeks later, they were able to get infections down to zero in most affected provinces,” he adds. The government attributed that achievement to the availability of high-quality data to brief the public and international partners.

“They co-ordinated the response in a way that drove infections down,” he says. Zenysis formed part of a “virtual control room”, integrating information to help decision makers understand what was happening in the worst hit areas, identify sources of water contamination and where to prioritise cholera vaccinations.

It supported an “mAlert system”, which integrated health surveillance data into a single platform for analysis. The output was daily reports distilled from data issued by health facilities and accommodation centres in affected areas, disease monitoring and surveillance from laboratory testing….(More)”.
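The article does not describe the internals of Zenysis’s software, but the core task it credits — harmonizing fragmented sources into a coherent picture of an outbreak — is straightforward to illustrate. Below is a minimal, hypothetical Python sketch: two agencies report cholera cases under different schemas, and the snippet maps both onto a shared schema to produce a single daily epidemic curve. All column names, figures, and place labels are invented for illustration.

```python
# Hypothetical sketch of harmonizing fragmented case reports; not Zenysis code.
import pandas as pd

# Each agency reports cholera cases using its own schema and date format.
agency_a = pd.DataFrame({
    "report_date": ["2019-03-30", "2019-03-31"],  # ISO dates
    "new_cases": [180, 220],
    "province": ["Sofala", "Sofala"],
})
agency_b = pd.DataFrame({
    "date": ["30/03/2019", "31/03/2019"],  # day-first dates
    "cholera_count": [95, 130],
    "region": ["Sofala", "Sofala"],
})

# Harmonize: map each source onto a shared schema before combining.
a_std = agency_a.rename(columns={"report_date": "date", "new_cases": "cases",
                                 "province": "area"})
a_std["date"] = pd.to_datetime(a_std["date"])

b_std = agency_b.rename(columns={"cholera_count": "cases", "region": "area"})
b_std["date"] = pd.to_datetime(b_std["date"], dayfirst=True)

# Integrate: one coherent picture of new cases per area per day.
harmonized = pd.concat([a_std, b_std], ignore_index=True)
daily_curve = harmonized.groupby(["area", "date"], as_index=False)["cases"].sum()
print(daily_curve)
```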

How data analysis helped Mozambique stem a cholera outbreak

Article by Abdullah Almaatouq and Alex “Sandy” Pentland: “The idea of collective intelligence is not new. Research has long shown that in a wide range of settings, groups of people working together outperform individuals toiling alone. But how do drastic shifts in circumstances, such as people working mostly at a distance during the COVID-19 pandemic, affect the quality of collective decision-making? After all, public health decisions can be a matter of life and death, and business decisions in crisis periods can have lasting effects on the economy.

During a crisis, it’s crucial to manage the flow of ideas deliberatively and strategically so that communication pathways and decision-making are optimized. Our recently published research shows that optimal communication networks can emerge from within an organization when decision makers interact dynamically and receive frequent performance feedback. The results have practical implications for effective decision-making in times of dramatic change….

Our experiments illustrate the importance of dynamically configuring network structures and enabling decision makers to obtain useful, recurring feedback. But how do you apply such findings to real-world decision-making, whether remote or face to face, when constrained by a worldwide pandemic? In such an environment, connections among individuals, teams, and networks of teams must be continually reorganized in response to shifting circumstances and challenges. No single network structure is optimal for every decision, a fact that is clear in a variety of organizational contexts.
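As a thought experiment only — this is not the authors’ experimental setup — the dynamic-rewiring idea can be sketched in a few lines of Python: agents hold noisy estimates of an unknown quantity, receive performance feedback each round, learn from their best-performing neighbor, and rewire away from their worst. The parameters and update rules below are arbitrary assumptions chosen to keep the sketch short.

```python
# Toy illustration of dynamic network rewiring with performance feedback.
import random

random.seed(1)
TRUTH = 50.0                 # unknown quantity the group tries to estimate
N, ROUNDS, K = 20, 15, 3     # agents, feedback rounds, neighbors per agent

# Each agent starts with a noisy private estimate and K random neighbors.
estimates = [TRUTH + random.gauss(0, 20) for _ in range(N)]
neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]

def error(i):
    return abs(estimates[i] - TRUTH)  # the performance feedback signal

for _ in range(ROUNDS):
    new_estimates = estimates[:]
    for i in range(N):
        # Learn: move halfway toward the best-performing current neighbor.
        best = min(neighbors[i], key=error)
        new_estimates[i] = 0.5 * estimates[i] + 0.5 * estimates[best]
        # Rewire: swap the worst neighbor for a better-performing outsider,
        # emulating dynamic reconfiguration driven by frequent feedback.
        worst = max(neighbors[i], key=error)
        candidate = random.choice([j for j in range(N)
                                   if j != i and j not in neighbors[i]])
        if error(candidate) < error(worst):
            neighbors[i][neighbors[i].index(worst)] = candidate
    estimates = new_estimates
    print(round(sum(abs(e - TRUTH) for e in estimates) / N, 2))  # mean error
```

In this toy version the group’s mean error falls round over round, which is the qualitative pattern the passage describes: feedback plus rewiring lets a better communication structure emerge rather than being fixed in advance.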

Public sector. Consider the teams of advisers working with governments in creating guidelines to flatten the curve and help restart national economies. The teams are frequently reconfigured to leverage pertinent expertise and integrate data from many domains. They get timely feedback on how decisions affect daily realities (rates of infection, hospitalization, death) — and then adjust recommended public health protocols accordingly. Some team members move between levels, perhaps being part of a state-level team for a while, then federal, and then back to state. This flexibility ensures that people making big-picture decisions have input from those closer to the front lines.

Witness how Germany considered putting a brake on some of its reopening measures in response to a substantial, unexpected uptick in COVID-19 infections. Such time-sensitive decisions are not made effectively without a dynamic exchange of ideas and data. Decision makers must quickly adapt to facts reported by subject-area experts and regional officials who have the relevant information and analyses at a given moment….(More)”.

Dynamic Networks Improve Remote Decision-Making

Book by Khaled El Emam, Lucy Mosquera, and Richard Hoptroff: “Building and testing machine learning models requires access to large and diverse data. But where can you find usable datasets without running into privacy issues? This practical book introduces techniques for generating synthetic data—fake data generated from real data—so you can perform secondary analysis to do research, understand customer behaviors, develop new products, or generate new revenue.

Data scientists will learn how synthetic data generation provides a way to make such data broadly available for secondary purposes while addressing many privacy concerns. Analysts will learn the principles and steps for generating synthetic data from real datasets. And business leaders will see how synthetic data can help accelerate time to a product or solution.

This book describes:

  • Steps for generating synthetic data using multivariate normal distributions
  • Methods for distribution fitting covering different goodness-of-fit metrics
  • How to replicate the simple structure of original data
  • An approach for modeling data structure to consider complex relationships
  • Multiple approaches and metrics you can use to assess data utility
  • How analysis performed on real data can be replicated with synthetic data
  • Privacy implications of synthetic data and methods to assess identity disclosure…(More)”.
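As a rough illustration of the first item in that list — not code from the book — here is a minimal Python sketch of the multivariate normal approach: estimate the mean vector and covariance matrix from real records, sample synthetic records from the fitted distribution, and run a crude utility check. The dataset and variable names are hypothetical.

```python
# Minimal sketch: synthetic data from a fitted multivariate normal.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for "real" data: 1,000 records with two correlated variables,
# e.g. age and systolic blood pressure (hypothetical).
real = rng.multivariate_normal(mean=[45, 120],
                               cov=[[90, 40], [40, 150]], size=1000)

# Fit: estimate the mean vector and covariance matrix from the real data.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# Generate: draw synthetic records from the fitted distribution.
synthetic = rng.multivariate_normal(mean=mu, cov=sigma, size=1000)

# Crude utility check: means and correlations should roughly match.
print("real mean:", mu.round(1),
      " synthetic mean:", synthetic.mean(axis=0).round(1))
print("real corr:", np.corrcoef(real, rowvar=False)[0, 1].round(2),
      " synthetic corr:", np.corrcoef(synthetic, rowvar=False)[0, 1].round(2))
```

Real datasets rarely look multivariate normal, which is presumably why the book’s later items cover distribution fitting, goodness-of-fit metrics, and utility assessment.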

Practical Synthetic Data Generation: Balancing Privacy and the Broad Availability of Data
