How to Sustain Your Activism Against Police Brutality Beyond this Moment


Article by Bethany Gordon: “…Despite the haunting nature of these details and the different features of this moment, I am worried that empathetic voices lifting up this cause will quiet too soon for lasting change to occur. But it doesn’t have to happen this way. Gaining a better understanding of the empathy we feel in these moments of awareness and advocacy can help us take a more behaviorally sustainable approach.

Empathy is a complex psychological phenomenon, describing eight distinct ways that we respond to one another’s experiences and emotions, but most commonly defined in the dictionary as “the ability to understand and share the feelings of another.” Using this broader definition, scholars and activists have debated how effective empathy is as a tool for behavior change—particularly when it comes to fighting racism. Paul Bloom argues that empathy allows our bias to drive our decision-making, bell hooks states that empathy is not a promising avenue to systemic racial change, and Alisha Gaines analyzes how an overemphasis on racial empathy in a 1944 landmark study, “An American Dilemma,” led to a blindness about the impact of systemic and institutional racial barriers. This more general understanding and application of empathy has not been an effective aid to fighting systemic oppression and has led to a lot of (well-meaning?) blackface.

A more nuanced understanding of empathy—and its related concepts—may help us use it more effectively in the fight against racism. There are two strains of empathy that are relevant to the George Floyd protests and can help us better understand (and possibly change) our response: empathic distress and empathic concern, also known as compassion.

Empathic distress is a type of empathy we feel when we are disturbed by witnessing another’s suffering. Empathic distress is an egocentric response—a reaction that places our own well-being at its center. When we’re motivated to act through empathic distress, our ultimate goal is to alleviate our own suffering. This may mean we take action to help another person, but it could also mean we distract ourselves from their suffering.

Compassion is a type of empathy that is other-oriented. Compassion operates when you feel for another person rather than being distressed by their suffering, thereby making your ultimate goal about fixing the actual problem….(More)”.

IoT Security Is a Mess. Privacy ‘Nutrition’ Labels Could Help


Lily Hay Newman at Wired: “…Given that IoT security seems unlikely to magically improve anytime soon, researchers and regulators are rallying behind a new approach to managing IoT risk. Think of it as nutrition labels for embedded devices.

At the IEEE Symposium on Security & Privacy last month, researchers from Carnegie Mellon University presented a prototype security and privacy label they created based on interviews and surveys of people who own IoT devices, as well as privacy and security experts. They also published a tool for generating their labels. The idea is to shed light on a device’s security posture but also explain how it manages user data and what privacy controls it has. For example, the labels highlight whether a device can get security updates and how long a company has pledged to support it, as well as the types of sensors present, the data they collect, and whether the company shares that data with third parties.

“In an IoT setting, the amount of sensors and information you have about users is potentially invasive and ubiquitous,” says Yuvraj Agarwal, a networking and embedded systems researcher who worked on the project. “It’s like trying to fix a leaky bucket. So transparency is the most important part. This work shows and enumerates all the choices and factors for consumers.”

Nutrition labels on packaged foods have a certain amount of standardization around the world, but they’re still more opaque than they could be. And security and privacy issues are even less intuitive to most people than soluble and insoluble fiber. So the CMU researchers focused a lot of their efforts on making their IoT label as transparent and accessible as possible. To that end, they included both a primary and secondary layer to the label. The primary label is what would be printed on device boxes. To access the secondary label, you could follow a URL or scan a QR code to see more granular information about a device….(More)”.
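The two-layer structure described above can be sketched in code. This is a hypothetical illustration of the idea, not the CMU researchers' actual schema: the field names and the example device are invented for clarity. The primary layer holds the headline facts printed on the box; the secondary layer adds the granular detail a QR code or URL would lead to.

```python
# Hypothetical sketch of a two-layer IoT privacy/security label,
# loosely modeled on the CMU prototype. Field names are illustrative.

def make_label(device):
    """Split device facts into a printable primary layer and a
    detailed secondary layer reachable via URL or QR code."""
    primary = {
        "name": device["name"],
        "security_updates_until": device["updates_until"],
        "sensors": device["sensors"],
        "data_shared_with_third_parties": device["shares_data"],
    }
    secondary = {
        **primary,  # everything on the box, plus the finer detail
        "data_collected": device["data_collected"],
        "privacy_controls": device["privacy_controls"],
        "details_url": device["url"],  # what the QR code would encode
    }
    return primary, secondary

primary, secondary = make_label({
    "name": "Acme Smart Camera",          # invented example device
    "updates_until": "2025-01",
    "sensors": ["camera", "microphone"],
    "shares_data": True,
    "data_collected": ["video", "audio", "usage logs"],
    "privacy_controls": ["mute switch", "data deletion request"],
    "url": "https://example.com/label/acme-camera",
})
```

The design point is that the primary layer is a strict subset of the secondary one, so a shopper comparing boxes and a researcher auditing a device draw from the same underlying record.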

How Data-Driven Cities Respond Swiftly and Effectively to COVID-19


Blog Post by Jennifer Park, Lauren Su, Lisa Fiedler, and Madeleine Weatherhead: “Since January of this year, the novel coronavirus has swept rapidly throughout the United States, leaving no city untouched. To contain the virus’ spread and protect residents’ health and livelihoods, local leaders have had to act swiftly and decisively. It is a challenge in scope and scale unlike any other in recent history — and it has underscored the power of data to guide life-and-death decisions and build trust.

Take, for example, Los Angeles. As cities across the country began issuing states of emergency and acting to promote public health, Mayor Eric Garcetti quickly identified the city’s response priorities: supporting families, small businesses, healthcare workers, and unhoused Angelenos, and increasing the healthcare equipment and testing kits available for the city. Mayor Garcetti tapped his Chief Information Officer and Innovation Team to collect and analyze data, inform decisions, and share real-time information publicly.

A snapshot of Los Angeles’ publicly shared data from one of the city’s daily COVID-19 summary briefings. Image courtesy of the City of Los Angeles’ Innovation Team.

The Mayor was soon conducting daily briefings, updating the public on the latest virus-related data and informing city residents about various decisions made by the city — from pausing parking rules enforcement to opening thousands of temporary shelter beds. He used data to justify key decisions, linking stay-at-home orders to a decrease in COVID-19 cases from week to week.

Los Angeles’ swift response built on an existing culture of leveraging data to set goals, make decisions, and communicate with the public. Its leaders are now seeing the positive impact of having invested in foundational data capacity — regular tracking of cases, hospital capacity, and infection rates has proven vital to accelerating the city’s responses to COVID-19.

Other cities, too, have leaned on established data practices and infrastructure in their response efforts, both to the benefit of their residents and to lay a stronger foundation to guide recovery….(More)”.

How Data Can Map and Make Racial Inequality More Visible (If Done Responsibly)


Reflection Document by The GovLab: “Racism is a systemic issue that pervades every aspect of life in the United States and around the world. In recent months, its corrosive influence has been made starkly visible, especially on Black people. Many people are hurting. Their rage and suffering stem from centuries of exclusion and from being subject to repeated bias and violence. Across the country, there have been protests decrying racial injustice. Activists have called upon the government to condemn bigotry and racism, to act against injustice, to address systemic and growing inequality.

Institutions need to take meaningful action to address such demands. Though racism is not experienced in the same way by all communities of color, policymakers must respond to the anxieties and apprehensions of Black people as well as those of communities of color more generally. This work will require institutions and individuals to reflect on how they may be complicit in perpetuating structural and systemic inequalities and harm and to ask better questions about the inequities that exist in society (laid bare in both recent acts of violence and in racial disadvantages in health outcomes during the ongoing COVID-19 crisis). This work is necessary but unlikely to be easy. As Rashida Richardson, Director of Policy Research at the AI Now Institute at NYU, notes:

“Social and political stratifications also persist and worsen because they are embedded into our social and legal systems and structures. Thus, it is difficult for most people to see and understand how bias and inequalities have been automated or operationalized over time.”

We believe progress can be made, at least in part, through responsible data access and analysis, including increased availability of (disaggregated) data through data collaboration. Of course, data is only one part of the overall picture, and we make no claims that data alone can solve such deeply entrenched problems. Nonetheless, data can have an impact by making inequalities resulting from racism more quantifiable and inaction less excusable.

…Prioritizing any of these topics will also require increased community engagement and participatory agenda setting. Likewise, we are deeply conscious that data can have a negative as well as positive impact and that technology can perpetuate racism when designed and implemented without the input and participation of minority communities and organizations. While our report here focuses on the promise of data, we need to remain aware of the potential to weaponize data against vulnerable and already disenfranchised communities. In addition, (hidden) biases in data collected and used in AI algorithms, as well as in a host of other areas across the data life cycle, will only exacerbate racial inequalities if not addressed….(More)”.

ALSO: The piece is supplemented by a crowdsourced listing of Data-Driven Efforts to Address Racial Inequality.

Fear of a Black and Brown Internet: Policing Online Activism


Paper by Sahar F. Aziz and Khaled A. Beydoun: “Virtual surveillance is the modern extension of established policing models that tie dissident Muslim advocacy to terror suspicion and Black activism to political subversion. Countering Violent Extremism (“CVE”) and Black Identity Extremism (“BIE”) programs that specifically target Muslim and Black populations are shifting from on the ground to online.

Law enforcement exploits social media platforms — where activism and advocacy are robust — to monitor and crack down on activists. In short, the new policing is the old policing, but it is stealthily morphing and moving onto virtual platforms where activism is fluidly unfolding in real time. This Article examines how the law’s failure to keep up with technological advancements in social media poses serious risks to the ability of minority communities to mobilize against racial and religious injustice….(More)”.

Terms of Disservice: How Silicon Valley is Destructive by Design


Book by Dipayan Ghosh on “Designing a new digital social contract for our technological future…High technology presents a paradox. In just a few decades, it has transformed the world, making almost limitless quantities of information instantly available to billions of people and reshaping businesses, institutions, and even entire economies. But it also has come to rule our lives, addicting many of us to the march of megapixels across electronic screens both large and small.

Despite its undeniable value, technology is exacerbating deep social and political divisions in many societies. Elections influenced by fake news and unscrupulous hidden actors, the cyber-hacking of trusted national institutions, the vacuuming of private information by Silicon Valley behemoths, ongoing threats to vital infrastructure from terrorist groups and even foreign governments—all these concerns are now part of the daily news cycle and are certain to become increasingly serious into the future.

In this new world of endless technology, how can individuals, institutions, and governments harness its positive contributions while protecting each of us, no matter who or where we are?

In this book, a former Facebook public policy adviser who went on to assist President Obama in the White House offers practical ideas for using technology to create an open and accessible world that protects all consumers and civilians. As a computer scientist turned policymaker, Dipayan Ghosh answers the biggest questions about technology facing the world today. Providing clear and understandable explanations for complex issues, Terms of Disservice will guide industry leaders, policymakers, and the general public as we think about how we ensure that the Internet works for everyone, not just Silicon Valley….(More)”.

Toward Inclusive Urban Technology


Report by Denise Linn Riedl: “Our cities are changing at an incredible pace. The technology being deployed on our sidewalks and streetlights has the potential to improve mobility, sustainability, connectivity, and city services.

Public value and public inclusion in this change, however, are not inevitable. Depending on how these technologies are deployed, they have the potential to increase inequities and distrust as much as they can create responsive government services.

Recognizing this tension, an initial coalition of local practitioners began collaborating in 2019 with the support of the Benton Institute for Broadband & Society. We combined knowledge of and personal experience with local governments to tackle a common question: What does procedural justice look like when cities deploy new technology?

This guide is meant for any local worker—inside or outside of government—who is helping to plan or implement technological change in their community. It’s a collection of experiences, cases, and best practices that we hope will be valuable and will make projects stronger, more sustainable, and more inclusive….(More)”.

IBM quits facial recognition, joins call for police reforms


AP Article by Matt O’Brien: “IBM is getting out of the facial recognition business, saying it’s concerned about how the technology can be used for mass surveillance and racial profiling.

Ongoing protests responding to the death of George Floyd have sparked a broader reckoning over racial injustice and a closer look at the use of police technology to track demonstrators and monitor American neighborhoods.

IBM is one of several big tech firms that had earlier sought to improve the accuracy of their face-scanning software after research found racial and gender disparities. But its new CEO is now questioning whether it should be used by police at all.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” wrote CEO Arvind Krishna in a letter sent Monday to U.S. lawmakers.

IBM’s decision to stop building and selling facial recognition software is unlikely to affect its bottom line, since the tech giant is increasingly focused on cloud computing while an array of lesser-known firms have cornered the market for government facial recognition contracts.

“But the symbolic nature of this is important,” said Mutale Nkonde, a research fellow at Harvard and Stanford universities who directs the nonprofit AI For the People.

Nkonde said IBM shutting down a business “under the guise of advancing anti-racist business practices” shows that it can be done and makes it “socially unacceptable for companies who tweet Black Lives Matter to do so while contracting with the police.”…(More)”.

Using Algorithms to Address Trade-Offs Inherent in Predicting Recidivism


Paper by Jennifer L. Skeem and Christopher Lowenkamp: “Although risk assessment has increasingly been used as a tool to help reform the criminal justice system, some stakeholders are adamantly opposed to using algorithms. The principal concern is that any benefits achieved by safely reducing rates of incarceration will be offset by costs to racial justice claimed to be inherent in the algorithms themselves. But fairness tradeoffs are inherent to the task of predicting recidivism, whether the prediction is made by an algorithm or human.

Based on a matched sample of 67,784 Black and White federal supervisees assessed with the Post Conviction Risk Assessment (PCRA), we compare how three alternative strategies for “debiasing” algorithms affect these tradeoffs, using arrest for a violent crime as the criterion. These candidate algorithms all strongly predict violent re-offending (AUCs = .71–.72), but vary in their association with race (r = .00–.21) and shift tradeoffs between balance in positive predictive value and false positive rates. Providing algorithms with access to race (rather than omitting race or ‘blinding’ its effects) can maximize calibration and minimize imbalanced error rates. Implications for policymakers with value preferences for efficiency vs. equity are discussed…(More)”.
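The tradeoff the abstract names can be made concrete with a small computation. The sketch below is not the paper's PCRA analysis; it uses tiny invented groups purely to show the mechanism: when two groups have different base rates of the predicted outcome, a classifier that behaves similarly on both will generally not equalize positive predictive value (PPV) and false positive rate (FPR) across them at the same time.

```python
# Illustrative only: toy data, not the PCRA sample from the paper.

def ppv_fpr(y_true, y_pred):
    """Positive predictive value and false positive rate for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return ppv, fpr

# Two toy groups with different base rates of the outcome (2/6 vs 3/6).
group_a = {"y_true": [1, 1, 0, 0, 0, 0], "y_pred": [1, 1, 1, 0, 0, 0]}
group_b = {"y_true": [1, 1, 1, 0, 0, 0], "y_pred": [1, 1, 1, 1, 0, 0]}

ppv_a, fpr_a = ppv_fpr(**group_a)  # PPV = 2/3, FPR = 1/4
ppv_b, fpr_b = ppv_fpr(**group_b)  # PPV = 3/4, FPR = 1/3
```

Here each classifier catches every true positive and flags exactly one extra person, yet both PPV and FPR come out unequal across groups; equalizing one metric by adjusting thresholds would push the other further apart. That arithmetic, not any particular algorithm, is the source of the tradeoff the authors study.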

Technical Excellence and Scale


Cory Doctorow at EFF: “In America, we hope that businesses will grow by inventing amazing things that people love – rather than through deep-pocketed catch-and-kill programs in which every competitor is bought and tamed before it can grow to become a threat. We want vibrant, competitive, innovative markets where companies vie to create the best products. Growth solely through merger-and-acquisition helps create a world in which new firms compete to be bought up and absorbed into the dominant players, and customers who grow dissatisfied with a product or service and switch to a “rival” find that they’re still patronizing the same company—just another division.

To put it bluntly: we want companies that are good at making things as well as buying things.

This isn’t the whole story, though.

Small companies with successful products can become victims of their own success. As they are overwhelmed by eager new customers, they are strained beyond their technical and financial limits – for example, they may be unable to buy server hardware fast enough, and unable to lash that hardware together in efficient ways that let them scale up to meet demand.

When we look at the once small, once beloved companies that are now mere divisions of large, widely mistrusted ones—Instagram and Facebook; YouTube and Google; Skype and Microsoft; Dark Sky and Apple—we can’t help but notice that they are running at unimaginable scale, and moreover, they’re running incredibly well.

These services were once plagued with outages, buffering delays, overcapacity errors, slowdowns, and a host of other evils of scale. Today, they run so well that outages are newsworthy events.

There’s a reason for that: big tech companies are really good at being big. Whatever you think of Amazon, you can’t dispute that it gets a lot of parcels from A to B with remarkably few bobbles. Google’s search results arrive in milliseconds, Instagram photos load as fast as you can scroll them, and even Skype is far more reliable than in the pre-Microsoft days. These services have far more users than they ever did as independents, and yet, they are performing better than they did in those early days.

Can we really say that this is merely “buying things” and not also “making things”? Isn’t this innovation? Isn’t this technical accomplishment? It is. Does that mean big = innovative? It does not….(More)”.