IBM quits facial recognition, joins call for police reforms


AP Article by Matt O’Brien: “IBM is getting out of the facial recognition business, saying it’s concerned about how the technology can be used for mass surveillance and racial profiling.

Ongoing protests responding to the death of George Floyd have sparked a broader reckoning over racial injustice and a closer look at the use of police technology to track demonstrators and monitor American neighborhoods.

IBM is one of several big tech firms that had earlier sought to improve the accuracy of their face-scanning software after research found racial and gender disparities. But its new CEO is now questioning whether it should be used by police at all.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” wrote CEO Arvind Krishna in a letter sent Monday to U.S. lawmakers.

IBM’s decision to stop building and selling facial recognition software is unlikely to affect its bottom line, since the tech giant is increasingly focused on cloud computing while an array of lesser-known firms have cornered the market for government facial recognition contracts.

“But the symbolic nature of this is important,” said Mutale Nkonde, a research fellow at Harvard and Stanford universities who directs the nonprofit AI For the People.

Nkonde said IBM shutting down a business “under the guise of advancing anti-racist business practices” shows that it can be done and makes it “socially unacceptable for companies who tweet Black Lives Matter to do so while contracting with the police.”…(More)”.

Using Algorithms to Address Trade-Offs Inherent in Predicting Recidivism


Paper by Jennifer L. Skeem and Christopher Lowenkamp: “Although risk assessment has increasingly been used as a tool to help reform the criminal justice system, some stakeholders are adamantly opposed to using algorithms. The principal concern is that any benefits achieved by safely reducing rates of incarceration will be offset by costs to racial justice claimed to be inherent in the algorithms themselves. But fairness tradeoffs are inherent to the task of predicting recidivism, whether the prediction is made by an algorithm or human.

Based on a matched sample of 67,784 Black and White federal supervisees assessed with the Post Conviction Risk Assessment (PCRA), we compare how three alternative strategies for “debiasing” algorithms affect these tradeoffs, using arrest for a violent crime as the criterion. These candidate algorithms all strongly predict violent re-offending (AUCs = .71–.72), but vary in their association with race (r = .00–.21) and shift tradeoffs between balance in positive predictive value and false positive rates. Providing algorithms with access to race (rather than omitting race or ‘blinding’ its effects) can maximize calibration and minimize imbalanced error rates. Implications for policymakers with value preferences for efficiency vs. equity are discussed…(More)”.
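The trade-off the paper describes can be made concrete with a small sketch. When two groups have different base rates of re-offending, a risk tool with equal positive predictive value (PPV) across groups will generally show unequal false positive rates (FPR), and vice versa. All names and numbers below are invented for illustration; this is not the PCRA or the paper's data.

```python
# Hypothetical illustration of the PPV/FPR trade-off in recidivism prediction.
# Two groups with different base rates, scored by the same (calibrated) tool.

def ppv_and_fpr(y_true, y_pred):
    """Compute (PPV, FPR) from binary outcome labels and binary risk flags."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    ppv = tp / (tp + fp)  # of those flagged high-risk, share who re-offend
    fpr = fp / (fp + tn)  # of non-re-offenders, share wrongly flagged
    return ppv, fpr

# Group A: 50% base rate; 50 of 100 flagged high-risk, 30 of them correctly.
ppv_a, fpr_a = ppv_and_fpr(
    [1] * 50 + [0] * 50,
    [1] * 30 + [0] * 20 + [1] * 20 + [0] * 30,
)

# Group B: 20% base rate; 20 of 100 flagged high-risk, 12 of them correctly.
ppv_b, fpr_b = ppv_and_fpr(
    [1] * 20 + [0] * 80,
    [1] * 12 + [0] * 8 + [1] * 8 + [0] * 72,
)

print(ppv_a, fpr_a)  # 0.6 0.4 — same PPV as group B...
print(ppv_b, fpr_b)  # 0.6 0.1 — ...but one quarter the false positive rate
```

Here the tool is equally calibrated for both groups (flagged people re-offend 60% of the time in each), yet non-re-offenders in the higher-base-rate group are four times as likely to be wrongly flagged — the kind of imbalance the debiasing strategies in the paper attempt to shift.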

Technical Excellence and Scale


Cory Doctorow at EFF: “In America, we hope that businesses will grow by inventing amazing things that people love – rather than through deep-pocketed catch-and-kill programs in which every competitor is bought and tamed before it can grow to become a threat. We want vibrant, competitive, innovative markets where companies vie to create the best products. Growth solely through merger-and-acquisition helps create a world in which new firms compete to be bought up and absorbed into the dominant players, and customers who grow dissatisfied with a product or service and switch to a “rival” find that they’re still patronizing the same company—just another division.

To put it bluntly: we want companies that are good at making things as well as buying things.

This isn’t the whole story, though.

Small companies with successful products can become victims of their own success. As they are overwhelmed by eager new customers, they are strained beyond their technical and financial limits – for example, they may be unable to buy server hardware fast enough, and unable to lash that hardware together in efficient ways that let them scale up to meet demand.

When we look at the once small, once beloved companies that are now mere divisions of large, widely mistrusted ones—Instagram and Facebook; YouTube and Google; Skype and Microsoft; Dark Sky and Apple—we can’t help but notice that they are running at unimaginable scale, and moreover, they’re running incredibly well.

These services were once plagued with outages, buffering delays, overcapacity errors, slowdowns, and a host of other evils of scale. Today, they run so well that outages are newsworthy events.

There’s a reason for that: big tech companies are really good at being big. Whatever you think of Amazon, you can’t dispute that it gets a lot of parcels from A to B with remarkably few bobbles. Google’s search results arrive in milliseconds, Instagram photos load as fast as you can scroll them, and even Skype is far more reliable than in the pre-Microsoft days. These services have far more users than they ever did as independents, and yet, they are performing better than they did in those early days.

Can we really say that this is merely “buying things” and not also “making things?” Isn’t this innovation? Isn’t this technical accomplishment? It is. Does that mean big = innovative? It does not….(More)”.

Individualism During Crises: Big Data Analytics of Collective Actions amid COVID-19


Paper by Bo Bian et al: “Collective actions, such as charitable crowdfunding and social distancing, are useful for alleviating the negative impact of the COVID-19 pandemic. However, engagements in these actions across the U.S. are “consistently inconsistent” and are frequently linked to individualism in the press. We present the first evidence on how individualism shapes online and offline collective actions during a crisis through big data analytics. Following economic historical studies, we leverage GIS techniques to construct a U.S. county-level individualism measure that traces the time each county spent on the American frontier between 1790 and 1890. We then use high-dimensional fixed-effect models, text mining, geo-distributed big data computing and a novel identification strategy based on migrations to analyze GoFundMe fundraising activities as well as county- and individual-level social distancing compliance.

Our analysis uncovers several insights. First, higher individualism reduces both online donations and social distancing during the COVID-19 pandemic. An interquartile increase in individualism reduces COVID-related charitable campaigns and funding by 48% and offsets the effect of state lockdown orders on social distancing by 41%. Second, government interventions, such as stimulus checks, can potentially mitigate the negative effect of individualism on charitable crowdfunding. Third, the individualism effect may be partly driven by a failure to internalize the externality of collective actions: we find stronger results in counties where social distancing generates higher externalities (those with higher population densities or more seniors). Our research is the first to uncover the potential downsides of individualism during crises. It also highlights the importance of big data-driven, culture-aware policymaking….(More)”.

Characterizing Disinformation Risk to Open Data in the Post-Truth Era


Paper by Adrienne Colborne and Michael Smit: “Curated, labeled, high-quality data is a valuable commodity for tasks such as business analytics and machine learning. Open data is a common source of such data—for example, retail analytics draws on open demographic data, and weather forecast systems draw on open atmospheric and ocean data. Open data is released openly by governments to achieve various objectives, such as transparency, informing citizen engagement, or supporting private enterprise.

Critical examination of ongoing social changes, including the post-truth phenomenon, suggests the quality, integrity, and authenticity of open data may be at risk. We introduce this risk through various lenses, describe some of the types of risk we expect using a threat model approach, identify approaches to mitigate each risk, and present real-world examples of cases where the risk has already caused harm. As an initial assessment of awareness of this disinformation risk, we compare our analysis to perspectives captured during open data stakeholder consultations in Canada…(More)”.

Race After Technology: Abolitionist Tools for the New Jim Code


Book by Ruha Benjamin: “From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to understand how emerging technologies can reinforce White supremacy and deepen social inequity.

Benjamin argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the “New Jim Code,” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. Moreover, she makes a compelling case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.

This illuminating guide provides conceptual tools for decoding tech promises with sociologically informed skepticism. In doing so, it challenges us to question not only the technologies we are sold but also the ones we ourselves manufacture….(More)”.

The technology of witnessing brutality


Axios: “The ways Americans capture and share records of racist violence and police misconduct keep changing, but the pain of the underlying injustices they chronicle remains a stubborn constant.

Driving the news: After George Floyd’s death at the hands of Minneapolis police sparked wide protests, Minnesota Gov. Tim Walz said, “Thank God a young person had a camera to video it.”

Why it matters: From news photography to TV broadcasts to camcorders to smartphones, improvements in the technology of witness over the past century mean we’re more instantly and viscerally aware of each new injustice.

  • But unless our growing power to collect and distribute evidence of injustice can drive actual social change, the awareness these technologies provide just ends up fueling frustration and despair.

For decades, still news photography was the primary channel through which the public became aware of incidents of racial injustice.

  • A horrific 1930 photo of the lynching of J. Thomas Shipp and Abraham S. Smith, two black men in Marion, Indiana, brought the incident to national attention and inspired the song “Strange Fruit.” But the killers were never brought to justice.
  • Photos of the mutilated body of Emmett Till catalyzed a nationwide reaction to his 1955 lynching in Mississippi.

In the 1960s, television news footage brought scenes of police turning dogs and water cannons on peaceful civil rights protesters in Birmingham and Selma, Alabama, into viewers’ living rooms.

  • The TV coverage was moving in both senses of the word.

In 1991, a camcorder tape shot by a Los Angeles plumber named George Holliday captured images of cops brutally beating Rodney King.

  • In the pre-internet era, it was only after the King tape was broadcast on TV that Americans could see it for themselves.

Over the past decade, smartphones have enabled witnesses and protesters to capture and distribute photos and videos of injustice quickly — sometimes, as it’s happening.

  • This power helped catalyze the Black Lives Matter movement beginning in 2013 and has played a growing role in broader public awareness of police brutality.

Between the lines: For a brief moment mid-decade, some hoped that the combination of a public well-supplied with video recording devices and requirements that police wear bodycams would introduce a new level of accountability to law enforcement.

The bottom line: Smartphones and social media deliver direct accounts of grief- and rage-inducing stories…(More)”.

Centering Racial Equity Throughout Data Integration


Toolkit by AISP: “Societal “progress” is often marked by the construction of new infrastructure that fuels change and innovation. Just as railroads and interstate highways were the defining infrastructure projects of the 1800s and 1900s, the development of data infrastructure is a critical innovation of our century. Railroads and highways were drivers of development and prosperity for some investors and sites. Yet other individuals and communities were harmed, displaced, bypassed, ignored, and forgotten by those efforts.

At this moment in our history, we can co-create data infrastructure to promote racial equity and the public good, or we can invest in data infrastructure that disregards the historical, social, and political context—reinforcing racial inequity that continues to harm communities. Building data infrastructure without a racial equity lens and understanding of historical context will exacerbate existing inequalities along the lines of race, gender, class, and ability. Instead, we commit to contextualize our work in the historical and structural oppression that shapes it, and organize stakeholders across geography, sector, and experience to center racial equity throughout data integration….(More)”.

Sharing Health Data and Biospecimens with Industry — A Principle-Driven, Practical Approach


Kayte Spector-Bagdady et al at the New England Journal of Medicine: “The advent of standardized electronic health records, sustainable biobanks, consumer-wellness applications, and advanced diagnostics has resulted in new health information repositories. As highlighted by the Covid-19 pandemic, these repositories create an opportunity for advancing health research by means of secondary use of data and biospecimens. Current regulations in this space give substantial discretion to individual organizations when it comes to sharing deidentified data and specimens. But some recent examples of health care institutions sharing individual-level data and specimens with companies have generated controversy. Academic medical centers are therefore both practically and ethically compelled to establish best practices for governing the sharing of such contributions with outside entities.1 We believe that the approach we have taken at Michigan Medicine could help inform the national conversation on this issue.

The Federal Policy for the Protection of Human Subjects offers some safeguards for research participants from whom data and specimens have been collected. For example, researchers must notify participants if commercial use of their specimens is a possibility. These regulations generally cover only federally funded work, however, and they don’t apply to deidentified data or specimens. Because participants value transparency regarding industry access to their data and biospecimens, our institution set out to create standards that would better reflect participants’ expectations and honor their trust. Using a principlist approach that balances beneficence and nonmaleficence, respect for persons, and justice, buttressed by recent analyses and findings regarding contributors’ preferences, Michigan Medicine established a formal process to guide our approach….(More)”.

How Congress can improve productivity by looking to the rest of the world


Beth Noveck and Dane Gambrell at the Hill: “…While an important first step in helping to resume operations, Congress needs to follow the lead of those many legislatures around the world who have changed their laws and rules and are using technology to continue to legislate, conduct oversight and even innovate. 

Though efforts to restart by adopting proxy voting are a step in the right direction, they do not go far enough to create what Georgetown University’s Lorelei Kelly calls the “modern and safe digital infrastructure for the world’s most powerful national legislature.” 

Congress has all but shut down since March. While the Senate formally “re-opened” on May 4, the chamber is operating under restrictive new guidelines, with hearings largely closed to the public and lawmakers advised to bring only a skeleton crew to run their offices. Considering that the average age of a senator is 63 and the average age of a Member of the House is 58, this caution comes as no surprise.

Yet when we take into account that parliaments around the world from New Zealand to the Maldives are holding committee meetings, running plenary sessions, voting and even engaging the public in the lawmaking process online, we should be asking Congress to do more faster. 

Instead, bitter partisan wrangling — with Republicans accusing Democrats of taking advantage of social distancing to launch a power grab and Democrats accusing Republicans of failing to exercise oversight — is delaying the adoption of long available and easy to use technologies. More than a left-right issue, moving online is a top-down issue with leadership of both parties using the crisis to consolidate power.

Working online

The Parliament of the United Kingdom, for example, is one of dozens of legislatures turning to online video conferencing tools such as Zoom, Microsoft Teams, Cisco Webex Meetings and Google Hangouts to do plenary or committee meetings. After 800 years, lawmakers in the House of Commons convened the first-ever “virtual Parliament” at the end of April. In this hybrid approach, some MPs were present in the legislative chamber while most joined remotely using Zoom…(More)”.