The Exploited Labor Behind Artificial Intelligence


Essay by Adrienne Williams, Milagros Miceli, and Timnit Gebru: “The public’s understanding of artificial intelligence (AI) is largely shaped by pop culture — by blockbuster movies like “The Terminator” and their doomsday scenarios of machines going rogue and destroying humanity. This kind of AI narrative is also what grabs the attention of news outlets: a Google engineer’s claim that the company’s chatbot was sentient was among the most discussed AI-related news stories in recent months, even reaching Stephen Colbert’s millions of viewers. But the idea of superintelligent machines with their own agency and decision-making power is not only far from reality — it distracts us from the real risks to human lives surrounding the development and deployment of AI systems. While the public is distracted by the specter of nonexistent sentient machines, an army of precarized workers stands behind the supposed accomplishments of artificial intelligence systems today.

Many of these systems are developed by multinational corporations located in Silicon Valley, which have been consolidating power at a scale that, journalist Gideon Lewis-Kraus notes, is likely unprecedented in human history. They are striving to create autonomous systems that can one day perform all of the tasks that people can do and more, without the salaries, benefits or other costs associated with employing humans. While this utopia of corporate executives is far from reality, the march toward its attempted realization has created a global underclass performing what anthropologist Mary L. Gray and computational social scientist Siddharth Suri call ghost work: the downplayed human labor driving “AI”.

Tech companies that have branded themselves “AI first” depend on heavily surveilled gig workers like data labelers, delivery drivers and content moderators. Startups are even hiring people to impersonate AI systems like chatbots, under pressure from venture capitalists to incorporate so-called AI into their products. In fact, the London-based venture capital firm MMC Ventures surveyed 2,830 AI startups in the EU and found that 40% of them didn’t use AI in a meaningful way…(More)”.

Innovative Data Science Approaches to Identify Individuals, Populations, and Communities at High Risk for Suicide


Report by the National Academies of Sciences, Engineering, and Medicine: “Emerging real-time data sources, together with innovative data science techniques and methods – including artificial intelligence and machine learning – can help inform upstream suicide prevention efforts. Select social media platforms have proactively deployed these methods to identify individual platform users at high risk for suicide, and in some cases may activate local law enforcement, if needed, to prevent imminent suicide. To explore the current scope of activities, benefits, and risks of leveraging innovative data science techniques to help inform upstream suicide prevention at the individual and population levels, the Forum on Mental Health and Substance Use Disorders of the National Academies of Sciences, Engineering, and Medicine convened a virtual workshop series consisting of three webinars held on April 28, May 12, and June 30, 2022. These proceedings highlight presentations and discussions from the workshop…(More)”.

We the Dead: Preserving Data at the End of the World


Book by Brian Michael Murphy: “Locked away in refrigerated vaults, sanitized by gas chambers, and secured within bombproof caverns deep under mountains are America’s most prized materials: the ever-expanding collection of records that now accompany each of us from birth to death. This data complex backs up and protects our most vital information against decay and destruction, and yet it binds us to corporate and government institutions whose power is also preserved in its bunkers, infrastructures, and sterilized spaces.

We the Dead traces the emergence of the data complex in the early twentieth century and guides readers through its expansion in a series of moments when Americans thought they were living just before the end of the world. Depression-era eugenicists feared racial contamination and the downfall of the white American family, while contemporary technologists seek ever denser and more durable materials for storing data, from microetched metal discs to cryptocurrency keys encoded in synthetic DNA. Artfully written and packed with provocative ideas, this haunting book illuminates the dark places of the data complex and the ways it increasingly blurs the lines between human and machine, biological body and data body, life and digital afterlife…(More)”.

CNSTAT Report Emphasizes the Need for a National Data Infrastructure


Article by Molly Gahagen: “Having credible and accessible data is essential for various sectors of society to function. The recent report “Toward a 21st Century National Data Infrastructure: Mobilizing Information for the Common Good,” from the Committee on National Statistics (CNSTAT) of the National Academies of Sciences, Engineering, and Medicine, emphasizes the importance of a national data infrastructure…

Emphasizing that national, state and local government officials, as well as businesses and citizens, need reliable statistics, the report calls for a modern national data infrastructure that incorporates data from multiple federal agencies. Initial recommendations and potential outcomes of such a system are laid out in the report.

Recommendations include practices to incorporate data from many sources, safeguard privacy, freely share statistics with the public, ensure transparency and create a modern system that would allow for easy access and enhanced security.

Potential outcomes of this infrastructure highlighted by the report’s authors include increased evidence-based policymaking at several levels of government, uniform regulations for data reporting and access, and increased security. The report describes how this would tie into broader initiatives to promote research and evidence-based policymaking, including the passage of the Foundations for Evidence-Based Policymaking Act of 2018 in Congress.

CNSTAT’s future reports seek to address blending multiple data sources, data equity, technology and tools, among other topics…(More)”.

Nudging the Nudger: A Field Experiment on the Effect of Performance Feedback to Service Agents on Increasing Organ Donor Registrations


Paper by Julian House, Nicola Lacetera, Mario Macis & Nina Mazar: “We conducted a randomized controlled trial involving nearly 700 customer-service representatives (CSRs) in a Canadian government service agency to study whether providing CSRs with performance feedback with or without peer comparison affected their subsequent organ donor registration rates. Despite having no tie to remuneration or promotion, the provision of individual performance feedback three times over one year resulted in a 25% increase in daily signups, compared to otherwise similar encouragement and reminders. Adding benchmark information that compared a CSR’s performance to average and top peer performance did not further enhance this effect. Registrations increased more among CSRs whose performance was already above average, and there was no negative effect on lower-performing CSRs. A post-intervention survey showed that CSRs found the information included in the treatments helpful and encouraging. However, performance feedback without benchmark information increased perceived pressure to perform…(More)”.

Can Smartphones Help Predict Suicide?


Ellen Barry in The New York Times: “In March, Katelin Cruz left her latest psychiatric hospitalization with a familiar mix of feelings. She was, on the one hand, relieved to leave the ward, where aides took away her shoelaces and sometimes followed her into the shower to ensure that she would not harm herself.

But her life on the outside was as unsettled as ever, she said in an interview, with a stack of unpaid bills and no permanent home. It was easy to slide back into suicidal thoughts. For fragile patients, the weeks after discharge from a psychiatric facility are a notoriously difficult period, with a suicide rate around 15 times the national rate, according to one study.

This time, however, Ms. Cruz, 29, left the hospital as part of a vast research project that attempts to use advances in artificial intelligence to do something that has eluded psychiatrists for centuries: to predict who is likely to attempt suicide and when, and then to intervene.

On her wrist, she wore a Fitbit programmed to track her sleep and physical activity. On her smartphone, an app was collecting data about her moods, her movement and her social interactions. Each device was providing a continuous stream of information to a team of researchers on the 12th floor of the William James Building, which houses Harvard’s psychology department.

In the field of mental health, few new areas generate as much excitement as machine learning, which uses computer algorithms to better predict human behavior. There is, at the same time, exploding interest in biosensors that can track a person’s mood in real time, factoring in music choices, social media posts, facial expression and vocal expression.

Matthew K. Nock, a Harvard psychologist who is one of the nation’s top suicide researchers, hopes to knit these technologies together into a kind of early-warning system that could be used when an at-risk patient is released from the hospital…(More)”.

Hurricane Ian Destroyed Their Homes. Algorithms Sent Them Money


Article by Chris Stokel-Walker: “The algorithms that power Skai’s damage assessments are trained by manually labeling satellite images of a couple of hundred buildings in a disaster-struck area that are known to have been damaged. The software can then, at speed, detect damaged buildings across the whole affected area. A research paper on the underlying technology presented at a 2020 academic workshop on AI for disaster response claimed the auto-generated damage assessments match those of human experts with between 85 and 98 percent accuracy.

In Florida this month, GiveDirectly sent its push notification offering $700 to any user of the Providers app with a registered address in neighborhoods of Collier, Charlotte, and Lee Counties where Google’s AI system deemed more than 50 percent of buildings had been damaged. So far, 900 people have taken up the offer, and half of those have been paid. If every recipient takes up GiveDirectly’s offer, the organization will pay out $2.4 million in direct financial aid.

Some may be skeptical of automated disaster response. But in the chaos after an event like a hurricane making landfall, the conventional, human response can be far from perfect. Diaz points to an analysis GiveDirectly conducted of its work after Hurricane Harvey, which hit Texas and Louisiana in 2017, before the project with Google. Two of the three areas that were most damaged and economically depressed were initially overlooked. A data-driven approach is “much better than what we’ll have from boots on the ground and word of mouth,” Diaz says.

GiveDirectly and Google’s hands-off, algorithm-led approach to aid distribution has been welcomed by some disaster assistance experts—with caveats. Reem Talhouk, a research fellow at Northumbria University’s School of Design and Centre for International Development in the UK, says that the system appears to offer a more efficient way of delivering aid. And it protects the dignity of recipients, who don’t have to queue up for handouts in public…(More)”.
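The eligibility rule the article describes is a simple threshold on model output: any registered app user in a neighborhood where the damage model flags more than 50 percent of buildings qualifies for a fixed $700 payment. A minimal sketch of that arithmetic follows; the function names and structure are illustrative assumptions, not GiveDirectly’s or Google’s actual code.

```python
PAYMENT_USD = 700          # per-person offer cited in the article
DAMAGE_THRESHOLD = 0.5     # "more than 50 percent of buildings damaged"

def neighborhood_eligible(damaged: int, total: int) -> bool:
    """True if the modeled damage fraction strictly exceeds the threshold."""
    if total == 0:
        return False
    return damaged / total > DAMAGE_THRESHOLD

def max_payout(eligible_users: int) -> int:
    """Total direct aid if every eligible app user accepts the offer."""
    return eligible_users * PAYMENT_USD
```

The article’s figures are consistent with this arithmetic: a $2.4 million maximum payout at $700 per person implies roughly 3,400 eligible Providers users, of whom about 900 had signed up at the time of writing.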

Is This the Beginning of the End of the Internet?


Article by Charlie Warzel: “…occasionally, something happens that is so blatantly and obviously misguided that trying to explain it rationally makes you sound ridiculous. Such is the case with the Fifth Circuit Court of Appeals’s recent ruling in NetChoice v. Paxton. Earlier this month, the court upheld a preposterous Texas law stating that online platforms with more than 50 million monthly active users in the United States no longer have First Amendment rights regarding their editorial decisions. Put another way, the law tells big social-media companies that they can’t moderate the content on their platforms. YouTube purging terrorist-recruitment videos? Illegal. Twitter removing a violent cell of neo-Nazis harassing people with death threats? Sorry, that’s censorship, according to Andy Oldham, a judge on the U.S. Court of Appeals for the Fifth Circuit and former general counsel to Texas Governor Greg Abbott.

A state compelling social-media companies to host all user content without restrictions isn’t merely, as the First Amendment litigation lawyer Ken White put it on Twitter, “the most angrily incoherent First Amendment decision I think I’ve ever read.” It’s also the type of ruling that threatens to blow up the architecture of the internet. To understand why requires some expertise in First Amendment law and content-moderation policy, and a grounding in what makes the internet a truly transformational technology. So I called up some legal and tech-policy experts and asked them to explain the Fifth Circuit ruling—and its consequences—to me as if I were a precocious 5-year-old with a strange interest in jurisprudence…(More)”.

The European Union-U.S. Data Privacy Framework


White House Fact Sheet: “Today, President Biden signed an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities (E.O.) directing the steps that the United States will take to implement the U.S. commitments under the European Union-U.S. Data Privacy Framework (EU-U.S. DPF) announced by President Biden and European Commission President von der Leyen in March of 2022. 

Transatlantic data flows are critical to enabling the $7.1 trillion EU-U.S. economic relationship.  The EU-U.S. DPF will restore an important legal basis for transatlantic data flows by addressing concerns that the Court of Justice of the European Union raised in striking down the prior EU-U.S. Privacy Shield framework as a valid data transfer mechanism under EU law. 

The Executive Order bolsters an already rigorous array of privacy and civil liberties safeguards for U.S. signals intelligence activities. It also creates an independent and binding mechanism enabling individuals in qualifying states and regional economic integration organizations, as designated under the E.O., to seek redress if they believe their personal data was collected through U.S. signals intelligence in a manner that violated applicable U.S. law.

U.S. and EU companies large and small across all sectors of the economy rely upon cross-border data flows to participate in the digital economy and expand economic opportunities. The EU-U.S. DPF represents the culmination of a joint effort by the United States and the European Commission to restore trust and stability to transatlantic data flows and reflects the strength of the enduring EU-U.S. relationship based on our shared values…(More)”.

Call it data liberation day: Patients can now access all their health records digitally  


Article by Casey Ross: “The American Revolution had July 4. The allies had D-Day. And now U.S. patients, held down for decades by information hoarders, can rally around a new turning point, October 6, 2022 — the day they got their health data back.

Under federal rules taking effect Thursday, health care organizations must give patients unfettered access to their full health records in digital format. No more long delays. No more fax machines. No more exorbitant charges for printed pages.

Just the data, please — now…The new federal rules — passed under the 21st Century Cures Act — are designed to shift the balance of power to ensure that patients can not only get their data, but also choose who else to share it with. It is the jumping-off point for a patient-mediated data economy that lets consumers in health care benefit from the fluidity they’ve had for decades in banking: they can move their information easily and electronically, and link their accounts to new services and software applications.

“To think that we actually have greater transparency about our personal finances than about our own health is quite an indictment,” said Isaac Kohane, a professor of biomedical informatics at Harvard Medical School. “This will go some distance toward reversing that.”

Even with the rules now in place, health data experts said change will not be fast or easy. Providers and other data holders — who have dug in their heels at every step — can still withhold information under certain exceptions. And many questions remain about protocols for sharing digital records, how to verify access rights, and even what it means to give patients all their data. Does that extend to every measurement in the ICU? Every log entry? Every email? And how will it all get standardized?…(More)”.