Philanthropy to Protect US Democracy


Essay by Lukas Haynes: “…Given the threat of election subversion, philanthropists who care about democracy across the political spectrum must now deploy donations as effectively as they can. In their seminal book, Money Well Spent: A Strategic Plan for Smart Philanthropy, Paul Brest and Hal Harvey argue that generating “alternative solutions” to hard problems “requires creativity or innovation akin to that of a scientist or engineer—creativity that is goal-oriented, that aims to come up with pragmatic solutions to a problem.”

In seeking the most effective solutions, Brest and Harvey do not find that nonpartisan, charitable efforts are the only legitimate form of strategic giving. Instead, they encourage donors to identify clear problem-solving goals, sound strategy, and clarity about risk tolerance.

Given the concerted attack on democratic norms by political candidates, there is no more effective alternative at hand than using political donations to defeat those candidates. If it is not already part of donors’ philanthropic toolkit to protect democracy, it needs to be and soon.

Once Big Lie-promoting candidates win and take power over elections, it will be too late to repeal their authority, especially in states where Republicans control the state legislatures. Should they successfully subvert a national presidential election in a deeply polarized nation, the United States will have crossed an undemocratic Rubicon no well-intentioned American wants to witness. So what are the most effective ways for political donors to respond to this perilous moment?…(More)”.

Charting an Equity-Centered Public Health Data System


Introduction to Special Issue by Alonzo L. Plough: “…The articles in this special issue were written with that vision in mind; several of them even informed the commission’s deliberations. Each article addresses an issue essential to the challenge of building an equity-focused public health data system:

  • Why Equity Matters in Public Health Data. Authors Anita Chandra, Laurie T. Martin, Joie D. Acosta, Christopher Nelson, Douglas Yeung, Nabeel Qureshi, and Tara Blagg explore where and how equity has been lacking in public health data and the implications of considering equity to the tech and data sectors.
  • What is Public Health Data? As authors Joie D. Acosta, Anita Chandra, Douglas Yeung, Christopher Nelson, Nabeel Qureshi, Tara Blagg, and Laurie T. Martin explain, good public health data are more than just health data. We need to reimagine the types of data we collect and from where, as well as data precision, granularity, timeliness, and more.
  • Public Health Data and Special Populations. People of color, women, people with disabilities, and people who are lesbian, gay, bisexual, transgender, or queer are among the populations that have been inconsistently represented in public health data over time. This article by authors Tina J. Kauh and Maryam Khojasteh reviews findings for each population, as well as commonalities across populations.
  • Public Health Data Interoperability and Connectedness. What are challenges to connecting public health data swiftly yet accurately? What gaps need to be filled? How can the data and tech sector help address these issues? These are some of the questions explored in this article by authors Laurie T. Martin, Christopher Nelson, Douglas Yeung, Joie D. Acosta, Nabeel Qureshi, Tara Blagg, and Anita Chandra.
  • Integrating Tech and Data Expertise into the Public Health Workforce. This article by authors Laurie T. Martin, Anita Chandra, Christopher Nelson, Douglas Yeung, Joie D. Acosta, Nabeel Qureshi, and Tara Blagg envisions what a tech-savvy public health workforce will look like and how it can be achieved through new workforce models, opportunities to expand capacity, and training….(More)”.

The Exploited Labor Behind Artificial Intelligence


Essay by Adrienne Williams, Milagros Miceli, and Timnit Gebru: “The public’s understanding of artificial intelligence (AI) is largely shaped by pop culture — by blockbuster movies like “The Terminator” and their doomsday scenarios of machines going rogue and destroying humanity. This kind of AI narrative is also what grabs the attention of news outlets: a Google engineer claiming that the company’s chatbot was sentient was among the most discussed AI-related news in recent months, even reaching Stephen Colbert’s millions of viewers. But the idea of superintelligent machines with their own agency and decision-making power is not only far from reality — it distracts us from the real risks to human lives surrounding the development and deployment of AI systems. While the public is distracted by the specter of nonexistent sentient machines, an army of precarized workers stands behind the supposed accomplishments of artificial intelligence systems today.

Many of these systems are developed by multinational corporations located in Silicon Valley, which have been consolidating power at a scale that, journalist Gideon Lewis-Kraus notes, is likely unprecedented in human history. They are striving to create autonomous systems that can one day perform all of the tasks that people can do and more, without the required salaries, benefits or other costs associated with employing humans. While this utopia of corporate executives is far from reality, the march to attempt its realization has created a global underclass, performing what anthropologist Mary L. Gray and computational social scientist Siddharth Suri call ghost work: the downplayed human labor driving “AI”.

Tech companies that have branded themselves “AI first” depend on heavily surveilled gig workers like data labelers, delivery drivers and content moderators. Startups are even hiring people to impersonate AI systems like chatbots, due to the pressure by venture capitalists to incorporate so-called AI into their products. In fact, London-based venture capital firm MMC Ventures surveyed 2,830 AI startups in the EU and found that 40% of them didn’t use AI in a meaningful way…(More)”.

Innovative Data Science Approaches to Identify Individuals, Populations, and Communities at High Risk for Suicide


Report by the National Academies of Sciences, Engineering, and Medicine: “Emerging real-time data sources, together with innovative data science techniques and methods – including artificial intelligence and machine learning – can help inform upstream suicide prevention efforts. Select social media platforms have proactively deployed these methods to identify individual platform users at high risk for suicide, and in some cases may activate local law enforcement, if needed, to prevent imminent suicide. To explore the current scope of activities, benefits, and risks of leveraging innovative data science techniques to help inform upstream suicide prevention at the individual and population level, the Forum on Mental Health and Substance Use Disorders of the National Academies of Sciences, Engineering, and Medicine convened a virtual workshop series consisting of three webinars held on April 28, May 12, and June 30, 2022. This Proceedings highlights presentations and discussions from the workshop…(More)”

We the Dead: Preserving Data at the End of the World


Book by Brian Michael Murphy: “Locked away in refrigerated vaults, sanitized by gas chambers, and secured within bombproof caverns deep under mountains are America’s most prized materials: the ever-expanding collection of records that now accompany each of us from birth to death. This data complex backs up and protects our most vital information against decay and destruction, and yet it binds us to corporate and government institutions whose power is also preserved in its bunkers, infrastructures, and sterilized spaces.

We the Dead traces the emergence of the data complex in the early twentieth century and guides readers through its expansion in a series of moments when Americans thought they were living just before the end of the world. Depression-era eugenicists feared racial contamination and the downfall of the white American family, while contemporary technologists seek ever denser and more durable materials for storing data, from microetched metal discs to cryptocurrency keys encoded in synthetic DNA. Artfully written and packed with provocative ideas, this haunting book illuminates the dark places of the data complex and the ways it increasingly blurs the lines between human and machine, biological body and data body, life and digital afterlife…(More)”.

CNSTAT Report Emphasizes the Need for a National Data Infrastructure


Article by Molly Gahagen: “Having credible and accessible data is essential for various sectors of society to function. In the recent report, “Toward a 21st Century National Data Infrastructure: Mobilizing Information for the Common Good,” by the Committee on National Statistics (CNSTAT) of the National Academies of Sciences, Engineering, and Medicine, the importance of national data infrastructure is emphasized…

Emphasizing the need for reliable statistics for national, state and local government officials, as well as businesses and citizens, the report cites the need for a modern national data infrastructure incorporating data from multiple federal agencies. Initial recommendations and potential outcomes of such a system are contained in the report.

Recommendations include practices to incorporate data from many sources, safeguard privacy, freely share statistics with the public, ensure transparency and create a modern system that would allow for easy access and enhanced security.

Potential outcomes of this infrastructure highlighted by the report’s authors include more evidence-based policymaking at several levels of government, uniform regulations for both data reporting and data access, and increased security. The report describes how this would support broader initiatives to promote research and evidence-based policymaking, including the Foundations for Evidence-Based Policymaking Act of 2018 passed by Congress.

CNSTAT’s future reports seek to address blending multiple data sources, data equity, technology and tools, among other topics…(More)”.

Nudging the Nudger: A Field Experiment on the Effect of Performance Feedback to Service Agents on Increasing Organ Donor Registrations


Paper by Julian House, Nicola Lacetera, Mario Macis & Nina Mazar: “We conducted a randomized controlled trial involving nearly 700 customer-service representatives (CSRs) in a Canadian government service agency to study whether providing CSRs with performance feedback with or without peer comparison affected their subsequent organ donor registration rates. Despite having no tie to remuneration or promotion, the provision of individual performance feedback three times over one year resulted in a 25% increase in daily signups, compared to otherwise similar encouragement and reminders. Adding benchmark information that compared CSRs’ performance to that of average and top-performing peers did not further enhance this effect. Registrations increased more among CSRs whose performance was already above average, and there was no negative effect on lower-performing CSRs. A post-intervention survey showed that CSRs found the information included in the treatments helpful and encouraging. However, performance feedback without benchmark information increased perceived pressure to perform…(More)”.

Can Smartphones Help Predict Suicide?


Ellen Barry in The New York Times: “In March, Katelin Cruz left her latest psychiatric hospitalization with a familiar mix of feelings. She was, on the one hand, relieved to leave the ward, where aides took away her shoelaces and sometimes followed her into the shower to ensure that she would not harm herself.

But her life on the outside was as unsettled as ever, she said in an interview, with a stack of unpaid bills and no permanent home. It was easy to slide back into suicidal thoughts. For fragile patients, the weeks after discharge from a psychiatric facility are a notoriously difficult period, with a suicide rate around 15 times the national rate, according to one study.

This time, however, Ms. Cruz, 29, left the hospital as part of a vast research project that attempts to use advances in artificial intelligence to do something that has eluded psychiatrists for centuries: to predict who is likely to attempt suicide and when that person is likely to attempt it, and then, to intervene.

On her wrist, she wore a Fitbit programmed to track her sleep and physical activity. On her smartphone, an app was collecting data about her moods, her movement and her social interactions. Each device was providing a continuous stream of information to a team of researchers on the 12th floor of the William James Building, which houses Harvard’s psychology department.

In the field of mental health, few new areas generate as much excitement as machine learning, which uses computer algorithms to better predict human behavior. There is, at the same time, exploding interest in biosensors that can track a person’s mood in real time, factoring in music choices, social media posts, facial expression and vocal expression.

Matthew K. Nock, a Harvard psychologist who is one of the nation’s top suicide researchers, hopes to knit these technologies together into a kind of early-warning system that could be used when an at-risk patient is released from the hospital…(More)”.

Hurricane Ian Destroyed Their Homes. Algorithms Sent Them Money


Article by Chris Stokel-Walker: “The algorithms that power Skai’s damage assessments are trained by manually labeling satellite images of a couple of hundred buildings in a disaster-struck area that are known to have been damaged. The software can then, at speed, detect damaged buildings across the whole affected area. A research paper on the underlying technology presented at a 2020 academic workshop on AI for disaster response claimed the auto-generated damage assessments match those of human experts with between 85 and 98 percent accuracy.

In Florida this month, GiveDirectly sent its push notification offering $700 to any user of the Providers app with a registered address in neighborhoods of Collier, Charlotte, and Lee Counties where Google’s AI system deemed more than 50 percent of buildings had been damaged. So far, 900 people have taken up the offer, and half of those have been paid. If every recipient takes up GiveDirectly’s offer, the organization will pay out $2.4 million in direct financial aid.
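The eligibility rule described above — pay a fixed amount to registered users whose neighborhood the damage model flags as more than 50 percent damaged — can be sketched in a few lines. This is an illustrative reconstruction, not GiveDirectly’s or Google’s actual code; the function names, data shapes, and figures are assumptions based only on the numbers in the article:

```python
# Hypothetical sketch of the payout-eligibility rule: a registered user
# qualifies when the AI damage model estimates that more than 50% of
# buildings in their neighborhood were damaged.

DAMAGE_THRESHOLD = 0.5  # share of damaged buildings that triggers eligibility
PAYMENT_USD = 700       # flat payment per eligible user, per the article

def eligible_neighborhoods(damage_fractions: dict[str, float]) -> set[str]:
    """Neighborhoods whose estimated damaged-building share exceeds the threshold."""
    return {name for name, frac in damage_fractions.items() if frac > DAMAGE_THRESHOLD}

def total_payout(registered_users: list[tuple[str, str]],
                 damage_fractions: dict[str, float]) -> int:
    """Total owed if every eligible registered user accepts the offer."""
    eligible = eligible_neighborhoods(damage_fractions)
    return sum(PAYMENT_USD for _user, hood in registered_users if hood in eligible)

# Illustrative (made-up) model outputs and user registrations:
fractions = {"Collier-A": 0.72, "Lee-B": 0.55, "Charlotte-C": 0.31}
users = [("u1", "Collier-A"), ("u2", "Lee-B"), ("u3", "Charlotte-C")]
print(total_payout(users, fractions))  # 1400: only u1 and u2 qualify
```

The appeal of such a rule is that it is cheap to audit: eligibility reduces to one model output per neighborhood and one address lookup per user, rather than case-by-case human assessment.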

Some may be skeptical of automated disaster response. But in the chaos after an event like a hurricane making landfall, the conventional, human response can be far from perfect. Diaz points to an analysis GiveDirectly conducted of its work after Hurricane Harvey, which hit Texas and Louisiana in 2017, before the project with Google. Two of the three areas that were most damaged and economically depressed were initially overlooked. A data-driven approach is “much better than what we’ll have from boots on the ground and word of mouth,” Diaz says.

GiveDirectly and Google’s hands-off, algorithm-led approach to aid distribution has been welcomed by some disaster assistance experts—with caveats. Reem Talhouk, a research fellow at Northumbria University’s School of Design and Centre for International Development in the UK, says that the system appears to offer a more efficient way of delivering aid. And it protects the dignity of recipients, who don’t have to queue up for handouts in public…(More)”.

Is This the Beginning of the End of the Internet?


Article by Charlie Warzel: “…occasionally, something happens that is so blatantly and obviously misguided that trying to explain it rationally makes you sound ridiculous. Such is the case with the Fifth Circuit Court of Appeals’s recent ruling in NetChoice v. Paxton. Earlier this month, the court upheld a preposterous Texas law stating that online platforms with more than 50 million monthly active users in the United States no longer have First Amendment rights regarding their editorial decisions. Put another way, the law tells big social-media companies that they can’t moderate the content on their platforms. YouTube purging terrorist-recruitment videos? Illegal. Twitter removing a violent cell of neo-Nazis harassing people with death threats? Sorry, that’s censorship, according to Andy Oldham, a judge of the United States Court of Appeals for the Fifth Circuit and the former general counsel to Texas Governor Greg Abbott.

A state compelling social-media companies to host all user content without restrictions isn’t merely, as the First Amendment litigation lawyer Ken White put it on Twitter, “the most angrily incoherent First Amendment decision I think I’ve ever read.” It’s also the type of ruling that threatens to blow up the architecture of the internet. To understand why requires some expertise in First Amendment law and content-moderation policy, and a grounding in what makes the internet a truly transformational technology. So I called up some legal and tech-policy experts and asked them to explain the Fifth Circuit ruling—and its consequences—to me as if I were a precocious 5-year-old with a strange interest in jurisprudence…(More)”