We urgently need data for equitable personalized medicine


Article by Manuel Corpas: “…As a bioinformatician, I am now focusing my attention on gathering the statistics to show just how biased medical research data are. There are problems across the board, ranging from which research questions get asked in the first place, to who participates in clinical trials, to who gets their genomes sequenced. The world is moving toward “precision medicine,” where any individual can have their DNA analyzed and that information can be used to help prescribe the right drugs in the right dosages. But this won’t work if a person’s genetic variants have never been identified or studied in the first place.

It’s astonishing how powerful our genetics can be in mediating medicines. Take the gene CYP2D6, which is known to play a vital role in how fast humans metabolize 25 percent of all the pharmaceuticals on the market. If you have a genetic variant of CYP2D6 that makes you metabolize drugs more quickly, or less quickly, it can have a huge impact on how well those drugs work and the dangers you face from taking them. Codeine was banned from all of Ethiopia in 2015, for example, because a high proportion of people in the country (perhaps 30 percent) have a genetic variant of CYP2D6 that makes them quickly metabolize that drug into morphine, making it more likely to cause respiratory distress and even death…(More)”
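The metabolizer categories behind the codeine example can be sketched in a few lines. This is a simplified illustration in the style of CPIC activity-score guidelines, not clinical software; the allele scores and thresholds below are simplified assumptions, not a validated clinical table.

```python
# Minimal sketch (not clinical software): mapping a CYP2D6 diplotype to a
# metabolizer phenotype via a summed activity score, in the style of CPIC
# guidelines. Allele scores here are illustrative simplifications.

ALLELE_ACTIVITY = {
    "*1": 1.0,    # normal-function allele
    "*1xN": 2.0,  # duplicated normal-function allele (gene copy gain)
    "*10": 0.25,  # decreased-function allele
    "*4": 0.0,    # no-function allele
}

def metabolizer_phenotype(allele_a: str, allele_b: str) -> str:
    """Classify a CYP2D6 diplotype by summed activity score."""
    score = ALLELE_ACTIVITY[allele_a] + ALLELE_ACTIVITY[allele_b]
    if score == 0:
        return "poor metabolizer"
    if score < 1.25:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"

# Ultrarapid metabolizers convert codeine to morphine unusually fast --
# the risk profile behind the Ethiopian ban described above.
print(metabolizer_phenotype("*4", "*4"))    # poor metabolizer
print(metabolizer_phenotype("*1xN", "*1"))  # ultrarapid metabolizer
```

The point of the sketch is the article's core claim: the same prescription lands on very different phenotypes, and if a population's variants were never sequenced, the lookup table simply has no rows for them.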

Nobody knows how to audit AI


Axios: “Some legislators and experts are pushing independent auditing of AI systems to minimize risks and build trust, Ryan reports.

Why it matters: Consumers don’t trust big tech to self-regulate and government standards may come slowly or never.

The big picture: Failure to manage risk and articulate values early in the development of an AI system can lead to problems ranging from biased outcomes from unrepresentative data to lawsuits alleging stolen intellectual property.

Driving the news: Sen. John Hickenlooper (D-Colo.) announced in a speech on Monday that he will push for the auditing of AI systems, because AI models are using our data “in ways we never imagined and certainly never consented to.”

  • “We need qualified third parties to effectively audit generative AI systems,” Hickenlooper said. “We cannot rely on self-reporting alone.” We should “trust but verify” claims of compliance with federal laws and regulations, he said.

Catch up quick: The National Institute of Standards and Technology (NIST) developed an AI Risk Management Framework to help organizations think about and measure AI risks, but it does not certify or validate AI products.

  • President Biden’s executive order on AI mandated that NIST expand its support for generative AI developers and “create guidance and benchmarks for evaluating and auditing AI capabilities,” especially in risky areas such as cybersecurity and bioweapons.

What’s happening: A growing range of companies provide services that evaluate whether AI models are complying with local regulations or promises made by their developers — but some AI companies remain committed to their own internal risk research and processes.

  • NIST is only the “tip of the spear” in AI safety, Hickenlooper said. He now wants to establish criteria and a path to certification for third-party auditors.

The “Big Four” accounting firms — Deloitte, EY, KPMG and PwC — sense business opportunities in applying audit methodologies to AI systems, Nicola Morini Bianzino, EY’s global chief technology officer, tells Axios.

  • Morini Bianzino cautions that AI audits might “look more like risk management for a financial institution, as opposed to audit as a certifying mark. Because, honestly, I don’t know technically how we would do that.”
  • Laura Newinski, KPMG’s COO, tells Axios the firm is developing AI auditing services and “attestation about whether data sets are accurate and follow certain standards.”

Established players such as IBM and startups such as Credo provide AI governance dashboards that tell clients in real time where AI models could be causing problems — around data privacy, for example.

  • Anthropic believes NIST should focus on “building a robust and standardized benchmark for generative AI systems” that all private AI companies can adhere to.

Market leader OpenAI announced in October that it’s creating a “risk-informed development policy” and has invited experts to apply to join its OpenAI Red Teaming Network.

Yes, but: An AI audit industry without clear standards could be a recipe for confusion, both for corporate customers and consumers using AI…(More)”.

Revolutionizing Governance: AI-Driven Citizen Engagement


Article by Komal Goyal: “Government-citizen engagement has come a long way over the past decade, with governments increasingly adopting AI-powered analytics, automated processes and chatbots to engage with citizens and gain insights into their concerns. A 2023 Stanford University report found that the federal government spent $3.3 billion on AI in fiscal year 2022, highlighting the remarkable upswing in AI adoption across various government sectors.

As the demands of a digitally empowered and information-savvy society constantly evolve, it is becoming imperative for government agencies to revolutionize how they interact with their constituents. I’ll discuss how AI can help achieve this and pave the way for a more responsive, inclusive and effective form of governance…(More)”.

The story of the R number: How an obscure epidemiological figure took over our lives


Article by Gavin Freeguard: “Covid-19 did not only dominate our lives in April 2020. It also dominated the list of new words entered into the Oxford English Dictionary.

Alongside Covid-19 itself (noun, “An acute respiratory illness in humans caused by a coronavirus”), the vocabulary of the virus included “self-quarantine”, “social distancing”, “infodemic”, “flatten the curve”, “personal protective equipment”, “elbow bump”, “WFH” and much else. But nestled among this pantheon of new pandemic words was a number, one that would shape our conversations, our politics, our lives for the next 18 months like no other: “Basic reproduction number (R0): The average number of cases of an infectious disease arising by transmission from a single infected individual, in a population that has not previously encountered the disease.”


“There have been many important figures in this pandemic,” wrote The Times in January 2021, “but one has come to tower over the rest: the reproduction rate. The R number, as everyone calls it, has been used by the government to justify imposing and lifting lockdowns. Indeed while there are many important numbers — gross domestic product, parliamentary majorities, interest rates — few can compete right now with R” (tinyurl.com/v7j6cth9).
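The quantity the dictionary entry and The Times are describing can be made concrete with the textbook SIR model, in which R0 is the transmission rate divided by the recovery rate. The parameter values below are illustrative, not fitted to COVID-19 data.

```python
# A minimal discrete-time SIR sketch of what R0 measures. In this textbook
# model, R0 = beta / gamma: new infections per infected person per day,
# divided by the rate at which infected people recover. Parameter values
# are illustrative only.

def simulate_sir(beta: float, gamma: float, s0: float, i0: float, days: int):
    """Run a daily-step SIR epidemic; return (R0, peak infected fraction)."""
    n = s0 + i0
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return beta / gamma, peak / n

r0, peak = simulate_sir(beta=0.3, gamma=0.1, s0=999_999, i0=1, days=365)
print(f"R0 = {r0:.1f}")  # 3.0: each case infects ~3 others in a naive population
```

Note the gap the sketch exposes between R0 and the number quoted at press conferences: the effective R falls as the susceptible share shrinks or behavior changes, which is one reason epidemiologists came to see the single headline figure as easy to misread.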

Descriptions of it at the start of the pandemic made R the star of the disaster movie reality we lived through. And it wasn’t just a breakout star of the UK’s coronavirus press conferences; in Germany, (then) Chancellor Angela Merkel made the most of her scientific background to explain the meaning of R and its consequences to the public (tinyurl.com/mva7urw5).

But for others, the “obsession” (Professor Linda Bauld, University of Edinburgh) with “the pandemic’s misunderstood metric” (Nature, tinyurl.com/y3sr6n6m) has been “a distraction”, an “unhelpful focus”; as the University of Edinburgh’s Professor Mark Woolhouse told one parliamentary select committee, “we’ve created a monster”.

How did this epidemiological number come to dominate our discourse? How useful is it? And where does it come from?…(More)”.

Why China Can’t Export Its Model of Surveillance


Article by Minxin Pei: “It’s Not the Tech That Empowers Big Brother in Beijing—It’s the Informants…Over the past two decades, Chinese leaders have built a high-tech surveillance system of seemingly extraordinary sophistication. Facial recognition software, Internet monitoring, and ubiquitous video cameras give the impression that the ruling Chinese Communist Party (CCP) has finally accomplished the dictator’s dream of building a surveillance state like the one imagined in George Orwell’s 1984…

A high-tech surveillance network now blankets the entire country, and the potency of this system was on full display in November 2022, when nationwide protests against China’s COVID lockdown shocked the party. Although the protesters were careful to conceal their faces with masks and hats, the police used mobile-phone location data to track them down. Mass arrests followed.

Beijing’s surveillance state is not only a technological feat. It also relies on a highly labor-intensive organization. Over the past eight decades, the CCP has constructed a vast network of millions of informers and spies whose often unpaid work has been critical to the regime’s survival. It is these men and women, more than cameras or artificial intelligence, that have allowed Beijing to suppress dissent. Without a network of this size, the system could not function. This means that, despite the party’s best efforts, the Chinese security apparatus is impossible to export…(More)”.

The Cult of AI


Article by Robert Evans: “…Cult members are often depicted in the media as weak-willed and foolish. But the Church of Scientology — long accused of being a cult, an allegation they have endlessly denied — recruits heavily among the rich and powerful. The Finders, a D.C.-area cult that started in the 1970s, included a wealthy oil-company owner and multiple members with Ivy League degrees. All of them agreed to pool their money and hand over control of where they worked and how they raised their children to their cult leader. Haruki Murakami wrote that Aum Shinrikyo members, many of whom were doctors or engineers, “actively sought to be controlled.”

Perhaps this feels like a reach. But the deeper you dive into the people and subcultures that are pushing AI forward, the more cult dynamics you begin to notice.

I should offer a caveat here: There’s nothing wrong with the basic technology we call “AI.” That wide banner term includes tools as varied as text- or facial-recognition programs, chatbots, and of course sundry tools to clone voices and generate deepfakes or rights-free images with odd numbers of fingers. CES featured some real products that harnessed the promise of machine learning (I was particularly impressed by a telescope that used AI to clean up light pollution in images). But the good stuff lived alongside nonsense like “ChatGPT for dogs” (really just an app to read your dog’s body language) and an AI-assisted fleshlight for premature ejaculators. 

And, of course, bad ideas and irrational exuberance are par for the course at CES. Since 1967, the tech industry’s premier trade show has provided anyone paying attention with a preview of how Big Tech talks about itself, and our shared future. But what I saw this year and last year, from both excited futurist fanboys and titans of industry, is a kind of unhinged messianic fervor that compares better to Scientology than to the iPhone…(More)”.

Can the Internet be Governed?


Article by Akash Kapur: “…During the past decade or so, however, governments around the world have grown impatient with the notion of Internet autarky. A trickle of halfhearted interventions has built into what the legal scholar Anu Bradford calls a “cascade of regulation.” In “Digital Empires” (Oxford), her comprehensive and insightful book on global Internet policy, she describes a series of skirmishes—between regulators and companies, and among regulators themselves—whose outcomes will “shape the future ethos of the digital society and define the soul of the digital economy.”

Other recent books echo this sense of the network as being at a critical juncture. Tom Wheeler, a former chairman of the F.C.C., argues in “Techlash: Who Makes the Rules in the Digital Gilded Age?” (Brookings) that we are at “a legacy moment for this generation to determine whether, and how, it will assert the public interest in the new digital environment.” In “The Internet Con” (Verso), Doctorow makes a passionate case for “relief from manipulation, high-handed moderation, surveillance, price-gouging, disgusting or misleading algorithmic suggestions”; he argues that it is time to “dismantle Big Tech’s control over our digital lives and devolve control to the people.” In “Read Write Own” (Random House), Chris Dixon, a venture capitalist, says that a network dominated by a handful of private interests “is neither the internet I want to see nor the world I wish to live in.” He writes, “Think about how much of your life you live online, how much of your identity resides there. . . . Whom do you want in control of that world?”…(More)”.

Don’t Talk to People Like They’re Chatbots


Article by Albert Fox Cahn and Bruce Schneier: “For most of history, communicating with a computer has not been like communicating with a person. In their earliest years, computers required carefully constructed instructions, delivered through punch cards; then came a command-line interface, followed by menus and options and text boxes. If you wanted results, you needed to learn the computer’s language.

This is beginning to change. Large language models—the technology undergirding modern chatbots—allow users to interact with computers through natural conversation, an innovation that introduces some baggage from human-to-human exchanges. Early on in our respective explorations of ChatGPT, the two of us found ourselves typing a word that we’d never said to a computer before: “Please.” The syntax of civility has crept into nearly every aspect of our encounters; we speak to this algebraic assemblage as if it were a person—even when we know that it’s not.

Right now, this sort of interaction is a novelty. But as chatbots become a ubiquitous element of modern life and permeate many of our human-computer interactions, they have the potential to subtly reshape how we think about both computers and our fellow human beings.

One direction that these chatbots may lead us in is toward a society where we ascribe humanity to AI systems, whether abstract chatbots or more physical robots. Just as we are biologically primed to see faces in objects, we imagine intelligence in anything that can hold a conversation. (This isn’t new: People projected intelligence and empathy onto the very primitive 1960s chatbot, Eliza.) We say “please” to LLMs because it feels wrong not to…(More)”.
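The Eliza effect the authors invoke is easy to demonstrate, because the original trick was so thin. Below is a toy reconstruction of the mechanism; Weizenbaum's 1966 program used ranked keywords and richer decomposition rules, but the principle (keyword matching plus pronoun reflection, no understanding at all) is the same, and these particular patterns are made up for illustration.

```python
# Toy Eliza: keyword patterns plus pronoun "reflection". There is no model
# of meaning anywhere, yet the output reads as attentive -- the projection
# the article describes. Patterns here are illustrative, not Weizenbaum's.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),  # catch-all
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(eliza("I am worried about my chatbot"))
# How long have you been worried about your chatbot?
```

Sixty years later, the mechanism under the hood is vastly more capable, but the human side of the exchange (our reflex to say "please" to it) has barely changed.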

Name Your Industry—or Else!


Essay by Sarah M. Brownsberger on “The dehumanizing way economics data describes us”: “…My alma mater wants to know what industry I belong to. In a wash of good feeling after seeing old friends, I have gone to the school website to update my contact information. Name and address, easy, marital status, well and good—but next comes a drop-down menu asking for my “industry.”

In my surprise, I have an impulse to type “Where the bee sucks, there suck I!” But you can’t quote Shakespeare in a drop-down menu. You can only opt for its options.

The school is certainly cutting-edge. Like a fashion item that you see once and assume is aberrant and then see ten times in a week, the word “industry” is all over town. Cryptocurrency is an industry. So are Elvis-themed marriages. Outdoor recreation is an industry. A brewery in my city hosts “Industry Night,” a happy hour “for those who work in the industry”—tapsters and servers.

Are we all in an industry? What happened to “occupation”?…(More)”.

When Farmland Becomes the Front Line, Satellite Data and Analysis Can Fight Hunger


Article by Inbal Becker-Reshef and Mary Mitkish: “When a shock to the global food system occurs—such as during the Russian invasion of Ukraine in 2022—collecting the usual ground-based data is all but impossible. The Russia–Ukraine war has turned farmland into the front lines of a war zone. In this situation, it is unreasonable to expect civilians to walk onto fields riddled with land mines and damaged by craters to collect information on what has been planted, where it was planted, and if it could be harvested. The inherent danger of ground-based data collection, especially in occupied territories of the conflict, has demanded a different way to assess planted and harvested areas and forecast crop production.

Satellite-based information can provide this evidence quickly and reliably. At NASA Harvest, NASA’s Global Food Security and Agriculture Consortium, one of our main aims is to use satellite-based information to fill gaps in the agriculture information ecosystem. Since the start of the Russia–Ukraine conflict, we have been using satellite imagery to estimate the impact of the war on Ukraine’s agricultural lands at the request of the Ministry of Agrarian Policy and Food of Ukraine. Our work demonstrates how effective this approach can be for delivering critical and timely insights for decisionmakers.
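One standard building block behind this kind of satellite assessment is a vegetation index computed per pixel from spectral bands. The sketch below shows the generic NDVI formula on a tiny synthetic "scene"; it is not NASA Harvest's actual pipeline, and the reflectance values and the 0.4 threshold are made-up illustrations.

```python
# Toy version of one building block of satellite crop monitoring:
# NDVI (normalized difference vegetation index) from red and near-infrared
# reflectance. Generic remote-sensing formula; the 2x2 "scene" and the
# planted/not-planted threshold are illustrative assumptions.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - red) / (NIR + red); healthy crops score roughly 0.6+."""
    return (nir - red) / (nir + red)

# 2x2 scene: two healthy cropland pixels (top row), bare soil, and water.
nir = np.array([[0.50, 0.45], [0.25, 0.05]])
red = np.array([[0.08, 0.10], [0.20, 0.10]])

index = ndvi(nir, red)
planted = index > 0.4  # crude threshold for actively growing vegetation
print(np.round(index, 2))
print(planted)  # top row True (cropland), bottom row False
```

Real analyses layer time series of such indices over known field boundaries to estimate planted area and forecast yields, which is what makes remote assessment possible when the fields themselves are mined.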

Prior to the war, Ukraine accounted for over 10% of the world’s wheat, corn, and barley trade and was the number one sunflower oil exporter, accounting for close to 50% of the global market. In other words, food produced in Ukraine is critical for its national economy, for global trade, and for feeding millions across the globe…(More)”.