How Health Data Integrity Can Earn Trust and Advance Health


Article by Jochen Lennerz, Nick Schneider and Karl Lauterbach: “Efforts to share health data across borders snag on legal and regulatory barriers. Before detangling the fine print, let’s agree on overarching principles.

Imagine a scenario in which Mary, an individual with a rare disease, has agreed to share her medical records for a research project aimed at finding better treatments for genetic disorders. Mary’s consent is grounded in trust that her data will be handled with the utmost care, protected from unauthorized access, and used according to her wishes. 

It may sound simple, but meeting these standards comes with myriad complications. Whose job is it to weigh the risk that Mary might be reidentified, even if her information is de-identified and stored securely? How should that assessment be done? How can data from Mary’s records be aggregated with patients from health systems in other countries, each with their own requirements for data protection and formats for record keeping? How can Mary’s wishes be respected, both in terms of what research is conducted and in returning relevant results to her?
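The reidentification risk the authors raise can be made concrete with the notion of k-anonymity: even after direct identifiers are stripped, a rare combination of quasi-identifiers (age band, region, diagnosis) can still single a patient out. A minimal sketch of such a check, using entirely hypothetical records and field choices:

```python
from collections import Counter

# Hypothetical de-identified records: (age band, region code, diagnosis code).
# None of these fields is a direct identifier, yet a rare combination
# can still uniquely describe one person and enable linkage attacks.
records = [
    ("30-39", "DE-BY", "E11"),
    ("30-39", "DE-BY", "E11"),
    ("40-49", "DE-BE", "G35"),
    ("40-49", "DE-BE", "C50"),
    ("20-29", "DE-HH", "ORPHA:355"),  # rare-disease code, unique in this set
]

def k_anonymity(rows):
    """Smallest equivalence-class size: k = 1 means some record is unique."""
    return min(Counter(rows).values())

def unique_records(rows):
    """Records whose quasi-identifier combination appears exactly once."""
    counts = Counter(rows)
    return [r for r in rows if counts[r] == 1]

print(k_anonymity(records))          # 1 -> not even 2-anonymous
print(len(unique_records(records)))  # 3 records are vulnerable to linkage
```

A rare-disease patient like Mary is precisely the case where k collapses to 1, which is why assessing reidentification risk cannot be reduced to removing names and dates.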

From electronic medical records to genomic sequencing, health care providers and researchers now have an unprecedented wealth of information that could help tailor treatments to individual needs, revolutionize understanding of disease, and enhance the overall quality of health care. Data protection, privacy safeguards, and cybersecurity are all paramount for safeguarding sensitive medical information, but much of the potential that lies in this abundance of data is being lost because well-intentioned regulations have not been set up to allow for data sharing and collaboration. This stymies efforts to study rare diseases, map disease patterns, improve public health surveillance, and advance evidence-based policymaking (for instance, by comparing effectiveness of interventions across regions and demographics). Projects that could excel with enough data get bogged down in bureaucracy and uncertainty. For example, Germany now has strict data protection laws—with heavy punishment for violations—that should allow de-identified health insurance claims to be used for research within secure processing environments, but the legality of such use has been challenged…(More)”.

Data and density: Two tools to boost health equity in cities


Article by Ann Aerts and Diana Rodríguez Franco: “Improving health and health equity for vulnerable populations requires addressing the social determinants of health. In the US, medical care is estimated to account for only 10-20% of health outcomes, while social determinants like education and income account for the remaining 80-90%.

Place-based interventions, however, are showing promise for improving health outcomes despite persistent inequalities. Research and practice increasingly point to the role of cities in promoting health equity — or reversing health inequities — as 56% of the global population lives in cities, and several social determinants of health are directly tied to urban factors like opportunity, environmental health, neighbourhoods and physical environments, access to food and more.

Thus, it is critical to identify the true drivers of both good and poor health outcomes so that underserved populations can be better served.

Place-based strategies can address health inequities and lead to meaningful improvements for vulnerable populations…

Initial data analysis revealed a strong correlation between cardiovascular disease risk in city residents and social determinants such as higher education, commuting time, access to Medicaid, rental costs and internet access.

Understanding which data points are correlated with health risks is key to effectively tailoring interventions.

Determined to reverse this trend, city authorities have launched a “HealthyNYC” campaign and are working with the Novartis Foundation to uncover the behavioural and social determinants behind non-communicable diseases (NCDs) (e.g. diabetes and cardiovascular disease), which cause 87% of all deaths in New York City…(More)”

AI is too important to be monopolised


Article by Marietje Schaake: “…From the promise of medical breakthroughs to the perils of election interference, the hopes of helpful climate research to the challenge of cracking fundamental physics, AI is too important to be monopolised.

Yet the market is moving in exactly that direction, as resources and talent to develop the most advanced AI sit firmly in the hands of a very small number of companies. That is particularly true for resource-intensive data and computing power (termed “compute”), which are required to train large language models for a variety of AI applications. Researchers and small and medium-sized enterprises risk fatal dependency on Big Tech once again, or else they will miss out on the latest wave of innovation. 

On both sides of the Atlantic, feverish public investments are being made in an attempt to level the computational playing field. To ensure scientists have access to capacities comparable to those of Silicon Valley giants, the US government established the National AI Research Resource last month. This pilot project is being led by the US National Science Foundation. By working with 10 other federal agencies and 25 civil society groups, it will facilitate government-funded data and compute to help the research and education community build and understand AI. 

The EU set up a decentralised network of supercomputers with a similar aim back in 2018, before the recent wave of generative AI created a new sense of urgency. The EuroHPC has lived in relative obscurity and the initiative appears to have been under-exploited. As European Commission president Ursula von der Leyen said late last year: we need to put this power to use. The EU now imagines that democratised supercomputer access can also help with the creation of “AI factories,” where small businesses pool their resources to develop new cutting-edge models. 

There has long been talk of considering access to the internet a public utility, because of how important it is for education, employment and acquiring information. Yet rules to that end were never adopted. But with the unlocking of compute as a shared good, the US and the EU are showing real willingness to make investments into public digital infrastructure.

Even if the latest measures are viewed as industrial policy in a new jacket, they are part of a long overdue step to shape the digital market and offset the outsized power of big tech companies in various corners of our societies…(More)”.

Tech Strikes Back


Essay by Nadia Asparouhova: “A new tech ideology is ascendant online. “Introducing effective accelerationism,” the pseudonymous user Beff Jezos tweeted, rather grandly, in May 2022. “E/acc” — pronounced ee-ack — “is a direct product [of the] tech Twitter schizosphere,” he wrote. “We hope you join us in this new endeavour.”

The reaction from Jezos’s peers was a mix of positive, critical, and perplexed. “What the f*** is e/acc,” posted multiple users. “Accelerationism is unfortunately now just a buzzword,” sighed political scientist Samo Burja, referring to a related concept popularized around 2017. “I guess unavoidable for Twitter subcultures?” “These [people] are absolutely bonkers,” grumbled Timnit Gebru, an artificial intelligence researcher and activist who frequently criticizes the tech industry. “Their fanaticism + god complex is exhausting.”

Despite the criticism, e/acc persists, and is growing, in the tech hive mind. E/acc’s founders believe that the tech world has become captive to a monoculture. If it becomes paralyzed by a fear of the future, it will never produce meaningful benefits. Instead, e/acc encourages more ideas, more growth, more competition, more action. “Whether you’re building a family, a startup, a spaceship, a robot, or better energy policy, just build,” writes one anonymous poster. “Do something hard. Do it for everyone who comes next. That’s it. Existence will take care of the rest.”…(More)”.

AI cannot be used to deny health care coverage, feds clarify to insurers


Article by Beth Mole: “Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege…(More)”

We urgently need data for equitable personalized medicine


Article by Manuel Corpas: “…As a bioinformatician, I am now focusing my attention on gathering the statistics to show just how biased medical research data are. There are problems across the board, ranging from which research questions get asked in the first place, to who participates in clinical trials, to who gets their genomes sequenced. The world is moving toward “precision medicine,” where any individual can have their DNA analyzed and that information can be used to help prescribe the right drugs in the right dosages. But this won’t work if a person’s genetic variants have never been identified or studied in the first place.

It’s astonishing how powerful our genetics can be in mediating medicines. Take the gene CYP2D6, which is known to play a vital role in how fast humans metabolize 25 percent of all the pharmaceuticals on the market. If you have a genetic variant of CYP2D6 that makes you metabolize drugs more quickly, or less quickly, it can have a huge impact on how well those drugs work and the dangers you face from taking them. Codeine was banned from all of Ethiopia in 2015, for example, because a high proportion of people in the country (perhaps 30 percent) have a genetic variant of CYP2D6 that makes them quickly metabolize that drug into morphine, making it more likely to cause respiratory distress and even death…(More)”

Nobody knows how to audit AI


Axios: “Some legislators and experts are pushing independent auditing of AI systems to minimize risks and build trust, Ryan reports.

Why it matters: Consumers don’t trust big tech to self-regulate and government standards may come slowly or never.

The big picture: Failure to manage risk and articulate values early in the development of an AI system can lead to problems ranging from biased outcomes from unrepresentative data to lawsuits alleging stolen intellectual property.

Driving the news: Sen. John Hickenlooper (D-Colo.) announced in a speech on Monday that he will push for the auditing of AI systems, because AI models are using our data “in ways we never imagined and certainly never consented to.”

  • “We need qualified third parties to effectively audit generative AI systems,” Hickenlooper said. “We cannot rely on self-reporting alone.” We should trust but verify claims of compliance with federal laws and regulations, he said.

Catch up quick: The National Institute of Standards and Technology (NIST) developed an AI Risk Management Framework to help organizations think about and measure AI risks, but it does not certify or validate AI products.

  • President Biden’s executive order on AI mandated that NIST expand its support for generative AI developers and “create guidance and benchmarks for evaluating and auditing AI capabilities,” especially in risky areas such as cybersecurity and bioweapons.

What’s happening: A growing range of companies provide services that evaluate whether AI models are complying with local regulations or promises made by their developers — but some AI companies remain committed to their own internal risk research and processes.

  • NIST is only the “tip of the spear” in AI safety, Hickenlooper said. He now wants to establish criteria and a path to certification for third-party auditors.

The “Big Four” accounting firms — Deloitte, EY, KPMG and PwC — sense business opportunities in applying audit methodologies to AI systems, Nicola Morini Bianzino, EY’s global chief technology officer, tells Axios.

  • Morini Bianzino cautions that AI audits might “look more like risk management for a financial institution, as opposed to audit as a certifying mark. Because, honestly, I don’t know technically how we would do that.”
  • Laura Newinski, KPMG’s COO, tells Axios the firm is developing AI auditing services and “attestation about whether data sets are accurate and follow certain standards.”

Established players such as IBM and startups such as Credo provide AI governance dashboards that tell clients in real time where AI models could be causing problems — around data privacy, for example.

  • Anthropic believes NIST should focus on “building a robust and standardized benchmark for generative AI systems” that all private AI companies can adhere to.

Market leader OpenAI announced in October that it’s creating a “risk-informed development policy” and has invited experts to apply to join its OpenAI Red Teaming Network.

Yes, but: An AI audit industry without clear standards could be a recipe for confusion, both for corporate customers and consumers using AI…(More)”.

Revolutionizing Governance: AI-Driven Citizen Engagement


Article by Komal Goyal: “Government-citizen engagement has come a long way over the past decade, with governments increasingly adopting AI-powered analytics, automated processes and chatbots to engage with citizens and gain insights into their concerns. A 2023 Stanford University report found that the federal government spent $3.3 billion on AI in fiscal year 2022, highlighting the remarkable upswing in AI adoption across various government sectors.

As the demands of a digitally empowered and information-savvy society constantly evolve, it is becoming imperative for government agencies to revolutionize how they interact with their constituents. I’ll discuss how AI can help achieve this and pave the way for a more responsive, inclusive and effective form of governance…(More)”.

The story of the R number: How an obscure epidemiological figure took over our lives


Article by Gavin Freeguard: “Covid-19 did not only dominate our lives in April 2020. It also dominated the list of new words entered into the Oxford English Dictionary.

Alongside Covid-19 itself (noun, “An acute respiratory illness in humans caused by a coronavirus”), the vocabulary of the virus included “self-quarantine”, “social distancing”, “infodemic”, “flatten the curve”, “personal protective equipment”, “elbow bump”, “WFH” and much else. But nestled among this pantheon of new pandemic words was a number, one that would shape our conversations, our politics, our lives for the next 18 months like no other: “Basic reproduction number (R0): The average number of cases of an infectious disease arising by transmission from a single infected individual, in a population that has not previously encountered the disease.”
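The dictionary definition can be illustrated with a toy branching-process calculation: each infected individual causes R new cases on average, so expected case numbers multiply by R every generation. This is a deliberate simplification (real estimates account for generation-interval distributions and the depletion of susceptibles, and the effective R falls below R0 as immunity builds), but it shows why the threshold R = 1 mattered so much:

```python
def expected_cases(r, generations, seed=1):
    """Expected new cases per generation in a naive branching process,
    where each infected individual causes r new cases on average."""
    cases = [seed]
    for _ in range(generations):
        cases.append(cases[-1] * r)
    return cases

# R above 1: exponential growth from a single seed case.
print(expected_cases(2.0, 5))  # [1, 2.0, 4.0, 8.0, 16.0, 32.0]

# R below 1: each generation shrinks, and the outbreak fizzles out.
print(expected_cases(0.8, 5))
```

With R = 2 a single case becomes 32 expected cases after five generations; with R = 0.8 the same seed dwindles toward zero, which is why pushing R below 1 became the stated goal of lockdown policy.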


“There have been many important figures in this pandemic,” wrote The Times in January 2021, “but one has come to tower over the rest: the reproduction rate. The R number, as everyone calls it, has been used by the government to justify imposing and lifting lockdowns. Indeed while there are many important numbers — gross domestic product, parliamentary majorities, interest rates — few can compete right now with R” (tinyurl.com/v7j6cth9).

Descriptions of it at the start of the pandemic made R the star of the disaster movie reality we lived through. And it wasn’t just a breakout star of the UK’s coronavirus press conferences; in Germany, (then) Chancellor Angela Merkel made the most of her scientific background to explain the meaning of R and its consequences to the public (tinyurl.com/mva7urw5).

But for others, the “obsession” (Professor Linda Bauld, University of Edinburgh) with “the pandemic’s misunderstood metric” (Nature, tinyurl.com/y3sr6n6m) has been “a distraction”, an “unhelpful focus”; as the University of Edinburgh’s Professor Mark Woolhouse told one parliamentary select committee, “we’ve created a monster”.

How did this epidemiological number come to dominate our discourse? How useful is it? And where does it come from?…(More)”.

Why China Can’t Export Its Model of Surveillance


Article by Minxin Pei: “It’s Not the Tech That Empowers Big Brother in Beijing—It’s the Informants…Over the past two decades, Chinese leaders have built a high-tech surveillance system of seemingly extraordinary sophistication. Facial recognition software, Internet monitoring, and ubiquitous video cameras give the impression that the ruling Chinese Communist Party (CCP) has finally accomplished the dictator’s dream of building a surveillance state like the one imagined in George Orwell’s 1984…

A high-tech surveillance network now blankets the entire country, and the potency of this system was on full display in November 2022, when nationwide protests against China’s COVID lockdown shocked the party. Although the protesters were careful to conceal their faces with masks and hats, the police used mobile-phone location data to track them down. Mass arrests followed.

Beijing’s surveillance state is not only a technological feat. It also relies on a highly labor-intensive organization. Over the past eight decades, the CCP has constructed a vast network of millions of informers and spies whose often unpaid work has been critical to the regime’s survival. It is these men and women, more than cameras or artificial intelligence, that have allowed Beijing to suppress dissent. Without a network of this size, the system could not function. This means that, despite the party’s best efforts, the Chinese security apparatus is impossible to export…(More)”.