AI cannot be used to deny health care coverage, feds clarify to insurers


Article by Beth Mole: “Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege…(More)”

Nobody knows how to audit AI


Axios: “Some legislators and experts are pushing independent auditing of AI systems to minimize risks and build trust, Ryan reports.

Why it matters: Consumers don’t trust big tech to self-regulate, and government standards may come slowly or never.

The big picture: Failure to manage risk and articulate values early in the development of an AI system can lead to problems ranging from biased outcomes from unrepresentative data to lawsuits alleging stolen intellectual property.

Driving the news: Sen. John Hickenlooper (D-Colo.) announced in a speech on Monday that he will push for the auditing of AI systems, because AI models are using our data “in ways we never imagined and certainly never consented to.”

  • “We need qualified third parties to effectively audit generative AI systems,” Hickenlooper said. “We cannot rely on self-reporting alone. We should trust but verify” claims of compliance with federal laws and regulations, he said.

Catch up quick: The National Institute of Standards and Technology (NIST) developed an AI Risk Management Framework to help organizations think about and measure AI risks, but it does not certify or validate AI products.

  • President Biden’s executive order on AI mandated that NIST expand its support for generative AI developers and “create guidance and benchmarks for evaluating and auditing AI capabilities,” especially in risky areas such as cybersecurity and bioweapons.

What’s happening: A growing range of companies provide services that evaluate whether AI models are complying with local regulations or promises made by their developers — but some AI companies remain committed to their own internal risk research and processes.

  • NIST is only the “tip of the spear” in AI safety, Hickenlooper said. He now wants to establish criteria and a path to certification for third-party auditors.

The “Big Four” accounting firms — Deloitte, EY, KPMG and PwC — sense business opportunities in applying audit methodologies to AI systems, Nicola Morini Bianzino, EY’s global chief technology officer, tells Axios.

  • Morini Bianzino cautions that AI audits might “look more like risk management for a financial institution, as opposed to audit as a certifying mark. Because, honestly, I don’t know technically how we would do that.”
  • Laura Newinski, KPMG’s COO, tells Axios the firm is developing AI auditing services and “attestation about whether data sets are accurate and follow certain standards.”

Established players such as IBM and startups such as Credo provide AI governance dashboards that tell clients in real time where AI models could be causing problems — around data privacy, for example.

  • Anthropic believes NIST should focus on “building a robust and standardized benchmark for generative AI systems” that all private AI companies can adhere to.

Market leader OpenAI announced in October that it’s creating a “risk-informed development policy” and has invited experts to apply to join its OpenAI Red Teaming Network.

Yes, but: An AI audit industry without clear standards could be a recipe for confusion, both for corporate customers and consumers using AI…(More)”.

Future-Proofing Transparency: Re-Thinking Public Record Governance For the Age of Big Data


Paper by Beatriz Botero Arcila: “Public records, public deeds, and even open data portals often include personal information that can now be easily accessed online. Yet, for all the recent attention given to informational privacy and data protection, scant literature exists on the governance of personal information that is available in public documents. This Article examines the critical issue of balancing privacy and transparency within public record governance in the age of Big Data.

With Big Data and powerful machine learning algorithms, personal information in public records can easily be used to infer sensitive data about people or aggregated to create a comprehensive personal profile of almost anyone. This information is public and open, however, for many good reasons: ensuring political accountability, facilitating democratic participation, enabling economic transactions, and combating illegal activities such as money laundering and terrorism financing. Can the interest in record publicity coexist with the growing ease of deanonymizing and revealing sensitive information about individuals?

This Article addresses this question from a comparative perspective, focusing on US and EU access to information law. The Article shows that, notwithstanding the presumptively public nature of these records, the personal information they contain was in practice protected in the past because most people would not trouble themselves to go to public offices to review them, and it was practically impossible to aggregate them to draw extensive profiles of individuals. Drawing from this insight and contemporary debates on data governance, this Article challenges the binary classification of data as either published or not and proposes a risk-based framework that re-inserts that natural friction into public record governance by leveraging techno-legal methods in how information is published and accessed…(More)”.
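To make the aggregation risk concrete, here is a toy sketch (our illustration, not the Article’s): three fictional public sources are joined on a shared name to assemble a single profile, the kind of linkage that was once impractical but is now trivial.

```python
# Hypothetical illustration: joining fields from separate public records
# on a shared identifier to build a profile. All data here is invented.
voter_roll = [
    {"name": "J. Doe", "address": "12 Elm St", "birth_year": 1974},
]
property_deeds = [
    {"owner": "J. Doe", "address": "12 Elm St", "sale_price": 410_000},
]
court_dockets = [
    {"party": "J. Doe", "case_type": "small claims", "year": 2019},
]

def build_profile(name: str) -> dict:
    """Aggregate every record mentioning `name` into one profile."""
    profile = {"name": name, "records": []}
    for source, records, key in [
        ("voter_roll", voter_roll, "name"),
        ("property_deeds", property_deeds, "owner"),
        ("court_dockets", court_dockets, "party"),
    ]:
        profile["records"] += [
            {"source": source, **r} for r in records if r[key] == name
        ]
    return profile

print(build_profile("J. Doe"))
```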

Creating Real Value: Skills Data in Learning and Employment Records


Article by Nora Heffernan: “Over the last few months, I’ve asked the same question to corporate leaders from human resources, talent acquisition, learning and development, and management backgrounds. The question is this:

What kind of data needs to be included in learning and employment records to be of greatest value to you in your role and to your organization?

By data, I’m talking about credential attainment, employment history, and, emphatically, verified skills data: showing at an individual level what a candidate or employee knows and is able to do.

The answer varies slightly by industry and position, but unanimously, the employers I’ve talked to would find the greatest value in utilizing learning and employment records that include verified skills data. There is no equivocation.

And as the national conversation about skills-first talent management continues to ramp up, with half of companies indicating they plan to eliminate degree requirements for some jobs in the next year, the call for verified skills data will only get louder. Employers value skills data for multiple reasons…(More)”.
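As a rough sketch of what such a record might carry, the snippet below shows a hypothetical learning and employment record combining the three data types named above; the field names and values are illustrative assumptions, not any published LER schema.

```python
import json

# Hypothetical learning and employment record (LER) combining credential
# attainment, employment history, and verified skills data. Field names
# are illustrative assumptions, not a published standard.
ler = {
    "holder": "candidate-123",
    "credentials": [
        {"title": "Associate Degree, Network Administration",
         "issuer": "Example Community College", "year": 2021},
    ],
    "employment_history": [
        {"employer": "Example Corp", "role": "IT Support Specialist",
         "start": "2021-06", "end": "2023-08"},
    ],
    "verified_skills": [
        # Each entry records what the person can do and who verified it.
        {"skill": "Network troubleshooting",
         "evidence": "performance assessment",
         "verified_by": "Example Corp"},
    ],
}

print(json.dumps(ler, indent=2))
```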

Name Your Industry—or Else!


Essay by Sarah M. Brownsberger on “The dehumanizing way economics data describes us”: “…My alma mater wants to know what industry I belong to. In a wash of good feeling after seeing old friends, I have gone to the school website to update my contact information. Name and address, easy, marital status, well and good—but next comes a drop-down menu asking for my “industry.”

In my surprise, I have an impulse to type “Where the bee sucks, there suck I!” But you can’t quote Shakespeare in a drop-down menu. You can only opt for its options.

The school is certainly cutting-edge. Like a fashion item that you see once and assume is aberrant and then see ten times in a week, the word “industry” is all over town. Cryptocurrency is an industry. So are Elvis-themed marriages. Outdoor recreation is an industry. A brewery in my city hosts “Industry Night,” a happy hour “for those who work in the industry”—tapsters and servers.

Are we all in an industry? What happened to “occupation”?…(More)”.

Integrating Participatory Budgeting and Institutionalized Citizens’ Assemblies: A Community-Driven Perspective


Article by Nick Vlahos: “There is a growing excitement in the democracy field about the potential of citizens’ assemblies (CAs), a practice that brings together groups of residents selected by lottery to deliberate on public policy issues. There is longitudinal evidence to suggest that deliberative mini-publics such as those who meet in CAs can be transformative when it comes to adding more nuance to public opinion on complex and potentially polarizing issues.

But there are two common critiques of CAs. The first is that they are not connected to centers of power (with very few notable exceptions) and don’t have authority to make binding decisions. The second is that they are often disconnected from the broader public, and indeed often claim to be making their own, new “publics” instead of engaging with existing ones.

In this article I propose that proponents of CAs could benefit from the thirty-year history of another democratic innovation—participatory budgeting (PB). There are nearly 12,000 recorded instances of PB to draw learnings from. I see value in both innovations (and have advocated and written about both) and would be interested to see some sort of experimentation that combines PB and CAs, from a decentralized, bottom-up, community-driven approach.

We can and should think about grassroots ways to scale and connect people across geography using combinations of democratic innovations, which along the way builds up (local) civic infrastructure by drawing from existing civic capital (resident-led groups, non-profits, service providers, social movements/mobilization etc.)…(More)”.

Facial Recognition: Current Capabilities, Future Prospects, and Governance


A National Academies of Sciences, Engineering, and Medicine study: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

Representative Bodies in the Age of AI


Report by POPVOX: “The report tracks current developments in the U.S. Congress and internationally, while assessing the prospects for future innovations. The report also serves as a primer for those in Congress on AI technologies and methods in an effort to promote responsible use and adoption. POPVOX endorses a considered, step-wise strategy for AI experimentation, underscoring the importance of capacity building, data stewardship, ethical frameworks, and insights gleaned from global precedents of AI in parliamentary functions. This ensures AI solutions are crafted with human discernment and supervision at their core.

Legislatures worldwide are progressively embracing AI tools such as machine learning, natural language processing, and computer vision to refine the precision, efficiency, and, to a small extent, the participatory aspects of their operations. The advent of generative AI platforms, such as ChatGPT, which excel in interpreting and organizing textual data, marks a transformative shift for the legislative process, inherently a task of converting rules into language.

While nations such as Brazil, India, Italy, and Estonia lead with applications ranging from the transcription and translation of parliamentary proceedings to enhanced bill drafting and sophisticated legislative record searches, the U.S. Congress is prudently venturing into the realm of Generative AI. The House and Senate have initiated AI working groups and secured licenses for platforms like ChatGPT. They have also issued guidance on responsible use…(More)”.
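One application named above, sophisticated legislative record search, can be sketched generically. The snippet below ranks a handful of invented records by TF-IDF similarity to a query; it is a minimal illustration only and does not reflect how any legislature actually implements search.

```python
# Minimal sketch of text search over legislative records using TF-IDF
# similarity. Records and query are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = [
    "A bill to expand rural broadband infrastructure grants.",
    "A resolution recognizing National Teacher Appreciation Week.",
    "An act amending procurement rules for state agencies.",
]

vectorizer = TfidfVectorizer(stop_words="english")
record_matrix = vectorizer.fit_transform(records)

def search(query: str, top_k: int = 2):
    """Rank records by cosine similarity to the query text."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, record_matrix).ravel()
    return sorted(zip(scores, records), reverse=True)[:top_k]

for score, text in search("broadband funding for rural areas"):
    print(f"{score:.2f}  {text}")
```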

Ground Truths Are Human Constructions


Article by Florian Jaton: “Artificial intelligence algorithms are human-made, cultural constructs, something I saw first-hand as a scholar and technician embedded with AI teams for 30 months. Among the many concrete practices and materials these algorithms need in order to come into existence are sets of numerical values that enable machine learning. These referential repositories are often called “ground truths,” and when computer scientists construct or use these datasets to design new algorithms and attest to their efficiency, the process is called “ground-truthing.”

Understanding how ground-truthing works can reveal inherent limitations of algorithms—how they enable the spread of false information, pass biased judgments, or otherwise erode society’s agency—and this could also catalyze more thoughtful regulation. As long as ground-truthing remains clouded and abstract, society will struggle to prevent algorithms from causing harm and to optimize algorithms for the greater good.

Ground-truth datasets define AI algorithms’ fundamental goal of reliably predicting and generating a specific output—say, an image with requested specifications that resembles other input, such as web-crawled images. In other words, ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them…(More)”.
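To make the ground-truthing loop concrete, here is a minimal, invented sketch: a hand-labeled dataset serves both as the training signal and as the reference against which the resulting model’s accuracy is attested, so the model can only be as good (or as biased) as the labels the team chose.

```python
# Minimal sketch of ground-truthing: human-chosen labels both train the
# model and serve as the reference its accuracy is scored against.
# Texts and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

texts = ["great service", "terrible delay", "loved it", "awful support",
         "fantastic quality", "broken on arrival", "works perfectly",
         "never again"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative (a team's choice)

X = CountVectorizer().fit_transform(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# "Attesting to efficiency" here means scoring predictions against the
# same human-made labels, held out from training.
print("accuracy vs. ground truth:",
      accuracy_score(y_test, model.predict(X_test)))
```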

In shaping AI policy, stories about social impacts are just as important as expert information


Blog by Daniel S. Schiff and Kaylyn Jackson Schiff: “Will artificial intelligence (AI) save the world or destroy it? Will it lead to the end of manual labor and an era of leisure and luxury, or to more surveillance and job insecurity? Is it the start of a revolution in innovation that will transform the economy for the better? Or does it represent a novel threat to human rights?

Irrespective of what turns out to be the truth, what our key policymakers believe about these questions matters. It will shape how they think about the underlying problems that AI policy is aiming to address, and which solutions are appropriate for addressing them. …In late 2021, we ran a study to better understand the impact of policy narratives on the behavior of policymakers. We focused on US state legislators…

In our analysis, we found something surprising. We measured whether legislators were more likely to engage with a message featuring a narrative or featuring expert information, which we assessed by seeing if they clicked on a given fact sheet/story or clicked to register for or attended the webinar.

Despite the importance attached to technical expertise in AI circles, we found that narratives were at least as persuasive as expert information. Receiving a narrative emphasizing, say, growing competition between the US and China, or the faulty arrest of Robert Williams due to facial recognition, led to a 30 percent increase in legislator engagement compared to legislators who only received basic information about the civil society organization. These narratives were just as effective as more neutral, fact-based information about AI with accompanying fact sheets…(More)”