Humana also using AI tool with 90% error rate to deny care, lawsuit claims

Key Points:

  • Humana is facing a lawsuit for allegedly using an AI model with a reported 90 percent error rate to deny care to elderly beneficiaries under its Medicare Advantage plans.
  • The plaintiffs claim that the AI model does not take into account the full circumstances of each patient and their doctors’ recommendations, resulting in unfair denial of necessary care.
  • The lawsuit seeks damages for financial losses and emotional distress and aims to prevent Humana from using the AI-based model to deny claims.

Summary:

Humana, a major health insurance provider, is facing a lawsuit for allegedly using an AI model with a 90 percent error rate to deny care to elderly people under its Medicare Advantage plans. The lawsuit claims that the use of this AI model constitutes a fraudulent scheme that generates financial gain for the company while leaving elderly beneficiaries with overwhelming medical debt or without necessary care. The plaintiffs argue that the AI model fails to consider the full circumstances of each patient and their doctors’ recommendations, leading to rigid and inflexible predictions. Despite the model’s inaccuracy, the suit alleges, Humana and other insurers continue to use it to avoid paying for covered care and increase profits. The lawsuit seeks damages for financial losses and emotional distress and aims to prevent Humana from using the AI-based model to deny claims.


©2024 The Horizon