When the Algorithm Overrules the Doctor: How UnitedHealthcare Built a Denial Machine
The Prediction That Became a Weapon
UnitedHealth Group's subsidiary Optum developed nH Predict through its 2020 acquisition of naviHealth. The stated purpose: predict appropriate length of stay in rehabilitation facilities for Medicare Advantage patients.
The algorithm generates recommendations that supersede physician medical judgment. When a patient's actual stay runs longer than the prediction, the gap triggers automatic claim denials.
The targets: elderly patients (65+) seeking post-acute care—rehabilitation facility stays, skilled nursing, home health. Employees were instructed to keep stays within 1% of the algorithm's predicted length.
The Confidence Problem in Action
The devastating statistic: 90% of nH Predict denials that are appealed get reversed. Nine out of ten times a patient contests an algorithm-driven denial, physicians and administrative law judges determine coverage should have been provided.
But only 0.2% of affected patients file appeals. This is the hidden catastrophe: 99.8% accept the denial without contesting it.
The math is simple. The algorithm is wrong 90% of the time on contested cases. Most people never contest. The system's metrics said it was working—shorter stays, lower costs. The actual judgment quality was catastrophic.
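That math can be made concrete. The sketch below uses only the figures cited above (90% reversal rate on appeals, 0.2% appeal rate); the cohort of 10,000 denials is a hypothetical number chosen purely for illustration, and the final line rests on the loudly flagged assumption that appealed cases are representative of all denials.

```python
# Back-of-envelope appeal math. Cohort size is hypothetical;
# the two rates come from the statistics cited in the article.
denials = 10_000          # hypothetical cohort of algorithm-driven denials
appeal_rate = 0.002       # 0.2% of affected patients file appeals
reversal_rate = 0.90      # 90% of appealed denials are reversed

appealed = denials * appeal_rate                # 20 denials contested
reversed_on_appeal = appealed * reversal_rate   # 18 denials overturned

# BIG ASSUMPTION: if the contested cases were representative of all
# denials, roughly 90% of the whole cohort would have been wrongly
# denied -- yet only the appealed fraction ever gets corrected.
implied_wrongful = denials * reversal_rate

print(f"{appealed:.0f} appealed, {reversed_on_appeal:.0f} reversed")
print(f"Implied wrongful denials (if appeals are representative): {implied_wrongful:.0f}")
```

Under that assumption, roughly 9,000 wrongful denials in the cohort, of which 18 are ever fixed.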
What Denial Looks Like
Elderly patients prematurely discharged from rehabilitation facilities. Families forced to pay out of pocket for continued care. Medical deterioration documented from premature discharge. Cases where denial of coverage led to patient death.
The denial-appeal-denial loop: when a patient appeals, UnitedHealthcare immediately issues another denial letter. Patients get trapped until they exhaust appeals or run out of money.
"Claims that naviHealth is used to make adverse benefit or coverage decisions are false. Medical necessity determinations are made by qualified physicians following CMS guidance—not AI."
Optum Official Statement
The court's discovery order suggests the evidence may tell a different story. Internal documents show employees were pressured to implement denials to hit cost targets.
The Values Gate Failure
This case illustrates the most consequential failure mode in the Judgment Architecture framework: the Values Gate.
The question: "Does this automation align with our stated commitments?" UnitedHealthcare's stated commitment is patient care. The algorithm's purpose was cost optimization. Those two things are in direct conflict.
The Framework Failure
UnitedHealthcare measured algorithm adherence (stay within 1% of prediction) rather than claim accuracy (were denials medically appropriate?). This inverted metric system optimized for cost reduction, not patient outcomes. That's the Confidence Problem in its purest form.
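The inversion is easy to demonstrate. The sketch below uses entirely made-up case records (not real claims data) to show how a dashboard tracking "stay within 1% of prediction" can report near-perfect performance while a medical-appropriateness metric on the same cases tells the opposite story.

```python
# Illustrative only: hypothetical case records, not real claims data.
# Each case: (predicted_days, actual_days, denial_was_medically_appropriate)
cases = [
    (20, 20, False),  # stay forced to match prediction; cutoff was inappropriate
    (15, 15, False),
    (30, 30, False),
    (25, 25, True),   # one case where ending coverage was defensible
    (18, 18, False),
]

# The metric the insurer tracked: did the stay land within 1% of the prediction?
adherence = sum(abs(a - p) / p <= 0.01 for p, a, _ in cases) / len(cases)

# The metric the framework demands: were the denials medically appropriate?
accuracy = sum(ok for _, _, ok in cases) / len(cases)

print(f"Algorithm adherence: {adherence:.0%}")  # reads as success
print(f"Denial accuracy:     {accuracy:.0%}")   # reveals the failure
```

On this toy data the adherence metric reports 100% while denial accuracy is 20%: optimizing the first says nothing about the second.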
The Legal Consequence
Case: Estate of Gene B. Lokken v. UnitedHealth Group (Case No. 0:23-cv-03514). Filed November 2023. In February 2025, the court denied UnitedHealth's motion to dismiss and allowed breach of contract and good faith claims to proceed.
The plaintiffs seek to represent all Medicare Advantage beneficiaries denied post-acute care coverage via nH Predict. That's potentially hundreds of thousands of patients.
Discovery has been ordered: all documents on nH Predict development, testing, deployment, internal communications on accuracy, naviHealth acquisition documents, government investigation documents. STAT investigation revealed internal pressure to use the algorithm for denials.
The question the framework forces is not whether AI can predict rehabilitation length. It can. The question is whether that prediction should override the physician standing at the patient's bedside. UnitedHealthcare answered yes. The courts are answering differently.
UnitedHealth AI Coverage Denial: Judge Orders Broad Discovery · Becker's Payer
Class Action Against UnitedHealth's AI Claim Denials Advances · Healthcare Finance News
UnitedHealth Lawsuit: AI Used to Deny Medicare Advantage Claims · CBS News
STAT Investigation: UnitedHealth AI and Insurance Claims · WBUR/On Point
Fact Check: United Healthcare AI Denied Claims · Snopes
Explore the Framework
The Judgment Architecture is a model for understanding where AI succeeds, where it fails, and where it must never go. Learn how to apply it to your organization.