
Artificial Intelligence (AI) is here, and it is reshaping every industry, healthcare included. We need to understand and embrace this change. This blog walks through the gains, the pitfalls, and the opportunities so you can track the trends and weigh the challenges against the possibilities.
Imagine a transformed healthcare world: administrative tasks become effortless, diagnoses arrive faster, and patient care is personalized. This isn't just a dream; it's what AI offers today.
AI can boost diagnostic accuracy. Algorithms analyze vast amounts of data, including images and genetics, and find patterns clinicians often miss. That means earlier detection, treatments tailored to the patient, and significantly better outcomes. Operations also become smoother: AI automates tasks like scheduling, billing, and records, freeing staff to focus on direct patient care. AI also offers predictive power, foreseeing patient decline, identifying high-risk groups, and even forecasting disease outbreaks. Finally, drug discovery speeds up, so new treatments reach patients sooner.
AI offers real advantages, but implementation carries risks, and ignoring them increases your organization's liability. Knowing the risks is the first step to managing them.
Scenario: An 85-year-old patient with a complex injury needed weeks of rehab. Her doctors objected. However, the insurer's AI system, like UnitedHealthcare's NaviHealth, predicted a short stay and then automatically cut off payment for further facility days. The decision rested solely on the algorithm, overriding the treating physicians' judgment. This systematic "batch denial" puts the patient's health and finances at high risk.
Specifics: An 85-year-old woman with a shattered shoulder was projected by an insurer-used algorithm (from NaviHealth) to recover in 6 days; on day 17 her plan cut off nursing-home payments despite treating clinicians saying she still needed rehab. A federal judge later called the denial “at best, speculative,” and she had to spend down savings while appealing.
This case reflects a broader pattern in which Medicare Advantage insurers deploy predictive tools (e.g., UnitedHealth’s NaviHealth nH Predict) to forecast length of stay and then align coverage to the prediction, sometimes pressuring staff to keep actual days within ~1% of the algorithm’s target—effectively overriding treating physicians’ judgment.
Regulators have since clarified that algorithms cannot be the sole basis for terminating post-acute services or denying inpatient stays; plans must assess the individual patient’s condition and follow Traditional Medicare coverage criteria. (CMS FAQ, Feb. 6, 2024.)
UnitedHealth faces a proposed class action alleging wrongful denials tied to nH Predict; in Feb. 2025 a judge allowed key claims (e.g., breach of contract/good faith) to proceed, and in Sept. 2025 rejected UnitedHealth’s bid to limit discovery.
Bottom line: Automated, “batch-style” cutoffs driven by LOS predictions can place patients’ health and finances at risk; current CMS guidance requires human, patient-specific review and prohibits using algorithmic predictions alone to stop coverage. (CMS FAQ; STAT.)
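For organizations that build or audit these workflows, the compliant pattern can be written as a simple gate: the model's prediction may inform a review, but only a documented, patient-specific human assessment can end coverage. The sketch below is illustrative only; the class and field names are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LosPrediction:
    """Length-of-stay model output (hypothetical structure)."""
    predicted_days: int
    confidence: float

@dataclass
class ClinicianReview:
    """Documented, patient-specific assessment by a human reviewer."""
    reviewer_id: str
    still_meets_coverage_criteria: bool
    rationale: str

def may_terminate_coverage(prediction: LosPrediction,
                           actual_days: int,
                           review: Optional[ClinicianReview]) -> bool:
    """Coverage may end only when an individualized human review finds the
    patient no longer meets coverage criteria; the predicted stay alone is
    never a sufficient basis (per the CMS guidance cited above)."""
    if review is None:
        return False  # no patient-specific review, no termination
    return not review.still_meets_coverage_criteria
```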
Scenario: A physician uses an AI diagnostic tool for a chest X-ray. The AI reports “low probability of pneumonia.” The physician, trusting the tech, signs off without thoroughly reviewing the patient’s high fever and cough. Days later, the patient is readmitted with severe pneumonia. The physician is liable because they failed to integrate the AI’s finding with obvious, real-world clinical evidence. You are responsible for the final medical decision.
Specifics: A large, multi-site study showed that when an AI assistant highlighted “no pneumonia” (or otherwise gave incorrect advice) on chest X-rays, physicians’ diagnostic accuracy collapsed to ~24–26%—clear evidence of automation bias. In other words, clinicians often accepted a wrong AI impression instead of reconciling it with the patient’s presentation.
Complementing this, a JAMA randomized vignette study used real cases of acute respiratory failure with chest radiographs; it built a deliberately biased model that mislabeled older patients as “pneumonia.” Exposure to the biased model reduced clinician accuracy by ~9–11 percentage points, and explanations didn’t fix the error.
Why this creates liability. U.S. medico-legal scholarship is clear: with assistive AI (like CXR triage/“probability” tools), the physician remains responsible for the final decision and can be negligent for failing to integrate obvious clinical facts (fever, cough, hypoxia) with an AI read.
Regulatory context. FDA’s final Clinical Decision Support guidance says clinician-facing software must let the HCP independently review the basis for a recommendation so they do not rely primarily on the software (often called “Criterion 4”).
Pattern, not a one-off. Patient-safety bodies have flagged AI misuse in care as a top hazard for 2025, emphasizing governance and documentation to prevent misdiagnosis from automation bias.
Bottom line. Chest-X-ray AI can be helpful, but the standard of care requires reconciling any “low-probability” output with obvious clinical evidence. Failure to do so—especially when classic pneumonia signs are present—risks a negligent-diagnosis claim because you are responsible for the final medical decision, not the algorithm.
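One practical safeguard is a documented discordance check before sign-off: if the AI calls the film "low probability" while the chart shows classic pneumonia signs, force an explicit reconciliation note. The sketch below is a minimal illustration; the threshold and finding names are made up, not a validated clinical rule.

```python
def flag_discordant_ai_read(ai_pneumonia_probability: float,
                            clinical_findings: list) -> bool:
    """Return True when a 'low probability' AI read conflicts with classic
    clinical signs and therefore needs explicit physician reconciliation."""
    classic_signs = {"fever", "productive cough", "hypoxia", "crackles"}
    present = classic_signs.intersection(f.lower() for f in clinical_findings)
    return ai_pneumonia_probability < 0.2 and len(present) >= 2

# Example: the AI reports "low probability" but the patient is febrile and coughing.
if flag_discordant_ai_read(0.08, ["Fever", "Productive cough"]):
    print("Discordance flagged: document reconciliation before signing off.")
```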
Scenario: A hospital system, like the University of Colorado Health (UCHealth) in a past settlement, used an automated coding rule. This rule assigned a high-level emergency department code if monitoring frequency exceeded the visit duration. This logic did not meet the actual billing requirements. The algorithm automatically upcoded thousands of claims. The organization paid a $23 million settlement for False Claims Act violations. The provider still owns the accuracy, even if AI suggested the code.
Specifics: In 2024, University of Colorado Health (UCHealth) agreed to pay $23 million to resolve U.S. Department of Justice allegations that several of its hospitals falsely upcoded emergency-department facility E/M claims to the highest level. See the DOJ announcement and the signed Settlement Agreement (PDF) for the covered period (Nov. 1, 2017–Mar. 31, 2021) and payment terms (including $11.5M in restitution).
The automated rule that triggered the upcoding. UCHealth’s system automatically assigned CPT 99285 (ED level 5) “whenever health care providers… checked a set of the patient’s vital signs more times than total hours that the patient was present in the ED,” except when the stay was under 60 minutes—regardless of clinical severity or resources used. UCHealth internally called this the “frequent monitoring of vital signs” rule.
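Expressed as code, the rule described in the settlement looks roughly like this (our reconstruction for illustration only, not UCHealth's actual implementation):

```python
from typing import Optional

def frequent_vitals_rule(vital_sign_checks: int,
                         hours_in_ed: float) -> Optional[str]:
    """The 'frequent monitoring of vital signs' logic as described in the
    settlement: assign ED level 5 (CPT 99285) whenever vital-sign checks
    exceed the hours the patient spent in the ED, except for stays under
    60 minutes. Note what the rule never considers: clinical severity and
    the hospital resources actually used, which is what CMS's standard
    requires the level to reflect."""
    if hours_in_ed < 1.0:
        return None  # rule did not apply to stays under 60 minutes
    if vital_sign_checks > hours_in_ed:
        return "99285"  # upcoded regardless of severity or resource use
    return None
```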
Why that logic was improper. Under CMS’s OPPS policy, ED facility E/M levels must “reasonably relate the intensity of hospital resources to the different levels of effort”—not hinge on a single proxy like vitals frequency. The settlement quotes the federal standard (72 Fed. Reg. 66579, 66805–06 (Nov. 27, 2007)).
Red flags and outcome. DOJ alleges UCHealth received coder complaints and CMS “High Outlier” notices for 99285 usage but did not fix the rule system-wide. UCHealth settled without admitting liability; HHS-OIG noted UCHealth declined a Corporate Integrity Agreement and reserved the right to pursue exclusion.
Why it matters (even if a tool/“AI” made the suggestion). Federal guidance and FCA case law stress that providers are responsible for the accuracy and medical necessity of claims, including those generated by automated or AI-assisted coding tools.
Bottom line: UCHealth’s “frequent vital-sign checks = level-5 ED visit” automation did not match CMS’s resource-intensity standard and led to thousands of upcoded claims—ending in a $23M FCA settlement. Even when software or “AI” proposes a code, the provider owns the final code selection and documentation and must ensure it aligns with actual resource use and medical necessity.
Scenario: A hospital uses a complex AI system to triage cancer patients for trials. A patient is told they are ineligible. When the patient asks why, the doctor can only say, “The AI system decided you didn’t meet the criteria.” This violates the patient’s right to an explanation. The hospital failed to ensure the AI system was explainable enough to communicate the reasoning.
Specifics: Hospitals are increasingly using AI to screen cancer patients for clinical trials. If a patient is told “you’re ineligible because the AI said so,” with no further explanation, that’s a legal and ethical red flag. While there isn’t a publicly documented case exactly like that, a closely analogous enforcement action shows the risk of opacity: the UK data regulator found that the Royal Free NHS Trust’s deployment with Google DeepMind wasn’t transparent to patients and therefore breached data-protection law—illustrating that hospitals must be able to explain how patient data is used and how algorithmic tools inform care.
AI trial-matching is real and widespread—e.g., Mayo Clinic’s program using IBM’s trial-matching tool and academic centers rolling out Deep6 AI for protocol matching—so the explainability duty isn’t hypothetical (Mayo Clinic; OSUCCC/Deep6 AI implementation PDF). Newer systems from NIH, such as TrialGPT, even emphasize producing plain-language rationales for eligibility, underscoring that explainable outputs are feasible.
In the U.S., ONC’s HTI-1 Final Rule now requires EHRs to surface “source attributes” and other transparency details for predictive decision-support interventions so users can understand model logic and limits—making “because the AI said so” unacceptable in certified systems (Federal Register summary; 45 C.F.R. §170.315(b)(11)). Separately, HHS’s Section 1557 nondiscrimination rulemaking highlights duties to manage algorithmic bias and communicate appropriately with patients (overview).
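In practice, meeting these transparency expectations means the system can surface what the model is, what it uses, and where it falls short, and that staff can turn that into a plain-language explanation. The sketch below is purely illustrative; the field names are examples we chose, not the regulation's actual source-attribute list or any vendor's schema.

```python
# Illustrative transparency record for a predictive decision-support tool.
# Field names and contents are hypothetical examples, not the HTI-1 list.
trial_screener_card = {
    "intervention_name": "Oncology trial eligibility screener (hypothetical)",
    "developer": "Example Vendor, Inc.",
    "intended_use": "Rank patients for trial screening; not a final eligibility decision",
    "key_inputs": ["diagnosis codes", "lab values", "prior treatments"],
    "known_limitations": "Lower accuracy for rare tumor types; trained on single-region data",
    "human_oversight": "A study coordinator reviews every exclusion before the patient is told",
}

def explain_exclusion(card: dict, flagged_criteria: list) -> str:
    """Compose a plain-language explanation instead of 'the AI decided.'"""
    return (f"You were screened with {card['intervention_name']}. "
            f"It flagged these criteria for review: {', '.join(flagged_criteria)}. "
            f"A coordinator will verify this against your records before any final decision.")

print(explain_exclusion(trial_screener_card, ["prior chemotherapy within 6 months"]))
```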
In the EU, the GDPR protects people from decisions based solely on automated processing that significantly affect them and gives access to “meaningful information about the logic involved.” Recent Court of Justice guidance clarified that controllers must describe the procedure and principles actually applied so the person can understand how their data influenced the result—trade-secret concerns don’t erase that duty (GDPR Art. 22 / Art. 15; CJEU 2025 explanation ruling). The EU AI Act adds transparency and human-oversight obligations for high-risk medical AI, reinforcing that black-box denials are out of bounds (Parliament explainer).
Bottom line: A hospital that can’t explain to a patient why an AI triage system deemed them ineligible risks breaching modern transparency requirements (U.S. HTI-1 and EU GDPR/AI Act) and repeating the kind of opacity regulators already penalized in the Royal Free–DeepMind case. Clinicians—and organizations—remain responsible for being able to articulate the reasoning and for ensuring human review, not deferring to a black box.
Scenario: A physician uses an AI scribe. The patient discusses job anxiety. The AI note, however, includes the sentence: “Patient reports past history of child sexual abuse, which may contribute to current anxiety.” The physician misses this “hallucination” and signs off. A false and highly sensitive fabrication is entered into the permanent legal medical record. This is a liability nightmare based on inaccurate documentation.
Specifics: In March 2025 reporting, therapists using Alma’s “Note Assist” AI note-taker (powered by vendor Upheal) described AI-generated progress notes that inserted serious false statements—including a line that a client had a past history of child sexual abuse—when no such disclosure occurred. Alma confirmed an AI “hallucination” issue in early December 2024, said it affected roughly 1% of generated notes since launch, and stated it was fixed within two business days. If a clinician signs such a note without catching the error, the fabrication becomes part of the legal medical record.
Why it matters (liability). Legal analysis in JAMA Network Open emphasizes that clinicians—not the vendor—are responsible for the accuracy of documentation. If an ambient listening/scribe tool misdocuments and the clinician fails to correct it before sign-off, the clinician can be held professionally liable. These tools also are typically outside FDA review, and hallucinations remain a known risk.
Technical backdrop. Peer-reviewed research on AI speech-to-text shows that systems can fabricate entire sentences that were never spoken (≈1% of transcripts in one evaluation of Whisper), with 38% of hallucinations containing harmful content—illustrating how a false abuse history can plausibly arise from transcription-plus-summarization pipelines.
Patient rights and remediation. Under HIPAA, patients have the right to request amendment of inaccurate PHI in the designated record set; covered entities must respond (generally within 60 days) and append corrections or note disagreements.
Risk-management pointers. Medical-liability experts advise explicit clinician review before sign-off, clear authorship, and avoiding auto-signatures; FSMB guidance underscores that physicians remain fully responsible for AI-generated documentation.
Bottom line. A documented real-world case shows an AI scribe inserting a fabricated sexual-abuse history into clinical notes. Clinicians who rely on such drafts without thorough review risk serious patient harm and potential liability; organizations should pair deployments with rigorous review policies and clear amendment pathways.
Scenario: A large hospital implements an AI platform to automate RCM tasks like prior authorization and appeals. Leadership lays off 30% of human billers and schedulers. Soon after, a payer rule change or complex cases overwhelm the AI. Denials spike, appeals back up, and the hospital’s cash flow suffers. The organization lost the human expertise needed to handle the exceptions that the AI couldn’t manage.
Specifics: In 2025, Revere Health (100+ clinics) announced a partnership with IKS Health to bring “machine learning and automated claims processing” for billing, collections and denial prevention, and concurrently laid off 177 nonclinical staff from its back office. Local and trade press explicitly linked the workforce reduction to the automation initiative.
Why this maps to the scenario above. Revere's move is a concrete instance of RCM AI/automation replacing human billers/schedulers at scale—exactly the organizational posture that becomes fragile when payer rules change or edge cases pile up (e.g., prior auth nuances, payer-specific edits), because the exception-handling expertise lives with the people you just cut. (Becker's.)
What happens when automated RCM breaks or can’t keep up. The Change Healthcare outage (Feb–Mar 2024) showed how dependent hospitals are on automated clearinghouse/RCM rails: 94% of hospitals reported cash-flow hits, with massive claim backlogs and denials/appeals delays as systems recovered. This is direct, national-scale evidence of how quickly cash flow can crater when the automation layer fails or must be re-tooled.
Rule-change pressure is real and rising. Across 2024–2025, hospitals saw higher denial rates tied to tighter payer policies and shifting prior authorization rules, with MA denials highlighted as a major pain point—conditions that overwhelm rigid automation and require seasoned staff to navigate.
Bottom line. The Revere Health layoffs tied to AI-driven RCM are a real example of replacing back-office expertise with automation; the Change Healthcare crisis then demonstrates how quickly denials and cash flow can spiral when automated pipelines hiccup or need rapid updates. A hybrid model—AI for volume + retained human experts for exceptions and payer changes—is the resilient pattern.
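A minimal sketch of that hybrid routing logic, with illustrative payer names and thresholds (not any real RCM product's rules):

```python
from dataclasses import dataclass
from typing import Optional

# Payers whose rules recently changed: route their claims to people until the
# automation is re-validated. Contents are illustrative.
PAYERS_WITH_RECENT_RULE_CHANGES = {"Example Medicare Advantage Plan"}

@dataclass
class Claim:
    payer: str
    denial_code: Optional[str]
    needs_prior_auth: bool
    dollar_amount: float

def route_claim(claim: Claim) -> str:
    """Automation handles routine volume; retained staff handle the exceptions
    that tend to overwhelm rigid automation (denials, prior auth, rule changes)."""
    if claim.payer in PAYERS_WITH_RECENT_RULE_CHANGES:
        return "human_review"
    if claim.denial_code is not None or claim.needs_prior_auth:
        return "human_review"
    if claim.dollar_amount > 10_000:  # threshold is illustrative
        return "human_review"
    return "automated_processing"
```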
Adopting AI requires new safeguards: robust protocols protect both staff and the organization.
Bringing in new technology can cause worry. Here's how to welcome AI, turn resistance into productivity, and reduce your organization's liability.
Start with education, not just tools. Demystify AI and explain it simply, focusing on how it assists staff rather than replaces them. Highlight how AI will reduce their workload so they can spend more time with patients, and address job-security concerns openly. Offer comprehensive training tailored to specific roles, with hands-on sessions that build confidence and familiarity. Emphasize AI as an error checker that catches mistakes: a key way to reduce liability risk.
AI isn't just for executives; it touches every role, from scheduling to patient communication. Your expertise combined with AI is a powerful pairing, so embrace this evolution and learn the new tools.
Introducing AI into your organization is a journey. It requires innovation and continuous learning, and ultimately it leads to better patient care. Address the gains and pitfalls thoughtfully, invest in quality training, and introduce AI to your staff strategically. You can boost productivity, decrease liability, and help change healthcare for the better.
The AI landscape changes fast, so stay informed about compliance and best practices. Take control now: review, refresh, and actively manage your program. For quick, practical guidance, see the EPICompliance webcasts, including "Couple uses artificial intelligence to fight insurance denial."