AI Won't Fix Healthcare by Itself. It Amplifies the Incentives We Give It.
If we reward cost-cutting, it will cut care. If we under-staff, it will mask the gap. And even with good intentions, it can still mislead.
AI in healthcare is arriving everywhere. That is exciting but also risky. The truth is simple: AI won’t fix healthcare by itself, because it amplifies whatever goals and incentives we give it. If those incentives aren’t aligned with patient outcomes, AI can scale up the wrong objectives faster than ever.
When Cost-Cutting Is the Goal: How AI in Healthcare Can Reduce Quality of Care
Using AI to cut costs in healthcare can be dangerous when the savings come at the expense of patient outcomes. When the bottom line overshadows care, quality is often the first thing to go.
Health insurance decision systems are a textbook example. Investigations and court filings show how automated review can prioritize review speed and denial volume over nuance and patient care:
One internal insurer workflow processed claims in an average of about 1.2 seconds (ProPublica investigation of Cigna’s claim reviews).
A Senate probe also tied Medicare Advantage denial rates to an algorithm used in post-acute care (Senate report on algorithm-driven denials at UnitedHealthcare, Humana, and CVS).
This is what happens when the objective is to “save money” rather than “improve health.” We need to shift the goal to rewarding appropriate care, independent review, appeal fairness, and health outcomes.
AI and Staffing Shortages in Hospitals: Why Replacing Healthcare Workers Increases Patient Risk
When facing a staffing crisis, some hospital leaders may try to use chatbots or triage algorithms in place of human staff. But this can easily multiply risk instead of resolving it.
Diagnostic accuracy for these tools is often low (19-38%), and performance in triage can vary widely (~49-90%) (npj Digital Medicine analysis of symptom checkers).
And we already know that hospital short staffing itself correlates with higher mortality (BMJ Quality and Safety systematic review on nurse staffing and mortality).
If we use AI to mask a staffing problem, we can increase risk by handing out inconsistent advice with no one available to catch the errors.
The solution? Use these tools as assistants, not substitutes, while still maintaining safe nurse-to-patient ratios.
Automation Bias in Clinical AI: How Good Tools Can Mislead Clinicians
When doctors rely too heavily on AI recommendations without using their own clinical judgment, we call this automation bias. This misplaced trust can turn into a blind spot with real patient consequences.
In a 2023 JAMA study on model bias in acute respiratory failure cases, clinicians actually became less accurate (by a whopping 11.3 percentage points) when shown predictions from a biased AI model.
Even worse, large language models (LLMs) sometimes hallucinate medical facts that sound realistic enough to believe. Recently, a high-profile model fabricated a structure in the brain it called the “basilar ganglia” (Futurism report on Med-Gemini hallucination).
We need guardrails and education to tackle this problem. Models need to be thoroughly tested before deployment, and clinicians need to be reminded that LLM responses can sound authoritative while being flatly wrong.
AI Governance Checklist for Healthcare Leaders: Policy, Process, and Proof
Building an AI governance framework for healthcare is not optional or just “nice to have”; it’s a requirement for safe, transparent adoption of AI models. Hospital leaders can start by setting clear AI policies, processes, and criteria that prove patients are being kept safe.
Policy: Adopt a risk framework (see the NIST AI Risk Management Framework for an example). When contracting with an AI model developer, demand transparency: model objectives, data provenance, subgroup performance, data drift monitoring, and a kill switch. For software as a medical device (SaMD), ask vendors how their PCCP (Predetermined Change Control Plan) will handle updates without eroding patient safety.
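To make “data drift monitoring and a kill switch” concrete, here is a minimal sketch in Python, assuming a simple in-house check against a contracted performance baseline; the thresholds, numbers, and function names are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal sketch of post-deployment drift monitoring with a "kill switch."
# All thresholds and names here are illustrative assumptions.

from statistics import mean

BASELINE_AUROC = 0.82     # performance the vendor contractually committed to
DRIFT_TOLERANCE = 0.05    # how far performance may slip before we pull the plug

def check_for_drift(recent_scores: list[float]) -> str:
    """Compare recent model performance against the contracted baseline."""
    current = mean(recent_scores)
    if current < BASELINE_AUROC - DRIFT_TOLERANCE:
        return "disable"   # trip the kill switch and notify the vendor
    if current < BASELINE_AUROC:
        return "escalate"  # flag for the governance committee's review
    return "ok"

# Example: hypothetical monthly AUROC estimates from a held-out validation set
print(check_for_drift([0.78, 0.76, 0.74]))  # -> "disable"
```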
Process: Create an inventory of all the AI models your health system uses, along with their change-control processes. For each high-risk tool, define: the conditions under which the model will operate, who owns post-market surveillance, and what criteria trigger escalation for performance review. This is a multi-disciplinary task that requires buy-in from clinicians, data scientists, and leadership alike.
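As a sketch of what one inventory entry could capture, assuming a simple in-house registry rather than any particular governance product (the field names, vendor, and example values are hypothetical):

```python
# Illustrative sketch of an AI model inventory entry for change control.
# Field names and example values are assumptions, not a standard schema.

from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    name: str                   # e.g. "sepsis early-warning score"
    vendor: str
    risk_level: str             # "high", "medium", or "low"
    intended_conditions: str    # where and on whom the model may run
    surveillance_owner: str     # who owns post-market monitoring
    escalation_criteria: list[str] = field(default_factory=list)

registry = [
    ModelInventoryEntry(
        name="ambient scribe",
        vendor="ExampleVendor",  # hypothetical
        risk_level="medium",
        intended_conditions="outpatient visits, English and Spanish",
        surveillance_owner="CMIO office",
        escalation_criteria=["word error rate above 10% in any language group"],
    )
]
```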
Proof: Track the outcomes you want to see in practice: avoidance of harm, equity across patient demographics, timely interventions. For example: if your health system recently deployed an ambient AI medical scribe, have you tested its performance across different accents, interpreters, and languages to avoid equity gaps?
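As a rough illustration of that kind of equity check, here is a sketch that compares transcription word error rates across language groups and flags any group that lags the best-performing one; the numbers and threshold are invented for the example.

```python
# Rough sketch of an equity check for an ambient scribe:
# compare word error rate (WER) across language/accent groups.
# The data and threshold below are illustrative, not real results.

wer_by_group = {
    "English (no interpreter)": 0.06,
    "Spanish (interpreter)":    0.11,
    "Mandarin (interpreter)":   0.14,
}

MAX_GAP = 0.05  # largest acceptable gap versus the best-performing group

best = min(wer_by_group.values())
for group, wer in wer_by_group.items():
    if wer - best > MAX_GAP:
        print(f"Equity gap: {group} WER {wer:.2f} vs best {best:.2f}; investigate before scaling")
```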
FAQ: Common Questions About AI in Healthcare Governance and Safety
Does AI reduce healthcare costs without hurting care?
Only if systems are well designed and reward appropriate care and outcomes. I have previously written about how AI models have led to disaster in healthcare when proper guardrails were not in place.
What is a PCCP, and why should hospitals ask about it?
A Predetermined Change Control Plan (PCCP) is the FDA’s way of ensuring safe updates for AI in regulated medical devices. Hospitals should ask vendors how their PCCP maintains performance across patient groups when updates are necessary.
Conclusion
AI will make healthcare better when we make our patient care incentives better, and worse when we don’t. We need to design our healthcare AI models thoughtfully and evaluate them against real-world patient outcomes.
Check out my newly published article here!
Artificial Intelligence in Healthcare: No Longer Optional But Neither Is Patient Safety, found in The American Journal of Healthcare Strategy (Healthcare Strategy Review)
Read how I use AI in my writing here: AI Use Policy
Read how I use analytics to improve my newsletter here: Privacy & Analytics