AI in healthcare is no longer optional—but neither is patient safety.
We urgently need guardrails to protect against bias and data drift.
The impact of artificial intelligence (AI) in healthcare already runs deep: it can influence how your lab results are read, which hospital beds fill first, and even whether an alarm goes off in the middle of the night. Sometimes it gets these life-or-death calls dangerously wrong. That is why AI in healthcare is no longer optional, but neither is patient safety. For healthcare leaders, that means building in governance, workforce skills, and monitoring from day one, before deployment scales the harm.
IBM Watson Health: A Cautionary Tale of AI Hype vs Clinical Reality
IBM Watson is a computer system that can interpret questions posed in natural language and provide an answer. It shocked the world in 2011 when it won the quiz show Jeopardy! against champions Ken Jennings and Brad Rutter.
Two years later, IBM set out to commercialize Watson by using the AI to help guide treatment decisions for lung cancer patients. The company spent $4 billion trying to develop these capabilities, and the public hoped Watson would revolutionize cancer treatment just as it had crushed Jeopardy! contestants on live TV.
Unfortunately, this never materialized: medical specialists at the company identified “multiple examples of unsafe and incorrect treatment recommendations.” The software had been trained on only a small number of hypothetical cases rather than real patient data, and it appeared to base its recommendations not on “guidelines or evidence” but on the expert opinions of just a few specialists for each cancer type. Providers quietly withdrew from using IBM’s service.
AI in Healthcare Is Surging. The Risks Are, Too.
Despite Watson’s disappointing results, AI has continued to improve and has become increasingly enmeshed in health systems. It can now read X-rays for broken bones, flag drug interactions, and suggest tailored chemo regimens. That progress is no reason to drop our guard, though: AI has contributed to a number of serious healthcare errors since then:
The Epic Sepsis Model was shown to underperform badly: it missed two-thirds of sepsis cases at one health system despite generating alerts for 18% of all hospitalized patients. It overwhelmed clinicians with alerts while offering little real help in identifying the patients who actually had sepsis (see the alert-burden sketch after these examples).
Another disturbing example comes from a commercial “high-risk care management” algorithm that consistently underrated the illness of Black patients because of one faulty assumption: that higher healthcare spending means a patient is sicker. Because Black patients spent less on healthcare than White patients who were equally ill, the algorithm scored them as healthier and steered them toward less care management.
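To see how a model can fire alerts constantly and still miss most true cases, it helps to separate alert burden from sensitivity and positive predictive value. The sketch below uses invented confusion-matrix counts chosen only to mirror the pattern described for the sepsis model; it illustrates the arithmetic, not the published validation data.

```python
# Illustrative only: hypothetical counts for a sepsis alerting model at one hospital.
# These numbers are invented to mirror the pattern described above (high alert volume,
# low sensitivity); they are not taken from any published validation study.
hospitalized = 10_000          # patients admitted during the evaluation window
sepsis_cases = 300             # patients who actually developed sepsis
true_positives = 100           # sepsis cases the model alerted on
false_positives = 1_700        # alerts fired on patients without sepsis

alerts = true_positives + false_positives
alert_burden = alerts / hospitalized          # share of all patients generating alerts
sensitivity = true_positives / sepsis_cases   # share of real sepsis cases caught
ppv = true_positives / alerts                 # chance that a given alert is a real case
missed = 1 - sensitivity

print(f"Alert burden: {alert_burden:.0%} of all hospitalized patients")
print(f"Sensitivity:  {sensitivity:.0%} (misses {missed:.0%} of sepsis cases)")
print(f"PPV:          {ppv:.0%} of alerts are true sepsis")
```

Run against these assumed counts, the model alerts on 18% of patients yet catches only a third of sepsis cases, and only about one in eighteen alerts is real. That is how alert fatigue and missed cases can coexist in the same system.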
These two examples show that even with the best intentions, patients can receive suboptimal treatment, and entire groups can be marginalized, because of one misinterpreted data point.
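A toy sketch can make that proxy-label failure concrete. Everything in it is hypothetical: the patient records, the scoring rules, and the enrollment cutoff are all invented, and the real commercial algorithm was far more complex. The failure mode is the same, though: predicting future cost instead of future health need.

```python
# Hypothetical illustration of the proxy-label problem described above.
patients = [
    # (id, active chronic conditions, prior-year spend in dollars)
    ("patient_A", 5, 12_000),   # well-insured, high utilization
    ("patient_B", 5, 4_500),    # equally sick, but faced barriers to care and spent less
]

def risk_score_from_cost(prior_spend):
    # A model trained to predict cost ranks patients by expected spending.
    return prior_spend / 1_000

def risk_score_from_need(chronic_conditions):
    # A model trained on clinical need would score these two patients identically.
    return chronic_conditions * 2.0

enrollment_cutoff = 8.0  # hypothetical threshold for extra care management

for pid, conditions, spend in patients:
    cost_score = risk_score_from_cost(spend)
    need_score = risk_score_from_need(conditions)
    enrolled = cost_score >= enrollment_cutoff
    print(f"{pid}: need-based score {need_score:.1f}, "
          f"cost-based score {cost_score:.1f}, "
          f"enrolled by cost model: {enrolled}")

# Both patients have the same clinical need, but only patient_A clears the
# cost-based cutoff, so patient_B is steered away from care management.
```

The point is not the toy numbers; it is that a single proxy choice in the training label can encode a systematic disparity that no amount of downstream tuning will fully undo.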
What Needs to Happen Now: Healthcare AI Governance, Process, and Proof
The proliferation of AI across healthcare can bring many benefits, but the risks need to be taken seriously and addressed. Hospital leadership must treat AI risk the same way they treat infection control or financial audits. Teams of clinicians and data scientists need to screen models for bias and dangerous outputs before they go live, and keep watching them after go-live for performance and data drift. And vendors of AI healthcare solutions should be transparent about their methods and results.
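Here is one small example of what that ongoing watch can look like: a check of a single model input for drift using the Population Stability Index (PSI). It is a minimal sketch; the chosen input (lactate), the simulated data, and the 0.2 alert threshold are assumptions for the example, not any vendor's actual monitoring stack.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10, eps=1e-6):
    """Compare two samples of a model input (or score) using the Population
    Stability Index. Larger values mean the current distribution has drifted
    further from the baseline."""
    # Bin edges come from the baseline so both samples are binned the same way.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)

    base_pct = base_counts / max(base_counts.sum(), 1) + eps
    curr_pct = curr_counts / max(curr_counts.sum(), 1) + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical example: lactate values from the validation cohort vs. last month.
rng = np.random.default_rng(0)
baseline_lactate = rng.normal(1.8, 0.6, 5_000)   # distribution the model was validated on
recent_lactate = rng.normal(2.3, 0.8, 1_200)     # what the live system is seeing now

psi = population_stability_index(baseline_lactate, recent_lactate)
# A common rule of thumb: PSI > 0.2 means the input has shifted enough to investigate.
print(f"PSI = {psi:.2f} -> {'investigate drift' if psi > 0.2 else 'stable'}")
```

In a real program, a check like this would run on a schedule for every important input and for the model's output scores, with drift alerts routed back to the same governance team that approved the model.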
When AI works well, it can catch cancer on a CT scan or free a nurse from hours of paperwork. When it doesn’t, it can recommend the wrong chemo or let sepsis slip through the cracks. The technology will keep advancing; the only question is whether our safeguards will keep up.
Conclusion
AI is a promising tool to improve patient outcomes in healthcare. However, AI won’t fix healthcare by itself; it amplifies the incentives we give it. We need to design healthcare AI in a thoughtful way that protects patients rather than scaling up the wrong outcomes. To accomplish this, we need robust AI governance in healthcare.
Ryan Sears, Philly’s AI Pharmacist
Check out my newly-published article (inspired by this article) here!
Artificial Intelligence in Healthcare: No Longer Optional But Neither Is Patient Safety, found in The American Journal of Healthcare Strategy (Healthcare Strategy Review)
Read how I use AI in my writing here: AI Use Policy
Read how I use analytics to improve my newsletter here: Privacy & Analytics