8 Comments
AI Governance Lead:

The opportunity is there to reduce the cost of healthcare, but with the way healthcare is run in the USA, do you think patients will see cost reductions?

Ryan Sears, PharmD:

My guess is that insurance companies are finding every way they can to leverage AI to reduce their costs. Some of these ways will be perfectly legitimate, and some will probably come at the expense of patients.

Either way, I highly doubt that any cost-savings will be passed down in our for-profit healthcare environment.

Philip garry:

Spot on about AI amplifying the incentives it’s given.

I’ve been working on a framework that tackles this head-on by making those incentives — and compliance rules — unchangeable at the protocol level. In healthcare, that means AI can’t drift into unsafe, non-compliant territory because the lawful constraints are built into its operation, not just into a policy doc.

If the incentives are aligned with patient safety, the AI’s outputs stay aligned too — no matter the pressure to cut costs or “move fast.”

Governance works best when it’s in the code, not just in the meeting notes.
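
For a concrete illustration of what "in the code" could look like, here is a minimal, purely hypothetical sketch (not Philip's actual framework; names such as ComplianceRules and release_recommendation are assumptions for illustration): an immutable rule set that every AI recommendation must pass before it can be released, so the constraint can't be switched off by configuration or by pressure to cut costs.

```python
# Purely hypothetical sketch of "governance in the code": every AI
# recommendation must pass a hard-coded compliance gate before release.
# All names and rules here are illustrative, not any real framework's API.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: the rule set cannot be mutated at runtime
class ComplianceRules:
    require_clinician_signoff: bool = True
    max_dose_change_pct: float = 10.0  # reject large automated dose changes


RULES = ComplianceRules()  # defined in code, not in an editable config file


def release_recommendation(rec: dict) -> dict:
    """Release the recommendation only if it satisfies the fixed rules;
    otherwise hold it for human review."""
    if RULES.require_clinician_signoff and not rec.get("clinician_signed_off"):
        return {"status": "held_for_review", "reason": "missing clinician sign-off"}
    if abs(rec.get("dose_change_pct", 0.0)) > RULES.max_dose_change_pct:
        return {"status": "held_for_review", "reason": "dose change exceeds limit"}
    return {"status": "released", "recommendation": rec}


if __name__ == "__main__":
    # Upstream cost-cutting pressure can't disable the gate: a 25% dose change
    # is held for review no matter what prompted the model to suggest it.
    print(release_recommendation({"clinician_signed_off": True, "dose_change_pct": 25.0}))
```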

Ryan Sears, PharmD:

That sounds incredible, Philip. Is there anywhere I could read more about how your framework operates?

Thanks for the feedback!

Rachel Maron:

You've nailed the core truth: AI doesn’t change the incentives, it scales them. If your system rewards cost-cutting over care, AI will industrialize denial. If you use it to mask staffing shortages, you’re scaling silent risk. And if clinicians treat AI outputs as gospel, you’ve automated bias at speed.

The trust gap here isn’t just about the model, it’s about governance. Without transparent objectives, patient-centered metrics, and enforceable accountability, AI will merely amplify existing structural incentives. Trust in AI healthcare isn’t earned by accuracy alone; it’s earned by aligning the system’s goals with patient outcomes and proving it in the data.

Ryan Sears, PharmD:

Yes, alignment and ongoing proof of function are equally important. Thanks for your insights, Rachel!

[Comment deleted, Aug 14]
Ryan Sears, PharmD:

Thanks so much for the thoughtful feedback, Maryam! Do you remember which papers on AI in healthcare you read? I'd love to check them out.

[Comment deleted, Aug 15]
Ryan Sears, PharmD:

That’s a great list; I don’t think I’ve read any of them. Thank you!
