The opportunity to reduce the cost of healthcare is there, but given how healthcare is run in the USA, do you think patients will actually see cost reductions?
My guess is insurance companies are finding every way they can to leverage AI to reduce their costs. Some of those ways will be perfectly legitimate, and some will probably come at the expense of patients.
Either way, I highly doubt that any cost savings will be passed down in our for-profit healthcare environment.
Spot on about AI amplifying the incentives it’s given.
I’ve been working on a framework that tackles this head-on by making those incentives — and compliance rules — unchangeable at the protocol level. In healthcare, that means AI can’t drift into unsafe, non-compliant territory because the lawful constraints are built into its operation, not just into a policy doc.
If the incentives are aligned with patient safety, the AI’s outputs stay aligned too — no matter the pressure to cut costs or “move fast.”
Governance works best when it’s in the code, not just in the meeting notes.
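To make that concrete, here's a toy sketch of what "governance in the code" can mean: a hard constraint check that sits between the model and the action, so a non-compliant output can't slip through no matter how large the cost incentive is. All names here (`ALLOWED_ACTIONS`, `recommend`) are invented for illustration, not part of any real framework.

```python
# Toy illustration: a compliance rule enforced in code rather than in a policy doc.
# The constraint set is fixed at the "protocol level" of this little system.

ALLOWED_ACTIONS = {"approve_claim", "request_human_review"}  # hard constraint set

def recommend(model_output: str, cost_saving: float) -> str:
    """Pass the model's suggestion through only if it satisfies the hard
    constraint; otherwise fall back to human review, regardless of the
    cost_saving incentive attached to the suggestion."""
    if model_output in ALLOWED_ACTIONS:
        return model_output
    # Unsafe or non-compliant outputs cannot pass, however profitable.
    return "request_human_review"

print(recommend("approve_claim", 120.0))   # compliant suggestion goes through
print(recommend("auto_deny", 9000.0))      # blocked despite the large saving
```

The point of the sketch is only that the guardrail lives in the execution path, not in a document someone may or may not read.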
That sounds incredible, Philip. Is there anywhere I could read more about how your framework operates?
Thanks for the feedback!
Very true! Healthcare is a highly sensitive area, and we need proper guardrails in place to maximize AI’s benefits and minimize costs (all types of costs). I have read a few papers on AI in healthcare, and I find them interesting. I think it is an important area to pay attention to, especially in this age of automation. Thanks for this informative piece. Also, I like that you have drafted an AI Use policy, something I need to do as well.
Thanks so much for the thoughtful feedback, Maryam! Do you remember what the papers you read about AI in healthcare were about? Would love to check them out.
Of course! More than happy to share. Here are some references (maybe you have read some already). And I have them in PDF also, just let me know if you need any.
1) Wang, D., Wang, L., Zhang, Z., Wang, D., Zhu, H., Gao, Y., Fan, X., & Tian, F. (2021). “Brilliant AI Doctor” in Rural Clinics: Challenges in AI-Powered Clinical Decision Support System Deployment. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18.
2) Ta, A. W. A., Goh, H. L., Ang, C., Koh, L. Y., Poon, K., & Miller, S. M. (2022). Two Singapore public healthcare AI applications for national screening programs and other examples. Health Care Science, 1(2), 41–57.
3) Sylolypavan, A., Sleeman, D., Wu, H., & Sim, M. (2023). The impact of inconsistent human annotations on AI-driven clinical decision making. NPJ Digital Medicine, 6(1), 26.
4) Vallès-Peris, N., & Pareto, J. (2024). Artificial intelligence as a mode of ordering: Automated decision-making in primary care. Information, Communication & Society, 1–19.
5) Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To Engage or Not to Engage with AI for Critical Judgments: How Professionals Deal with Opacity When Using AI for Medical Diagnosis. Organization Science, 33(1), 126–148.
6) Bernasconi, L., & Grossmann, R. (2025). Navigating the future of clinical trial management–insights on the transformative role of AI. Research Ethics, 17470161241309347.
That’s a great list, I don’t think I’ve read any of them. Thank you!
Great! You're welcome.
You've nailed the core truth: AI doesn’t change the incentives, it scales them. If your system rewards cost-cutting over care, AI will industrialize denial. If you use it to mask staffing shortages, you’re scaling silent risk. And if clinicians treat AI outputs as gospel, you’ve automated bias at speed.
The trust gap here isn’t just about the model, it’s about governance. Without transparent objectives, patient-centered metrics, and enforceable accountability, AI will merely amplify existing structural incentives. Trust in AI healthcare isn’t earned by accuracy alone; it’s earned by aligning the system’s goals with patient outcomes and proving it in the data.
Yes, the alignment and ongoing proof of function are both equally important. Thanks for your insights, Rachel!