21 Comments
Noel

Hey Ryan,

This is a great article! As healthcare professionals, we see AI as a convenient tool to lessen our workload (which is understandable given the current strain on the system).

But we all forget about the privacy risks until it's too late.

My hope is that educated higher-ups understand the risks and quickly build policies that protect their staff and patients, rather than pushing for efficiency alone.

Thanks again for writing this!

Ryan Sears, PharmD

Hey Noel,

Thanks so much for the feedback and I'm very happy to connect with you! You're the first pharmacist I've encountered on Substack other than myself - great to see there are other insightful, passionate leaders out there.

I agree that the risks are moving far quicker than the preparation. My hope is that increasing awareness will help with this.

Thanks again and I look forward to diving into your work as well!

Chara

It’s wild that they even think they can do that.

Ryan Sears, PharmD

Education and common sense are, unfortunately, mutually exclusive.

Chara

That part. I don’t even use my real name when I engage with LLMs. That’s how little I trust them.

Ryan Sears, PharmD

I don’t blame you. I think it’s only a matter of time before we see some high-profile data breaches.

Chara

Absolutely. But my concern is whether we’ll actually hear about them, and whether they can be properly rectified. I write for organizations that serve high-risk communities, and I focus intensely on privacy-first programming, development, and use of AI.

Ryan Sears, PharmD

That’s amazing. Would love to pick your brain about those frameworks if you ever have the chance to discuss!

Chara

Absolutely! I actually have a live session on the 9th where I’ll be discussing Privacy for AI. While it’s designed for nonprofit leaders, I’ll be generalizing the principles so leaders in any sector can apply them.

Cyber Safety Watchdog

This is very eye-opening. I would not have thought doctors did that, but it makes sense, since everyone else uses ChatGPT! However, as I have said in my teachings, not everyone realizes that it's effectively a PUBLIC system whose inputs you don't control - so, as you point out, entering PHI could violate HIPAA, and doctors, or anyone handling PHI, PII, or other sensitive information, need to be very careful when using LLMs!

Ryan Sears, PharmD

I think most of us in healthcare know better, but there will always be people who try to cut corners.

Cyber Safety Watchdog

True but that’s a scary thought.

Ryan Sears, PharmD

It’s not just AI, either. If you or a loved one are in a healthcare situation where something doesn’t seem right, definitely speak up and ask questions.

Zain Haseeb

Ryan, this is incredibly important and eye-opening. Thank you for breaking down such a critical blind spot in healthcare AI adoption. What really struck me is your point about it being "when, not if" for major court cases. The enforcement is going to be brutal when it happens.

Ryan Sears, PharmD

Thanks for the kind words, Zain. You’re right that it’s a critical data security risk - and generally, I don’t think institutions are moving fast enough to address this huge issue.

I would not be surprised if they “make an example” of the first major incident.

When that happens, I’ve got the article search engine optimized.

Kristina Kroot

Thank you for calling this out Ryan. I think this is happening more often than people realize. While HIPAA compliance is critical, not every AI tool in use across healthcare necessarily meets that standard. It really depends on the choices each organization or practitioner makes, and whether they’ve taken the time to evaluate the privacy and security safeguards their tools provide.

Ryan Sears, PharmD

Kristina, thank you for reading and leaving such an insightful comment!

You couldn’t be more right that it takes buy-in on both the individual and organizational level to make the necessary changes happen.

The first step for both levels is to inform people of the risk: when you submit PHI to non-compliant AI, you don’t get that data back. Ever. And you don’t control what happens to it next.

It’s important for individual practitioners to be responsible, but we minimize risk the most when organizations provide their employees with the right tools and education from the start.

Thanks again for highlighting such an important component of this!

JayCee

Preach. I ask clients if they'd rather do the safe thing first or go to court later. Thanks for this.

Ryan Sears, PharmD

Thanks so much for reading!

Doing things the right way seems “hard” until you’re in legal trouble wishing you had made different decisions.

Rebecca Bellan

This is great and so clearly lays out the stakes. Thanks for writing!

Ryan Sears, PharmD

I’m glad you found the information useful, Rebecca. Thanks so much for reading. I write for curious minds like you!