19 Comments
Chara

It’s wild that they even think they can do that.

Ryan Sears, PharmD

Education and common sense are, unfortunately, mutually exclusive.

Chara

That part. I don’t even use my real name when I engage with LLMs. That’s how little I trust them.

Ryan Sears, PharmD

I don’t blame you. I think it’s only a matter of time before we see some high-profile data breaches.

Chara

Absolutely. But my concern is whether we’ll actually hear about them, and whether they can be properly rectified. I write for organizations that serve high-risk communities, and I focus intensely on privacy-first programming, development, and use of AI.

Ryan Sears, PharmD

That’s amazing. Would love to pick your brain about those frameworks if you ever have the chance to discuss!

Chara

Absolutely! I actually have a live session on the 9th where I’ll be discussing Privacy for AI. While it’s designed for nonprofit leaders, I’ll be universalizing the principles so they apply to leadership across every sector.

Cyber Safety Watchdog

This is very eye-opening. I would not have thought doctors did that, but it makes sense, since everyone else uses ChatGPT! However, as I have said in my teachings, not everyone realizes that it’s a PUBLIC database. So, as you point out, submitting PHI could violate HIPAA, and doctors, or anyone dealing with PHI, PII, or other sensitive information, need to be very careful when using LLMs!

Ryan Sears, PharmD

I think most of us in healthcare know better, but there will always be people who try to cut corners.

Cyber Safety Watchdog

True but that’s a scary thought.

Ryan Sears, PharmD

It’s not just AI, either. If you or a loved one are in a healthcare situation where something doesn’t seem right, definitely speak up and ask questions.

Zain Haseeb

Ryan, this is incredibly important and eye-opening. Thank you for breaking down such a critical blind spot in healthcare AI adoption. What really struck me is your point about it being "when, not if" for major court cases. The enforcement is going to be brutal when it happens.

Ryan Sears, PharmD

Thanks for the kind words, Zain. You’re right that it’s a critical data security risk - and generally, I don’t think institutions are moving fast enough to address this huge issue.

I would not be surprised if they “make an example” of the first major incident.

When that happens, I’ve got the article search engine optimized.

Kristina Kroot

Thank you for calling this out, Ryan. I think this is happening more often than people realize. While HIPAA compliance is critical, not every AI tool in use across healthcare necessarily meets that standard. It really depends on the choices each organization or practitioner makes, and whether they’ve taken the time to evaluate the privacy and security safeguards their tools provide.

Ryan Sears, PharmD

Kristina, thank you for reading and leaving such an insightful comment!

You couldn’t be more right that it takes buy-in at both the individual and organizational levels to make the necessary changes happen.

The first step for both levels is to inform people of the risk: when you submit PHI to non-compliant AI, you don’t get that data back. Ever. And you don’t control what happens to it next.

It’s important for individual practitioners to be responsible, but we minimize risk the most when organizations provide their employees with the right tools and education from the start.

Thanks again for highlighting such an important component of this!

JayCee

Preach. I ask clients if they'd rather do the safe thing first or go to court later. Thanks for this.

Ryan Sears, PharmD

Thanks so much for reading!

Doing things the right way seems “hard” until you’re in legal trouble wishing you had made different decisions.

Rebecca Bellan

This is great and so clearly lays out the stakes. Thanks for writing!

Ryan Sears, PharmD

I’m glad you found the information useful, Rebecca. Thanks so much for reading. I write for the curious minds like you!
