Clinicians are entering patient information into ChatGPT to create therapy plans. But pasting protected health information (PHI) into AI tools that aren't HIPAA-compliant risks fines, leaks, and lost trust.
It’s wild that they even think they can do that.
Unfortunately, education doesn't guarantee common sense.
That part. I don’t even use my real name when I engage with LLMs. That’s how little I trust them.
I don’t blame you. I think it’s only a matter of time before we see some high-profile data breaches.
Absolutely. But my concern is whether we’ll actually hear about them, and whether they can be properly rectified. I write for organizations that serve high-risk communities, and I focus intensely on privacy-first AI programming, development, and use.
That’s amazing. Would love to pick your brain about those frameworks if you ever have the chance!
Absolutely! I actually have a live session on the 9th where I’ll be discussing privacy for AI. While it’s designed for nonprofit leaders, I’ll be generalizing the principles so they apply to leadership in every sector.
This is very eye-opening. I would not have thought doctors did that, but it makes sense, since everyone else uses ChatGPT! However, as I have said in my teachings, not everyone realizes that these tools can retain and reuse whatever you enter - so, as you point out, submitting PHI could violate HIPAA, and doctors (or anyone handling PHI, PII, or other sensitive information) need to be very careful when using LLMs.
I think most of us in healthcare know better, but there will always be people who try to cut corners.
True but that’s a scary thought.
It’s not just AI, either. If you or a loved one are in a healthcare situation where something doesn’t seem right, definitely speak up and ask questions.
Ryan, this is incredibly important and eye-opening. Thank you for breaking down such a critical blind spot in healthcare AI adoption. What really struck me is your point about it being "when, not if" for major court cases. The enforcement is going to be brutal when it happens.
Thanks for the kind words, Zain. You’re right that it’s a critical data security risk - and generally, I don’t think institutions are moving fast enough to address this huge issue.
I would not be surprised if they “make an example” of the first major incident.
When that happens, I’ve got the article search engine optimized.
Thank you for calling this out, Ryan. I think this is happening more often than people realize. While HIPAA compliance is critical, not every AI tool in use across healthcare necessarily meets that standard. It really depends on the choices each organization or practitioner makes, and whether they’ve taken the time to evaluate the privacy and security safeguards their tools provide.
Kristina, thank you for reading and leaving such an insightful comment!
You couldn’t be more right that it takes buy-in on both the individual and organizational level to make the necessary changes happen.
The first step for both levels is to inform people of the risk: when you submit PHI to non-compliant AI, you don’t get that data back. Ever. And you don’t control what happens to it next.
It’s important for individual practitioners to be responsible, but we minimize risk the most when organizations provide their employees with the right tools and education from the start.
Thanks again for highlighting such an important component of this!
Preach. I ask clients if they'd rather do the safe thing first or go to court later. Thanks for this.
Thanks so much for reading!
Doing things the right way seems “hard” until you’re in legal trouble wishing you had made different decisions.
This is great and so clearly lays out the stakes. Thanks for writing!
I’m glad you found the information useful, Rebecca. Thanks so much for reading. I write for curious minds like you!