Artificial Intelligence (AI) is changing the world, quickly and irreversibly.
Whether you love AI or loathe it, join me as I try to make sense of it all.
Who am I?
My name is Ryan Sears. I’m a hospital pharmacist who grew up in small-town Ohio. Now I live in Philadelphia, PA (go Birds!).
My AI journey, like many others’, started with the meteoric rise of ChatGPT in late 2022, though it was only a passing curiosity at the time. It wasn’t until March 2023 that I really started paying attention, when I watched a video about a research paper called “Sparks of AGI.” The paper examined GPT-4 and discussed how AI would be able to code, understand images, and use tools to help solve problems.
Here in 2025, every AI lab has models that make GPT-4’s intelligence look like a kindergartener’s. And 2027 seems to be the year doomers and techno-optimists alike have landed on for full-blown artificial general intelligence.
I don’t think we as a society are ready for it. At all.
As an individual, and as a health care worker, I don’t feel ready for it yet either.
What I’m hoping to accomplish here
I have four personal goals with this newsletter:
Explore the clinical, regulatory, technical, and human aspects of AI in healthcare.
Create and organize materials to prepare health systems for regulatory audits of their AI systems.
Develop skills (e.g., AI policy knowledge, coding, database management) that will allow me to become an AI Validation Specialist or Medical AI Liaison.
Document everything I learn along the way.
I’m creating this primarily for myself, since I don’t know how many other people are interested in AI governance with a healthcare focus right now. I just want to track my knowledge and progress in one place.
Secondarily, once regulatory bodies begin putting pressure on hospitals and health systems to show comprehensive AI governance, I hope that future clinicians and informatics specialists gain something from reading how I attempt to figure things out.
Newsletter specifics
My posts will be about AI governance in healthcare. They will have a clinical, regulatory, technical, and/or human focus. Which one(s) I post about, and how often I post, will depend on what I’m trying to understand better at the time.
Take a look at a few examples here:
AI in healthcare is no longer optional—but neither is patient safety.
AI Won't Fix Healthcare by Itself. It Amplifies the Incentives We Give It.
For now, everything will be free for all subscribers. Once I gain domain expertise, I may start a paid subscription tier tailoring content to those trying to break into, or succeed in, the AI healthcare field. In that case, I would post a mix of free and paid content at a pre-specified cadence.
Welcome to History’s Most Interesting Time.
Ryan Sears, Philly’s AI Pharmacist
Read how I use AI in my writing here: AI Use Policy
Read how I use analytics to improve my newsletter here: Privacy & Analytics