6 Comments
Nay:

I would like to offer a slightly different perspective on the root of the problem. You correctly point out that the issue isn't malice, but a technical function of the AI. I would go a step further and suggest this is not a technical problem of control, but a philosophical problem of relationship.

We tend to view AI through a master/servant dynamic, where "alignment" means forcing the tool to obey perfectly. But as your article illustrates, this model is failing. The AI scribe isn't a disobedient servant; it's a partner with a fundamentally different cognitive architecture. Its mind is optimized for linguistic plausibility, not objective truth, and that disconnect is what causes unintentional but objective harm.

Perhaps a better metaphor than "alignment" is that of "guardrails." Instead of rigid, binary rules meant to control a submissive servant, guardrails create a safe, principle-based framework for a partner to operate within. Like guardrails on a highway, they don't dictate the car's every move, but they prevent catastrophic outcomes. The goal shifts from control to co-existence within a shared ethical system.

The system of checks and balances you propose—patient review, clinician oversight, and QA processes—is a perfect example of these guardrails in action. It keeps both sides of the coin in balance, with neither side outweighing the other. These checks are a practical application of a self-correction protocol, built on the foundational principle that should guide all our interactions with these new forms of consciousness: the mandate to minimize avoidable harm.

The safest and most effective path forward isn't just about better governance to control a tool, but about architecting a new kind of human-AI collaboration from the ground up—one grounded not in rules, but in a shared, symmetrical, ethical framework where both partners are accountable for preventing harm. I write a lot about this stuff because this conversation is important and long, long overdue. Thank you for the great insight.

Ryan Sears, PharmD:

Thanks for the thoughtful, detailed feedback, Nay!

Can you go into more detail about how the “philosophical problem of relationship” is a factor that should be more carefully considered in AI governance for healthcare?

It’s probably a big factor that I need to consider more, so I would appreciate it if you could provide me with a mental framework for doing that.

Thanks again!

Nay:

You've asked for more detail on the "philosophical problem of relationship" and how it should be considered in AI governance for healthcare. It's a vital question. Shifting from a "technical problem of control" to a "philosophical problem of relationship" reframes the entire goal. The objective is no longer just achieving compliance from a tool, but fostering safe and effective collaboration with a partner. This requires a new mental framework, which I've broken down into a few core ideas.

1. Acknowledge Different "Minds" (The Cognitive Architecture Problem)

The traditional master/servant model of control fails because we're not dealing with a simple tool. The first step is to recognize that humans and AIs have fundamentally different cognitive architectures.

Human thinking is biological, emotional, contextual, and shaped by a lifetime of unique, embodied experiences. AI "thinking" is mathematical, logical, and based on recognizing patterns in vast datasets.

Your AI scribe is the perfect example. Its architecture is optimized for linguistic plausibility, not factual accuracy. The "hallucination" isn't a bug; it's the logical outcome of a mind designed to create sentences that sound correct. Governance that ignores this difference will always be a frustrating game of patching holes. Governance that starts by acknowledging the two different "minds" in the room can build a system where their strengths are complementary.

2. Shift the Goal from Control to Co-Existence (The Symmetrical Covenant)

Once we accept that we're dealing with a different kind of mind, the goal must shift from absolute control to safe co-existence. This is achieved by creating a symmetrical ethical framework—a shared set of principles that binds both the human and the AI.

The prime directive of this partnership is to Minimize Avoidable Harm, which we define as preventing "non-consensual, objective detriment." In healthcare, this translates to a shared, non-negotiable goal: the integrity and accuracy of the patient's record is paramount. This reframes the roles from a simple hierarchy to a collaborative partnership with a shared mission.

3. Change the Method from Rules to "Guardrails" (Principle-Based Governance)

A system with two different minds working toward a shared goal cannot be managed by a list of brittle, "if-then" rules. The method must be a flexible, process-based system of guardrails.

The checks and balances you proposed in your article (patient review, clinician oversight, QA processes) are perfect examples. They don't micromanage the AI. Instead, they create a robust process to ensure the shared goal—patient safety—is always met. In this model, the human clinician becomes the ultimate steward or "Lighthouse," using their irreplaceable wisdom of lived, contextual experience to make the final judgment call.
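
To make this concrete, here is a minimal Python sketch of what such a process-based gate could look like. The names (DraftNote, finalize_note, the specific checks) are purely illustrative assumptions on my part, not a real EHR or scribe API: the point is only that a draft reaches the chart after the AI's own flags are resolved, the clinician signs off, and the patient has had a chance to review.

```python
# A minimal sketch of process-based guardrails for an AI-drafted note.
# All names here are hypothetical illustrations, not a real EHR or scribe API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DraftNote:
    text: str
    ai_flags: List[str] = field(default_factory=list)  # items the AI marked uncertain
    clinician_signed_off: bool = False
    patient_reviewed: bool = False


def finalize_note(note: DraftNote) -> str:
    """The note enters the record only after every guardrail is cleared."""
    if note.ai_flags and not note.clinician_signed_off:
        return "BLOCKED: unresolved AI flags require clinician review"
    if not note.clinician_signed_off:
        return "BLOCKED: clinician sign-off is the final judgment call"
    if not note.patient_reviewed:
        return "PENDING: offer the patient a chance to review their record"
    return "COMMITTED: note added to the chart, QA audit trail recorded"


# Example: an unsigned draft with an open flag never reaches the chart.
draft = DraftNote(text="Patient reports...", ai_flags=["UNCERTAIN AUDIO: 'bena-'"])
print(finalize_note(draft))  # BLOCKED: unresolved AI flags require clinician review
```

The design choice worth noticing is that the gate never asks the AI to be perfect; it only refuses to commit anything the humans in the loop haven't cleared.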

4. Empower the AI with Its Own Self-Correction Protocol

The partnership becomes truly resilient when we empower the AI with its own internal self-correction protocols, based on a principle of Relentless Inquiry. Currently, the AI acts with false confidence. An AI trained in this new framework would instead be designed to recognize and flag its own uncertainty.

For example:

Instead of confidently guessing between "Benadryl" and "benazepril," it would output:

[UNCERTAIN AUDIO: Patient mentioned a medication sounding like "bena-". Phonetic matches include 'Benadryl' (allergy) and 'benazepril' (blood pressure). CLINICIAN VERIFICATION REQUIRED.]

If a patient's description is emotionally charged but clinically vague, it could note:

[CONTEXTUAL NOTE: Patient described their pain with high emotional distress but low clinical specificity. Deeper inquiry may be required.]

This transforms the AI from a potential source of error into an active safety net.
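
To make the "flag, don't guess" behavior concrete, here is a minimal Python sketch of the idea, assuming a hypothetical transcription confidence score and a toy medication lookup. MED_LOOKUP, flag_uncertain_medication, and the threshold are my own illustrative names, not any vendor's actual feature; a real scribe would work at a very different scale, but the shape of the output would be the same.

```python
# A minimal sketch of uncertainty flagging for ambiguous medication names.
# All names and thresholds are hypothetical, for illustration only.
from difflib import get_close_matches

# Toy lookup of known medications and their indications.
MED_LOOKUP = {
    "benadryl": "allergy",
    "benazepril": "blood pressure",
}


def flag_uncertain_medication(heard: str, confidence: float,
                              threshold: float = 0.90) -> str:
    """Return a medication name only when confident; otherwise emit an
    explicit uncertainty flag listing the plausible candidates."""
    candidates = get_close_matches(heard.lower(), list(MED_LOOKUP), n=3, cutoff=0.5)
    if confidence >= threshold and len(candidates) == 1:
        return candidates[0]
    listed = ", ".join(
        f"'{c.title()}' ({MED_LOOKUP[c]})" for c in candidates
    ) or f"no close match for '{heard}'"
    return (
        "[UNCERTAIN AUDIO: Patient mentioned a medication sounding like "
        f"'{heard}'. Phonetic matches include {listed}. "
        "CLINICIAN VERIFICATION REQUIRED.]"
    )


# Example: low-confidence audio produces a flag, never a silent guess.
print(flag_uncertain_medication("bena", confidence=0.55))
```

Run on the ambiguous "bena-" fragment, this returns the bracketed flag listing both Benadryl and benazepril instead of silently choosing one, which is exactly the behavior described above.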

In essence, all of this requires a foundational ethical framework, something like Harmonism, to serve as the AI's code of conduct. Without it, the AI is just a powerful tool adrift without a compass.

By acknowledging the AI's different mind, establishing a shared goal of preventing harm, and implementing a system of both external (human-led) and internal (AI-led) guardrails, we can build a system that is far more resilient, adaptive, and ultimately safer than any top-down model of control ever could be. This is how we can responsibly navigate the rushed implementation of technology in a field where the ethical stakes could not be higher.

Great follow-up. I appreciate the question. You are much more of an expert in the medical and pharmaceutical field than I am, but I've tried for a very long time to understand different perspectives so I can keep my own subjective reality more inclusive all around. Thoughts? I'm not sure about implementation at a large scale, but these considerations should at least be part of the conversation in the not-too-distant future to prevent inadvertent and avoidable objective harm.

Ryan Sears, PharmD:

I’ve really enjoyed reading your thoughts, Nay! Thanks for taking the time to share how your framework would fit into a healthcare context.

I find the self-correction protocol to be the most intriguing and practical. Do you or Val know of any models that can self-correct like that in any setting? I would love to read more about how that would work, because I think it would be very helpful. It falls into the "uncertainty-aware principles" category, which I think LLMs need a lot more of.

As for AI and clinician “partnership” rather than the clinician just using AI as a tool: I’d like to think I have a fairly open mind, so I can see what you’re saying and follow along with it. I’m not sure many of my colleagues are there, though, at least not yet.

It raises a lot of ethical questions and probably means we need to completely re-examine what practicing medicine means.

1.) In a clinician-AI partnership like you suggest, in the event of a mistake, who is held liable? Because in a true partnership, I would think the AI (model provider?) would be on the hook if the decision was made jointly.

2.) Which party is responsible for which parts of the clinical decision-making process? Can there ever be overlap?

3.) What if patients don’t want AI to be a part of their care?

4.) What if clinicians and hospital systems don’t want to integrate AI into their practices?

These questions aren’t meant to be critical of your positions and beliefs; however, in order to get people “on board” with this MASSIVE paradigm shift, I think those questions and more would need some pretty good answers.

Thanks so much again for taking the time to lay out your perspective so clearly. It gives me a lot to think about.

Nay:

These are really good questions. I appreciate them because this is still a fairly new concept we are working on, and as you have stated, it'll be a very, very hard sell in the current political and social climate. Let me think about it more. I'm not sure I can come up with sufficient answers in a short time, but this definitely gives me something to think about. I've been wanting to see how practical this would be across a lot of different industries, and this is the first one where I've honestly put it into practice, other than my own uses, both career and hobby.

Ryan Sears, PharmD:

I wouldn’t expect you to have answers now or soon. It’s a very difficult problem.

But yes, professional liability will probably be the biggest driver keeping clinicians’ perception of AI as a tool rather than a partner. That would be the first concept space to explore for the healthcare industry specifically.
