Corrupting the Mental Map: AI poses a critical risk to how healthcare students learn.
To an experienced clinician, AI’s hallucinations are a nuisance. To a novice student, they are the truth.
Main idea: Artificial intelligence destroys the “paradigm of truth” for students by generating high-fidelity hallucinations (textual and visual) that they lack the clinical baseline to audit, leading them to memorize erroneous concepts. This is a systemic issue without precedent.
If we don’t teach discernment, we are graduating clinicians whose foundational knowledge is built on “probabilistic guessing” rather than critical thinking.
We are witnessing a real-time fracture in the way healthcare professionals are trained.
For decades, the struggle of a pharmacy student was information scarcity: knowing where to find the answer in the library or on the Web.
Today, the new struggle is synthetic certainty.
Students are now learning from AI tools that speak with the confidence of a tenured professor; however, these models lack the critical thinking and clinical experience that a human teacher gains only after years of practice.
To a student who has not yet built their clinical baseline, an AI hallucination looks indistinguishable from the truth. This threatens the competence of an entire generation of future healthcare practitioners.
The Discernment Gap: Experts Audit, Students Memorize
The fundamental danger of using AI in education is automation bias: the psychological tendency of humans to favor the suggestions of automated decision-making systems. This bias can be so strong that contradictory information from other sources is simply discarded.
An experienced clinical pharmacist has an important protective factor against automation bias: a career of real-world experience. If an AI assistant suggests a dangerously high dose of a drug, the “spidey sense” honed by thousands of verified orders kicks in. They can spot the error, correct it, and move on.
A pharmacy student without this experience may not be able to verify AI outputs the same way. When their AI study partner provides an incorrect dose, it is not flagged as an error but absorbed as a fact to be memorized for the next exam.
The Hallucinated Journal Club
A common manifestation of this phenomenon is the “PDF summary” trap.
Imagine a student is preparing for a Journal Club, an educational meeting where clinicians and trainees discuss recent medical literature to keep clinical knowledge current and improve skills in evidence-based medicine.
The student has been assigned a complex clinical trial that they will need to understand and present.
In the past, students would have to read the study and process the information on their own. Now, they can upload a PDF of the study into their AI assistant and ask for a summary.
The AI model, designed to predict plausible text rather than extract exact data, often overlooks nuance. It can generate “plausible, but factually incorrect” summaries. Sometimes it fabricates details about the study methods or even outputs incorrect numerical values in the results section.
A day later, the student stands up and confidently presents the study. When questioned about the erroneous details in their presentation, they cannot point to where in the paper that information appears. Because it doesn’t.
The student has built their understanding of the study, and its implications for clinical practice, on a statistical phantom.
The Visual Lie: AI “Slop” in Medical Infographics
It gets even worse when we move from text to images. We are seeing a rise in “AI slop”: infographics that seem plausible on the surface but are scientifically illiterate. These images are often generated quickly and posted to sites such as LinkedIn without ever being fact-checked.
Visual learners are particularly vulnerable to this misinformation. If a student learns the mechanism of action (MOA) of a drug from an AI-generated infographic, they may be internalizing a biological pathway that is not compatible with reality. If the image looks professional and sleek enough, it can bypass their skepticism and lodge itself directly into their mental map.
The Solution: Emphasize Critical Thinking and Fact-Checking
We must recognize that the competence of the future clinician no longer rests on finding information; it rests on validating it.
Epistemologically, we need to shift students from a “recipient” mindset (where they accept any AI-generated information as factual) to an “auditor” mindset (where they treat every response as a hypothesis requiring proof).
Here is how we can build that discernment in both the classroom and experiential settings.
The Classroom: Adversarial learning, and a return to pen and paper?
In the didactic setting, we need to break the illusion of computer infallibility. We must teach students that AI is not an oracle and cannot truly reason as a human can, at least not yet.
One way of reinforcing this concept is the “hallucination hunt” assignment. Rather than having students write (in other words, generate) a summary of the most recent COPD guidelines, flip the script. Have the students look at an AI-generated summary that contains specific, dangerous errors (e.g., incorrect dosing, missing contraindication).
Having students find the lies in an AI-generated response, then cite the page in the actual guidelines that corrects them, forces them to engage with the primary literature defensively. They learn that truth resides in the evidence rather than the summary.
Additionally, we need more ways of introducing friction into the learning process. This helps students build critical thinking skills instead of immediately accepting the instant gratification of an AI response.
Going back to our example of the AI slop infographic: having the student write out the drug’s mechanism of action by hand, with explanations of why each biological step happens the way it does, ensures we are reinforcing the correct information.
The Clinic: Summaries aren’t going to fly.
When students make it to clinical rotations, the stakes change completely: an AI hallucination turns from a bad grade to a patient safety event. Preceptors must enforce a strict “chain of custody” for information that students provide.
A rule such as the “Primary Source Mandate” could establish that a clinical recommendation cannot be voiced unless the student has seen the primary source with their own eyes. Ask the student, “Did you read that in a summary, or did you read the study?” If they have only read a summary, the answer is inadmissible. This teaches that an AI insight must be confirmed against the primary literature, or at the very least, a tertiary database (such as Lexicomp for information on a specific drug).
Students would also benefit from an increased focus on explaining the logic behind a response rather than just the answer itself. If AI is used blindly, they may arrive at a recommendation without understanding the underlying physiology or supporting literature. Exposing this gap reinforces that understanding is more important than retrieval.
A New Mental Model: “Trust is Earned, Not Generated”
Ultimately, to ensure we are graduating competent clinicians, we must teach our students epistemic vigilance.
Students must understand that generative AI sounds confident even when it is wrong. The solution is to instill a hypervigilance about verifying AI output and anchoring it to concrete, real-world understanding.
We are graduating the first generation of clinicians who will practice alongside synthetic intelligence. As preceptors or instructors, our job is to ensure they remain the masters of that intelligence rather than its passive consumers.
The future of patient care depends on getting this right.
Read how I use AI in my writing here: AI Use Policy
Read how I use analytics to improve my newsletter here: Privacy & Analytics


