Dr. Memari explores how generative AI is reshaping medical education through practical, real-world integration at UVA, highlighting both the promise and the complexity of these tools: from enhancing clinical training with AI-supported case work and simulated patient encounters to the ongoing challenge of preserving core clinical reasoning skills. At its core, the conversation emphasizes a balanced, thoughtful approach that treats AI as a partner in learning rather than a replacement for it. The insights offer a compelling look at how educators can responsibly adopt AI while preparing the next generation of physicians to do the same.
What first sparked your interest in using AI in medical education, and how has your perspective changed as these tools have become more common?
I first started using generative AI to assist with personal tasks and projects and found it to be a helpful thought partner, organizer, and idea generator. As the tools became more advanced and task-specific, it became clear that generative AI would have practical applications in clinical, educational, and research domains. Over the past year or two, I have tried to gain experience with several GPTs and purpose-built tools, incorporating many into my regular workflows.
We already know that we must think carefully about how we teach our learners and colleagues to work alongside these tools while maintaining the core of our physician identities. Our work suggests that familiarity and experience with generative AI help clinician-educators become more comfortable with role modeling thoughtful, responsible use of AI tools and with identifying the right time and place to teach these skills to our trainees.
What opportunities and challenges have you encountered while introducing generative AI into the clinical learning environment at UVA?
The UVA School of Medicine leadership made an early commitment to AI by creating a committee specifically focused on incorporating technology into the curriculum. Our committee has developed several local use cases to address needs at our institution, keeping in mind each project's potential downsides with respect to cognitive offloading/de-skilling and bias. Thus far, we have successfully incorporated generative AI teaching into case-based small-group sessions alongside clinical coaches, used chatbots to help students prepare for clinical skills examinations (OSCEs), and used AI to support faculty grading of OSCE checklists.
One of the challenges we face is the rapidly changing landscape: it is difficult to make curricular decisions and commit to specific tools when their availability and capabilities change daily. Our goal is to maintain educational equipoise across learner groups even as the technological environment shifts, by prioritizing role modeling and working alongside learners with these tools.
How do you balance using AI to support learners while ensuring they still develop strong clinical reasoning skills?
With respect to clinical reasoning, we first introduced students to best practices in prompting and encouraged limited use of tools at specific stages in the reasoning process, prioritizing AI as a supplement to reasoning steps rather than a replacement for students’ independent cognitive processes. For differential diagnosis development, students develop their own ideas before they consult AI, and then critically evaluate and discuss what the tool offers alongside their faculty coach. As our curriculum continues to evolve on this front, the mainstay of our approach is encouraging the development of clinical reasoning skills in a stepwise fashion. We scaffold AI tools gradually over time, working primarily in small-group, case-based learning settings alongside clinical faculty.
What have you learned from using AI chatbots for patient interviewing practice and OSCE grading?
Both learners and faculty value guided practice opportunities, and we have found that AI-simulated patient encounters can contribute to formative growth if done thoughtfully. The fidelity of the interaction matters enormously; if the AI behaves in ways that feel implausible or formulaic, there is potential for learner disengagement. We initially piloted the OSCE chatbot with our teaching faculty, who offered feedback that helped improve the tool before it ever interfaced with learners. Once rolled out, students were able to rehearse interviewing skills on their own schedule, without the logistical constraints of standardized patient sessions.
On the assessment side, using AI to assist with OSCE grading has been illuminating: it has forced us to be far more explicit about what we are measuring, which has been a valuable exercise. We've also become more attentive to where AI grading aligns with faculty assessments and where it diverges; those discrepancies often reveal important nuances about the checklists we use, the inferences behind them, and aspects of clinical communication and documentation that rubrics sometimes struggle to capture.
What excites you most about the future of AI in medical education, and what advice would you offer educators getting started?
What excites me most is the idea that a learner's curriculum could dynamically adapt to their evolving gaps and strengths in ways that are simply not feasible with traditional educational infrastructure. That kind of individualization has always been the aspiration of competency-based medical education, and AI may finally make it practically achievable. We have already seen AI's potential to reduce the cognitive load on clinician-educators and learners, freeing up time for the relational aspects of medicine and mentorship that no technology can or should replace.
For educators getting started, my advice would be to begin as a curious learner before trying to teach others. Spend time with these tools, notice where they impress and where they disappoint, and let that experience inform how you introduce them to trainees. Find one or two colleagues who share your interest and start something small, such as a single session or a pilot project, because the field is moving too fast to wait for a perfect institutional framework. Faculty role modeling of AI as a task partner rather than a cognitive replacement is essential. The hope is to create a generation of physicians who see AI as a thought and task partner and who can identify when and where these tools can help them thrive.
Want to learn more? Catch Dr. Memari live at SGIM26:
WN02: Rebooting Journal Club: Leveraging AI to Maximize Learning and Engagement
Friday, May 8, 2:15 PM – 3:15 PM EDT


