CLINICAL QUESTION: What are strategies that can equip medical educators and learners to engage critically with artificial intelligence (AI)?
BACKGROUND: AI has the capacity to fundamentally alter medical learning and practice. As in other professions, the use of AI in medical training could result in professionals who are highly efficient yet less capable of independent problem-solving and critical evaluation than their pre-AI counterparts. Off-loading complex tasks such as clinical reasoning can lead to “deskilling,” “never-skilling,” or “mis-skilling.” Critical thinking is essential both to clinical reasoning and to the safe use of AI during learning, and it must be taught and modeled by educators.
STUDY DESIGN: Proposed stepwise approach to learner-AI interactions that educators can use to model and scaffold critical thinking for the concurrent development of effective clinical skills and engagement with AI.
SETTING: This strategy applies to any “AI interaction” of the learner; it is up to the educator to recognize and name the interaction. The authors adopt Bearman and Ajjawi’s definition of an AI interaction: any moment when “a computational artefact provides a judgement to inform an optimal course of action and… this judgement cannot be traced,” or, in other words, when the user must take a leap of faith to trust the AI output.
SYNOPSIS: The authors propose an adapted approach, termed DEFT-AI, based on the existing DEFT (diagnosis, evidence, feedback, and teaching) framework. This model emphasizes structured discussions of clinical reasoning, evidentiary support, and targeted feedback.
Diagnosis, Discussion, and Discourse—The educator begins by probing the learner’s clinical reasoning process and their concomitant use of AI. This includes asking about the differential diagnosis, how the learner interacted with AI, what prompts were used, how the output was verified, and if the output affected the learner’s diagnostic approach.
Evidence—The educator probes the learner’s use of supporting and opposing evidence to evaluate the learner’s medical and AI knowledge and the application of that knowledge. The educator then asks the learner to self-assess their AI literacy.
Feedback—The educator asks the learner to reflect on potential growth opportunities relevant to the case at hand. This can include missed diagnostic considerations, gaps in medical knowledge, or other AI applications perhaps better suited to the task.
Teaching—The educator provides feedback on the learner’s reasoning, performance, and use of AI. This includes recommendations that concurrently promote foundational skills and AI literacy. With rare exceptions, the educator should encourage ongoing practice with AI, with appropriate guardrails. Universally, educators should caution learners against passively adopting AI output without interrogation and should encourage adaptive engagement.
BOTTOM LINE: AI is already embedded in medical learning and practice. The DEFT-AI framework can be used by educators to scaffold critical thinking for the concurrent development of effective clinical skills and engagement with AI during AI interactions.
CITATION: Abdulnour RE, et al. Educational strategies for clinical supervision of artificial intelligence use. N Engl J Med. 2025;393(8):786-797. doi: 10.1056/NEJMra2503232.
Dr. Ritter is an assistant professor of medicine at Columbia University Irving Medical Center, a hospitalist, and the co-director of the senior medicine rotation of the NewYork-Presbyterian/Columbia University Irving Medical Center internal medicine residency program, in New York.