AI as a Cognitive Co-Pilot within Laryngology Training and Practice
By grounding training in a framework of critical appraisal and ethical oversight, we ensure that AI tools improve the analysis of data-heavy diagnostics without devaluing the human expertise at the heart of patient care.
Danielle A. Morrison, MD, Member, on behalf of the Laryngology and Bronchoesophagology Education Committee (LBEC)
(See author’s disclosure at the end of this column.)

The Four Pillars of AI Literacy in Laryngology
To ensure responsible use among residents and clinicians, the LBEC identifies four foundational pillars of AI literacy: foundational knowledge, progressive application, technical governance, and critical evaluation with clinical override.
1. Foundational Knowledge and Core Literacy
Before implementing AI in clinical workflows, the trainee must understand the basic statistical concepts that underpin machine learning.4 Standardized readiness scales, such as the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS), suggest a significant gap in technical knowledge despite students’ positive attitudes toward technology.5 A grasp of fundamentals such as overfitting and training-data bias helps the laryngologist recognize why an algorithm might misidentify a subtle vocal fold lesion or misinterpret a mucosal wave.
2. Progressive Application Across the Career Span
AI utility evolves with the clinician’s stage of training. For medical students, AI serves as an introductory bridge to complex topics like laryngeal neurophysiology. Residents can utilize AI to gather primary resources, organize thinking frameworks for airway management, or engage with AI-driven Virtual Simulated Patients (AI-VSP) to practice high-stakes clinical scenarios before entering the operating room.6
For the practicing fellow or attending, AI becomes a tool for efficiency and diagnostic refinement. Ambient AI can draft visit notes, allowing the physician to focus on the patient-doctor interaction rather than rote documentation. Additionally, AI offers an impartial second opinion in areas where consensus is lacking, such as the management of leukoplakia or benign vocal fold lesions. By synthesizing information from diverse datasets, AI can help the seasoned clinician explore treatment algorithms for controversial pathologies or interpret complex data from high-resolution pharyngeal manometry.
3. Technical Governance and Patient Privacy
The gatekeeping role of the physician is amplified by AI. We must ensure that patient data privacy is maintained by using secure, institutionally sanctioned AI instances and by never entering Protected Health Information (PHI) into public models.7 Legal frameworks, such as the EU Artificial Intelligence Act, highlight the increasing accountability of healthcare professionals who use high-risk AI systems.8 Physicians must verify that all platforms, whether used to analyze stroboscopic video or to summarize a Modified Barium Swallow (MBS) study, comply with their institution's patient health information policies.
4. Critical Evaluation and Clinical Override
The most vital skill in the AI era is knowing when to ignore an AI recommendation. Automation bias can lead to diagnostic errors if a clinician defaults to an algorithm's suggestion.9 This is particularly relevant in laryngology, where the interpretation of a suspicious lesion is often nuanced. Physicians must maintain human oversight of AI, integrating the algorithm's output with their own clinical intuition, physical examination findings, and the patient's lived experience to reach a final diagnosis.
Managing Error: The VET Framework
A significant hurdle in AI-assisted learning is the phenomenon of hallucination, in which large language models (LLMs) generate plausible but fabricated medical citations or physiological facts.10 This is particularly dangerous in laryngology, where mismanagement of a difficult airway or a caustic ingestion can have rapid, life-altering consequences. The LBEC recommends the VET Framework for any AI-generated clinical suggestion:
- V—Verify against Primary Sources: Does the AI output align with AAO-HNS Clinical Practice Guidelines or peer-reviewed literature?
- E—Evaluate for Hallucinations: Are the specific citations real? Manually search for the DOI or PubMed ID to ensure the data was not fabricated.
- T—Trust Your Foundation: Does the recommendation pass the eye test? If the AI suggests a management plan that contradicts your clinical training, the human clinician remains the final authority.
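Part of the "Evaluate" step can be pre-screened before the manual PubMed or Crossref search. As a hypothetical illustration (not an LBEC-endorsed tool), a few lines of Python can check whether an AI-supplied DOI is even syntactically valid and build the public Crossref lookup URL for manual verification; a citation that fails the syntax check is almost certainly fabricated, though a well-formed DOI still requires the manual search described above.

```python
import re
import urllib.parse

# DOI syntax: "10.", a 4-9 digit registrant code, a slash, then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Quick syntactic sanity check; fabricated citations often fail even this."""
    return bool(DOI_PATTERN.match(doi.strip()))

def crossref_lookup_url(doi: str) -> str:
    """Build the Crossref REST API URL to resolve a DOI in a browser or via HTTP GET."""
    return "https://api.crossref.org/works/" + urllib.parse.quote(doi.strip())

# A DOI cited in this column vs. an obviously malformed, hallucination-style one
print(looks_like_doi("10.1056/NEJMsr2214184"))  # True
print(looks_like_doi("doi:NEJM-2023-1233"))     # False
print(crossref_lookup_url("10.1056/NEJMsr2214184"))
```

A passing syntax check only earns the citation a manual lookup; it does not confirm that the referenced article exists or says what the AI claims it says.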
The Role of the Committee and Faculty
The LBEC advocates for purposeful innovation. AI use should be driven by articulated educational objectives rather than novelty. Users must model transparency by disclosing when AI has been used in educational or administrative tasks. Continuous refinement and iterative feedback loops among users are necessary to ensure these tools adapt to the evolving landscape of our subspecialty.
Summary
The goal of the LBEC is not to produce AI-dependent surgeons, but AI-literate laryngologists capable of navigating a future where vocal biomarkers and automated videofluoroscopy segmentation are routine. By grounding our training in a framework of critical appraisal and ethical oversight, we ensure that AI tools improve the analysis of data-heavy diagnostics without devaluing the human expertise at the heart of patient care. Success in this integration will require ongoing collaboration to establish the diverse datasets and ground-truth references necessary for generalizable, trustworthy clinical results.
Important Takeaways
- Human-Centered Approach: Maintain the human-AI relationship as the cornerstone of clinical deployment.9
- Data Governance: Adhere strictly to institutional and HIPAA policies regarding PHI and academic integrity.7
- Verify Everything: AI is a tool for summarization and workflow; the clinician remains the ultimate authority and is accountable for all AI-assisted outputs.6,10
Disclosure: An AI tool (Microsoft Copilot) was used for editorial assistance, including grammar, wording refinement, and clarity. All substantive content, concepts, and interpretations were developed by the author, who reviewed and approved all AI‑assisted edits and is fully responsible for the final content.
References
1. Torborg SR, Kim AYE, Rameau A. New developments in the application of artificial intelligence to laryngology. Curr Opin Otolaryngol Head Neck Surg. 2024;32(6):391-397. doi:10.1097/moo.0000000000000999
2. Gordon M, Daniel M, Ajiboye A, et al. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach. 2024;46(4):446-470. doi:10.1080/0142159x.2024.2314198
3. Tolentino R, Baradaran A, Gore G, Pluye P, Abbasgholizadeh-Rahimi S. Curriculum Frameworks and Educational Programs in AI for Medical Students, Residents, and Practicing Physicians: Scoping Review. JMIR Med Educ. 2024;10:e54793. doi:10.2196/54793
4. Çalışkan SA, Demir K, Karaca O. Artificial intelligence in medical education curriculum: An e-Delphi study for competencies. PLoS One. 2022;17(7):e0271872. doi:10.1371/journal.pone.0271872
5. Karaca O, Çalışkan SA, Demir K. Medical artificial intelligence readiness scale for medical students (MAIRS-MS): development, validity and reliability study. BMC Med Educ. 2021;21(1):112. doi:10.1186/s12909-021-02546-6
6. CDA-AMC. 2025 Watch List: Artificial Intelligence in Health Care. Can J Health Technol. 2025;5. https://canjhealthtechnol.ca/index.php/cjht/article/view/ER0015
7. AAMC. Principles for the Responsible Use of Artificial Intelligence in and for Medical Education. 2025. https://www.aamc.org/about-us/mission-areas/medical-education/principles-ai-use
8. van Kolfschooten H, van Oirschot J. The EU Artificial Intelligence Act (2024): Implications for healthcare. Health Policy. 2024;149:105152. doi:10.1016/j.healthpol.2024.105152
9. Lekadir K, Frangi AF, Porras AR, et al. FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ. 2025;388:e081554. doi:10.1136/bmj-2024-081554
10. Lee P, Bubeck S, Petro J. Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine. N Engl J Med. 2023;388(13):1233-1239. doi:10.1056/NEJMsr2214184





