Errors in Otolaryngology: Revisited
Rahul K. Shah, MD
George Washington University School of Medicine
Children’s National Medical Center, Washington, DC
I was a resident almost a decade ago, working with David W. Roberson, MD, at Children's Hospital Boston, when we both asked the question, "Where are we with errors in otolaryngology?" At that time, the study of patient safety and quality improvement was enjoying a resurgence, yet it remained in its relative infancy in our specialty. To conduct studies properly, we had been trained in the standard research methodology: ensuring an adequate sample size and looking for statistical significance when comparing two groups. In attempting to design a proper study of errors in otolaryngology, that methodology proved to be a stumbling block. There had been seminal work on a classification of errors in family medicine. That manuscript and methodology resonated with us because it elegantly provided a framework to assess, measure, quantify, and perhaps ameliorate errors in that specialty. Like good researchers, we emulated their methodology, and it worked. In 2004, we published a classification of errors in otolaryngology along with the implications of those errors. When looking at zones of risk in our specialty, we would often revisit the data from that study to understand vulnerabilities in our realms of practice. We would then design a deeper-dive study or approach to tackle a specific zone of risk. We have done that a few dozen times and hope we have made the practice of otolaryngology safer and more standardized.
In the past months, we have been grappling with the realization that our Academy Members' understanding, appreciation, and sophistication vis-à-vis patient safety and quality improvement have grown tremendously as a result of specific Academy initiatives, mandates from the government and payers, and the personal interest of our dedicated Academy Members. To this end, we felt compelled to check the pulse of our members with regard to understanding errors in otolaryngology almost a decade after our initial survey study. We needed to ensure that we would be comparing apples to apples so we could make meaningful comparisons between the 2004 data and the current data. Hence, we used a similar question set in an updated survey tool, with additional questions focusing on the nature of our practices and perceived zones of risk, attribution for the errors, culpability, and the improvement processes implemented. After much consideration, we decided to embrace technology (and keep costs low) by using an online survey tool to conduct the survey. The survey closed at the end of November after being open for fewer than 20 days.
We have not yet sat down to properly classify and sort through the responses; we have spent only a few moments ensuring the integrity of the responses and the confidential data capture of the online survey tool. We will, of course, properly classify the responses, write up the results in a peer-reviewed manuscript, and publish it for Academy Members to continue to reference. However, we were shocked by the number of responses we received. In fewer than 20 days, more than 677 Academy Members took time from their busy practices in the winter season to respond to a survey that was essentially self-reporting of errors in our specialty.
The response rate is staggering and clearly shows the passion of Academy Members and the sheer interest we each have in improving the quality of care we deliver. Members clearly understand that, collectively, we have the power to improve our own practices. The high response rate resonates with us because it underscores an important concept: an individual otolaryngologist may proceed through an entire career and never experience an error such as the mis-administration of concentrated epinephrine, because it is so rare; however, when we look at our practices collectively, it is a problem that needs to be considered. The sheer volume of responses also validates the PSQI Committee's commitment to a secure, online patient safety event reporting portal, which will be available soon on the Academy website.
As this issue of the Bulletin goes to press, we will be properly classifying and working to understand the huge volume of responses we received from Academy Members. We should all take a moment to pause and appreciate how our specialty collectively continues to move the needle toward improving the care and safety of patients with otolaryngologic diseases, because we are so passionate, as a specialty and as Academy Members, about ensuring that we deliver quality care.
We encourage members to write to us with any topic of interest, and we will try to research and discuss the issue. Members' names are published only after they have been contacted directly by Academy staff and have given consent to the use of their names. Please email the Academy at qualityimprovement@entnet.org to engage us in a patient safety and quality discussion that is pertinent to your practice.