Quality, Research, Metrics, and You
Rahul K. Shah, MD
George Washington University School of Medicine
Children’s National Medical Center, Washington, DC
There is a continuum in patient safety and quality improvement that runs from quality improvement initiatives, to research that builds on those initiatives, to metrics that measure and benchmark the outcomes of the interventions, and ultimately to measurement of each initiative’s impact on the end users: the patient and the physician.
Reviewing the recent history of patient safety and quality improvement in otolaryngology reveals an interesting progression along this spectrum. The early literature focused on what was wrong with the system and offered suggestions to ameliorate latent system defects. The next iteration has involved attempts to produce tightly focused patient safety and quality improvement studies using formal research methodology, paradigms, and statistical measures.
David W. Roberson, MD, co-chair of the Patient Safety and Quality Improvement (PSQI) Committee and a national thought leader in this arena, often explains that quality improvement initiatives differ distinctly from basic science studies in the burden of proof needed to show demonstrable differences. In other words, a great quality improvement initiative may not need robust statistical significance to pass the basic litmus test of, “Is this good for the patient, the physician, the system?” An excellent and well-known example is that we do not need a randomized, double-blinded study to show that using a parachute when skydiving improves outcomes. Of course, that example exaggerates Dr. Roberson’s point: basic science and quality improvement have different research endpoints.
With this caveat in mind, we have seen a trend in the literature, across surgery and specifically in otolaryngology, toward quality improvement initiatives that are more methodologically robust. The most beneficial result of applying statistical tests and measures is that the initiatives are more readily accepted by physicians. Surgeons are accustomed to reading and interpreting literature that uses such statistics and have come to expect them from many of our journals. The infusion of these measures into quality improvement initiatives has led to findings being adopted much more readily than before.
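To make that idea concrete, below is a minimal sketch of the kind of before-and-after comparison a quality improvement team might report. The counts are hypothetical placeholders, and the choice of a Fisher exact test is an assumption made for illustration; it is not a method prescribed by this column or the Academy.

```python
# Minimal sketch: comparing complication rates before and after a hypothetical
# QI intervention. All counts below are invented placeholders for illustration.
from scipy.stats import fisher_exact

# [complications, cases without complications] before and after the intervention
before = [18, 482]   # hypothetical: 18 complications in 500 cases
after = [8, 492]     # hypothetical: 8 complications in 500 cases

odds_ratio, p_value = fisher_exact([before, after])

before_rate = before[0] / sum(before)
after_rate = after[0] / sum(after)

print(f"Complication rate before: {before_rate:.1%}")
print(f"Complication rate after:  {after_rate:.1%}")
print(f"Fisher exact p-value:     {p_value:.3f}")
# As Dr. Roberson's point suggests, a QI result can still be worth adopting
# even when the p-value alone would not satisfy a basic-science burden of proof.
```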
Most recently, the Joint Commission and other regulatory bodies have been ensuring that hospitals follow the Ongoing Professional Practice Evaluation (OPPE) and Focused Professional Practice Evaluation (FPPE) recommendations. These are excellent initiatives. However, their success, and ultimately their validity, depends on the input; to recite the well-known adage, garbage in, garbage out. I have spoken with more than a dozen hospitals and otolaryngology programs representing community and academic settings, and the common sentiment is that many of the metrics we propose are basic and do not differentiate among otolaryngologists or facilitate stratification. My fear is that if we do not produce robust metrics that allow such stratification, and that are created, vetted, and adopted by us, then others (regulatory or insurance bodies) will use their vast databases to produce the metrics and measures instead. We will then be forced to follow OPPE guidelines set and controlled by others, not by physicians.
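As an illustration of what a metric that actually stratifies might look like, the sketch below applies funnel-plot-style control limits to hypothetical per-surgeon readmission counts. The baseline rate, the outcome chosen, and the data are all assumptions made for this example; this is not an Academy metric nor anything required for OPPE.

```python
# Illustrative sketch (not an Academy or Joint Commission metric): flagging
# surgeons whose hypothetical readmission rates fall outside funnel-plot-style
# control limits around an assumed specialty-wide baseline rate.
import math

BASELINE_RATE = 0.04  # assumed specialty-wide readmission rate

# Hypothetical case volumes and readmission counts per surgeon
surgeons = {
    "Surgeon A": (250, 9),
    "Surgeon B": (80, 2),
    "Surgeon C": (400, 31),
}

for name, (cases, readmits) in surgeons.items():
    rate = readmits / cases
    # Approximate 3-sigma binomial control limits for this surgeon's case volume
    sigma = math.sqrt(BASELINE_RATE * (1 - BASELINE_RATE) / cases)
    upper = BASELINE_RATE + 3 * sigma
    lower = max(0.0, BASELINE_RATE - 3 * sigma)
    flag = "outside limits" if rate > upper or rate < lower else "within limits"
    print(f"{name}: rate {rate:.1%} (limits {lower:.1%}-{upper:.1%}) -> {flag}")
```

Volume-adjusted limits of this kind are one way a metric can differentiate fairly between a low-volume and a high-volume surgeon, rather than ranking everyone on a raw rate.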
Hence we see the continuum: quality improvement initiatives are morphing into studies that apply research methodologies, which will yield data that can be used to set and create metrics, which in turn help demonstrate the quality of care we provide to our patients while preserving the autonomy of our practices.
The programming at the Annual Meeting attempts to cover this changing trend with topics on apology and disclosure, quality and research in patient safety and quality improvement, and how to transition to putting the literature to work in the form of metrics for maintenance of certification and hospital regulatory obligations. It has been excellent to see how the programming has evolved over the past years. This year’s programming should update otolaryngologists on how to continue to provide care and contribute to this continuum of patient safety and quality improvement.
We encourage members to write us with any topic of interest. We will try to research and discuss the issue. Members’ names are published only after they have been contacted directly by Academy staff and have given consent to the use of their names. Please email the Academy at qualityimprovement@entnet.org to engage us in a patient safety and quality discussion that is pertinent to your practice.