Public Demands!
Rahul K. Shah, MD, George Washington University School of Medicine, Children’s National Medical Center, Washington, DC
In 1999, the Institute of Medicine released To Err Is Human, the report that has become the clarion call to rejuvenate the patient safety and quality improvement movement in U.S. healthcare. Since then, there has been an explosion of government, insurer, and public initiatives to drive improvement in the U.S. healthcare system. The successes have included a marked decrease in central-line infection rates, demonstrable improvements in hand hygiene practices, and proof of principle for using benchmarking to drive change in surgical outcomes (the American College of Surgeons National Surgical Quality Improvement Program). For the last decade, these changes have been driven by a tacit acknowledgement that improvement is simply the right thing to do, coupled with a fear that the public would demand such changes once it became knowledgeable about the system.
Fortunately, or not, that time has come. The movement toward greater transparency has been mounting over the past couple of years. It includes taxpayer-funded programs such as the Department of Health and Human Services' Hospital Compare website (www.hospitalcompare.hhs.gov). I implore Academy members to spend a few minutes understanding what our patients are looking at when they choose to see us and to have surgery in our hospitals. The website is sophisticated and easy to use, and comparing one hospital to another is seamless. For the patient, this is obviously very convenient. For the physician, it means that our outcomes are out there for evaluation and consideration.
The October 3 issue of Modern Healthcare listed the hospitals with the highest 30-day readmission rates, using Medicare data. It was amazing that I personally knew, or have had friends or family treated at, almost half of the hospitals listed. Furthermore, I was startled to see that many of the hospitals listed have a readmission rate of almost one-third of discharges! I cannot fathom that this would be the result of sub-par delivery of care or systematic breaches of patient safety and quality. My immediate explanation is that these hospitals must take care of the sickest patients, who would inevitably have poorer outcomes than healthier, more advantaged individuals. However, I am sure the truth lies somewhere in between.
I am a self-purported expert in this arena. However, how is the lay public going to interpret this data? One cannot blame them if they, too, are immediately startled by some of the figures. It would behoove us to know where the hospitals we practice in fall on these publicly available scorecards, so that we can explain to our patients why we believe we have such ratings and rankings.
Various organizations have also started bestowing quality awards on hospitals for the metrics they deem most vital. This is of course good: competition is healthy in a capitalist economy such as ours, and we can surmise that hospitals will strive to improve their practices in hopes of receiving such accolades and recognition. The problem is that many organizations currently recognize specific quality achievements, and many more are starting the process. How is the public supposed to discern the value of one quality award compared with another? One may argue that the 30-day readmission rate does not matter if someone is seeking a regional expert to care for a patient's specific sinus complaint. However, if this is the only metric the patient is exposed to, it may be hard for him or her to interpret. Indeed, a local hospital from which I would never have considered seeking care was recently given a rare and prestigious quality award. Rightly or wrongly, knowledge of that award immediately changed my perception of the hospital.
There is not much we as otolaryngologists can do to prepare for the tremendous transparency of quality and safety metrics that is inevitably coming. However, as physicians, we can take an active role in our medical staffs and in our local, regional, and national organizations to ensure that the metrics quality organizations and hospitals choose to report reflect the realities of our practices, our outcomes, and our patient profiles (case-mix indexes, etc.).
As I often say in this column, we control the ultimate metrics: our patient outcomes. How we define them and make them available to the public will ultimately play a profound role in our future. Specialty-specific databases and patient data registries will assist our practices in our efforts to collate, disseminate, and compare such outcomes.

We encourage members to write us with any topic of interest, and we will try to research and discuss the issue. Members' names are published only after they have been contacted directly by Academy staff and have given consent to the use of their names. Please email the Academy at qualityimprovement@entnet.org to engage us in a patient safety and quality discussion that is pertinent to your practice.