How Comparatively Effective Are We?
David R. Nielsen, MD
AAO-HNS/F EVP/CEO
As everyone knows by now, embedded in the Patient Protection and Affordable Care Act of 2010 (ACA) is language designed to address the unsustainable cost of healthcare in the United States by reducing waste, eliminating unnecessary care, and reducing unwanted and unexplained variations in care. One specific method the ACA employs is support for comparative effectiveness research (CER)—defined by the Agency for Healthcare Research and Quality (AHRQ) as research “designed to inform healthcare decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options. The evidence is generated from research studies that compare drugs, medical devices, tests, surgeries, or ways to deliver healthcare.” See http://effectivehealthcare.ahrq.gov/index.cfm/what-is-comparative-effectiveness-research1/.
The Patient-Centered Outcomes Research Institute (PCORI) was established by the ACA specifically to provide direction and oversight for an entire spectrum of envisioned comparative effectiveness research—research that could dramatically and positively influence the decision making of professionals, build consensus around the most efficient ways of providing high-quality care for those conditions and interventions for which there is enough data to support a conclusion, and advance the three aims of the National Quality Strategy: better individual health outcomes, better population health, and reduced cost of healthcare.
While using this approach to improve quality and resource use is a laudable goal, and one that every physician and surgeon can support, the challenge of prioritizing clinical topics, designing relevant and meaningful studies, and acting on what is learned can be complex and daunting. The concept of CER is not new. The medical profession has many years of experience with this approach but, to date, limited benefit to show for what we have learned. While there are many examples of how such research has improved quality and reduced cost, the promise of CER as envisioned by those who crafted the ACA language remains largely unfulfilled. What are the reasons for this?
A recent article in Health Affairs (October 2012) is instructive. After careful study of the literature on many types of CER, the authors conclude that five root causes appear to be responsible for the failure of CER to translate into positive changes in clinical practice. Misalignment of incentives, ambiguity of results, cognitive biases in interpreting new information, failure to consider the needs of the end users of the data, and limited use of clinical decision support tools all impair the goal of changing clinical behavior. The cognitive biases alone reveal that physicians are not exempt from the powerful effect of habitual behavior and thought processes. As clinicians, the authors find, we demonstrate confirmation bias (the tendency to believe and act on data that support our preconceived notions of what is true); pro-intervention bias (the tendency to act, rather than to observe or wait, even when the evidence clearly shows that intervention has little or no benefit or may be harmful); and pro-technology bias (the assumption that newer technologies are superior to existing modalities).
The article concludes that PCORI has learned that multi-stakeholder involvement in CER, from design through implementation, is essential to minimize the negative effects of these five barriers and three biases on changing clinical practice for the better. The AAO-HNS/F agrees that collaboration is essential, and has made multi-disciplinary engagement in our Guidelines Task Force a hallmark of our published evidence-based guideline development process. That process, now in its third edition, is described in the supplement to the January issue of Otolaryngology–Head and Neck Surgery; if you have not read it, please take the time to review it. Since learning to eliminate bias, carefully searching for and critically examining data, and being willing to change our clinical practice to achieve better results are all essential to improving quality, we each need to become familiar with relevant health services research and CER and master the ability to implement what we learn.
Source:
Timbie JW, Fox DS, Van Busum K, Schneider EC. Five reasons that many comparative effectiveness studies fail to change patient care and clinical practice. Health Aff (Millwood). 2012 Oct;31(10):2168-75.
The particular approach championed by AHRQ and the ACA includes seven distinct steps for optimal implementation:
- Identify new and emerging clinical interventions.
- Review and synthesize current medical research.
- Identify gaps between existing medical research and the needs of clinical practice.
- Promote and generate new scientific evidence and analytic tools.
- Train and develop clinical researchers.
- Translate and disseminate research findings to diverse stakeholders.
- Reach out to stakeholders via a citizens’ forum.