Promise & Pitfalls of Comparative Effectiveness Research
The October 2012 issue of Health Affairs contains many articles on comparative effectiveness research (CER). Thirty-day access to the full issue may be purchased for $29.95, and 24-hour access to individual articles for $12.95. The CER articles include these:
Of particular interest in this issue is an article by Justin W. Timbie and colleagues from the RAND Corporation entitled Five Reasons That Many Comparative Effectiveness Studies Fail To Change Patient Care And Clinical Practice. The article cites research demonstrating that scientific evidence may be slow to change clinical practice. The reasons for slow adoption include 1) financial disincentives (e.g., fee-for-service), 2) ambiguity, 3) cognitive biases, 4) inadequate links between research and clinical use, and 5) limited use of decision support. Another article in the same issue, entitled Academic Detailing Can Play A Key Role In Assessing And Implementing Comparative Effectiveness Research Findings, takes up this same problem, arguing that knowledge derived from CER must be communicated to clinicians in a way that will actually change practice. The authors suggest "...that academic detailing--direct outreach education that gives clinicians an accurate and unbiased synthesis of the best evidence for practice in a given clinical area--can translate comparative effectiveness research findings into actions that improve health care decision making and patient outcomes." (Health Affairs Blog, October 9, 2012)
- The Patient-Centered Outcomes Research Institute Should Focus On High-Impact Problems That Can Be Solved Quickly, Harold C. Sox, associate director for faculty at the Dartmouth Institute for Health Policy and Clinical Practice.
- Reviewing Hypothetical Migraine Studies Using Funding Criteria From The Patient-Centered Outcomes Research Institute, Joe V. Selby, executive director, Patient-Centered Outcomes Research Institute.
- Regulatory Requirements Of The Food And Drug Administration Would Preclude Product Claims Based On Observational Research, Joseph P. Griffin, associate director for policy development, Office of Medical Policy, Center for Drug Evaluation and Research, Food and Drug Administration.
- Among Other Flaws, Hypothetical Migraine Study Lacks Independent Evaluation And Patient Engagement, Marc Boutin, executive vice president and chief operating officer, National Health Council.
- Communication About Results Of Comparative Effectiveness Studies: A Pharmaceutical Industry View, Eleanor M. Perfetto, senior director of reimbursement and regulatory affairs, federal government relations, Pfizer.
- The Hypothetical Migraine Drug Comparative Effectiveness Study: A Payer's Recommendations For Obtaining More Useful Results, Robert S. Epstein, president and CEO, Epstein French Associates.
Patient-reported Outcomes Essential to Comparative Effectiveness Research
An article from the Journal of Clinical Oncology recommends that patient-reported symptoms and health-related quality of life measures should be assessed in comparative effectiveness studies. Ethan Basch, MD, Director of the Cancer Outcomes Research Program at the University of North Carolina Lineberger Comprehensive Cancer Center, said, "Comparative effectiveness research looks at how treatment options perform in a real-world setting, and is particularly important in cancer treatment, where patients are not only fighting their disease but also enduring treatments that may have a significant impact on their ability to function and their quality of life." (Medical Xpress, October 16, 2012)
What Is Comparative Effectiveness Research?
The Agency for Healthcare Research and Quality (AHRQ) defines comparative effectiveness research (CER) this way: "Comparative effectiveness research is designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options. The evidence is generated from research studies that compare drugs, medical devices, tests, surgeries, or ways to deliver health care." This page is an introduction to CER and includes links to other resources on the agency's site. (AHRQ, October 2012)
Medication Adherence Interventions: Comparative Effectiveness
The Agency for Healthcare Research and Quality (AHRQ) has released a new report as part of its Closing the Gap: A Critical Analysis of Quality Improvement Strategies (CQG) series, which began in June 2011. (See the CQG intro page at http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?productid=715&pageaction=displayproduct#3151.) The new report, entitled "Medication Adherence Interventions: Comparative Effectiveness," focuses on the importance of patients' adherence to medication-based treatments and was prepared with the assistance of AHRQ's Evidence-based Practice Centers (EPCs). Nonadherence to medication regimens is not only common (rates as high as 30% have been reported) but also costly, with estimates as high as $289 billion annually. Improvement in self-management of chronic diseases could result in a cost-to-savings ratio of approximately 1:10. The report concludes that "...medication adherence can be improved via formal programs of various sorts. At this stage, new studies need to be asking, 'What specific intervention element or elements work best for improving medication adherence?' and 'How can we further enhance medication adherence interventions to improve health outcomes?'" (AHRQ, September 11, 2012)
FDA Holds Off on CER Promotion Policy as Industry Seeks Further Clarity
FDA officials said in a Health Affairs article that they have postponed clarifying their policies on comparative product promotion claims pending the resolution of methodological problems associated with comparative effectiveness observational studies. Drug industry officials had welcomed a February statement from the FDA indicating that it would allow drug companies to rebut CER findings reflecting negatively on their products, provided the rebuttals were not promotional. "My sense is the FDA's hands may be tied in a lot of ways by legislation, by tradition perhaps, even by Section 114," said Peter Neumann, director of the Center for the Evaluation of Value and Risk in Health at the Institute for Clinical Research and Health Policy Studies at Tufts Medical Center. The agency could write guidance, but legislative language would be a "safer bet," Neumann said. (Inside Health Policy, October 15, 2012)