PharmacoEconomics & Outcomes News 630 - 11 Jun 2011
CER – evolution or revolution? – Pauline McWilliams

Comparative effectiveness research (CER) is a powerful tool to help decision makers determine the most effective treatment when several alternatives exist. The 2009 American Recovery and Reinvestment Act provided $US1.1 billion to improve the quality and effectiveness of healthcare through CER. Just how far CER has evolved since then was the topic of the Second Plenary Session of the 16th Annual International Meeting of the International Society for Pharmacoeconomics and Outcomes Research [ISPOR; Baltimore, Maryland, USA; May 2011]. In addition, a number of podium and poster presentations at the meeting further discussed the use of, and recent developments in, CER.

The use of evidence-based medicine focusing on real-world comparative assessments has evolved in recent years, particularly in the US with the recent healthcare reforms. The US Agency for Healthcare Research and Quality (AHRQ) defines CER as research that "is designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options. The evidence is generated from research studies that compare drugs, medical devices, tests, surgeries, or ways to deliver health care".

Several issues concerning CER were discussed in the plenary session entitled ‘Evolution of Comparative Effectiveness Research: Learning from the Past; Challenges for the Future’. This session, moderated by Professor Sandy Schwartz from the University of Pennsylvania, consisted of two presentations given by Dr Carolyn Clancy, Director of the AHRQ, and Bruce Psaty, Professor of Medicine, Epidemiology and Health Services at the University of Washington.
AHRQ’s role in CER

In the plenary session, Dr Clancy presented on the investments made in patient-centered outcomes research at the AHRQ.1 She said that the AHRQ acts as "a convener" for this research and that it is the first "federal agency to have a legislatively mandated center for conducting patient-centered outcomes research". Through its Effective Health Care Program, the AHRQ has published more than 50 products (including guides for policymakers, clinicians and consumers), with plans to produce 50 more in the next 2 years. She emphasised the need to synthesise existing evidence and to create new evidence. She said that investments had to be made – without exception – in distributed data networks, the development of methods, retrospective and prospective non-experimental/observational studies, and pragmatic clinical trials that focus on comparative effectiveness. In discussing evidence-generation activities, she highlighted the progress made in several areas, including awards made for creating and enhancing national patient registries, and iADAPT – the innovative dissemination and implementation grants for CER.

Dr Clancy went on to discuss the role of the Patient-Centered Outcomes Research Institute (PCORI), an independent nonprofit organisation created by the Affordable Care Act in 2010. The PCORI, led by the recently appointed Executive Director, Joe Selby, will form internal advisory panels (made up of clinicians, consumers, researchers, payers and manufacturers). As outlined in the Act, the Institute has to provide support tools and resources for these panels. There will also be comment periods for the public to give feedback on priorities and decisions. In concluding, Dr Clancy said that we have to "address the gap that exists between our ability to generate data and having the capacity to produce actionable information that can be used right now".
CER: an epidemiologist’s view

In the second presentation of the plenary session, Professor Psaty gave his views on CER as a practising epidemiologist.2 He said that CER has a legislative mandate and is now the "current fashion", taking over from evidence-based medicine. However, going against the current support for CER, he recommended the "need for large-scale randomized evidence without undue emphasis on small trials, meta-analyses, or subgroup analyses". He said that existing data may not give reliable answers, and that it is better to acknowledge that we cannot answer certain questions than to "pretend that we know the answers" by using inadequate data. He concluded by recommending that we "use best methods and use advocacy".
How do CER assessments from AHRQ and NICE compare?

Although CER is a relatively recent development in the US, NICE in the UK has been conducting Health Technology Assessments (HTAs) for over 10 years. Both AHRQ CERs and NICE HTAs involve comparative effectiveness assessments; however, one obvious difference between the two is the lack of economic considerations in CER. In a poster presentation, Kym Alnwick from Heron Evidence Development compared CER publications from the AHRQ with HTAs from NICE.3 Of the 14 AHRQ CER publications available on drugs, only two corresponded to NICE HTAs – one on the treatment of rheumatoid arthritis and psoriatic arthritis (RA/PA), and one on lipid-modifying drugs. Conclusions from the AHRQ CER on RA/PA corresponded to two NICE HTAs, which were in general agreement with each other. However, for lipid-modifying drugs, the CER report concluded that the data were insufficient to guide clinical decisions, whereas NICE made positive treatment recommendations. The analysis also revealed that several of NICE’s HTAs had been superseded by Clinical Guidelines, "which in some ways are more similar to the AHRQ CER documents". A review of these found that three were relevant to AHRQ reports, all of which generally agreed in their conclusions.
Role of mixed treatment comparisons . . .

Researchers from Costello Medical Consulting in the UK reviewed the use of mixed treatment comparisons (MTCs) in HTAs from NICE.4 Because head-to-head comparisons of all relevant treatments are relatively rare, indirect MTCs provide a means to aid in the assessment of comparative effectiveness. These researchers identified 17 HTAs incorporating MTCs published by NICE between 2006 and March
2011. They found that MTCs "have become more frequently used in NICE technology appraisals", with eight of the 26 HTAs (31%) published in 2010 using MTCs, compared with none in 2006 and 2007. They also found that most of the MTCs were conducted by the manufacturer (often within economic models). However, 12 of the 15 manufacturer-conducted MTCs were criticised by the Evidence Review Group (ERG), which led to the MTC results being disregarded or interpreted with caution in at least five cases. In a similar analysis, researchers from the University of York identified 24 submissions made to NICE that included either an MTC or an indirect comparison.5 On considering the evidence, the ERG had concerns about the validity of 18 appraisals. Of the 24 submissions, 58% received a restricted decision. These researchers suggest that guidelines for the conduct of MTCs be established, and say that the "emerging ISPOR Good Research Practice guide on indirect treatment comparisons will provide a basis for these".
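Neither poster spells out the calculations involved, but the simplest building block of such indirect comparisons is the adjusted indirect comparison via a common comparator (the Bucher method): the relative effect of A versus B is estimated from separate trials of A versus C and B versus C. The sketch below uses invented numbers purely for illustration; the function name and inputs are not taken from either analysis.

```python
import math

def bucher_indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs B through common comparator C.

    log(OR_AB) = log(OR_AC) - log(OR_BC), with the standard errors of the
    two direct estimates added in quadrature. Returns the indirect log odds
    ratio, its standard error, and a 95% CI on the odds-ratio scale.
    """
    log_or_ab = log_or_ac - log_or_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    ci = (math.exp(log_or_ab - 1.96 * se_ab),
          math.exp(log_or_ab + 1.96 * se_ab))
    return log_or_ab, se_ab, ci

# Invented example: one trial gives OR(A vs C) = 0.50 (SE of log OR 0.20),
# another gives OR(B vs C) = 0.80 (SE 0.15).
log_ab, se_ab, ci = bucher_indirect_comparison(math.log(0.50), 0.20,
                                               math.log(0.80), 0.15)
print(math.exp(log_ab), se_ab)  # indirect OR(A vs B) = 0.625, SE = 0.25
```

Note how the quadrature step inflates the uncertainty relative to either direct comparison – one reason ERGs scrutinise MTC-based submissions so closely.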
. . . and large simple trials

The possible role of large simple trials – a hybrid of the observational cohort study and the randomised trial – in comparative effectiveness research was investigated by Lona Vincent from Quintiles Global Consulting.6 By searching ClinicalTrials.gov, this researcher identified 10 trials initiated between 2009 and 2010 that could be described as large simple trials. However, the analysis found that fewer than 50% of these trials were designed to measure effectiveness as opposed to efficacy. Consequently, Vincent says that, although large simple trials are being conducted, they need to be designed "to truly capture effectiveness".
A comparative effectiveness index to guide decisions?

In a podium presentation, the generation of a comparative effectiveness index (CEI) was suggested by researchers from Quintiles and Genesis BioPharma.7 Their CEI would combine relevant factors – such as efficacy/effectiveness, compliance/persistence, safety, and health-related QOL/utility – into a single index "to avail decision makers with comprehensive information at a glance". They say that the data to create this index would be obtained from various sources, including meta-analyses of randomised controlled trials, observational data, and pharmacy claims. A quality rating would be incorporated to ensure that the index included only robust data. In concluding, they suggested "the creation of a US publically available web-based database of information underlying the index".
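The presentation does not specify how the factors would be weighted or scored, so the following is only a hypothetical sketch of one way such a composite index might work: a weighted average of per-domain scores, with the proposed quality rating acting as a filter that excludes weak evidence. All domain names, weights, scores and thresholds below are invented for illustration.

```python
def comparative_effectiveness_index(scores, quality, weights, min_quality=0.5):
    """Weighted average of per-domain scores (each on a 0-1 scale).

    Domains whose evidence-quality rating falls below `min_quality` are
    excluded, mirroring the authors' point that only robust data should
    enter the index; the remaining weights are renormalised.
    """
    kept = {d: s for d, s in scores.items() if quality.get(d, 0.0) >= min_quality}
    if not kept:
        raise ValueError("no domain meets the quality threshold")
    total_weight = sum(weights[d] for d in kept)
    return sum(weights[d] * kept[d] for d in kept) / total_weight

# Invented inputs for a hypothetical drug A.
weights = {"effectiveness": 0.40, "safety": 0.25, "compliance": 0.15, "hrqol": 0.20}
drug_a_scores = {"effectiveness": 0.80, "safety": 0.60, "compliance": 0.70, "hrqol": 0.75}
evidence_quality = {"effectiveness": 0.9, "safety": 0.8, "compliance": 0.4, "hrqol": 0.7}

# The compliance data fall below the quality threshold and are excluded.
cei = comparative_effectiveness_index(drug_a_scores, evidence_quality, weights)
print(round(cei, 3))
```

Even this toy version shows why the weighting scheme and quality thresholds would need to be transparent – which is presumably part of the motivation for the publicly available database the authors propose.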
Where to from here?

With the focus on CER as a part of healthcare reform legislation in the US, the tools needed for well-conducted CER are likely to expand, with a move toward increased input from methodologies other than the traditional randomised controlled trial. It remains to be seen how CER will ultimately develop in the US but, with the formation of the PCORI (and in particular the formation of its Methodology Committee in January 2011), methodological standards to guide CER should, it is hoped, be addressed. According to Dr Clancy, we need to "form alliances, partnerships and other strategies that promote collaboration between care systems and public health/community resources" to make this work.

1. Clancy CM. Investments in Patient-Centered Outcomes Research. 16th Annual International Meeting of the International Society for Pharmacoeconomics and Outcomes Research: (plus oral presentation), 21 May 2011. Available from: URL: http://www.ispor.org
2. Psaty BM. Conducting Comparative Effectiveness Research: the View of a Practising Epidemiologist from the Other Washington. 16th Annual International Meeting of the International Society for Pharmacoeconomics and Outcomes Research: (plus oral presentation), 21 May 2011. Available from: URL: http://www.ispor.org
3. Alnwick K. AHRQ Versus NICE: Do the Conclusions in CER Reports Correspond Closely to the Comparative Effectiveness Assessments Made in HTA Reports? 16th Annual International Meeting of the International Society for Pharmacoeconomics and Outcomes Research: (plus poster) abstr. PHP84, 21 May 2011. Available from: URL: http://www.ispor.org
4. Brooks-Rooney C, et al. The Use of Mixed Treatment Comparisons in NICE Technology Appraisals. 16th Annual International Meeting of the International Society for Pharmacoeconomics and Outcomes Research: (plus poster) abstr. PMS69, 21 May 2011. Available from: URL: http://www.ispor.org
5. Bending MW, et al. Demonstrating Clinical-Effectiveness Using Indirect and Mixed Treatment Comparison Analysis: a Review of Manufacturers' Single Technology Appraisal (STA) Submissions to the National Institute for Health and Clinical Excellence (NICE). 16th Annual International Meeting of the International Society for Pharmacoeconomics and Outcomes Research: abstr. PHP56, 21 May 2011. Available from: URL: http://www.ispor.org
6. Vincent L. The Outlook of Large Simple Trials for Comparative Effectiveness Research: an Application of the PRECIS Framework. 16th Annual International Meeting of the International Society for Pharmacoeconomics and Outcomes Research: abstr. PHP76, 21 May 2011. Available from: URL: http://www.ispor.org
7. Horowicz-Mehler N, et al. A Comparative Effectiveness Index to Inform Clinical Decisions. 16th Annual International Meeting of the International Society for Pharmacoeconomics and Outcomes Research: (plus oral presentation) abstr. EV3, 21 May 2011. Available from: URL: http://www.ispor.org
1173-5503/10/0630-0002/$14.95 © 2010 Adis Data Information BV. All rights reserved