Bethesda, MD -- A national push on comparative effectiveness research is under way as a result of federal stimulus and health reform legislation. The research, which is aimed at answering critical questions about what works--and what doesn't--in health care, is the subject of the October issue of Health Affairs. The issue explores the myriad challenges inherent in making the most of the research and using it to better inform future health care decisions.
Comparative effectiveness research has been described by the Institute of Medicine as assisting "consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels." Key to the national initiative will be the new Patient-Centered Outcomes Research Institute, established under health reform as a nongovernmental entity that will set priorities for comparative effectiveness research, develop and implement a research agenda, and disseminate research findings to health care decision makers.
The nation as a whole now faces a number of challenges, including how it will make use of comparative effectiveness research to improve the value of health care. A key topic examined by several authors in the October Health Affairs is how the research can be used to improve the cost-effectiveness of care. For example, Steven Pearson, the president and founder of the Institute for Clinical and Economic Review (ICER) at Harvard Medical School, and Peter B. Bach, a pulmonary and critical care physician and member of the Health Outcomes Research Group at Memorial Sloan-Kettering Cancer Center, propose an innovative way for Medicare to draw on the research to set payment for services that provide comparable patient outcomes.
The October issue of Health Affairs is funded by the National Pharmaceutical Council, WellPoint Foundation, and Association of American Medical Colleges.
Other studies in this edition of Health Affairs examine additional research challenges, including the following:
- An analysis by Joshua S. Benner and colleagues at the Brookings Institution offers early insights into how the new comparative effectiveness strategy is developing and the gaps that need to be addressed. Nearly 90 percent of the $1.1 billion allocated for comparative effectiveness research under the stimulus legislation will be spent on evidence development and synthesis and on improving research capacity, the study authors found. They recommend a stronger emphasis on experimental research, evaluating broad health system-level reforms, identifying subgroups of patients most likely to benefit from given interventions, addressing the needs of understudied groups, and developing effective strategies for disseminating research results.
Designing the research
- Engaging patients, doctors, and other stakeholders in the design of comparative effectiveness studies would help ensure the relevance of this research to health care decision makers. Ari Hoffman, of the University of California, San Francisco, and colleagues at the Center for Medical Technology Policy in Baltimore detail five principles for effective engagement of a broad coalition of research participants: (1) ensure balance among participating stakeholders; (2) get participants to "buy in" to the enterprise and understand their roles; (3) provide neutral and expert facilitators for research discussions; (4) establish connections among the participants; and (5) keep participants engaged throughout the research process.
Garnering public support
- Americans have mixed feelings about comparative effectiveness research. Two studies from national opinion surveys by Alan S. Gerber of Yale University, Eric M. Patashnik of the University of Virginia, and colleagues find that people see the value of information generated by comparative effectiveness research but fear that it may be used to ration care or to limit doctors' ability to tailor their care. Although people want information to help them make health care decisions, they do not want their treatment options restricted.
Disseminating research findings
- Historically, it takes a long time for new research to make its way into everyday clinical practice. Jerry Avorn of Harvard Medical School and Michael Fischer of Brigham and Women's Hospital describe a variety of ways to speed the "bench to behavior" translation of new comparative effectiveness research studies, including: planning early for dissemination; developing new models of continuing medical education based on the best available evidence instead of marketing; using academic detailing, which allows for tailored communication through educational outreach; embedding new research findings into health technology applications, such as computer-assisted prompts for doctors and computerized physician order entry; and requiring pharmaceutical and device manufacturers to include a balanced summary of research findings in their promotional materials.
- More could--and should--be done to maximize the value from this new research enterprise, according to Lynn M. Etheredge, of Chevy Chase, Md., a consultant to the Rapid Learning Project at George Washington University. Etheredge recommends a presidential order establishing a national database for effectiveness research studies as part of a strategy to instill a rapid-learning culture across the health care system. Ultimately, he observes, the system must be able to learn the best use of new technologies as quickly as it produces them. Building a high-performance infrastructure for comparative effectiveness research will help bring this about.
- Ann C. Bonham and Mildred Z. Solomon, of the Association of American Medical Colleges, describe how academic medicine can also play a strong role in moving comparative effectiveness research into practice.
Selecting research methods and tools
- To maximize the value of effectiveness research, the new Patient-Centered Outcomes Research Institute should take a balanced, flexible approach to the types of studies it sponsors, write Louis P. Garrison Jr., of the University of Washington, Seattle, and colleagues. The authors note that findings from the Institute will be used by a range of decision makers, including government regulators, policy makers, payers, providers, and patients, who will have different information needs and evidence standards. Overly strict, one-size-fits-all research standards could impede the real-world use of effectiveness research by a full range of stakeholders.
- Two papers support the role of observational evidence in comparative effectiveness research, in addition to clinical trials, long considered the "gold standard" for research. Unlike controlled trials, observational research consists of retrospective and prospective studies based on treatment choices made by patients and their providers, not by assignment according to a research protocol. These "real-world" data sets can be enormously useful for understanding treatment benefits and harms, according to Nancy Dreyer, of Outcome Sciences in Cambridge, Mass., and co-authors, who write that, in order to guide good decision making, effectiveness research should encompass a range of methods. Rachael L. Fleurence, of United Biosource Corp., Bethesda, Md., and colleagues agree, noting that observational studies offer quicker results and the opportunity to investigate large numbers of interventions and outcomes among diverse populations, often at a lower cost than clinical trials.
Addressing differences among population groups
- Three papers--by Lisa A. Simpson, of the Cincinnati Children's Hospital Medical Center, and colleagues; David L. Shern and colleagues at Mental Health America; and a Web First article by C. Daniel Mullins, of the University of Maryland--discuss the potential for comparative effectiveness research to substantially improve health and health care among children, minorities, and those with mental illness. Historically, these groups have been underrepresented in many medical research studies or underserved by the health care system.