Research Article
Health Information Technology
Primary Care Practices’ Abilities And Challenges In Using Electronic Health Record Data For Quality Improvement
- Deborah J. Cohen ([email protected]) is a professor of family medicine and vice chair of research in the Department of Family Medicine at Oregon Health & Science University, in Portland.
- David A. Dorr is a professor and vice chair of medical informatics and clinical epidemiology, both at Oregon Health & Science University.
- Kyle Knierim is an assistant research professor of family medicine and associate director of the Practice Innovation Program, both at the University of Colorado School of Medicine, in Aurora.
- C. Annette DuBard is vice president of Clinical Strategy at Aledade, Inc., in Bethesda, Maryland.
- Jennifer R. Hemler is a research associate in the Department of Family Medicine and Community Health, Research Division, Rutgers Robert Wood Johnson Medical School, in New Brunswick, New Jersey.
- Jennifer D. Hall is a research associate in family medicine at Oregon Health & Science University.
- Miguel Marino is an assistant professor of family medicine at Oregon Health & Science University.
- Leif I. Solberg is a senior adviser and director for care improvement research at HealthPartners Institute, in Minneapolis, Minnesota.
- K. John McConnell is a professor of emergency medicine and director of the Center for Health Systems Effectiveness, both at Oregon Health & Science University.
- Len M. Nichols is director of the Center for Health Policy Research and Ethics and a professor of health policy at George Mason University, in Fairfax, Virginia.
- Donald E. Nease Jr. is an associate professor of family medicine at the University of Colorado School of Medicine, in Aurora.
- Samuel T. Edwards is an assistant research professor of family medicine and an assistant professor of medicine at Oregon Health & Science University and a staff physician in the Section of General Internal Medicine, Veterans Affairs Portland Health Care System.
- Winfred Y. Wu is clinical and scientific director in the Primary Care Information Project at the New York City Department of Health and Mental Hygiene, in Long Island City, New York.
- Hang Pham-Singer is senior director of quality improvement in the Primary Care Information Project at the New York City Department of Health and Mental Hygiene.
- Abel N. Kho is an associate professor and director of the Center for Health Information Partnerships, Northwestern University, in Chicago, Illinois.
- Robert L. Phillips Jr. is vice president for research and policy at the American Board of Family Medicine, in Washington, D.C.
- Luke V. Rasmussen is a clinical research associate in the Department of Preventive Medicine, Northwestern University.
- F. Daniel Duffy is professor of medical informatics and internal medicine at the University of Oklahoma School of Community Medicine–Tulsa.
- Bijal A. Balasubramanian is an associate professor in the Department of Epidemiology, Human Genetics, and Environmental Sciences, and regional dean of UTHealth School of Public Health, in Dallas, Texas.
Abstract
Federal value-based payment programs require primary care practices to conduct quality improvement activities, informed by the electronic reports on clinical quality measures that their electronic health records (EHRs) generate. To determine whether EHRs produce reports adequate to the task, we examined survey responses from 1,492 practices across twelve states, supplemented with qualitative data. Meaningful-use participation, which requires the use of a federally certified EHR, was associated with the ability to generate reports—but the reports did not necessarily support quality improvement initiatives. Practices reported numerous challenges in generating adequate reports, such as difficulty manipulating and aligning measurement time frames with quality improvement needs, lack of functionality for generating reports on electronic clinical quality measures at different levels, discordance between clinical guidelines and measures available in reports, questionable data quality, and vendors that were unreceptive to changing EHR configuration beyond federal requirements. The current state of EHR measurement functionality may be insufficient to support federal initiatives that tie payment to clinical quality measures.
Since 2008, adoption of office-based physician electronic health records (EHRs) has more than doubled.1 Federal investment played a critical role in accelerating EHR adoption through a combination of financial incentives (the EHR Incentive Program) and technical assistance programs (Regional Extension Centers).2–6 The expectation was that widespread adoption of EHRs would efficiently generate meaningful data, enabling accurate measurement of quality, informing practice quality improvement efforts, and ultimately leading to improved care processes and outcomes. Yet little is known about how well EHRs meet these expectations, particularly among primary care practices with scarce technical resources.7–11
The EHR Incentive Program set standards for the meaningful use of EHRs, which included implementing an EHR system and demonstrating its use to improve care. There were seventeen core standards defined in stages 1 and 2 of the meaningful-use program (2015–17). Stage 3 began in 2017 and expanded the requirements to include health information exchange, interoperability, and advanced quality measurement to maximize clinical effectiveness and efficiency by supporting quality improvement. As of 2017 the EHR Incentive Program defined sixty-four electronic clinical quality measures12 that are aligned with national quality standards. The rationale behind using these measures was to reduce the need for clinicians’ involvement in reporting by using data already collected within the EHR and automating the electronic submission of results.
Quality measurement for payment grew with the 2006 implementation of the Physician Quality Reporting System, as an increasing number of clinicians and practices reported their quality data electronically. In 2016 the Quality Payment Program6 was developed as a way to streamline quality reporting programs while expanding the expectations of electronic reporting as defined by the Merit-based Incentive Payment Program. A core expectation of meaningful use and the subsequent Quality Payment Program was for EHRs to have the capability to measure and report electronic clinical quality measures and for practices to use these data to improve quality. To that end, the Office of the National Coordinator for Health Information Technology (ONC) worked with the Centers for Medicare and Medicaid Services (CMS) and stakeholders to establish a set of certification criteria for EHRs. Use of an ONC-certified EHR was a core requirement of meaningful use. The functionality of certified EHR systems’ reporting of electronic clinical quality measures was aligned with CMS-based incentives and quality criteria; it is anticipated that these quality-based incentives will continue as meaningful use evolves into the Quality Payment Program.
Clinicians participating in the Quality Payment Program were required to report on the full 2017 performance period by March 31, 2018. In addition to meeting external reporting requirements, EHRs must help practices identify delivery gaps and “bright spots” of performance13,14 that are critical for quality improvement. This requires the ability to produce patient-, clinician-, and practice-level reports across various measurement periods and at different frequencies and to allow for customized specifications to conduct improvement cycles.15 EHR systems often fail to meet these expectations, but it is unclear whether this is because of implementation differences, providers’ lack of knowledge about capabilities, or lack of capabilities in the EHRs themselves.16–18
We explore how well EHRs—as currently implemented—meet the measurement-related quality improvement needs in primary care practice. To do so, we examined survey data from 1,492 practices and combined this information with qualitative data to gain a richer answer than surveys alone could provide. Our findings highlight the challenges that practices face as value-based payment replaces volume-based systems.
Study Data And Methods
Study Design And Cohort
In 2015 the Agency for Healthcare Research and Quality (AHRQ) launched EvidenceNOW: Advancing Heart Health in Primary Care. EvidenceNOW is a three-year initiative dedicated to helping small and medium-size primary care practices across the US use the latest evidence to improve cardiovascular health and develop their capacity for ongoing improvement. AHRQ funded seven grantees (called cooperatives) that span seven US regions (and twelve states). Cooperatives were tasked with developing and leveraging sustainable infrastructure to support over 200 practices in their regions in improving electronic clinical quality measures endorsed by CMS and the National Quality Forum for aspirin use,19 blood pressure monitoring,20 cholesterol management,21 and smoking screening and cessation support22 (the ABCS measures).
AHRQ also funded an evaluation of this initiative called Evaluating System Change to Advance Learning and Take Evidence to Scale (ESCALATES) to centralize, harmonize, collect, and analyze mixed-methods data with the goal of generating cross-cooperative, generalizable findings.23 ESCALATES started at the same time the cooperatives’ work began. The goals of ESCALATES included identifying facilitators of and barriers to implementing regionwide infrastructure to support quality improvement among primary care practices, of which health information technology (IT) was a central component.
Data Sources
ESCALATES compiled quantitative survey data collected by the cooperatives from the 1,492 practices. While cooperative study designs (for example, stepped wedge, group randomized trials) varied, all cooperatives used their first year (May 2015–April 2016) for recruitment and start-up activities, and all staggered the time at which practices received the intervention. Survey data were collected from practices before the start of the intervention (that is, at baseline), which ranged from September 2015 to April 2017. We collected complementary qualitative data (observation, interview, and online diary) for this study in the period May 2015–April 2017.23 We chose this time period because it gave us exposure to the data issues that manifested themselves during start-up and implementation.
Qualitative Data Collection And Management
We conducted two site visits with every cooperative. The first site visit occurred before implementation of the intervention (August 2015–March 2016) and focused on understanding the cooperative, its partners, regional resources (including EHR and data capacities), and approach to supporting large-scale practice improvement. The second site visit was conducted during implementation of the intervention (July 2016–April 2017) and focused on observing practice facilitators work with practices. We observed forty-one facilitators conducting sixty unique practice quality improvement visits. During site visits we took field notes and conducted and recorded (and later transcribed the recordings of) semistructured interviews with key stakeholders (for example, investigators, facilitators, and health IT experts).
To supplement observation and interview data, we attended and took notes at a meeting of an AHRQ-initiated cooperative work group to discuss health IT challenges, and we implemented an online diary24 for each cooperative that included documentation by key stakeholders (such as investigators, health IT experts, and facilitators) of implementation experiences in real time (approximately twice a month).
Online diary data, interviews, meeting notes, and field notes were deidentified for individual participants and reviewed for accuracy. To confirm our findings, cooperative representatives completed a table that characterized obstacles to using EHR data for quality improvement.
We used Atlas.ti for data management and analysis. The Oregon Health & Science University Institutional Review Board approved and monitored this study.
Survey Measures
Cooperatives administered a survey to all of their practices. The survey, completed by a lead clinician or practice manager, consisted of a subset of questions from the National Ambulatory Medical Care Survey’s Electronic Medical Records Questionnaire25–28 and assessed practice characteristics, EHR characteristics,29 and reporting capabilities (see online appendix exhibit A1 for survey items).30
Qualitative Data Analysis
Three authors (Deborah Cohen, Jennifer Hemler, and Jennifer Hall) analyzed qualitative data in real time following an immersion-crystallization approach31 and coded data to identify text related to clinical quality measurement, quality improvement, and EHRs. We analyzed data within and across cooperatives to identify nuanced findings and variations regarding usage of EHRs for quality improvement. Data collection and analysis were iterative; initial findings prompted additional questions that were later answered in the online diaries and during site visits to cooperatives.32 We triangulated data with other sources, discussing differences until we reached saturation—the point at which no new findings emerged.32 Qualitative findings informed the selection of variables for quantitative analyses, and both quantitative and qualitative data informed interpretations.
Quantitative Data Analysis
Two authors (Bijal Balasubramanian and Miguel Marino) used descriptive statistics to characterize the EvidenceNOW practice sample and used multivariable logistic regression to evaluate the association between practice characteristics and EHR reporting capability, measured as a “yes” or “no” response to the following question: “Does your practice have someone who can configure or write quality reports from the EHR?” Indicator variables for cooperatives were included in the logistic model to account for regional variability, and we used multiple imputation by chained equations to account for missing data (see appendix exhibit A2).30 We performed statistical analyses using R, version 3.4.0.
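A minimal sketch of this modeling approach, for illustration only: the analysis itself was performed in R, but the Python/statsmodels code below shows the same general structure (chained-equation imputation followed by a logistic model with cooperative indicator variables). The column names (reports_cqm, n_clinicians, ownership, rurality, mu_stage, cooperative) are hypothetical stand-ins for the survey variables, and the sketch assumes numerically coded predictors.

```python
# Illustrative sketch (not the study's code): logistic regression of EHR
# reporting capability on practice characteristics, with cooperative
# indicators and chained-equation multiple imputation for missing data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Hypothetical practice-level extract; one row per practice.
practices = pd.read_csv("practice_survey.csv")

# Impute missing values column by column with chained equations.
imp = mice.MICEData(practices[["reports_cqm", "n_clinicians", "ownership",
                               "rurality", "mu_stage", "cooperative"]])

# Outcome: yes/no item on the ability to configure or write quality
# reports from the EHR; cooperative indicators absorb regional variation.
formula = ("reports_cqm ~ C(n_clinicians) + C(ownership) + C(rurality) "
           "+ C(mu_stage) + C(cooperative)")
analysis = mice.MICE(formula, sm.GLM, imp,
                     init_kwds={"family": sm.families.Binomial()})
results = analysis.fit(10, 10)  # (burn-in iterations, number of imputations)
print(results.summary())        # pooled coefficients; exponentiate for odds ratios
```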
Limitations
Our study had several limitations. First, our findings may have underestimated the challenges that practices face in using EHRs for quality measurement, as the practices recruited to participate in EvidenceNOW may have self-selected based on their greater quality improvement and health IT confidence.
Second, our understanding of practices’ challenges in using EHRs for quality measurement was based on the views of cooperative experts and does not necessarily represent the practices’ perspectives. Thus, we were unable to quantify the extent to which practices experienced these problems. Yet it is from the cooperatives’ vantage point that we identified problems that are often difficult to characterize using practice-level surveys, and it may be that solutions are most effective at the regional rather than practice level.
Third, our primary survey outcome—the response to the question “Does your practice have someone who can configure or write quality reports from the EHR?”—combines workforce and reporting capacity in a single item. Although it would have been preferable to ask about these issues in separate items, we did not do so because of concerns about response burden. Our qualitative data suggest that directing more survey questions to practices might not have been useful, since practices lack staff with the expertise to answer more technically complex questions. Data collected from cooperatives’ health IT experts complemented the practice survey data, shedding light on this complex issue.
Fourth, our study findings were also limited by our inability to identify whether some EHRs faced more or fewer challenges than others, and by the fact that some survey items had more than 10 percent missing data. However, our conclusions were based on one of the largest studies of geographically dispersed primary care practices, and the use of multiple imputation leveraged this scale to minimize potential bias due to missing data.
Study Results
Of the 1,710 practices recruited to EvidenceNOW, 1,492 (87.3 percent) completed the practice survey. The majority of these practices had ten or fewer clinicians (84 percent), were located in urban or suburban areas (71 percent), and were owned by clinicians (40 percent) or hospital/health systems (23 percent) (exhibit 1). Over 93 percent used EHRs, of which 81 percent were certified by the ONC for 2014. While sixty-eight different EHRs were represented, Epic, eClinicalWorks, and NextGen were the most commonly used systems. The number of different EHR systems among practices within a cooperative ranged from four to thirty-two. Sixty percent of practices participated in stages 1 and 2 of meaningful use. (More detailed findings are in exhibit 1 and appendix exhibit A2.)30
Exhibit 1 Characteristics of the 1,492 EvidenceNOW practices that completed the baseline practice survey

Characteristic | Number of practices | Percent of practices | Range across cooperatives (%)
Number of clinicians
1 | 356 | 23.9 | 6.2–52.4
2–5 | 696 | 46.6 | 16.2–59.1
6–10 | 205 | 13.7 | 6.8–17.2
11 or more | 160 | 10.7 | 1.9–23.4
Ownership
Clinician | 603 | 40.4 | 27.8–72.8
Hospital/health system | 342 | 22.9 | 1.6–53.9
Federal | 322 | 21.6 | 8.4–42.7
Academic | 19 | 1.3 | 0.0–5.8
Other or none | 147 | 9.9 | 1.0–38.8
Location
Urban | 948 | 63.5 | 34.9–100.0
Suburban | 107 | 7.2 | 0.0–14.8
Large town | 202 | 13.5 | 0.0–29.5
Rural area | 235 | 15.8 | 0.0–27.9
Practices using ONC-certified EHR (n = 1,490) | 1,215 | 81.5 | 58.9–100.0
Participation in meaningful use (n = 1,490)
Neither stage 1 nor stage 2 | 230 | 15.4 | 8.4–23.8
Stage 1 only | 176 | 11.8 | 5.3–20.7
Stages 1 and 2 | 887 | 59.5 | 38.0–84.5
Produced a clinical quality measure (CQM) report in prior 6 months (n = 1,281)
Aspirin | 616 | 48.1 | 30.9–65.0
Blood pressure | 817 | 63.8 | 43.5–78.8
Smoking | 868 | 67.8 | 48.7–80.8
All three | 596 | 46.5 | 29.8–64.2
Report CQMs at practice level (n = 1,069) | 897 | 84.0 | 52.7–95.7
Report CQMs at provider level (n = 1,069) | 903 | 84.5 | 55.2–94.7
Ability to create CQM reports from EHR (n = 1,490) | 913 | 61.3 | 37.2–75.2
Challenges Using Electronic Clinical Quality Measures For Quality Improvement
Practices and quality improvement facilitators experienced significant challenges using EHRs to generate tailored reports of electronic clinical quality measures for quality improvement, which led to substantial delays in reporting quality measures and engaging in measurement-informed quality improvement activities (exhibit 2).
Exhibit 2 Challenges in using electronic health record (EHR) data for quality improvement, as identified by EvidenceNOW cooperatives

Challenge | Specific problems
Inability to produce clinical quality reports that align with quality improvement needs | ONC-certified EHRs for meaningful use do not provide customizable measure specifications, date ranges, and frequency of reports. Vendors are resistant to making changes to EHRs beyond what is required for ONC certification and meaningful use, and any changes are expensive and take too much time to deliver. Most practices lack the technical expertise to extract and prepare data and cannot afford external consultants. |
Inability to produce clinical quality reports at practice, clinical team, clinician, and patient levels | Most EHRs lack this functionality, which is necessary to compare clinicians and produce lists of patients in need of services or of services needed by individual patients. Purchasing this functionality is an upgrade expense that smaller practices cannot afford. When this functionality is present, smaller primary care practices usually lack the necessary health IT expertise to make use of these tools. |
Data from EHR reports are not credible or trustworthy | EHR design features lead to suboptimal documentation of clinical quality measures (for example, EHRs lack consistent or obvious places to document the measures). Clinical team documentation behavior leads to incomplete extraction of clinical quality variables. |
Delays in modifying specifications when guidelines or measures change | Delays in government revision of value sets after changes occur. Delays in vendor programmatic changes per value set changes. Delays in practice EHR upgrades. |
Cooperatives developing regional data infrastructure encounter developmental delays | Vendors charge excessive fees for connecting practices to a data warehouse, hub, or health information exchange. Vendors are unresponsive and “drag their heels” when working with cooperatives to create connections. Vendors exclude information from continuity-of-care documents that is critical to calculating clinical quality measures. Vendor tools for exporting batches of the documents are slow, making the documents difficult to export. Data exported in batches of the documents lack credibility and trustworthiness for the reasons listed above. |
Inability to benchmark performance because data extracted from different EHRs are not comparable | Variations in EHR system versions and implementations. Vendors make different decisions about what fields or codes to include when calculating clinical quality measures. |
Generating Reports Of Electronic Clinical Quality Measures For Quality Improvement
Practices participating in stages 1 and 2 of meaningful use were more likely to report being able to generate reports of electronic clinical quality measures at the practice and clinician levels, compared to practices not participating (odds ratio: 1.65) (exhibit 3). Similarly, practices participating in quality improvement demonstration projects or in external payment programs that incentivized quality measurement had 51–73 percent higher odds of reporting an ability to generate reports of electronic clinical quality measures (exhibit 3). Facilitators and health IT experts working directly with practices noted that practices could produce reports that complied with meaningful use. However, EHR reporting tools did not meet practices’ needs for quality improvement measurement.
Exhibit 3 Adjusted odds of reporting the ability to configure or write clinical quality measure reports from the EHR, by practice characteristic

Characteristic | Odds ratio | 95% CI
Number of clinicians
1 | 0.59** | 0.38, 0.93
2–5 | 0.87 | 0.57, 1.33
6 or more | Ref | Ref
Ownership
Clinician | Ref | Ref
Hospital/health system | 2.88** | 1.92, 4.33
Federal | 6.02** | 3.65, 9.92
Academic, other, or none | 1.14 | 0.64, 2.01
Location
Urban | Ref | Ref
Suburban | 0.70 | 0.39, 1.26
Large town | 1.03 | 0.64, 1.67
Rural area | 0.61** | 0.39, 0.96
Participation in meaningful use
Neither stage 1 nor stage 2 | Ref | Ref
Stage 1 only | 1.09 | 0.65, 1.85
Stages 1 and 2 | 1.65** | 1.08, 2.51
Participation in a quality improvement demonstration project
No | Ref | Ref
Yes | 1.73** | 1.19, 2.51
Participation in an external payment program that incentivized quality measurement
No | Ref | Ref
Yes | 1.51** | 1.09, 2.09

NOTE: **Statistically significant (95% confidence interval excludes 1.00).
Practices reported needing reports with customizable time frames, which could be repeated as desired, to align with quality improvement activities. Cooperative experts reported that some ONC-certified EHRs, as implemented, could generate Physician Quality Reporting System or meaningful-use clinical quality reports only for a calendar year. When functions were available to customize measurement periods, significant manual configuration or additional modules were required. According to a report on measurement challenges from cooperative 3, “out of the box tools are inadequate to use for routine quality improvement. This necessitated working with vendors to deploy reports in the linked reporting tool, which required expertise in database query writing, which is almost universally absent from the skillset of staff at independent small practices.”
EHR vendors charged extra fees to access these tools, and smaller practices could not pay for this assistance. Additionally, some EHRs could generate meaningful-use metrics only for patients with Medicare or Medicaid coverage (often a minority of practice patients). Many vendors were resistant to making software changes beyond what was required for Physician Quality Reporting System or meaningful-use reporting. Thus, most practices were unable to query EHR data for measurement in rapid-cycle tests of change.
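The kind of query practices needed but could not run is conceptually simple. The sketch below is for illustration only: it assumes a flat extract of blood pressure readings with hypothetical column names and computes a simplified blood-pressure-control proportion (not the full CMS measure) over any practice-chosen window. It is exactly this sort of customizable, repeatable measurement that the out-of-the-box reporting tools did not support.

```python
# Illustrative sketch only: recomputing a simplified blood-pressure-control
# proportion over a practice-chosen measurement window. Assumes a flat
# extract ("bp_readings.csv") with hypothetical columns patient_id,
# reading_date, systolic, diastolic -- the kind of back-end access that
# practices reported they could not get from their EHRs.
import pandas as pd

def bp_control_rate(readings: pd.DataFrame, start: str, end: str) -> float:
    """Share of patients whose most recent BP within [start, end] is <140/90."""
    window = readings[(readings["reading_date"] >= start) &
                      (readings["reading_date"] <= end)]
    latest = (window.sort_values("reading_date")
                    .groupby("patient_id")
                    .tail(1))  # most recent reading per patient in the window
    controlled = (latest["systolic"] < 140) & (latest["diastolic"] < 90)
    return controlled.mean() if len(latest) else float("nan")

readings = pd.read_csv("bp_readings.csv", parse_dates=["reading_date"])
# A quarterly rapid-cycle check rather than a fixed calendar-year report:
print(bp_control_rate(readings, "2016-07-01", "2016-09-30"))
```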
Practices owned by health/hospital systems had higher odds of reporting the ability to generate reports of electronic clinical quality measures, compared to clinician-owned practices (OR: 2.88), while solo and rural practices were less likely than practices with six or more physicians and those in urban areas to report being able to generate such reports (exhibit 3). Complementary qualitative data showed that system-owned practices had greater health IT and data capability than solo and rural practices did, but these resources were centralized. These practices and facilitators experienced substantial and repeated delays in getting access to data needed for quality improvement, as organizational priorities took precedence (particularly when tied to payment), and their experts were overwhelmed with other demands.
New Clinical Guidelines
Quality measurement was complicated by changes in clinical guidelines. The American College of Cardiology and American Heart Association guidelines on cardiovascular disease risk changed dramatically in 2013.33 At the start of EvidenceNOW in 2015, measures for the A, B, and S components of the ABCS were already routinely included in the Physician Quality Reporting System. However, CMS did not publish the criteria for the C component (the cholesterol measure) until May 4, 2017. The measure chosen for the EvidenceNOW initiative matched the 2013 guideline, but the absence of an official CMS counterpart meant that no EHR vendor had yet implemented a similar measure in its system. Some practices created their own measures based on all or part of the new guidelines to inform quality improvement, but these measures were not useful for benchmarking.
Validity Across Different Electronic Health Record Systems
Facilitators and health IT experts often found verifiable problems in clinical quality reports. For example, a representative of cooperative 6 told us in an interview: “Doctors always look at our data and say it’s not [correct]…. Unless you put [information] in the exact spot, it doesn’t pull it [for the electronic clinical quality measures]…. They didn’t hit the little cog-radio button. It takes [you] to a template that you have to complete. In order to pull the data it has to be on there.”
EHRs commonly required structured data elements to be recorded in specific locations (for example, checkboxes) to be counted in calculations of electronic clinical quality measures. The combination of vendor-standardized documentation requirements, the poor alignment of those requirements with clinical workflows, and clinical teams’ limited awareness of the documentation rules and of how recording patterns affect quality measurement produced many examples of unreliable reports of the measures.
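A small sketch makes the documentation problem concrete: a measure calculation typically counts only the designated structured field, so the same information recorded in a narrative note contributes nothing. The field names and the rule below are hypothetical simplifications, not any vendor’s actual logic.

```python
# Illustrative sketch: why documentation location matters for measure capture.
# A measure numerator typically counts only a structured, coded entry; the
# same fact recorded only in free text is invisible to the calculation.
# Field names and the counting rule are hypothetical.

visit_with_structured_entry = {
    "smoking_status_code": "former-smoker",  # placeholder coded value in the designated field
    "note_text": "Former smoker, quit 2010.",
}
visit_with_note_only = {
    "smoking_status_code": None,              # documented only in the narrative note
    "note_text": "Former smoker, quit 2010.",
}

def counts_toward_tobacco_screening(visit: dict) -> bool:
    """Count the visit only if the structured smoking-status field is populated."""
    return visit["smoking_status_code"] is not None

for v in (visit_with_structured_entry, visit_with_note_only):
    print(counts_toward_tobacco_screening(v))  # True, then False
```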
Challenges Developing Regional Data Infrastructure For Quality Improvement
Cooperatives that used data warehouses, hubs, or health information exchanges in their regions did so to provide practices with clean data and tools for measurement. To develop this type of data infrastructure, cooperatives worked with EHR vendors to access back-end EHR data. Exhibit 2 summarizes the challenges cooperatives faced in using EHR data for this purpose.
Cooperatives reported that EHR vendors and other organizations charged high fees ($5,000–$15,000) for accessing data, and cooperatives found it difficult to export continuity-of-care documents (electronic documents standardized for patient information exchange) in batch format. Exporting batches of these documents—a requirement for ONC certification29—means that EHR data can be extracted from multiple patient records simultaneously and pulled into one file. The documents are meant to include most commonly needed patient information in a form that can be shared across computer applications. Yet cooperatives found that the documents met only minimum requirements. One representative of cooperative 7 said in an interview that one vendor “will only send ten [documents] at a time. Another only does, like, one an hour or something. There are these bizarre kinds of things where they’re meeting the requirement, but they aren’t useful.” Cooperatives and practices queried vendors about these issues, but vendors were not responsive. Lack of efficient, mass export of continuity-of-care documents meant that cooperatives were unable to help practices use their underlying data for quality improvement.
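Because continuity-of-care documents are standard CDA-based XML, pulling coded values out of a batch of exported files is, in principle, a short scripting task; the sketch below assumes the standard HL7 CDA namespace and a local folder of exported documents. In practice, the export limits and data gaps described above, rather than the parsing itself, were the binding constraints.

```python
# Illustrative sketch: scanning a folder of exported continuity-of-care
# documents (CDA XML) for coded observations, here systolic blood pressure
# (LOINC 8480-6). Assumes the standard HL7 CDA namespace; real documents
# vary by vendor and template, which is part of the problem described above.
import pathlib
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}
TARGET_LOINC = "8480-6"  # systolic blood pressure

def coded_values(ccd_path: pathlib.Path, loinc_code: str):
    """Yield (value, unit) pairs for observations matching the given code."""
    root = ET.parse(ccd_path).getroot()
    for obs in root.iter("{urn:hl7-org:v3}observation"):
        code = obs.find("hl7:code", NS)
        value = obs.find("hl7:value", NS)
        if code is not None and code.get("code") == loinc_code and value is not None:
            yield value.get("value"), value.get("unit")

for path in pathlib.Path("ccd_batch").glob("*.xml"):  # hypothetical export folder
    for val, unit in coded_values(path, TARGET_LOINC):
        print(path.name, val, unit)
```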
Cooperatives that developed data management infrastructure also created the capacity for combining and comparing data (performance benchmarking) and found differences in electronic clinical quality measures across EHRs. One expert attributed differences to variations in implementation of the EHR system and noted in cooperative 1’s report on measurement challenges that “editorial decisions” made by vendors about which EHR fields to pull data from when calculating the measures led to problems: “We have experienced challenges in defining the measures and achieving accurate results. We began with pre-built measures from our software vendor but often found differences in definition for a commonly named measure.” Extra steps were needed to ensure uniform definitions of measures for performance benchmarking. Without regional infrastructure to extract and normalize ABCS measures, performance data from different EHR systems might not be correct or comparable.
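The extra normalization step can be pictured as a simple mapping layer: each vendor’s extract is renamed and reshaped into one shared schema before any measure is computed. The vendor labels and field names in the sketch below are hypothetical; the point is only that benchmarking requires an explicit, maintained map of each EHR’s “editorial decisions” onto a common definition.

```python
# Illustrative sketch: normalizing vendor-specific extracts to one shared
# schema before benchmarking. Vendor labels and field names are hypothetical.
import pandas as pd

FIELD_MAPS = {
    "vendor_a": {"pt_id": "patient_id", "sys_bp": "systolic", "dia_bp": "diastolic"},
    "vendor_b": {"PatientID": "patient_id", "BP_Systolic": "systolic", "BP_Diastolic": "diastolic"},
}

def normalize(extract: pd.DataFrame, vendor: str) -> pd.DataFrame:
    """Rename vendor-specific columns to the shared benchmarking schema."""
    return extract.rename(columns=FIELD_MAPS[vendor])[["patient_id", "systolic", "diastolic"]]

# Combine extracts from multiple EHRs into one comparable table.
combined = pd.concat(
    [normalize(pd.read_csv(f"{vendor}_extract.csv"), vendor) for vendor in FIELD_MAPS],
    ignore_index=True,
)
print(combined.head())
```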
Discussion
Primary care practices have to exert too much effort to get usable data from their EHRs to improve care quality and meet reporting requirements. Despite the large national investment in health IT and substantial investments of time and expertise by practices and cooperatives, it has been difficult for them to generate timely and usable data for quality measurement and improvement. These findings are particularly salient given that the majority of practices in this large national sample used EHRs certified by the Office of the National Coordinator for Health Information Technology, and more than half reported participating in stages 1 and 2 of meaningful use and having the ability to produce reports of electronic clinical quality measures. Yet EHR reports that complied with meaningful use generally did not allow practices to customize date ranges or report frequency, and they rarely provided functionality for measuring performance by individual clinicians. In cases where this functionality was available, it was a costly upgrade and typically required health IT expertise to use. These resources are not often present in practices—particularly solo, rural, and clinician-owned practices. Additionally, practices that questioned the validity of meaningful-use reports did not have feasible ways to validate them. These factors inhibited practices’ ability to make the measurements—the ongoing identification of quality gaps and monitoring of the effects of changes made in care processes—that are essential for quality improvement.
These persistent challenges should be cause for concern. Our study amplifies findings from prior research that documented challenges in using EHRs in general and for quality improvement in particular.3,16–18,34–37 To our knowledge, previous studies have not examined this problem in a sample of the size and diversity that our sample attained, especially not with mixed methods. Our study shows that survey data alone are inadequate for fully understanding the problem. Furthermore, most studies of these challenges are nearly a decade old and do not reflect the impact of more recent federal programs.3,38
Regional and national organizations that connect disparate practices’ EHR systems to a central data repository offer a potential solution for mitigating measurement challenges through shared infrastructure for data extraction, normalization, validation, analysis, and reporting. However, most states in our sample did not have regional data infrastructure, and those that did had limited reach; the time, effort, and investment required to build these resources are extensive.8 Regional leaders struggle with financing, vendor relations, and governance structures.
To improve EHRs’ ability to achieve their potential and support sustainable payment reforms, policy makers should consider empowering the ONC and CMS to expand their standards and requirements for, and monitoring of, EHR vendors.39 The agencies need to make it more efficient for practices to generate quality reports with up-to-date definitions,40 ensure that organizations can extract data from batches of continuity-of-care documents for secondary use, and create explicit requirements to support quality improvement and practice population health measurement. New initiatives from the ONC are encouraging vendors to facilitate this process through standard application programming interfaces and mapping (for example, Fast Healthcare Interoperability Resources). With several new reporting requirements for clinicians (including those for recognition as a patient-centered medical home), payer requirements, and other federal demonstration projects (which have differing reporting requirements), the ONC should focus not just on EHR capacities to serve the Quality Payment Program but also on quality improvement and reporting needs generally.
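As a rough illustration of what such standards-based access could look like, the sketch below issues a FHIR Observation search for blood pressure readings over a measurement period using only standard search parameters. The server URL is hypothetical, and real-world use would also require authorization and paging through result bundles.

```python
# Illustrative sketch: a standards-based FHIR query for blood pressure
# observations, the kind of standardized access the ONC initiatives above
# are meant to encourage. The server URL is hypothetical.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # hypothetical endpoint
params = {
    "code": "http://loinc.org|85354-9",      # blood pressure panel (LOINC)
    "date": "ge2017-01-01",                  # measurement period start
    "_count": 100,
}
response = requests.get(f"{FHIR_BASE}/Observation", params=params, timeout=30)
response.raise_for_status()
bundle = response.json()
for entry in bundle.get("entry", []):
    resource = entry["resource"]
    print(resource.get("subject", {}).get("reference"),
          resource.get("effectiveDateTime"))
```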
Cooperatives’ current experience is that EHR data are “locked up,” which prevents even a well-resourced initiative from being able to use data for quality measurement across diverse practice settings. The first Quality Payment Program reporting period ended in December 2017, and many clinicians may be unable to achieve their full potential related to quality improvement despite having certified EHRs. CMS may need to be prepared to help practices not only comply but also use data for quality improvement—perhaps by loosening reporting options, expanding exclusion criteria, and allocating additional funds for technical assistance.
Conclusion
Primary care is an essential part of healthy communities. With federal value-based payment programs such as the Quality Payment Program poised to motivate clinicians to improve care quality, investment is needed to ensure that the health IT clinicians use delivers credible clinical quality data and has the functionality necessary to inform quality improvement efforts as well as external reporting for payment and other purposes without adding to an already high burden.
ACKNOWLEDGMENTS
A version of the findings presented in this article was reported at the 44th Annual North American Primary Care Research Group Meeting, Colorado Springs, Colorado, November 12–16, 2016. This research was supported by the Agency for Healthcare Research and Quality (Grant No. R01HS023940-01). This article could not have been completed without the help of people from the seven EvidenceNOW cooperatives, to whom the authors are greatly indebted. Members of the national evaluation team were also important in supporting this study, including Rachel Springer, David Cameron, Bernadette Zakher, Rikki Ward, Benjamin Crabtree, Kurt Stange, and William Miller. Without their efforts, this work would not have been possible. Finally, the authors acknowledge Amanda Delzer Hill, who assisted with copy editing.
NOTES
- 1. Office of the National Coordinator for Health Information Technology. Office-based physician electronic health record adoption [Internet]. Washington (DC): Department of Health and Human Services; 2016 Dec [cited 2018 Feb 6]. (Health IT Quick-Stat No. 50). Available from: https://dashboard.healthit.gov/quickstats/pages/physician-ehr-adoption-trends.php
- 2. Office-based physicians are responding to incentives and assistance by adopting and using electronic health records. Health Aff (Millwood). 2013;32(8):1470–7.
- 3. The Health IT Regional Extension Center program: evolution and lessons for health care transformation. Health Serv Res. 2014;49(1 Pt 2):421–37.
- 4. Centers for Medicare and Medicaid Services. Electronic Health Records (EHR) Incentive Programs [Internet]. Baltimore (MD): CMS; [last updated 2017 Nov 29; cited 2018 Jan 23]. Available from: https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/index.html?redirect=/ehrincentiveprograms
- 5. The rise of electronic health record adoption among family physicians. Ann Fam Med. 2013;11(1):14–9.
- 6. CMS.gov. Quality Payment Program resource library [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last modified 2018 Feb 6; cited 2018 Feb 6]. Available from: https://www.cms.gov/Medicare/Quality-Payment-Program/Resource-Library/Resource-library.html
- 7. Electronic health records in small physician practices: availability, use, and perceived benefits. J Am Med Inform Assoc. 2011;18(3):271–5.
- 8. A tale of two large community electronic health record extension projects. Health Aff (Millwood). 2009;28(2):345–56.
- 9. Electronic health record impact on work burden in small, unaffiliated, community-based primary care practices. J Gen Intern Med. 2013;28(1):107–13.
- 10. A typology of electronic health record workarounds in small-to-medium size primary care practices. J Am Med Inform Assoc. 2014;21(e1):e78–83.
- 11. National findings regarding health IT use and participation in health care delivery reform programs among office-based physicians. J Am Med Inform Assoc. 2017;24(1):130–9.
- 12. eCQI Resource Center. 2017 performance period EP/EC eCQMs [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [cited 2018 Feb 6]. Available from: https://ecqi.healthit.gov/eligible-professional-eligible-clinician-ecqms/2017-performance-period-epec-ecqms
- 13. Confidential physician feedback reports: designing for optimal impact on performance [Internet]. Rockville (MD): Agency for Healthcare Research and Quality; 2016 Mar [cited 2018 Feb 6]. (AHRQ Publication No. 16-0017-EF). Available from: https://www.ahrq.gov/sites/default/files/publications/files/confidreportguide_0.pdf
- 14. Health information technology needs help from primary care researchers. J Am Board Fam Med. 2015;28(3):306–10.
- 15. Using health information technology to support quality improvement in primary care [Internet]. Rockville (MD): Agency for Healthcare Research and Quality; 2015 Mar [cited 2018 Feb 6]. (AHRQ Publication No. 15-0031-EF). Available from: https://pcmh.ahrq.gov/page/using-health-information-technology-support-quality-improvement-primary-care
- 16. Validity of electronic health record–derived quality measurement for performance monitoring. J Am Med Inform Assoc. 2012;19(4):604–9.
- 17. The challenge of measuring quality of care from the electronic health record. Am J Med Qual. 2009;24(5):385–94.
- 18. Review: electronic health records and the reliability and validity of quality measures: a review of the literature. Med Care Res Rev. 2010;67(5):503–27.
- 19. eCQI Resource Center. Ischemic vascular disease (IVD): use of aspirin or another antiplatelet [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last updated 2017 Oct 25; cited 2018 Feb 6]. Available from: https://ecqi.healthit.gov/ecqm/measures/cms164v5
- 20. eCQI Resource Center. Controlling high blood pressure [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last updated 2017 Jul 12; cited 2018 Feb 6]. Available from: https://ecqi.healthit.gov/ecqm/measures/cms165v3
- 21. eCQI Resource Center. Statin therapy for the prevention and treatment of cardiovascular disease [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last updated 2017 Oct 25; cited 2018 Feb 6]. Available from: https://ecqi.healthit.gov/ep/ecqms-2018-performance-period/statin-therapy-prevention-and-treatment-cardiovascular-disease
- 22. eCQI Resource Center. Preventive care and screening: tobacco use: screening and cessation intervention [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last updated 2017 Jul 12; cited 2018 Feb 6]. Available from: https://ecqi.healthit.gov/ecqm/measures/cms138v3
- 23. A national evaluation of a dissemination and implementation initiative to enhance primary care practice capacity and improve cardiovascular disease care: the ESCALATES study protocol. Implement Sci. 2016;11(1):86.
- 24. Online diaries for qualitative evaluation: gaining real-time insights. Am J Eval. 2006;27(2):163–84.
- 25. Transforming physician practices to patient-centered medical homes: lessons from the national demonstration project. Health Aff (Millwood). 2011;30(3):439–45.
- 26. Using Learning Teams for Reflective Adaptation (ULTRA): insights from a team-based change management strategy in primary care. Ann Fam Med. 2010;8(5):425–32.
- 27. Effects of facilitated team meetings and learning collaboratives on colorectal cancer screening rates in primary care practices: a cluster randomized trial. Ann Fam Med. 2013;11(3):220–8, S1–8.
- 28. National Center for Health Statistics. Ambulatory health care data [Internet]. Hyattsville (MD): NCHS; [last updated 2017 Dec 12; cited 2018 Feb 6]. Available from: https://www.cdc.gov/nchs/ahcd/index.htm
- 29. Office of the National Coordinator for Health Information Technology. ONC fact sheet: 2015 edition health information technology (health IT) certification criteria, base electronic health record (EHR) definition, and ONC health IT certification program modifications final rule [Internet]. Washington (DC): Department of Health and Human Services; 2015 Oct [cited 2018 Feb 6]. Available from: https://www.cdc.gov/ehrmeaningfuluse/docs/onc_factsheet_2015_cehrt.pdf
- 30. To access the appendix, click on the Details tab of the article online.
- 31. Immersion/crystallization. In: Crabtree BF, Miller WL, editors. Doing qualitative research. 2nd ed. Thousand Oaks (CA): Sage Publications; 1999. p. 179–94.
- 32. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.
- 33. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014;129(25 Suppl 2):S1–45.
- 34. Quality improvement with an electronic health record: achievable, but not automatic. Ann Intern Med. 2007;147(8):549–52.
- 35. How the electronic health record did not measure up to the demands of our medical home practice. Health Aff (Millwood). 2010;29(4):622–8.
- 36. Electronic health record functionality needed to better support primary care. J Am Med Inform Assoc. 2014;21(5):764–71.
- 37. VA QUERI informatics paper: information technology for clinical guideline implementation: perceptions of multidisciplinary stakeholders. J Am Med Inform Assoc. 2005;12(1):64–71.
- 38. A national study of challenges to electronic health record adoption and meaningful use. Med Care. 2014;52(2):144–8.
- 39. The HITECH era and the path forward. N Engl J Med. 2017;377(10):904–6.
- 40. HealthIT.gov. ONC regulation FAQs: #42 Question [06-13-042-1] [Internet]. Washington (DC): Department of Health and Human Services; [last updated 2013 Nov 14; cited 2018 Feb 7]. Available from: https://www.healthit.gov/policy-researchers-implementers/42-question-06-13-042