Published in Vol 9, No 11 (2021): November

The Role of Electronic Medical Records in Reducing Unwarranted Clinical Variation in Acute Health Care: Systematic Review



1The University of Queensland Business School, The University of Queensland, St Lucia, Australia

2Princess Alexandra Hospital, Metro South Health, Woolloongabba, Australia

3The University of Queensland Centre for Health Services Research, The University of Queensland, Herston, Australia

*all authors contributed equally

Corresponding Author:

Tobias Hodgson, BSc, MBA, PhD

The University of Queensland Business School

The University of Queensland

39 Blair Drive

St Lucia, 4067


Phone: 61 733468100


Background: The use of electronic medical records (EMRs)/electronic health records (EHRs) provides potential to reduce unwarranted clinical variation and thereby improve patient health care outcomes. Minimization of unwarranted clinical variation may raise and refine the standard of patient care provided and satisfy the quadruple aim of health care.

Objective: A systematic review of the impact of EMRs and specific subcomponents (PowerPlans/SmartSets) on variation in clinical care processes in hospital settings was undertaken to summarize the existing literature on the effects of EMRs on clinical variation and patient outcomes.

Methods: Articles from January 2000 to November 2020 were identified through a comprehensive search that examined EMRs/EHRs and clinical variation or PowerPlans/SmartSets. Thirty-six articles met the inclusion criteria. Articles were examined for evidence for EMR-induced changes in variation and effects on health care outcomes and mapped to the quadruple aim of health care.

Results: Most of the studies reported positive effects of EMR-related interventions (30/36, 83%). All of the 36 included studies discussed clinical variation, but only half measured it (18/36, 50%). Those studies that measured variation generally examined how changes to variation affected individual patient care (11/36, 31%) or costs (9/36, 25%), while other outcomes (population health and clinician experience) were seldom studied. High-quality study designs were rare.

Conclusions: The literature provides some evidence that EMRs can help reduce unwarranted clinical variation and thereby improve health care outcomes. However, the evidence is surprisingly thin because of insufficient attention to the measurement of clinical variation, and to the chain of evidence from EMRs to variation in clinical practices to health care outcomes.

JMIR Med Inform 2021;9(11):e30432



Variation in Health Care

Any health care service seeks to raise and refine the standard of care it provides to patients and to satisfy the quadruple aim of health care, that is, to improve patient care, population health, cost of care, and clinician experience [1,2]. It is commonly accepted that achieving this aim involves minimizing unwarranted clinical variation, that is, unjustified differences between health care processes or outcomes compared with peers, or with a gold standard [3].

Health care clinical practice variation has been observed, studied, and documented for many decades [4,5]. There is a plethora of potential causes of variation, such as the individuals involved (clinician and patient), their level of agency or motivation, organizational or system factors (eg, capacity), and the nature of the evidence available (clinical and scientific) [6,7]. The method of diffusion of best practice clinical knowledge, and clinician adoption of these guidelines and standards, has long been identified as a potential cause of variation [8,9].

Many countries mandate efforts to reduce unwarranted clinical variation in health care provided [10]. While some level of variation is required for innovation and learning, low levels of variation are generally thought to be best [11]. As stated in [12], “The idea is to hold on to variation across patients (to meet the needs of individual patients) and to limit variation across clinicians (which is driven by individual clinician preferences or differences in knowledge and experience)”.

Variation is unwarranted if it is not justified by clinical imperatives, patient needs or preferences, or innovation. In its most basic form, clinical variation that leads to positive outcomes may be warranted, whereas variation that leads to negative outcomes is deemed unwarranted. Many health care services have looked to electronic medical record (EMR) systems to reduce unwarranted variation and thereby improve outcomes [10].


EMR use has become virtually ubiquitous in health services in developed countries [13]. EMRs can offer many benefits, including improvements in billing and cost management, reporting and analytics, real-time access to data by clinicians, information sharing, treatment management, patient safety, and clinical decision making [14-21].

EMRs provide the means to both monitor and address clinical variation through the provision of best practice guidelines and clinical decision support (CDS) to improve care and reduce waste [22]. At the same time, EMRs can also create variation by offering users multiple ways to perform a task. Work-as-done by clinicians also often varies from the work-as-imagined expectation of EMR designers [23]; as a result, it is an empirical question as to whether EMRs actually reduce unwarranted clinical variation.

Theoretical Framework

Studying how EMRs may affect unwarranted clinical variation requires understanding 3 elements: why clinical variation occurs, why and how EMRs may reduce clinical variation, and how measuring and altering variation are operationalized in practice (Textbox 1).

  • Clinical variation can occur due to supply-side, demand-side, or contextual factors [24]:
    • Clinician factors (supply side): expertise, training and experience, preference, practice style;
    • Consumer factors (demand side): case complexity, consumer preference, social determinants of health; and
    • Environmental factors (context): local guidelines, available resources, hospital case mix.
  • EMRs may reduce clinical variation through their ability to control process delivery and outcomes. It is common for health services to tackle clinical variation through EMR-related process control efforts (eg, clinical guidelines and pathways), and process design and development efforts [25,26]. EMRs hold promise for reducing unwarranted clinical variation because they can help tackle each of the 3 aforementioned factors:
    • Clinician factors: EMRs can constrain clinicians to perform similarly via restrictions to particular behaviors or range of behaviors.
    • Consumer factors: EMRs can inform and guide patients in a consistent manner via patient portals, and they can help standardize the reporting of patient outcomes.
    • Environmental factors: EMRs can provide standardized decision support and data that health services can use to monitor and improve operations and achieve greater consistency.
  • Understanding precisely how an EMR can reduce unwarranted variation requires opening the EMR “black box” and assessing its components. One set of EMR components designed to help reduce unwarranted clinical variation is CDS. There are numerous CDS tools and features in the marketplace, with EMR vendors naming and implementing components in proprietary ways. This review focuses on 2 CDS components from the 2 most prevalent EMR vendors globally (>50% of acute care market), with products that have similar aims to help reduce unwarranted clinical variation: PowerPlan (Cerner Corporation) and SmartSet (Epic Systems Corporation) [27-29]:
    • PowerPlan: “A Power Plan is a group of orders under a single title designed to support a procedure or a process.” [30]
    • SmartSet: “A documentation template. A group of orders and other elements, such as notes, chief complaints, SmartGroup Panels, and levels of service, that are commonly used together to document a specific type of visit.” [31].
Textbox 1. Clinical variation factors. CDS: clinical decision support, EMR: electronic medical record.

EMRs can implement tools to guide and constrain practice; however, clinicians do not always use these interventions as intended. For example, they may focus on using a PowerPlan to make ordering easier rather than using it to reduce variation. For this reason, it is important to empirically test whether in practice they reduce clinical variation as intended.

To understand how EMR interventions might or might not reduce unwarranted clinical variation as intended, a variation in clinical care framework was devised (Figure 1). The framework highlights the expected factors that must be accounted for if EMRs are to reduce unwarranted clinical variation. That is, the expectation is that EMRs—through their components—should help reduce unwarranted clinical variation if the following factors are considered:

  • Design: if the EMR and its PowerPlan or SmartSet components are configured to reduce unwarranted variation.
  • Implementation: if the goal of reducing unwarranted variation is kept in focus during implementation.
  • Use: if clinicians use the EMR as intended.
  • Clinical theory: if the clinical logic or theory underlying the design of the intervention and the clinical practice is mature (rather than lacking evidence and having ambiguity, allowing variation among clinicians).
  • Monitoring and intervention: if the health service monitors outcomes that flow from changes in clinical variation and iteratively improves the design and use of the EMR based on this learning feedback loop.
Figure 1. Variation in clinical care - theoretical framework. EMR: electronic medical record.

If the use of an EMR can lead to changes in unwarranted clinical variation, how can this variation be measured? The framework (Figure 1) suggests that there are 2 archetypal changes in variation (Textbox 2), conveying different meanings of “variance.”

Combinations of these 2 archetypes may also occur. For instance, a health service may implement an EMR to both change a standard and encourage clinicians to achieve greater consistency around that standard.

  • Variation from a level or a standard: For example, assume that a health service has a guideline for a clinical practice. If clinicians follow the guideline, with appropriate variation in adherence and excellent outcomes, this will be reflected in an average level of that practice with variation around the average. If the health service shifts the guideline, unwarranted variation can be viewed as the degree to which the distribution of behavior fails to shift to the new standard and improve outcomes. Statistically, this can be tested by comparing the average practices (accounting for the variation around each average) before and after the intervention (eg, via a t test).
  • Variation around a level or a standard: For example, assume that a health service has no guideline for a practice, and clinicians just follow their own practices. Assume also that the average behavior is close to the desired level, but the variation around this average is concerning. If the health service then implements a guideline to reduce this variation, unwarranted clinical variation and monitoring of outcomes can be operationalized as the degree to which the level of variance in practices fails to be reduced. Statistically, this can be tested by a change in the level of variance (eg, range or SD).
Textbox 2. Changes in variation.
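The 2 tests named in Textbox 2 can be sketched in a few lines of code. The process-timing values below (eg, minutes to complete a clinical task before and after an EMR intervention) are purely hypothetical, for illustration: a Welch t statistic detects a shift in the average practice ("variation from" a standard), and a variance ratio detects a change in spread ("variation around" a standard).

```python
import statistics as stats

# Hypothetical process-timing values before/after an EMR intervention.
before = [60, 70, 80, 90, 100]
after = [58, 60, 62, 60, 60]

def t_statistic(a, b):
    """Welch's t statistic: tests a shift in the average practice
    ('variation from a level or a standard')."""
    va, vb = stats.variance(a), stats.variance(b)
    return (stats.mean(b) - stats.mean(a)) / (va / len(a) + vb / len(b)) ** 0.5

def variance_ratio(a, b):
    """Ratio of sample variances: tests a change in spread
    ('variation around a level or a standard')."""
    return stats.variance(b) / stats.variance(a)

print(t_statistic(before, after))    # negative: the average practice dropped
print(variance_ratio(before, after)) # < 1: practice became more consistent
```

Note that the 2 tests are independent: an intervention can shift the mean without tightening the spread, or vice versa, which is why both need to be reported.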

The implication of these different meanings of clinical variation is that researchers need to be precise as to which type of variation and associated outcomes they are studying and how. In short, studying changes in variation requires careful attention to measurement.

Finally, variation is only unwarranted if it impairs outcomes, such as any of the quadruple aims of health care (Figure 1). That is, variance itself is not the outcome, nor is it necessarily negative. Rather, the aim is to learn how to design, implement, use, and monitor the EMR and find the “right” level of variation to achieve the best outcomes.

Objective of This Review

We aim to summarize the existing literature on the effects of EMRs on variation in clinical care processes and patient outcomes as mapped to the quadruple aim of health care. To account for the specifics of EMR systems, and for the specific ways that variation can occur, searches were conducted not only for the effects of EMRs in general, but also for the components of EMRs (PowerPlan and SmartSet). Studies were coded for changes in clinical variation and for how changes in variance affected both process and patient outcomes.

Because of differences in tools and methods used to achieve clinical standardization between the primary and acute care settings (eg, case complexity, technology utilized), this study focuses purely on the acute sector and hospital-based EMRs.

Eligibility Criteria

A Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)–compliant systematic review of studies examining clinical variation and EMRs was undertaken [32].

Inclusion Criteria

To be included in the review, studies were required to meet the following criteria:

  • The article is published in English.
  • The topic of the study is relevant to clinical variation and EMRs.
  • The article was published after 2000, owing to the prominence of EMR/electronic health record (EHR) articles published since then (Multimedia Appendix 1).
  • Participants include clinicians performing medical care duties or patients receiving medical treatment.
  • Measured outcomes are reported, whether immediate (eg, test results) or longer term (eg, length of stay, economic), and whether measured objectively or by self-report.
  • The study is peer reviewed.
  • The study is empirical, either qualitative, quantitative, or mixed; quantitative studies may include experimental and observational designs such as randomized controlled trials, quasi-experimental studies, before-and-after studies, case–control studies, cohort studies, and cross-sectional studies.
  • Quality improvement initiatives are eligible.
  • The article focuses on acute care settings (including ambulatory specialist care).
Exclusion Criteria

Studies were excluded from the review if they met any of the following criteria:

  • Abstracts in which full study data were unavailable.
  • Nonempirical studies.
  • Outcome measures of expected variation (not actual).
  • Articles with a care focus of primary care.

Information Sources

Searches were made on ACM Digital Library, CINAHL, EMBASE/MEDLINE, Google Scholar, IEEE Xplore, PubMed, Scopus, and Web of Science for articles from the year 2000 to November 2020.

The search query used was: “EHR” OR “EMR” AND “practice variation” OR “clinical variation” OR “unwanted variation” OR “unwarranted variation” OR “reduction in waste” OR “PowerPlan” OR “SmartSet”.

As noted earlier, understanding how EMRs have their effects requires opening the “black box” of the EMR to study its components, in this case PowerPlan and SmartSet. However, a given study may use these proprietary terms or more generic ones. Including the vendor-specific component names in the search string as OR terms increased the likelihood of retrieving articles that examined clinical variation even when they did not use those exact words (ie, it increased recall).

Additional applicable search terms were assessed but excluded because they added no additional search results (eg, medical-order entry systems). The term “order sets” was excluded from the search because order sets are not necessarily electronic (they are often paper based) and many studies of them focus on discrete point-in-time events (eg, prescribing antithrombotics) rather than the patient’s entire care process (as implemented in PowerPlan and SmartSet for specific conditions). As noted earlier, the focus of this study was on clinical care processes and outcomes.

The final searches were undertaken in February 2021. Both backward and forward citation searching were undertaken for all included articles with a quality score over 50% (35/36 studies; Multimedia Appendix 2) [33-67]. Forward searches were undertaken with the assistance of Anne O’Tate, PubMed, Google Scholar, and Scopus [68].

Once duplicates were removed, these searches yielded 4622 potential articles. Titles and abstracts were then screened, resulting in 3935 initial exclusions; 40 cases had only partial text available or required further information to make an assessment. This left 687 full texts that were retrieved and evaluated.

Interrater agreement during the screening phase was assessed based on 30 randomly sampled papers screened by 2 reviewers (TH [first author] and TL [research assistant]). The observed agreement was 90% (27/30), with an acceptable κ (Cohen κ) of 0.67 [69]. Given the reliable coding, the remainder of the screening phase was undertaken by TH.
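For illustration, Cohen κ can be computed directly from 2 raters’ screening decisions. The 30 include/exclude decisions below are invented to reproduce the 27/30 observed agreement, not the study’s actual data; because κ also depends on each rater’s marginal rates, these illustrative data yield κ=0.8 rather than the reported 0.67.

```python
def cohen_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    # expected agreement by chance, from each rater's marginal label rates
    p_exp = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative decisions for 30 papers: raters agree on 27 of 30 (90%),
# disagreeing only where rater 1 said "include" and rater 2 said "exclude".
r1 = ["include"] * 15 + ["exclude"] * 15
r2 = ["include"] * 12 + ["exclude"] * 3 + ["exclude"] * 15

print(cohen_kappa(r1, r2))  # 0.8 for these illustrative marginals
```

The gap between raw agreement (0.9) and κ (here 0.8; 0.67 in the review) is exactly the chance-agreement correction that makes κ the preferred screening-reliability statistic.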

Each included article was assessed independently by 2 reviewers (TH and TL) against the inclusion criteria. After assessment, 36 studies remained [33-67,70] (Figure 2; Table 1). In instances of doubtful eligibility, a consensus assignment was made after deliberation (5 articles were excluded at this stage). The 2 reviewers also coded the disposition of these studies as positive, mixed, or negative based on how the authors of each study discussed its outcomes.

Figure 2. Systematic review flow diagram (after Preferred Reporting Items for Systematic Reviews and Meta-Analyses [PRISMA] [32]).
Table 1. Summary of included studies (N=36).
| Study | EMR^a/EHR^b vendor (component) | Disposition | QATSDD^c score (%) | Clinical care process | Variance type^d | Outcomes examined^e |
|---|---|---|---|---|---|---|
| Adelson et al [33] | Epic (SmartSet) | Positive | 54.17 | Orders/prescription | 2 | Clinical events; Length of events; Quality |
| Akenroye et al [34] | Vendor not stated | Positive | 64.58 | Orders/prescription | 2 | Costs; Quality |
| Amland et al [35] | Cerner (PowerPlan) | Positive | 78.57 | Patient assessment | 2 | Clinical events |
| Asan et al [36] | Epic (SmartSet) | Negative | 66.67 | Care provision | 5 | Clinical |
| Attaar et al [37] | Other: Allscripts Sunrise Clinical Manager | Positive | 66.67 | Orders/prescription | 4 | Quality (patient); Length of stay |
| Ballesca et al [38] | Epic (SmartSet) | Positive | 66.67 | Orders/prescription | 2 | Clinical events; Test measures; Length of stay |
| Borok et al [39] | Epic (SmartSet) | Positive | 61.90 | Orders/prescription | | |
| Bradywood et al [40] | Vendor not stated | Positive | 80.95 | Clinical care pathway | 4 | Quality (patient); Clinical events; Length of stay; Length of events |
| Chisolm et al [41] | Vendor not stated | Positive | 77.08 | Orders/prescription | 5 | Costs; Length of stay |
| Dort et al [42] | Vendor not stated | Positive | 73.81 | Clinical care pathway | 5 | Clinical events; Length of stay; Length of events |
| Ebinger et al [43] | Vendor not stated | Positive | 66.67 | Care provision | 4 | Clinical events; Costs; Length of stay |
| Geltman et al [44] | Vendor not stated | Mixed | 71.43 | Patient assessment | 2 | Test measures |
| Goga et al [45] | Vendor not stated | Positive | 54.76 | Orders/prescription | 4 | |
| Gulati et al [46] | Cerner (PowerPlan) | Positive | 76.19 | Orders/prescription | 2 | Clinical events; Length of stay; Length of events |
| Hendrickson et al [47] | Vendor not stated | Positive | 78.57 | Orders/prescription | 2 | Clinical events; Number of tests |
| Hooper et al [48] | Vendor not stated | Positive | 66.67 | Patient assessment | 2 | Test measures |
| Horton et al [49] | Epic (SmartSet) | Positive | 59.52 | Orders/prescription | 2 | Quality (patient); Clinical events; Test measures |
| Jacobs et al [50] | Other: ICIS, a web-based EHR | Positive | 71.43 | Ordering | 2 | |
| Karajgikar et al [51] | Cerner (PowerPlan) | Positive | 54.76 | Orders/prescription | 5 | Clinical events; Length of events; Length of stay |
| Kicker et al [67] | Vendor not stated | Positive | 57.14 | Ordering | | Length of events |
| Lewin et al [52] | Vendor not stated | Positive | 59.52 | Orders/prescription | | Use of intervention; Length of stay |
| Lindberg et al [53] | Epic (SmartSet) | Positive | 76.19 | Patient assessment | 2 | Test levels |
| Lindberg et al [54] | Epic (SmartSet) | Positive | 73.81 | Patient assessment | 5 | Test levels |
| Morrisette et al [55] | Cerner (PowerPlan) | Mixed | 69.05 | Ordering | 4 | Costs; Length of events |
| Prevedello et al [56] | Other: Percipio (Medicalis Corp) | Mixed | 73.81 | Patient assessment | 2 | Test measures |
| Reynolds et al [57] | Epic (SmartSet) | Negative | 61.90 | Orders/prescription | 4 | |
| Rooholamini et al [58] | Cerner (PowerPlan) | Positive | 59.52 | Orders/prescription; Patient assessment | 2 | Clinical events; Costs; Length of events |
| Rosovsky et al [70] | Epic (SmartSet) | Positive | 45.24 | Ordering | 4 | |
| Sim et al [59] | Other: AllScripts | Positive | 69.05 | Ordering | 2 | |
| Sonstein et al [60] | Epic (SmartSet) | Positive | 69.05 | Ordering | 4 | Clinical events; Length of stay |
| Soo et al [61] | Cerner (PowerPlan) | Negative | 68.75 | Ordering | 4 | Length of events; Clinical |
| Studer et al [65] | Vendor not stated | Positive | 61.90 | Orders/prescription | 2 | Clinical events |
| Teich et al [66] | Vendor not stated | Positive | 42.86 | Ordering | 2 | |
| Terasaki et al [62] | Epic (SmartSet) | Positive | 64.29 | Patient assessment | 2 | |
| Wang et al [63] | Epic (SmartSet) | Positive | 52.38 | Orders/prescription | 4 | Quality (patient); Volume of drugs |
| Webber et al [64] | Cerner (PowerPlan) | Positive | 57.14 | Ordering | 4 | Costs |

^aEMR: electronic medical record.

^bEHR: electronic health record.

^cQATSDD: Quality Assessment Tool for Studies with Diverse Designs.

^dVariance types: 1=mean constant, variance change; 2=mean change, variance change; 3=mean change, variance constant; 4=mean change, variance unknown; 5=mean unknown, variance unknown (or N/A, assumed only).

^eCells are empty where the outcomes were not observed within the study.

Study data including the intervention, population, study design, and effects were extracted by both reviewers using a standardized template within Covidence systematic review software (Multimedia Appendix 3) [71]. Data quality was assessed via a bespoke Covidence template employing the Quality Assessment Tool for Studies with Diverse Designs (QATSDD), a 16-item mixed methods quality assessment tool (Multimedia Appendix 2) [72].

Risk of Bias

The studies were examined to determine the risk of drawing biased inferences [73]. Five risks were identified (Textbox 3).

  1. Publication bias: most papers (30/36, 83%) reported positive results [33-35,37-43,45-54,58-60,62-66,70], with a minority reporting mixed [44,55,56] or negative results [36,57,61] (3/36, 8% each). The completeness of reported results, including nonsignificant effects, was not always assured.
  2. Selection: participation in the trials varied from compulsory to voluntary. Where the study was voluntary, it was more likely that those with interest in, and with a positive opinion toward, EMRs participated [36,41,57,58,67].
  3. Randomization of intervention: this only occurred in 1 study which randomized the use of the SmartSet intervention using block randomization, stratified by provider subspecialty [57].
  4. Performance: the studies were all composed of unblinded trials, and in many cases the participants of the study knew if they were utilizing the intervention or not.
  5. Time lag bias: some papers reported on data collected long before the publication date (eg, Teich et al [66] was based on 1993 data) [41,66].
Textbox 3. Risk of bias. EMR: electronic medical record.


Recruitment of the clinicians utilizing the interventions was voluntary in all but 2 studies, and existing clinic/hospital EMR data were used for patient data [33,55].


Following the earlier description of how variation in clinical practices can be observed, studies were coded for 5 types of variation, each reflecting different patterns in the change of a distribution (Figure 3 and Multimedia Appendix 3). Types 1 and 3 refer to the 2 archetypes noted earlier (“variance from” and “variance around”), whereas Type 2 reflects their combination. Type 4 reflects the possibility that a study refers to changes in average behavior without reporting changes in variance. Type 5 is where change is assumed but not measured.

Figure 3. How changes in variance can be operationalized in clinical practice.

As this study’s aim is to learn the effects of the EMR on changes in clinical variation, the focus is on variance types 1, 2, and 3, which reflect different ways in which clinical variation can be expressed. By contrast, Types 4 and 5 do not provide clear measures of variation.

Two reviewers (TH and TL) coded the overall disposition of each study as positive, mixed, or negative based on the following criteria:

  • Positive: a majority of the stated expected outcomes were met.
  • Mixed: some elements of the expected outcomes were met and some were not (an approximate 50/50 split).
  • Negative: the intervention was not used, a majority of the expected outcomes were not met, or reverse outcomes were seen.

Disposition reflects the authors’ overall conclusions in that study in favor of or against the EMR or the intervention. It is not a measure of whether a study measured clinical variation or outcomes. Interrater agreement on study disposition was calculated using Cohen κ, and showed high levels of agreement (33/36, 92%, κ=0.71) [69].

Clinical outcomes were coded according to the quadruple aim of health care: quality of patient care, population health, cost/efficiency, and clinician experience [1,2].

Almost all the studies were based on the implementation of an intervention (new or refined) into a clinical setting (35/36, 97%), with 1 qualitative analysis of EMR use by clinicians [36]. Most studies were quality or process improvement based (28/36, 78%) [33-35,37,39-45,47-49,51,52,54,55,58,59,61-67] or related to best practice/evidence-based interventions (27/36, 75%) [33-35,37,38,40-42,45,47,48,50,52-60,62-66,70]. Over half of the studies examined EMR elements such as order sets (23/36, 64%) [33,34,36-38,40-42,46,47,49-52,54,55,58,60,61,64-66,70] and care pathways/treatment plans (22/36, 61%) [33-36,39-43,46-48,50,52,54,58,60,62,63,65,66,70]. Many papers addressed the minimization or elimination of a particular drug prescription/use (17/36, 47%) [39,40,45,46,49,51-54,57,58,60,63,65-67,70].

Of the papers where the specific EMR used by the health facility was identified (24/36, 67%), half were Epic (12/24, 50%) [33,36,38,39,49,53,54,57,59,60,62,63], some were Cerner (7/24, 29%) [35,46,51,55,58,61,64], and a few were from other vendors (5/24, 21%) [37,50,52,56,59].

Regarding overall disposition, most studies reported positive results (30/36, 83%) [33-35,37-43,45-54,58-60,62-67,70], while a minority reported mixed [44,55,56] or negative results [36,57,61] (3/36, 8% each). That is, the authors concluded in most studies that the EMR was used successfully as part of an initiative to address clinical variation.

However, most studies did not measure or report variation. Of the 5 variance codes (Figure 3), no studies reported Type 1 or 3, half reported Type 2 (18/36, 50%) [33-35,38,44,46-50,53,56,58,59,62,65-67], some reported Type 4 (13/36, 36%) [37,39,40,43,45,52,55,57,60,61,63,64,70], and a few reported Type 5 (5/36, 14%) [36,41,42,51,54]. The studies coded as Types 2 and 4 generally examined how an intervention led to changes in the average of a clinical behavior. Such studies reflected Type 2 variation if they explicitly reported measures of variance in addition to average practices, or if the distribution of the variable examined was such that a change in the average clearly implied a change in variance (whether the variance tracks the average depends on the type of distribution).

For example, if clinician behaviors were coded in a study as adhering or not adhering to a guideline, the rate of adherence would follow a binomial distribution, so an increase in adherence (eg, from 60% to 80%) implies both an increase in the average behavior and a reduction in variance. Where this connection between a change in a behavior and a change in variance was not explicitly reported and could not be inferred clearly from the distribution, the study reflected a Type 4 change. That is, the 13 studies coded as Type 4 found that the EMR affected clinical practices but not necessarily clinical variation.
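The binomial reasoning above is easy to verify numerically: for a binary (adhere/not adhere) outcome, the per-patient variance is p(1 - p), which falls as the adherence rate rises past 50%.

```python
def adherence_variance(p):
    """Variance of a binary (adhere / not adhere) outcome with adherence rate p.

    For a Bernoulli variable, variance = p * (1 - p), maximal at p = 0.5.
    """
    return p * (1 - p)

# A rise in guideline adherence from 60% to 80% also reduces variance:
print(adherence_variance(0.60))  # 0.24
print(adherence_variance(0.80))  # ~0.16
```

This is why, for adherence-type measures, a reported change in the average can stand in for a change in variance (Type 2), whereas for unbounded measures such as length of stay the variance must be reported separately.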

Regarding the quadruple aim outcomes, over half of the studies addressed individual care outcomes (19/36, 53%) [33,35,37,38,40,42-44,46-49,51,53,54,58,60,63,65], many examined efficiency (21/36, 58%) [33,34,37,38,40-43,46,47,49,51,52,55,56,58,60,61,63,64,67], a handful examined clinician experience (6/36, 17%) [33,34,36,39,41,61], and none examined population health outcomes (Table 1). Some studies examined just 1 of the 4 outcomes (13/36, 36%) [35,36,39,44,48,52-56,64,65,67], most examined 2 outcomes (15/36, 42%) [34,37,38,40-43,46,47,49,51,58,60,61,63], 1 study examined 3 outcomes [33], and none examined all 4 outcomes of the widely accepted quadruple aim of health care.

Of the studies that measured changes in variation (18/36, 50%), many (11/18, 61%) [33,35,38,44,46-49,53,58,65] examined follow-on changes in clinical care outcomes, half assessed cost outcomes (9/18, 50%) [33,34,38,46,47,49,56,58,67], few examined clinician experience outcomes (2/18, 11%) [33,34], and none addressed population health outcomes. In other words, even though studies generally reported positive findings (in terms of overall study disposition), this positive conclusion was based on a partial rather than comprehensive assessment of outcomes.

There was heterogeneity in study data quality, with QATSDD scores ranging from a low of 43% through to a high of 81% and a mean of 65% across all included studies (Multimedia Appendix 2).

Principal Findings

This review finds some evidence that EMRs can help reduce unwarranted clinical variation and thereby improve health care outcomes. The evidence, however, is not strong. This reflects the facts that (1) study quality was not high, (2) few studies examined the effect, and (3) clinical variation and outcomes were not examined consistently (different outcome measures across studies) or comprehensively (rarely studying more than 1 outcome).

Surprisingly, while all the studies retrieved by our search discussed clinical variation, few studies measured it, and even fewer tied these changes in clinical variation to a broad set of health care outcome measures.

The theoretical framework proposed earlier can be used to understand the results of the review and identify directions for research. Specifically, 5 factors can enhance the EMR’s effects on unwarranted clinical variation and follow-on health care outcomes: design, implementation, use, clinical theory, and outcome monitoring and re-adjustment (Figure 1). These factors were examined only sporadically across studies with an average of 3 addressed per paper, and only 4 of the 36 retrieved studies examined all 5 factors [34,41,43,49].


Design

Intervention design was discussed in most studies (27/36, 75%) [33,34,36,37,40-50,53-55,58,59,61-65,67,70] but not in depth. While not a core focus of the studies, design-related issues that may affect clinical variation were identified, such as the insights that “design characteristics that are intended to make documentation more efficient can have unintended consequences” and that “some of the suboptimal design characteristics of the EHR may be exacerbated by user-related practices” [36].


Implementation

Almost all the studies (35/36, 97%) [33-35,37-67,70] examined the implementation of a new or refined intervention into a clinical setting, but specific implementation details were found in fewer studies (23/36, 64%) [33-35,37,40-46,48-50,52,55,56,59-62,64,70]. The introduction of EMRs and their components is in large part a change management process, with both situational and psychological aspects to consider [74]. The successful implementation of change requires the participation, commitment, and support of key organizational stakeholders throughout the life span of the process to provide the highest chance of success [75,76].

Education and Training
One way to improve outcomes is to educate and train users to employ the EMR more effectively. The role of education and training was addressed in the majority of studies (25/36, 69%) [34,37,39-41,43,44,46-49,52-58,60-64,66,70] and was frequently identified as critical to an intervention’s success or failure. Education and training were also singled out as needing primary focus in the studies deemed to have a negative or mixed disposition [36,44,55-57,59]. A multifaceted approach with local super-user support, high-quality training materials, and education and feedback sessions is likely to help. For instance, a 2018 study by Robinson [77] of Kaiser Permanente found a significant increase in the use of many order sets after a 3-day intensive EMR education intervention tailored specifically for physicians and delivered with interactive teaching methods.

Underlying Clinical Theory

The interventions in the retrieved studies were all built on underlying clinical theory that explicitly or implicitly directs clinical practice via a pathway, program, or guideline. These ranged from locally developed standards derived from journal articles and consensus guidelines to, more commonly, established national or peak-body guidelines. Given that clinical care should be tailored to the needs of patients in the local setting, how best to identify and customize the appropriate underlying theory for a guideline, and how stringently to implement it in the EMR, are open questions that require further research.

Outcome Monitoring and Re-adjustment

Only 10 studies addressed monitoring of clinical outcomes and re-adjustment of the interventions. Even then, monitoring was typically confined to the implementation phase rather than extending to long-term, ongoing monitoring and revision. Using EMRs to implement feedback loops and quality management life cycles can help health care organizations improve safety and quality and become learning organizations [78]. Intermountain Healthcare has shown how this can be achieved via repeated cycles of create, distribute, use, monitor, and feedback [79,80].

Limitations
Despite steps taken to perform high-quality searching, sample bias may still exist. Because this is an understudied topic, the search terms and meta-tags for it are not yet mature and validated; different search terms could therefore have retrieved additional relevant publications. Gray literature on these topics (such as internal health service reports) may also exist but was not retrieved. The time span of included studies was broad, covering over 20 years, but a longer time span may have identified additional papers.

Differences in the design and scope of the retrieved papers prevented direct comparisons among studies and meta-analytic tests. Judgment also needed to be exercised when coding articles. While interrater reliability tests suggested that the coding was reliable, some subjectivity inevitably remained. Finally, the context faced by a health service (eg, its resources and patient mix) influences how an EMR can help. Given the small number of studies in this area and their heterogeneity, it was not possible to pinpoint the most salient elements of context.
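The interrater reliability of the article coding was assessed with weighted kappa [69], which credits partial agreement between raters on ordinal categories. The sketch below is purely illustrative — the article labels and the two reviewers' codes are hypothetical, not the review's actual data — and shows a linear-weighted Cohen kappa in Python:

```python
from itertools import product

def weighted_kappa(rater_a, rater_b, categories, weight="linear"):
    """Cohen's weighted kappa for two raters over ordered categories."""
    n = len(rater_a)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}

    # Disagreement weight: 0 on the diagonal, growing with category distance.
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weight == "linear" else d ** 2

    # Observed joint proportions over the k x k agreement table.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1 / n

    # Marginal proportions for each rater.
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Observed vs chance-expected weighted disagreement.
    d_obs = sum(w(i, j) * obs[i][j] for i, j in product(range(k), repeat=2))
    d_exp = sum(w(i, j) * pa[i] * pb[j] for i, j in product(range(k), repeat=2))
    return 1 - d_obs / d_exp

# Two hypothetical reviewers coding 8 articles as negative/mixed/positive.
a = ["pos", "pos", "mixed", "neg", "pos", "mixed", "pos", "neg"]
b = ["pos", "mixed", "mixed", "neg", "pos", "mixed", "pos", "neg"]
kappa = weighted_kappa(a, b, ["neg", "mixed", "pos"])
```

With linear weights, the single one-step disagreement above is penalized half as much as a two-step (negative vs positive) disagreement would be; quadratic weights would penalize distant disagreements more heavily.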

Expanding the study to include nonacute health care settings, articles in languages other than English, and specifying additional EMR vendors may provide valuable insight into additional means and methods available to address EMR-based clinical variation beyond those identified within this review.

Comparison With Prior Work

Existing studies and reviews on comparable topics were examined. While there is much existing work addressing the effects of EMRs on health care quality and outcomes, measuring various criteria (efficiency, guideline adherence, errors, clinical outcomes), none adequately or directly addresses these aspects through the lens of clinical variation and outcomes [81,82]. No previous study has related variation and clinical outcomes back to the quadruple aim of health care. The ability to map variation in EMR-related clinical care processes and outcomes to all 4 elements of the quadruple aim (patient experience, public health, cost, and clinician experience) sets this review apart from prior work in the field (Figure 1).

Conclusions
EMRs and their components, such as PowerPlans/SmartSets, are not a panacea but tools to assist health care provision. It is widely thought that evidence-based clinical guidelines play an essential role in promoting quality of care and minimizing unwarranted variation [83]. Ideally, EMRs should both improve average clinical practice and reduce unwarranted variation. However, the effects of unwarranted variation on clinical outcomes are unclear and understudied.

This review finds some evidence to suggest that unwarranted variation can be reduced, but the evidence is not strong. Many studies focused on technical outcomes (eg, adoption, reduction in variation), rather than on the clinical health care outcomes themselves. More research is needed to learn how EMRs can be implemented and used to reduce unwarranted variation; however, it is important to remember that reduction in clinical variation itself is not the desired outcome. Rather, improved health care outcomes are the ultimate goal.

It is critical that these health care outcomes are clearly defined and monitored in concert with the ongoing, EMR-driven reduction in variation, creating a continuous learning health care system with appropriate governance that iteratively improves health care outcomes over time.

Future Research
Additional empirical research is needed on EMRs and on how elements such as PowerPlans/SmartSets affect clinical variation and patient outcomes. More attention needs to be given to how to (1) measure clinical variation and unwarranted variation; (2) improve the effects of an EMR on reducing unwarranted clinical variation; (3) measure multiple elements of the quadruple aim of health care in a single study; and (4) articulate and test the chain of evidence from the EMR to changes in clinical variation to outcomes.

Acknowledgments
This research was funded via an Australian Research Council Linkage Projects grant (LP170101154). Tri Lam (TL) was the second reviewer, assisting with article screening and data extraction.

Authors' Contributions

AB-J and CS conceived the study and its design. TH conducted the research, the primary analysis, and the initial drafting of the paper. AB-J, CS, and RD contributed to the analysis and drafting of the paper and all authors approved the final manuscript. TH is the corresponding author.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Electronic medical record/electronic health record prevalence in published studies figure.

PNG File , 52 KB

Multimedia Appendix 2

Quality assessment summary table using the Quality Assessment Tool for Studies with Diverse Designs (QATSDD).

DOCX File , 14 KB

Multimedia Appendix 3

Data extraction coding table.

DOCX File , 9 KB

  1. Sikka R, Morath JM, Leape L. The Quadruple Aim: care, health, cost and meaning in work. BMJ Qual Saf 2015 Oct;24(10):608-610. [CrossRef] [Medline]
  2. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood) 2008;27(3):759-769. [CrossRef] [Medline]
  3. Australian Commission on Safety and Quality in Health Care. Healthcare Variation. 2019.   URL: [accessed 2021-09-29]
  4. Glover JA. The incidence of tonsillectomy in school children: (section of epidemiology and state medicine). Proc R Soc Med 1938 Aug;31(10):1219-1236 [FREE Full text] [Medline]
  5. Wennberg J, Gittelsohn. Small area variations in health care delivery. Science 1973 Dec 14;182(4117):1102-1108. [CrossRef] [Medline]
  6. Kennedy PJ, Leathley CM, Hughes CF. Clinical practice variation. Med J Aust 2010 Oct 18;193(S8):S97-S99. [CrossRef] [Medline]
  7. Sutherland K, Levesque J. Unwarranted clinical variation in health care: Definitions and proposal of an analytic framework. J Eval Clin Pract 2020 Jun;26(3):687-696 [FREE Full text] [CrossRef] [Medline]
  8. Phelps CE. Diffusion of information in medical care. J Econ Perspect 1992;6(3):23-42. [CrossRef] [Medline]
  9. Timmermans S. From autonomy to accountability: the role of clinical practice guidelines in professional power. Perspect Biol Med 2005;48(4):490-501. [CrossRef] [Medline]
  10. Australian Commission on Safety and Quality in Health Care. National Safety and Quality Health Service Standards: User Guide for the Review of Clinical Variation in Health Care. 2020.   URL: [accessed 2021-09-29]
  11. Harrison R, Manias E, Mears S, Heslop D, Hinchcliff R, Hay L. Addressing unwarranted clinical variation: A rapid review of current evidence. J Eval Clin Pract 2019 Feb;25(1):53-65. [CrossRef] [Medline]
  12. Imison C, Castle-Clarke S, Watson R, Edwards N. Delivering the Benefits of Digital Health Care. London, UK: Nuffield Trust; 2016.
  13. WHO Global Observatory for eHealth. Global Diffusion of EHealth: Making Universal Health Coverage Achievable: Report of the Third Global Survey on EHealth. Geneva, Switzerland: World Health Organization; 2016.
  14. Hoover R. Benefits of using an electronic health record. Nursing2018 2016;46(7):21-22. [CrossRef]
  15. Menachemi N, Collum TH. Benefits and drawbacks of electronic health record systems. Risk Manag Healthc Policy 2011;4:47-55 [FREE Full text] [CrossRef] [Medline]
  16. Menachemi N, Brooks RG. Reviewing the benefits and costs of electronic health records and associated patient safety technologies. J Med Syst 2006 Jun;30(3):159-168. [CrossRef] [Medline]
  17. Jang J, Yu SH, Kim C, Moon Y, Kim S. The effects of an electronic medical record on the completeness of documentation in the anesthesia record. Int J Med Inform 2013 Aug;82(8):702-707. [CrossRef] [Medline]
  18. Tang PC, LaRosa MP, Gorden SM. Use of computer-based records, completeness of documentation, and appropriateness of documented clinical decisions. J Am Med Inform Assoc 1999;6(3):245-251 [FREE Full text] [CrossRef] [Medline]
  19. Gunter TD, Terry NP. The emergence of national electronic health record architectures in the United States and Australia: models, costs, and questions. J Med Internet Res 2005 Mar 14;7(1):e3 [FREE Full text] [CrossRef] [Medline]
  20. Musen MA, Middleton B, Greenes RA. Clinical decision-support systems. In: Biomedical Informatics. Berlin, Germany: Springer; 2014:643-674.
  21. Raposo VL. Electronic health records: Is it a risk worth taking in healthcare delivery? GMS Health Technol Assess 2015;11:Doc02 [FREE Full text] [CrossRef] [Medline]
  22. Pelletier LR. Information-Enabled Decision-Making in Health Care: EHR-Enabled Standardization, Physician Profiling and Medical Home. 2010.   URL: [accessed 2021-10-05]
  23. Thomas J, Dahm MR, Li J, Smith P, Irvine J, Westbrook JI, et al. Variation in electronic test results management and its implications for patient safety: A multisite investigation. J Am Med Inform Assoc 2020 Aug 01;27(8):1214-1224 [FREE Full text] [CrossRef] [Medline]
  24. Detsky AS. Regional variation in medical care. N Engl J Med 1995 Aug 31;333(9):589-590. [CrossRef] [Medline]
  25. McLaughlin CP. Why variation reduction is not everything: a new paradigm for service operations. Int J of Service Industry Mgmt 1996 Aug;7(3):17-30. [CrossRef]
  26. McLaughlin C, Johnson S. Inherent variability in service operations: identification, measurement and implications. In: Services Management: New Directions and Perspectives. London, UK: Cassell; 1995:226-229.
  27. KLAS Research. Global (Non-US) EMR Market Share. 2019.   URL: [accessed 2021-10-05]
  28. KLAS Research. US Hospital EMR Market Share. 2020.   URL: [accessed 2021-10-05]
  29. Fierce Healthcare. Epic, Meditech Gain U.S. Hospital Market Share as Other EHR Vendors Lose Ground. 2020.   URL: [accessed 2021-10-05]
  30. Heckel K. Power Plans. 2014.   URL:
  31. BJC HealthCare. EPIC- Resources - How To Speak Epic. 2020.   URL: [accessed 2021-10-05]
  32. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 2009 Jul 21;6(7):e1000100 [FREE Full text] [CrossRef] [Medline]
  33. Adelson KB, Qiu YC, Evangelista M, Spencer-Cisek P, Whipple C, Holcombe RF. Implementation of electronic chemotherapy ordering: an opportunity to improve evidence-based oncology care. J Oncol Pract 2014 Mar;10(2):e113-e119. [CrossRef] [Medline]
  34. Akenroye AT, Stack AM. The development and evaluation of an evidence-based guideline programme to improve care in a paediatric emergency department. Emerg Med J 2016 Feb;33(2):109-117. [CrossRef] [Medline]
  35. Amland RC, Dean BB, Yu H, Ryan H, Orsund T, Hackman JL, et al. Computerized clinical decision support to prevent venous thromboembolism among hospitalized patients: proximal outcomes from a multiyear quality improvement project. J Healthc Qual 2015;37(4):221-231. [CrossRef] [Medline]
  36. Asan O, Nattinger AB, Gurses AP, Tyszka JT, Yen TWF. Oncologists' views regarding the role of electronic health records in care coordination. JCO Clin Cancer Inform 2018 Dec;2:1-12 [FREE Full text] [CrossRef] [Medline]
  37. Attaar A, Wei J, Brunetti L. Evaluating adherence to guideline-directed infection management pre- and postimplementation of an electronic order set. J Pharm Pract 2021 Oct;34(5):721-726. [CrossRef] [Medline]
  38. Ballesca MA, LaGuardia JC, Lee PC, Hwang AM, Park DK, Gardner MN, et al. An electronic order set for acute myocardial infarction is associated with improved patient outcomes through better adherence to clinical practice guidelines. J Hosp Med 2014 Mar;9(3):155-161. [CrossRef] [Medline]
  39. Borok J, Udkoff J, Vaida F, Murphy J, Torriani F, Waldman A, et al. Transforming acne care by pediatricians: An interventional cohort study. J Am Acad Dermatol 2018 Nov;79(5):966-968 [FREE Full text] [CrossRef] [Medline]
  40. Bradywood A, Farrokhi F, Williams B, Kowalczyk M, Blackmore CC. Reduction of inpatient hospital length of stay in lumbar fusion patients with implementation of an evidence-based clinical care pathway. Spine (Phila Pa 1976) 2017 Feb;42(3):169-176. [CrossRef] [Medline]
  41. Chisolm DJ, McAlearney AS, Veneris S, Fisher D, Holtzlander M, McCoy KS. The role of computerized order sets in pediatric inpatient asthma treatment. Pediatr Allergy Immunol 2006 May;17(3):199-206. [CrossRef] [Medline]
  42. Dort JC, Sauro KM, Chandarana S, Schrag C, Matthews J, Nakoneshny S, et al. The impact of a quality management program for patients undergoing head and neck resection with free-flap reconstruction: longitudinal study examining sustainability. J Otolaryngol Head Neck Surg 2020 Jun 23;49(1):42 [FREE Full text] [CrossRef] [Medline]
  43. Ebinger JE, Porten BR, Strauss CE, Garberich RF, Han C, Wahl SK, et al. Design, challenges, and implications of quality improvement projects using the electronic medical record: case study: a protocol to reduce the burden of postoperative atrial fibrillation. Circ Cardiovasc Qual Outcomes 2016 Sep;9(5):593-599 [FREE Full text] [CrossRef] [Medline]
  44. Geltman PL, Fried LE, Arsenault LN, Knowles AM, Link DA, Goldstein JN, et al. A planned care approach and patient registry to improve adherence to clinical guidelines for the diagnosis and management of attention-deficit/hyperactivity disorder. Acad Pediatr 2015;15(3):289-296. [CrossRef] [Medline]
  45. Goga JK, Depaolo A, Khushalani S, Walters JK, Roca R, Zisselman M, et al. Lean methodology reduces inappropriate use of antipsychotics for agitation at a psychiatric hospital. Consult Pharm 2017 Jan 01;32(1):54-62. [CrossRef] [Medline]
  46. Gulati S, Zouk AN, Kalehoff JP, Wren CS, Davison PN, Kirkpatrick DP, et al. The use of a standardized order set reduces systemic corticosteroid dose and length of stay for individuals hospitalized with acute exacerbations of COPD: a cohort study. Int J Chron Obstruct Pulmon Dis 2018;13:2271-2278 [FREE Full text] [CrossRef] [Medline]
  47. Hendrickson MA, Wey AR, Gaillard PR, Kharbanda AB. Implementation of an electronic clinical decision support tool for pediatric appendicitis within a hospital network. Pediatr Emerg Care 2018 Jan;34(1):10-16 [FREE Full text] [CrossRef] [Medline]
  48. Hooper DK, Kirby CL, Margolis PA, Goebel J. Reliable individualized monitoring improves cholesterol control in kidney transplant recipients. Pediatrics 2013 Apr;131(4):e1271-e1279 [FREE Full text] [CrossRef] [Medline]
  49. Horton JD, Corrigan C, Patel T, Schaffer C, Cina RA, White DR. Effect of a standardized electronic medical record order set on opioid prescribing after tonsillectomy. Otolaryngol Head Neck Surg 2020 Aug;163(2):216-220. [CrossRef] [Medline]
  50. Jacobs BR, Hart KW, Rucker DW. Reduction in clinical variance using targeted design changes in Computerized Provider Order Entry (CPOE) order sets: impact on hospitalized children with acute asthma exacerbation. Appl Clin Inform 2012;3(1):52-63 [FREE Full text] [CrossRef] [Medline]
  51. Karajgikar ND, Manroa P, Acharya R, Codario RA, Reider JA, Donihi AC, et al. Addressing pitfalls in management of diabetic ketoacidosis with a standardized protocol. Endocr Pract 2019 May;25(5):407-412. [CrossRef] [Medline]
  52. Lewin SM, McConnell RA, Patel R, Sharpton SR, Velayos F, Mahadevan U. Improving the quality of inpatient ulcerative colitis management: promoting evidence-based practice and reducing care variation with an inpatient protocol. Inflamm Bowel Dis 2019 Oct 18;25(11):1822-1827. [CrossRef] [Medline]
  53. Lindberg SM, Anderson CK. Improving gestational weight gain counseling through meaningful use of an electronic medical record. Matern Child Health J 2014 Nov;18(9):2188-2194 [FREE Full text] [CrossRef] [Medline]
  54. Lindberg SM, DeBoth A, Anderson CK. Effect of a best practice alert on gestational weight gain, health services, and pregnancy outcomes. Matern Child Health J 2016 Oct;20(10):2169-2178 [FREE Full text] [CrossRef] [Medline]
  55. Morrisette M, Hammer J, Anderson W, Norton H, Green M, Gesin G. Impact of a multifaceted intervention on prescribing of proton pump inhibitors for stress ulcer prophylaxis in the critically ill. Arch Crit Care Med 2015 May 30;1(2):e1. [CrossRef]
  56. Prevedello LM, Raja AS, Ip IK, Sodickson A, Khorasani R. Does clinical decision support reduce unwarranted variation in yield of CT pulmonary angiogram? Am J Med 2013 Nov;126(11):975-981 [FREE Full text] [CrossRef] [Medline]
  57. Reynolds EL, Burke JF, Banerjee M, Callaghan BC. Randomized controlled trial of a clinical decision support system for painful polyneuropathy. Muscle Nerve 2020 May;61(5):640-644. [CrossRef] [Medline]
  58. Rooholamini SN, Clifton H, Haaland W, McGrath C, Vora SB, Crowell CS, et al. Outcomes of a clinical pathway to standardize use of maintenance intravenous fluids. Hosp Pediatr 2017 Dec;7(12):703-709. [CrossRef] [Medline]
  59. Sim EY, Tan DJA, Abdullah HR. The use of computerized physician order entry with clinical decision support reduces practice variance in ordering preoperative investigations: A retrospective cohort study. Int J Med Inform 2017 Dec;108:29-35. [CrossRef] [Medline]
  60. Sonstein L, Clark C, Seidensticker S, Zeng L, Sharma G. Improving adherence for management of acute exacerbation of chronic obstructive pulmonary disease. Am J Med 2014 Nov;127(11):1097-1104 [FREE Full text] [CrossRef] [Medline]
  61. Soo G, Wong Doo N, Burrows J, Ritchie A, Zhang J, Burke R. Improving the adoption of an electronic clinical decision support tool and evaluating its effect on venous thromboembolism prophylaxis prescribing at a Sydney tertiary teaching hospital. J Pharm Pract Res 2019 Jul 09;49(6):508-516. [CrossRef]
  62. Terasaki J, Singh G, Zhang W, Wagner P, Sharma G. Using EMR to improve compliance with clinical practice guidelines for management of stable COPD. Respir Med 2015 Nov;109(11):1423-1429 [FREE Full text] [CrossRef] [Medline]
  63. Wang EJ, Helgesen R, Johr CR, Lacko HS, Ashburn MA, Merkel PA. Targeted program in an academic rheumatology practice to improve compliance with opioid prescribing guidelines for the treatment of chronic pain. Arthritis Care Res (Hoboken) 2021 Oct;73(10):1425-1429. [CrossRef] [Medline]
  64. Webber EC, Warhurst HM, Smith SS, Cox EG, Crumby AS, Nichols KR. Conversion of a single-facility pediatric antimicrobial stewardship program to multi-facility application with computerized provider order entry and clinical decision support. Appl Clin Inform 2013;4(4):556-568 [FREE Full text] [CrossRef] [Medline]
  65. Studer A, Billings K, Thompson D, Ida J, Rastatter J, Patel M, et al. Standardized order set exhibits surgeon adherence to pain protocol in pediatric adenotonsillectomy. Laryngoscope 2021 Jul;131(7):E2337-E2343. [CrossRef] [Medline]
  66. Teich JM, Merchia PR, Schmiz JL, Kuperman GJ, Spurr CD, Bates DW. Effects of computerized physician order entry on prescribing practices. Arch Intern Med 2000 Oct 09;160(18):2741-2747 [FREE Full text] [CrossRef] [Medline]
  67. Kicker JS, Hill HS, Matheson CK. Better pairing propofol volume with procedural needs: a propofol waste reduction quality improvement project. Hosp Pediatr 2018 Oct;8(10):604-610. [CrossRef] [Medline]
  68. Smalheiser NR, Zhou W, Torvik VI. Anne O'Tate: A tool to support user-driven summarization, drill-down and browsing of PubMed search results. J Biomed Discov Collab 2008 Feb 15;3:2 [FREE Full text] [CrossRef] [Medline]
  69. Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull 1968 Oct;70(4):213-220. [CrossRef] [Medline]
  70. Rosovsky RP, Barra ME, Roberts RJ, Parmar A, Andonian J, Suh L, et al. When pigs fly: a multidisciplinary approach to navigating a critical heparin shortage. Oncologist 2020 Apr;25(4):334-347 [FREE Full text] [CrossRef] [Medline]
  71. Covidence Systematic Review Software. Melbourne, VIC, Australia: Veritas Health Innovation; 2020.   URL: [accessed 2021-10-05]
  72. Sirriyeh R, Lawton R, Gardner P, Armitage G. Reviewing studies with diverse designs: the development and evaluation of a new tool. J Eval Clin Pract 2012 Aug;18(4):746-752. [CrossRef] [Medline]
  73. Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ. Cochrane Handbook for Systematic Reviews of Interventions. Hoboken, NJ: John Wiley & Sons; 2019.
  74. Bencomo M. Implementing a Clinical Protocol: Early Enteral Nutrition Therapy in Critically Ill Patients. Peoria, IL: Saint Francis Medical Center College of Nursing; 2019.   URL: [accessed 2020-06-22]
  75. Kotter JP. Leading Change: Why Transformation Efforts Fail. Harvard Business Review. 1995.   URL: [accessed 2021-10-05]
  76. Campbell RJ. Change management in health care. Health Care Manag (Frederick) 2008;27(1):23-39. [CrossRef] [Medline]
  77. Robinson KE, Kersey JA. Novel electronic health record (EHR) education intervention in large healthcare organization improves quality, efficiency, time, and impact on burnout. Medicine (Baltimore) 2018 Sep;97(38):e12319 [FREE Full text] [CrossRef] [Medline]
  78. Al-Abri RK, Al-Hashmi IS. The learning organisation and health care education. Sultan Qaboos Univ Med J 2007 Dec;7(3):207-214 [FREE Full text] [Medline]
  79. Hulse NC, Lee J, Borgeson T. Visualization of order set creation and usage patterns in early implementation phases of an electronic health record. AMIA Annu Symp Proc 2016;2016:657-666 [FREE Full text] [Medline]
  80. Senge PM. The Fifth Discipline: The Art and Practice of the Learning Organization. Redfern, NSW: Currency Press; 2006.
  81. Campanella P, Lovato E, Marone C, Fallacara L, Mancuso A, Ricciardi W, et al. The impact of electronic health records on healthcare quality: a systematic review and meta-analysis. Eur J Public Health 2016 Feb;26(1):60-64. [CrossRef] [Medline]
  82. Riza R, Nurwahyuni A. The implementation and outcome of clinical pathway: a systematic review. 2019 Presented at: The 5th International Conference on Public Health; February 13-14, 2019; Solo, Indonesia p. 677-686. [CrossRef]
  83. Patel BN. Impact of implementing a computerised quality improvement intervention in primary healthcare. 2018.   URL: [accessed 2021-10-05]

CDS: clinical decision support
EHR: electronic health record
EMR: electronic medical record
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
QATSDD: Quality Assessment Tool for Studies with Diverse Designs

Edited by G Eysenbach; submitted 16.05.21; peer-reviewed by A Kazley; comments to author 17.06.21; revised version received 22.06.21; accepted 19.09.21; published 17.11.21


©Tobias Hodgson, Andrew Burton-Jones, Raelene Donovan, Clair Sullivan. Originally published in JMIR Medical Informatics, 17.11.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.