
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/43097.
Monitoring the Implementation of Tobacco Cessation Support Tools: Using Novel Electronic Health Record Activity Metrics

Original Paper

1 iDAPT Implementation Science Center for Cancer Control, Wake Forest University School of Medicine, Winston-Salem, NC, United States

2 Department of Internal Medicine, Wake Forest University School of Medicine, Winston-Salem, NC, United States

3 Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States

4 Department of Preventive Medicine and Epidemiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States

5 Department of Implementation Science, Division of Public Health Sciences, Wake Forest University School of Medicine, Winston-Salem, NC, United States

6 Wake Forest Center for Healthcare Innovation, Winston-Salem, NC, United States

7 Wake Forest Center for Biomedical Informatics, Winston-Salem, NC, United States

8 Center for Health Analytics, Media, and Policy, RTI International, Research Triangle Park, NC, United States

9 Clinical & Translational Science Institute, Wake Forest University School of Medicine, Winston-Salem, NC, United States

10 Department of Physiology and Pharmacology, Wake Forest University School of Medicine, Winston-Salem, NC, United States

11 Department of Social Sciences and Health Policy, Wake Forest University School of Medicine, Winston-Salem, NC, United States

12 Wake Forest University School of Medicine, Winston-Salem, NC, United States

Corresponding Author:

Jinying Chen, PhD

iDAPT Implementation Science Center for Cancer Control

Wake Forest University School of Medicine

1 Medical Center Blvd

Winston-Salem, NC, 27101

United States

Phone: 1 617 358 5838

Email: jinchen@wakehealth.edu


Background: Clinical decision support (CDS) tools in electronic health records (EHRs) are often used as core strategies to support quality improvement programs in the clinical setting. Monitoring the impact (intended and unintended) of these tools is crucial for program evaluation and adaptation. Existing approaches for monitoring typically rely on health care providers’ self-reports or direct observation of clinical workflows, which require substantial data collection efforts and are prone to reporting bias.

Objective: This study aims to develop a novel monitoring method leveraging EHR activity data and demonstrate its use in monitoring the CDS tools implemented by a tobacco cessation program sponsored by the National Cancer Institute’s Cancer Center Cessation Initiative (C3I).

Methods: We developed EHR-based metrics to monitor the implementation of two CDS tools: (1) a screening alert reminding clinic staff to complete the smoking assessment and (2) a support alert prompting health care providers to discuss support and treatment options, including referral to a cessation clinic. Using EHR activity data, we measured the completion (encounter-level alert completion rate) and burden (the number of times an alert was fired before completion and time spent handling the alert) of the CDS tools. We report metrics tracked for 12 months post implementation, comparing 7 cancer clinics (2 clinics implemented only the screening alert and 5 implemented both alerts) within a C3I center, and identify areas to improve alert design and adoption.

Results: The screening alert fired in 5121 encounters during the 12 months post implementation. The encounter-level alert completion rate (clinic staff acknowledged completion of screening in the EHR: 0.55; clinic staff completed EHR documentation of screening results: 0.32) remained stable over time but varied considerably across clinics. The support alert fired in 1074 encounters during the 12 months. Providers acted upon (ie, did not postpone) the support alert in 87.3% (n=938) of encounters, identified a patient ready to quit in 12% (n=129) of encounters, and ordered a referral to the cessation clinic in 2% (n=22) of encounters. With respect to alert burden, both alerts fired more than twice, on average, before completion (screening alert: 2.7 times; support alert: 2.1 times); per encounter, time spent postponing the screening alert was similar to time spent completing it (52 vs 53 seconds), whereas time spent postponing the support alert exceeded time spent completing it (67 vs 50 seconds). These findings point to four areas where alert design and use can be improved: (1) improving alert adoption and completion through local adaptation, (2) improving support alert efficacy through additional strategies, including training in provider-patient communication, (3) improving the accuracy of tracking alert completion, and (4) balancing alert efficacy with the burden it imposes.

Conclusions: EHR activity metrics were able to monitor the success and burden of tobacco cessation alerts, allowing for a more nuanced understanding of potential trade-offs associated with alert implementation. These metrics can be used to guide implementation adaptation and are scalable across diverse settings.

JMIR Med Inform 2023;11:e43097

doi:10.2196/43097


Introduction

Background

Provider-facing computerized clinical decision support (CDS) tools in electronic health records (EHRs) are common digital health interventions supporting health care quality improvement programs [1-6]. Monitoring (ie, continual evaluation) of the impact of these tools is important for program evaluation and may ultimately contribute to implementation success [7,8]. Approaches for evaluating CDS tools largely rely on surveys, qualitative interviews, and data collected through direct observation or audio/video recording [9-12]. These approaches require substantial human effort (from implementation staff and clinical teams) for data collection. Automated methods leveraging EHR activity data offer a promising solution to reduce the data collection burden, but research on these methods is still at an early stage.

This study aimed to develop automatic metrics to monitor the implementation of EHR-embedded CDS tools and demonstrate their use within the context of a smoking cessation program sponsored by a National Cancer Institute (NCI)–designated cancer center.

Tobacco Control Programs in NCI Cancer Centers

Tobacco use increases the risk of cancer and leads to poor prognosis after cancer diagnosis [13-16]. Clinical practice guidelines recommend routine screening for tobacco use and referral to evidence-based cessation interventions in patients with cancer [17,18], but this practice is underused [19]. To address this practice gap, the NCI’s Beau Biden Cancer Moonshot program launched the Cancer Center Cessation Initiative (C3I) in 2017 to provide funding to NCI-designated cancer centers to implement or enhance their tobacco treatment services [20].

Electronic alerts (e-alerts) are common CDS tools in EHRs, promoting adherence to practice guidelines [2-5,21,22], including tobacco screening and treatment at the point of care [23-25]. This strategy has been adopted by some C3I-funded cancer centers [26,27]. However, effective implementation of alerts into the clinical workflow is nontrivial [28-32]. Monitoring of provider responses to newly implemented alerts can identify barriers to adoption and the burden imposed by the alerts.

Study Objectives

We developed and applied EHR activity metrics to answer three questions. (1) Did the alert completion rate change over time or vary across clinics? (2) What was the burden introduced by the alerts? (3) What factors were associated with variation in alert completion? Our research questions were motivated by three factors. First, sustainability (eg, sustained use and completion of the alerts) is a key construct of implementation outcomes [8] and should be monitored over time. Second, monitoring variations in alert completion across clinics can support the adaptation of alert implementation to the local context. Third, alerts could add a “burden” on providers [30-33], which should be evaluated.


Methods

Study Design

We developed and applied new EHR activity metrics to monitor the tobacco cessation tools (two Best Practice Advisory [BPA] alerts) implemented in cancer clinics for 12 months (Figure 1).

Figure 1. Study overview. The support alert would fire only if the screening result was positive (ie, patient being a current smoker) and answers to both Smoking Screener questions (Q1: “When did you last smoke (even 1 or 2 puffs)?”; Q2: “Quitting smoking could help improve your health. Are you interested in quitting?”) were documented (see Multimedia Appendix 1, step B1). EHR: electronic health record.

Ethics Approval

The study was approved by the Wake Forest School of Medicine Institutional Review Board (IRB00066841). Deidentified EHR data were used, with informed consent for data access waived by the institutional review board.

Digital Health Intervention

The CDS tools were two conditional sequential alerts integrated into the Epic EHR, a commercial cloud-based EHR system (Figure 1; detailed in Multimedia Appendix 1): (1) a screening alert to remind clinic staff to complete tobacco screening, triggered if “current smoker” or “unknown smoking status” was previously documented in the EHR, and (2) a support alert to prompt the clinical provider to discuss support and referral to a tobacco cessation clinic, triggered if the screening result was positive and answers to both Smoking Screener questions were documented (see note for Figure 1). Each alert had two modalities: (1) interruptive (triggered when the patient chart was opened; if postponed, it fired again after 10 minutes or when the patient chart was reopened) and (2) noninterruptive (shown in the general BPA section of the EHR).
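To make the conditional logic concrete, the following is a minimal Python sketch of the trigger rules described above. The field names (smoking_status, q1_answered, q2_answered) are hypothetical stand-ins; the production alerts were configured through Epic's BPA rule engine rather than custom code.

```python
# Hypothetical sketch of the rule-based trigger logic described above.
# Field names are illustrative only, not the actual Epic configuration.

def screening_alert_fires(smoking_status: str) -> bool:
    """Screening alert: remind clinic staff to complete tobacco screening."""
    return smoking_status in ("current smoker", "unknown")

def support_alert_fires(smoking_status: str,
                        q1_answered: bool, q2_answered: bool) -> bool:
    """Support alert: prompt the provider to discuss support and referral.

    Fires only if the screening result is positive (current smoker) and
    both Smoking Screener questions have documented answers.
    """
    return smoking_status == "current smoker" and q1_answered and q2_answered
```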

Implementation Context

The Tobacco Control Center of Excellence (TCCOE) at the Wake Forest Baptist Comprehensive Cancer Center implemented the alerts in 2019 and 2020 in the Epic EHR system used by 7 cancer clinics (medical oncology: n=3; radiation oncology: n=3; cancer survivorship: n=1) in the Atrium Health Wake Forest Baptist Comprehensive Cancer Center. The alerts were integrated into the Epic EHR as BPAs, a form of CDS in the EHR that reminds providers to attend to important tasks [4]. The implementation team from the TCCOE worked with the hospital information technology team to implement the alerts. The alerts were customized using rule-based logic (eg, rules on who would receive the alerts and when to fire them; detailed in Multimedia Appendix 1). All 7 clinics implemented the screening alert; 5 implemented the support alert. Training was provided to clinic staff and providers (weekly sessions during the month before or the first month of implementation, with monthly check-ins thereafter). Some clinics used extensive support from patient navigators and tobacco treatment specialists to complete screening documentation and referral to the cessation clinic.

Evaluation

Metrics Development and Automation

Metrics development took three steps: (1) identifying relevant EHR variables, (2) developing SQL queries to extract variables from the EHR database, and (3) developing computer code to calculate the metrics. We used EHR data associated with 2 clinics to develop and test the metrics. A team of experts in health informatics and implementation science, EHR specialists, and physicians participated in the metrics development.

Using the computer code we developed, EHR data extraction takes about 10 minutes, and the calculation of each metric takes tens of seconds. This speed is adequate for monitoring CDS tools used by implementation programs. Full automation of these metrics is possible after their integration into the EHR.
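As an illustration of this three-step pipeline, the sketch below pairs a SQL extraction query with a Python wrapper. The table and column names are hypothetical and would need to be mapped to a site's actual EHR reporting schema; sqlite3 stands in for the site's database driver.

```python
# Illustrative sketch of steps 1-3: extract alert activity variables with SQL,
# then compute metrics in Python. Table and column names are hypothetical.
import sqlite3  # stand-in for the site's EHR reporting database driver

ALERT_EVENT_QUERY = """
SELECT alert_id, alert_instance_id, encounter_id, patient_id, alert_name,
       fired_at, responded_at, trigger_condition, action_taken,
       override_reason, signed_order
FROM alert_event_log
WHERE alert_name IN ('tobacco screening alert', 'tobacco support alert');
"""

def extract_alert_events(db_path: str) -> list[dict]:
    """Pull one row per alert instance from the event log (steps 1-2)."""
    with sqlite3.connect(db_path) as conn:
        conn.row_factory = sqlite3.Row
        return [dict(row) for row in conn.execute(ALERT_EVENT_QUERY)]

# Step 3 (metric calculation) operates on the returned rows; see the
# completion-rate and burden-metric sketches later in this section.
```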

EHR Variables Used to Derive the Metrics

We extracted alert activity data from event log files of the Epic EHR system. The variables used to develop the metrics included alert id, alert instance id, alert name (eg, a tobacco screening alert), timestamps corresponding to alert firing and provider responding (called alert firing time point and alert response time point for convenience), alert triggering condition (eg, triggered by opening the patient chart), subsequent actions taken (eg, acknowledge/override warning), alert override reason, and alert-associated signed orders.

Each alert id is associated with a unique encounter id and a patient id. An alert id corresponds to multiple alert instance ids if the alert is fired again after being postponed. Alert triggering condition was used to distinguish interruptive alerts from noninterruptive ones.

We used subsequent actions taken, alert override reason, and alert-associated signed orders to identify providers’ actions on the alerts. When the clinic staff completed or postponed the screening alert, subsequent actions taken recorded the value “acknowledge/override warning,” and alert override reason recorded whether the staff acknowledged screening completion (ie, hit the button “Documented in Flowsheet,” step A in Figure A1-1, Multimedia Appendix 1), postponed the alert (hit “Defer”), or determined that the patient was inappropriate for screening (hit “Not appropriate”). For the support alert, subsequent actions taken recorded the value “acknowledge/override warning” when the provider hit the buttons under “acknowledge reason,” and alert override reason recorded the provider’s actions (eg, discussed or not discussed with patients) and the patient’s readiness to quit (Figure A1-2, Multimedia Appendix 1). Alert-associated signed orders recorded whether the provider placed an order for a referral to the cessation clinic.

In addition, we used two encounter-level variables, flowsheet name and flowsheet value, to determine whether the clinic staff documented screening results (ie, answers to Q1-Q3 in step B1 in Figure A1-1, Multimedia Appendix 1) in the EHR.
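To make the response mapping in the last two paragraphs concrete, here is a hedged Python sketch that classifies one screening alert instance from the fields just described. The string values mirror the examples above, but the exact codes in a given Epic installation may differ.

```python
# Hypothetical mapping from event-log fields to staff/provider responses,
# following the description above; exact field values vary by installation.

def classify_screening_response(action_taken: str, override_reason: str) -> str:
    """Classify clinic staff's response to one screening alert instance."""
    if action_taken != "acknowledge/override warning":
        return "no response recorded"
    return {
        "Documented in Flowsheet": "acknowledged screening completion",
        "Defer": "postponed",
        "Not appropriate": "patient inappropriate for screening",
    }.get(override_reason, "other")

def placed_referral(signed_order: str | None) -> bool:
    """True if an alert-associated signed order (cessation referral) exists."""
    return signed_order is not None
```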

Metrics

We defined three metrics to measure alert completion and burden (Multimedia Appendix 2).

The alert completion rate was defined as the number of encounters where a provider completed alert-prompted actions divided by the number of encounters where the alert fired. We defined screening alert completion by either staff’s acknowledging completion of screening or completion of EHR documentation of screening results. We defined support alert completion at two levels: (1) discussing with patients and assessing patient readiness to quit (“discussion”) or (2) referring patients to the on-site tobacco cessation clinic (“referral”).
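A minimal sketch of this definition in Python, reusing the hypothetical event rows from the extraction sketch above (each row carrying an encounter_id and an override_reason):

```python
# Encounter-level completion rate: encounters with a completed alert divided
# by encounters in which the alert fired (hypothetical field names).

def completion_rate(events: list[dict], completed_reasons: set[str]) -> float:
    fired = {e["encounter_id"] for e in events}
    completed = {e["encounter_id"] for e in events
                 if e["override_reason"] in completed_reasons}
    return len(completed) / len(fired) if fired else 0.0

# Example: screening completion based on staff acknowledgment
# completion_rate(screening_events, {"Documented in Flowsheet"})
```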

We measured the burden of interruptive alerts by two metrics: alert firing rate and alert handling time. We focused on interruptive alerts because they were more likely to add a “burden” on providers [30-33]. We defined alert firing rate as the number of times the alert fired during a specific period divided by the number of times the alert was completed during that period. We calculated the average time providers spent completing an alert per encounter, using encounters in which the alert was completed; similarly, we calculated the average time spent postponing alerts per encounter using encounters in which the alert was postponed at least once.
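The two burden metrics could be computed along the following lines. This is a sketch under the same assumptions (hypothetical field names; fired_at and responded_at as datetime objects), not the study's actual code.

```python
# Burden metrics for interruptive alerts: firing rate (times fired / times
# completed over a period) and mean handling time per encounter, computed
# separately for completed and postponed alert instances.
from collections import defaultdict
from statistics import mean

def alert_firing_rate(n_fired: int, n_completed: int) -> float:
    """Times the alert fired in a period / times it was completed."""
    return n_fired / n_completed

def mean_handling_seconds(events: list[dict], response: str) -> float:
    """Average seconds per encounter spent on instances with this response."""
    per_encounter = defaultdict(float)
    for e in events:
        if e["response"] == response:  # eg, "completed" or "postponed"
            per_encounter[e["encounter_id"]] += (
                e["responded_at"] - e["fired_at"]).total_seconds()
    return mean(per_encounter.values()) if per_encounter else 0.0
```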

Data Collection

For each clinic, we collected EHR data about clinic characteristics, patient characteristics, and EHR activities related to tobacco cessation alerts. We collected EHR alert activity data as described previously. Each instance of an alert was linked to a specific patient encounter and the patient’s demographic information (sex and race), using encounter and patient IDs.

Data Analysis

We summarized clinic and patient characteristics for each clinic. We then used EHR activity metrics to address the research questions related to alert completion and burden. Statistical analyses were conducted using Stata/MP 15.1 (StataCorp LLC) [34].

We measured the overall and per-clinic alert completion rates for the screening alert during every 3-month period across 12 months post alert implementation. The 12-month postimplementation period was specified for each clinic. We measured the support alert completion rate at two levels: “discussion” and “referral.”
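For the over-time analysis, encounters can be bucketed into clinic-specific 3-month windows, as in this sketch (go_live is a hypothetical per-clinic implementation date; the bucketing is by calendar month for simplicity):

```python
# Assign an encounter to one of four 3-month windows (0-3) after the
# clinic's go-live date, so completion rates can be tracked over time.
from datetime import date

def window_index(encounter_date: date, go_live: date) -> int:
    months = ((encounter_date.year - go_live.year) * 12
              + (encounter_date.month - go_live.month))
    return months // 3  # 0: months 0-2, 1: months 3-5, 2: 6-8, 3: 9-11
```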

We measured the alert firing rate and handling time of interruptive screening alerts and support alerts to assess the burden of interruptive alerts.

Factors Associated With Alert Completion

As a secondary analysis, we examined the distribution of alert completion over patients’ demographics (sex and race) and encounter types.

Three physicians reviewed all encounter types and selected “relevant encounter” types as those in which screening for smoking status was an appropriate part of routine care (Multimedia Appendix 3).


Results

Clinic Characteristics

The clinics varied in the number of encounters (from n=1464 to n=110,553) and patients (from n=328 to n=9410) during 12 months post alert implementation (Table 1). The typical structure of these clinics was for nurses to support multiple providers across multiple days.

Table 1. Clinic and patient characteristics during the 12 months after implementing tobacco cessation alerts.

| Characteristic | M1^a,d | M2^a | M3^a | R1^b,d | R2^b | R3^b | S^c |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Clinic characteristics | | | | | | | |
| Service area | Urban | Rural | Urban | Urban | Rural | Urban | Urban |
| Staffing, n^e | 5-10 | 10-20 | 100-110 | 10-20 | 5-10 | 20-30 | 10-20 |
| Encounters, n | 30,727 | 9102 | 110,553 | 4670 | 2769 | 21,362 | 1464 |
| Patients, n | 4688 | 1193 | 9410 | 1059 | 328 | 3196 | 623 |
| Patient characteristics | | | | | | | |
| Age (years), mean (SD) | 64 (14) | 65 (13) | 61 (15) | 66 (11) | 67 (11) | 64 (14) | 59 (19) |
| Sex, n (%)^f | | | | | | | |
|   Female | 3122 (66.6) | 766 (64.2) | 4956 (52.7) | 603 (56.9) | 156 (47.6) | 1479 (46.3) | 327 (52.5) |
|   Male | 1566 (33.4) | 427 (35.8) | 4454 (47.3) | 456 (43.1) | 172 (52.4) | 1716 (53.7) | 296 (47.5) |
| Race, n (%)^f | | | | | | | |
|   African American | 1070 (22.8) | 160 (13.4) | 1731 (18.4) | 223 (21.1) | 50 (15.2) | 512 (16.0) | 98 (15.7) |
|   White | 3350 (71.5) | 988 (82.8) | 7227 (76.8) | 780 (73.7) | 263 (80.2) | 2546 (79.7) | 500 (80.3) |
|   Other^g | 259 (5.5) | 45 (3.8) | 436 (4.6) | 53 (5.0) | 14 (4.3) | 136 (4.3) | 24 (3.9) |
| Hispanic or Latino, n (%)^f | | | | | | | |
|   Yes | 114 (2.4) | 23 (1.9) | 349 (3.7) | 19 (1.8) | 8 (2.4) | 85 (2.7) | 20 (3.2) |
|   No | 4545 (96.9) | 1170 (98.1) | 9042 (96.1) | 1034 (97.6) | 320 (97.6) | 3104 (97.1) | 603 (96.8) |
| Insurance, n (%) | | | | | | | |
|   Medicare | 2721 (58.0) | 775 (65.0) | 4961 (52.7) | 646 (61.0) | 205 (62.5) | 1699 (53.2) | 334 (53.6) |
|   Medicaid | 243 (5.2) | 81 (6.8) | 598 (6.4) | 60 (5.7) | 19 (5.8) | 190 (5.9) | 27 (4.3) |
|   Other insurance | 1659 (35.4) | 305 (25.6) | 3537 (37.6) | 334 (31.5) | 99 (30.2) | 1226 (38.4) | 253 (40.6) |
|   No insurance | 65 (1.4) | 32 (2.7) | 266 (2.8) | 19 (1.8) | 5 (1.5) | 77 (2.4) | 9 (1.4) |
| Smoking rate, n/N (%)^h | 590/4606 (12.8) | 198/1150 (17.2) | 1006/9230 (10.9) | 174/1051 (16.6) | 58/325 (17.8) | 435/3155 (13.8) | 45/506 (8.9) |

^a M1-M3: medical oncology clinics 1-3.

^b R1-R3: radiation oncology clinics 1-3.

^c S: cancer survivorship clinic.

^d M1 and R1 implemented only the screening alert.

^e The approximate number of clinic team members (physicians, advanced practice practitioners, nurses, and other clinical staff) in a clinic. The number is not precise due to staff turnover and the hiring of temporary staff.

^f Some clinics have a small percentage of patients missing information on sex (0.03% missing for R3; complete for other clinics), race (complete for M2; less than 0.3% missing for other clinics), and ethnicity (1% missing for M1, 0.6% missing for R1, 0.2% missing for M3 and R3; complete for other clinics).

^g Other: American Indian or Alaska Native, Asian, Native Hawaiian or Other Pacific Islander, Latin American or Hispanic, and other.

^h The percent of patients who were active smokers during the 12 months post alert implementation. The denominator is the number of patients who had their smoking status documented in the electronic health record.

Patient Characteristics

The patients seen by the cancer survivorship clinic were younger, on average, than patients seen by the other clinics (mean age 59 years vs 61-67 years across clinics; Table 1). Most patients were non-Hispanic White and were beneficiaries of Medicare. The smoking rate ranged between 8.9% (n=45 among 506 patients who had smoking status documented in the EHR; cancer survivorship clinic) and 17.8% (58/325; radiation oncology clinic 2).

Alert Completion Rate

The screening alert fired in 5121 (2.8% of 180,647) encounters during the 12 months post implementation. The alert completion rate was 0.55 (2817/5121) based on the staff’s acknowledgment of screening completion in the EHR and 0.32 (1647/5121) based on the completion of EHR documentation of screening results. Both alert completion rates remained stable over time (Figure 2A) but varied considerably across clinics (Figure 2B-D). Among the 2817 encounters where the staff acknowledged completion of screening, 84.7% were completions of interruptive alerts and 15.4% were completions of noninterruptive ones.

The support alert was implemented in 5 clinics (medical oncology clinics 2 and 3, radiation oncology clinics 2 and 3, and the cancer survivorship clinic) and fired in 1074 encounters. Providers responded without postponing (n=938, 87.3%), discussed tobacco use treatment options (n=640, 59.6%), identified patients who were ready to quit (n=129, 12%), and placed referrals to the cessation clinic (n=22, 2%).

Figure 2. Completion rate of tobacco screening alert for (A) all clinics and (B-D) individual clinics. Clinics in (C) and (D) were categorized into three levels based on the number of encounters in which a screening alert was fired during 12 months post alert implementation. Level 1: >1000; level 2: >100 and ≤1000; level 3: ≤100. Line thickness was used to represent these three levels. EHR: electronic health record. M1-M3: medical oncology clinics 1-3. R1-R3: radiation oncology clinics 1-3. S: cancer survivorship clinic.

The Burden of Interruptive Alerts

On average, the number of times a screening alert was fired before completion was 2.7 (range 1.0-12.7 for individual clinics; Table 2); the average number of times a support alert was fired before completion was 2.1 (range 1.8-3.3 for individual clinics; Table 2).

Table 2. Alert firing rate of the screening alert and the support alert by clinics.

| Alert | M1^a,d | M2^a | M3^a | R1^b,d | R2^b | R3^b | S^c |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Screening alert | 4.9^e | 1.3 | 2.2 | 2.1 | 1.0 | 4.0 | 12.7 |
| Support alert | N/A^f | 1.8 | 3.3 | N/A | 2.3 | 3.0 | 3.0 |

^a M1-M3: medical oncology clinics 1-3.

^b R1-R3: radiation oncology clinics 1-3.

^c S: cancer survivorship clinic.

^d M1 and R1 implemented only the screening alert.

^e We defined the alert firing rate as the number of times the alert fired during 12 months post alert implementation divided by the number of times the alert was completed during the same period. We did not calculate the alert firing rate at the encounter level because it was undefined (ie, division by 0) for encounters that did not complete the alert.

^f N/A: not applicable.

On average, time spent completing the screening alert per encounter was 53 seconds (support alert: 50 seconds); time spent postponing the screening alert per encounter was 52 seconds (support alert: 67 seconds).

Factors Associated With Alert Completion

Completion rates of the screening alert and the support alert were balanced across patient subgroups (sex, race, and their interaction).

Among 5121 encounters for which the screening alert was fired, 4425 (86.4%) were “relevant” and 696 (13.6%) were “less relevant” to routine tobacco screening. The alert completion rate for “relevant” encounters was higher than that for “less relevant” ones (2793/4425, 63.1% vs 24/696, 3.5%; P<.001).


Discussion

Principal Results

We developed and applied EHR activity metrics to monitor two tobacco cessation CDS alerts implemented in 7 cancer clinics. Our metrics were able to capture variation in alert completion across clinics, monitor alert efficacy, identify discrepancies between staff-acknowledged screening completion and screening documentation, and provide insights into the balance between alert efficacy and imposed burden. These findings inform four areas where CDS tool design or use can be improved (Table 3), which we discuss below.

Table 3. Key findings from the application of the electronic health record (EHR) activity metrics and implications for clinical decision support (CDS) tool design and use.
Finding: Variation in alert completion. Implication: potential for improving alert adoption/completion through local adaptation.

  • The screening alert completion rates varied substantially across the clinics. Strategy to support use: use clinic-specific strategies to support CDS tool adoption.
  • The screening alert completion rate was higher for encounters perceived as relevant to routine tobacco screening by physicians. Strategy to support use: consider this factor when promoting the use of tobacco cessation CDS tools among health providers.

Finding: Limited alert efficacy. Implication: potential for improving support alert efficacy.

  • Providers responded to most support alerts, but few patients were ready to quit, and referral to the tobacco cessation clinic was rare. Strategy to support use: use additional strategies (eg, patient education, provider training in patient-provider communication) to increase the impact of the CDS tools.

Finding: Inconsistencies between the acknowledgment of alert completion and documented screening. Implication: potential for improving the accuracy of tracking for alert completion.

  • EHR documentation of screening results was rare for some clinics, even though their clinic staff acknowledged completion of screening for most encounters. Design: improve CDS tool design to allow accurate tracking of screening completion at the alert level. Strategy to support use: use metrics that can accurately track screening completion, such as metrics calculated based on the completion of EHR documentation of screening results.

Finding: Interruptive alerts received more responses but also added burden to providers. Implication: importance of balancing alert efficacy with the burden.

  • Providers were more responsive to interruptive alerts than noninterruptive ones.
  • Postponing the interruptive alert did not save providers time compared with completing the alert. Design: increase the time interval between postponing and refiring an interruptive alert; set a threshold to limit the total number of times a tobacco cessation alert fires during a single encounter.

Improving Alert Adoption and Completion Through Local Adaptation

Clinics varied substantially in completing the alert, calling for clinic-specific strategies to improve alert adoption. We also identified a modifiable factor (ie, the alert encounter relevance) that affects alert completion. Our physician coauthors considered certain encounter types (eg, initial consultation and office visit) to be relevant for routine tobacco use screening, while others (eg, lab visit and radiation oncology treatment visit) were deemed less relevant. While existing guidelines recommend repeating the smoking assessment at every encounter [17,18], we found that the completion rate of the screening alert was much lower for “less relevant” encounters, which may appear to be guideline noncompliance. This finding could be informative for committees that develop tobacco screening and treatment guidelines. Implementation teams that want to enforce the “screening at every encounter” rule may need additional strategies. These could include using provider orientation and local champions to influence the culture surrounding tobacco screening [35].

Improving Support Alert Efficacy

Although providers responded to support alerts frequently, referral to the tobacco cessation clinic was rare. One reason was that few patients were ready to quit at the point of care. Future programs may incorporate additional strategies, such as patient education, provider training in patient-provider communication, and addressing patient-level barriers (eg, barriers associated with health beliefs and socioeconomic factors) [36,37]. Note that the 2% referral rate may underestimate the effect of the support alert because it was calculated based only on referrals directly linked to the alert. If tobacco treatment specialists contacted patients interested in quitting after their visits, these follow-up activities would be documented elsewhere in the EHR without a link to the alert; likewise, if a patient chose another treatment method (eg, quitline or medications), an alert-driven referral would not occur.

Improving Accuracy of Tracking for Alert Completion

The completion rates of the EHR documentation of screening results were lower for some clinics, even though their clinic staff acknowledged screening completion for most encounters. Through discussion with the team coordinating the tobacco cessation program, we identified one major reason for this gap. In clinics using support from patient navigators to complete screening documentation, the clinic staff were likely to bypass the screening but still acknowledged completion. Therefore, measuring EHR documentation is important for the accurate tracking of alert success. We used encounter-level data for this measurement. Alert-level tracking may be necessary for the future development of targeted strategies (eg, provider-specific training) to improve alert adoption. The alert design can be improved to allow this, for example, by disabling the button for acknowledging the completion of screening until the EHR documentation is completed.

Balancing Alert Efficacy With Burden

Although commonly used, effective integration of e-alerts into the clinical workflow has proven difficult [29-33,38,39]. Medication alerts were frequently overridden by health care providers [29,30,33,40], and providers experienced alert-related burden and fatigue [9,29,31,41]. Our study found that postponing the interruptive alert did not save providers time compared with completing the alert, partly because postponed alerts refired. An overabundance of interruptive alerts in EHRs may lead to frequent “postpone” or “override” actions and user dissatisfaction [31-33]. However, our findings do not support disabling the interruptive alerts, as providers were much more responsive to interruptive alerts than to noninterruptive ones. One way to alleviate the alert burden is to increase the time interval between postponing and refiring an alert or to set a maximum number of times (eg, 2 or 3) that a tobacco cessation alert can fire during a single encounter.

Contribution to Implementation Science Methods

New methods are needed for monitoring implementation, including automated approaches that reduce the data collection burden [7,42]. We contributed to this literature by developing automatic EHR activity metrics for monitoring the implementation of CDS tools. Our approach has three merits. First, automatic metrics are suitable for rapid periodic evaluation of implementation programs. These metrics can identify deviations and variations in CDS use at the clinic and provider levels, which may inform the selection of key informants for interviews to identify causes of deviation and variation, as well as the development of strategies to improve CDS design and use. Second, EHR activity data work “behind the scenes” to capture EHR use behavior without interruptions [43-45]. Metrics built on these data can reduce reporting bias and may minimize Hawthorne effects (ie, participants’ engagement with an intervention changes when they are aware of attention from observers) [46]. Third, EHRs have been adopted by most US hospitals [47], and EHR-embedded CDS tools are frequently used to support health care quality improvement [1-6]. The ubiquity of EHRs contributes to the generalizability of our approach.

Our work relates to studies using EHR audit logs (one type of EHR activity data) but differs in methodology. The metrics described in these studies measure EHR use and associated burden (eg, total time on the EHR, time spent using the EHR after hours, and time spent on chart review per patient per day) [48-52] but are not specific to CDS tools. Using EHR audit logs to measure providers’ response to a specific EHR tool is challenging, typically involving manual mapping of low-level actions recorded in the log files to EHR use activities [39,50,53]. We used alert activity data generated by Epic’s built-in functions to eliminate this manual mapping.

Prior studies on alert burden focused on medication alerts and used alert override rate and alert volume as markers for burden in the context of de-implementation [30-33,40]. To our knowledge, this study is the first to systematically measure the burden of preventive care alerts. Our findings did not support a simple de-implementation approach but call for better local adaptation to balance alert efficacy and burden.

Limitations

This study has several limitations. First, the EHR data we analyzed only contained alert-linked referrals to the tobacco cessation clinic. Our analysis may underestimate the actual effect of the support alert. Second, EHR activity data only capture provider interaction with the EHR and lack information about other clinical activities (eg, discussion with patients, pager ringing) during an encounter. In-depth investigations on clinical workflows and their impact on alert response are needed to better understand the variation of alert completion across clinics.

Conclusions

This study developed EHR activity metrics and demonstrated their use in monitoring the impact of CDS tools implemented by a C3I-funded implementation program that promotes tobacco cessation in patients with cancer. These metrics can be used to guide implementation adaptation and are scalable and adaptable to other settings that use e-alerts to promote adherence to health practice guidelines.

Acknowledgments

The tobacco cessation alerts were implemented by the Tobacco Control Center of Excellence for clinics at the Wake Forest Baptist Comprehensive Cancer Center. The Wake Forest Baptist Comprehensive Cancer Center was renamed Atrium Health Wake Forest Baptist Comprehensive Cancer Center in 2021.

This work was supported by the National Cancer Institute of the US National Institutes of Health (grants P50CA244693 and P30 CA012197, and CRDF award 66590 through the Cancer Center Cessation Initiative Coordinating Center contract) and the National Heart, Lung, and Blood Institute of the US National Institutes of Health (grant K12HL138049 to JC through the Massachusetts Consortium for Cardiopulmonary Implementation Science Scholars K12 Training Program). The study funder had no role in the design of the study; the collection, analysis, and interpretation of the data; and the decision to submit for publication.

Data Availability

Data supporting the study reported in this paper can be made available in deidentified form subject to establishing a data use agreement with the Wake Forest University School of Medicine. The code supporting this study can be accessed on GitHub [54].

Authors' Contributions

All authors take responsibility for the manuscript content, made critical revisions or contributed important intellectual content, and took the decision to submit the final manuscript. KLF, TKH, SLC, JC, ECD, and ELS obtained funding for this study. JC, TKH, and SLC conceptualized the study. JC designed and developed the electronic health record metrics, with advice from TKH and input from all coauthors. AB, AM, BO, SCB, and ERH obtained or provided data. JC, TKH, SLC, and AD analyzed data. JC and TKH visualized and interpreted the results. SLC, AD, SCB, KLF, ECD, and ELS provided critical feedback on metrics design and result interpretation. JC, TKH, SLC, and KLF wrote the first draft of the manuscript. JC, TKH, SLC, AD, SCB, KLF, BO, ERH, AB, AM, ECD, and ELS contributed to the manuscript revision.

Conflicts of Interest

AD serves as an EHR Consultant for the AAMC CORE program. AD is a co-inventor of WHIRL, which is licensed to IllumiCare, Inc. They have an ownership interest in the WHIRL application. AD is also a co-inventor of mPATH. They have equity in Digital Health Navigation (DHN) Solutions, which has licensed mPATH. None of these potential COIs overlap in any way with the content of the current study.

Multimedia Appendix 1

Clinical workflows associated with tobacco cessation alerts.

PDF File (Adobe PDF File), 801 KB

Multimedia Appendix 2

Electronic health record activity metrics.

PDF File (Adobe PDF File), 57 KB

Multimedia Appendix 3

Encounter types relevant to tobacco use screening for patients with cancer.

PDF File (Adobe PDF File), 13 KB

  1. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med 2020;3:17. [CrossRef] [Medline]
  2. Classification of digital health interventions v1.0: a shared language to describe the uses of digital technology for health. World Health Organization. 2018.   URL: https://apps.who.int/iris/handle/10665/260480 [accessed 2023-02-03]
  3. McCoy AB, Thomas EJ, Krousel-Wood M, Sittig DF. Clinical decision support alert appropriateness: a review and proposal for improvement. Ochsner J 2014;14(2):195-202 [FREE Full text] [Medline]
  4. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005 Apr 02;330(7494):765 [FREE Full text] [CrossRef] [Medline]
  5. Page N, Baysari MT, Westbrook JI. A systematic review of the effectiveness of interruptive medication prescribing alerts in hospital CPOE systems to change prescriber behavior and improve patient safety. Int J Med Inform 2017 Sep;105:22-30. [CrossRef] [Medline]
  6. Kwok R, Dinh M, Dinh D, Chu M. Improving adherence to asthma clinical guidelines and discharge documentation from emergency departments: implementation of a dynamic and integrated electronic decision support system. Emerg Med Australas 2009 Feb;21(1):31-37. [CrossRef] [Medline]
  7. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015 Mar 19;350:h1258 [FREE Full text] [CrossRef] [Medline]
  8. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2011 Mar;38(2):65-76 [FREE Full text] [CrossRef] [Medline]
  9. Gregory ME, Russo E, Singh H. Electronic health record alert-related workload as a predictor of burnout in primary care providers. Appl Clin Inform 2017 Jul 05;8(3):686-697 [FREE Full text] [CrossRef] [Medline]
  10. Fraser HSF, Mugisha M, Remera E, Ngenzi JL, Richards J, Santas X, et al. User perceptions and use of an enhanced electronic health record in Rwanda with and without clinical alerts: cross-sectional survey. JMIR Med Inform 2022 May 03;10(5):e32305 [FREE Full text] [CrossRef] [Medline]
  11. D'Angelo H, Land SR, Mayne RG. Assessing electronic nicotine delivery systems use at NCI-designated cancer centers in the Cancer Moonshot-funded Cancer Center Cessation Initiative. Cancer Prev Res (Phila) 2021 Aug;14(8):763-766 [FREE Full text] [CrossRef] [Medline]
  12. Zheng K, Ratwani RM, Adler-Milstein J. Studying workflow and workarounds in electronic health record-supported work to improve health system performance. Ann Intern Med 2020 Jun 02;172(11 Suppl):S116-S122 [FREE Full text] [CrossRef] [Medline]
  13. National Center for Chronic Disease Prevention and Health Promotion (US) Office on Smoking and Health. The Health Consequences of Smoking—50 Years of Progress: A Report of the Surgeon General. Atlanta, GA: Centers for Disease Control and Prevention; 2014.
  14. Toll BA, Brandon TH, Gritz ER, Warren GW, Herbst RS, AACR Subcommittee on Tobacco and Cancer. Assessing tobacco use by cancer patients and facilitating cessation: an American Association for Cancer Research policy statement. Clin Cancer Res 2013 Apr 15;19(8):1941-1948 [FREE Full text] [CrossRef] [Medline]
  15. Warren GW, Kasza KA, Reid ME, Cummings KM, Marshall JR. Smoking at diagnosis and survival in cancer patients. Int J Cancer 2013 Jan 15;132(2):401-410 [FREE Full text] [CrossRef] [Medline]
  16. Balduyck B, Sardari Nia P, Cogen A, Dockx Y, Lauwers P, Hendriks J, et al. The effect of smoking cessation on quality of life after lung cancer surgery. Eur J Cardiothorac Surg 2011 Dec;40(6):1432-7; discussion 1437. [CrossRef] [Medline]
  17. Shields PG, Herbst RS, Arenberg D, Benowitz NL, Bierut L, Luckart JB, et al. Smoking Cessation, Version 1.2016, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw 2016 Nov;14(11):1430-1468. [CrossRef] [Medline]
  18. Hanna N, Mulshine J, Wollins DS, Tyne C, Dresler C. Tobacco cessation and control a decade later: American society of clinical oncology policy statement update. J Clin Oncol 2013 Sep 01;31(25):3147-3157. [CrossRef] [Medline]
  19. Price SN, Studts JL, Hamann HA. Tobacco use assessment and treatment in cancer patients: a scoping review of oncology care clinician adherence to clinical practice guidelines in the U.S. Oncologist 2019 Feb;24(2):229-238 [FREE Full text] [CrossRef] [Medline]
  20. Croyle RT, Morgan GD, Fiore MC. Addressing a Core Gap in Cancer Care - The NCI Moonshot Program to Help Oncology Patients Stop Smoking. N Engl J Med 2019 Feb 07;380(6):512-515 [FREE Full text] [CrossRef] [Medline]
  21. Bernstein SL, Rosner J, DeWitt M, Tetrault J, Hsiao AL, Dziura J, et al. Design and implementation of decision support for tobacco dependence treatment in an inpatient electronic medical record: a randomized trial. Transl Behav Med 2017 Jun;7(2):185-195 [FREE Full text] [CrossRef] [Medline]
  22. Mahabee-Gittens EM, Dexheimer JW, Gordon JS. Development of a tobacco cessation clinical decision support system for pediatric emergency nurses. Comput Inform Nurs 2016 Dec;34(12):560-569 [FREE Full text] [CrossRef] [Medline]
  23. Schindler-Ruwisch JM, Abroms LC, Bernstein SL, Heminger CL. A content analysis of electronic health record (EHR) functionality to support tobacco treatment. Transl Behav Med 2017 Jun;7(2):148-156 [FREE Full text] [CrossRef] [Medline]
  24. Mathias JS, Didwania AK, Baker DW. Impact of an electronic alert and order set on smoking cessation medication prescription. Nicotine Tob Res 2012 Jun;14(6):674-681. [CrossRef] [Medline]
  25. Boyle R, Solberg L, Fiore M. Use of electronic health records to support smoking cessation. Cochrane Database Syst Rev 2014 Dec 30;2014(12):CD008743 [FREE Full text] [CrossRef] [Medline]
  26. Jose T, Ohde JW, Hays JT, Burke MV, Warner DO. Design and pilot implementation of an electronic health record-based system to automatically refer cancer patients to tobacco use treatment. Int J Environ Res Public Health 2020 Jun 06;17(11):4054 [FREE Full text] [CrossRef] [Medline]
  27. Ramsey AT, Chiu A, Baker T, Smock N, Chen J, Lester T, et al. Care-paradigm shift promoting smoking cessation treatment among cancer center patients via a low-burden strategy, Electronic Health Record-Enabled Evidence-Based Smoking Cessation Treatment. Transl Behav Med 2020 Dec 31;10(6):1504-1514 [FREE Full text] [CrossRef] [Medline]
  28. Khairat S, Marc D, Crosby W, Al Sanousi A. Reasons for physicians not adopting clinical decision support systems: critical analysis. JMIR Med Inform 2018 Apr 18;6(2):e24 [FREE Full text] [CrossRef] [Medline]
  29. Légat L, Van Laere S, Nyssen M, Steurbaut S, Dupont AG, Cornu P. Clinical decision support systems for drug allergy checking: systematic review. J Med Internet Res 2018 Sep 07;20(9):e258 [FREE Full text] [CrossRef] [Medline]
  30. Poly TN, Islam MM, Yang H, Li YJ. Appropriateness of overridden alerts in computerized physician order entry: systematic review. JMIR Med Inform 2020 Jul 20;8(7):e15653 [FREE Full text] [CrossRef] [Medline]
  31. McGreevey JD, Mallozzi CP, Perkins RM, Shelov E, Schreiber R. Reducing alert burden in electronic health records: state of the art recommendations from four health systems. Appl Clin Inform 2020 Jan;11(1):1-12 [FREE Full text] [CrossRef] [Medline]
  32. Chaparro JD, Hussain C, Lee JA, Hehmeyer J, Nguyen M, Hoffman J. Reducing interruptive alert burden using quality improvement methodology. Appl Clin Inform 2020 Jan;11(1):46-58 [FREE Full text] [CrossRef] [Medline]
  33. Genco EK, Forster JE, Flaten H, Goss F, Heard KJ, Hoppe J, et al. Clinically inconsequential alerts: the characteristics of opioid drug alerts and their utility in preventing adverse drug events in the emergency department. Ann Emerg Med 2016 Feb;67(2):240-248.e3 [FREE Full text] [CrossRef] [Medline]
  34. Stata Statistical Software: Release 15. Stata. College Station, TX: StataCorp LLC; 2017.   URL: https://www.stata.com [accessed 2023-02-03]
  35. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci 2015 Feb 12;10:21 [FREE Full text] [CrossRef] [Medline]
  36. Weaver KE, Danhauer SC, Tooze JA, Blackstock AW, Spangler J, Thomas L, et al. Smoking cessation counseling beliefs and behaviors of outpatient oncology providers. Oncologist 2012;17(3):455-462 [FREE Full text] [CrossRef] [Medline]
  37. Simmons VN, Litvin EB, Patel RD, Jacobsen PB, McCaffrey JC, Bepler G, et al. Patient-provider communication and perspectives on smoking cessation and relapse in the oncology setting. Patient Educ Couns 2009 Dec;77(3):398-403 [FREE Full text] [CrossRef] [Medline]
  38. Sidebottom AC, Collins B, Winden TJ, Knutson A, Britt HR. Reactions of nurses to the use of electronic health record alert features in an inpatient setting. Comput Inform Nurs 2012 Apr;30(4):218-26; quiz 227. [CrossRef] [Medline]
  39. Cutrona SL, Fouayzi H, Burns L, Sadasivam RS, Mazor KM, Gurwitz JH, et al. Primary care providers' opening of time-sensitive alerts sent to commercial electronic health record InBaskets. J Gen Intern Med 2017 Nov;32(11):1210-1219 [FREE Full text] [CrossRef] [Medline]
  40. Nanji KC, Slight SP, Seger DL, Cho I, Fiskio JM, Redden LM, et al. Overrides of medication-related clinical decision support alerts in outpatients. J Am Med Inform Assoc 2014;21(3):487-491 [FREE Full text] [CrossRef] [Medline]
  41. Singh H, Spitzmueller C, Petersen NJ, Sawhney MK, Sittig DF. Information overload and missed test results in electronic health record-based settings. JAMA Intern Med 2013 Apr 22;173(8):702-704 [FREE Full text] [CrossRef] [Medline]
  42. Willmeroth T, Wesselborg B, Kuske S. Implementation outcomes and indicators as a new challenge in health services research: a systematic scoping review. Inquiry 2019;56:46958019861257 [FREE Full text] [CrossRef] [Medline]
  43. Dumais S, Jeffries R, Russell D, Tang D, Teevan J. Understanding user behavior through log data and analysis. In: Olson J, Kellogg W, editors. Ways of Knowing in HCI. New York, NY: Springer; 2014:349-372.
  44. Huerta T, Fareed N, Hefner JL, Sieck CJ, Swoboda C, Taylor R, et al. Patient engagement as measured by inpatient portal use: methodology for log file analysis. J Med Internet Res 2019 Mar 25;21(3):e10957 [FREE Full text] [CrossRef] [Medline]
  45. Wang JK, Ouyang D, Hom J, Chi J, Chen JH. Characterizing electronic health record usage patterns of inpatient medicine residents using event log data. PLoS One 2019;14(2):e0205379 [FREE Full text] [CrossRef] [Medline]
  46. Audrey S, Holliday J, Parry-Langdon N, Campbell R. Meeting the challenges of implementing process evaluation within randomized controlled trials: the example of ASSIST (A Stop Smoking in Schools Trial). Health Educ Res 2006 Jun;21(3):366-377 [FREE Full text] [CrossRef] [Medline]
  47. Henry J, Pylypchuk Y, Searcy T, Patel V. Adoption of electronic health record systems among U.S. non-federal acute care hospitals: 2008-2015. HealthIT.gov.   URL: https://www.healthit.gov/data/data-briefs/adoption-electronic-health-record-systems-among-us-non-federal-acute-care-1 [accessed 2023-02-03]
  48. Rule A, Chiang MF, Hribar MR. Using electronic health record audit logs to study clinical activity: a systematic review of aims, measures, and methods. J Am Med Inform Assoc 2020 Mar 01;27(3):480-490 [FREE Full text] [CrossRef] [Medline]
  49. Sinsky CA, Rule A, Cohen G, Arndt BG, Shanafelt TD, Sharp CD, et al. Metrics for assessing physician activity using electronic health record log data. J Am Med Inform Assoc 2020 Apr 01;27(4):639-643 [FREE Full text] [CrossRef] [Medline]
  50. Adler-Milstein J, Adelman JS, Tai-Seale M, Patel VL, Dymek C. EHR audit logs: a new goldmine for health services research? J Biomed Inform 2020 Jan;101:103343 [FREE Full text] [CrossRef] [Medline]
  51. Lou SS, Lew D, Harford DR, Lu C, Evanoff BA, Duncan JG, et al. Temporal associations between EHR-derived workload, burnout, and errors: a prospective cohort study. J Gen Intern Med 2022 Jul;37(9):2165-2172. [CrossRef] [Medline]
  52. Lou SS, Liu H, Warner BC, Harford D, Lu C, Kannampallil T. Predicting physician burnout using clinical activity logs: model performance and lessons learned. J Biomed Inform 2022 Mar;127:104015. [CrossRef] [Medline]
  53. Amroze A, Field TS, Fouayzi H, Sundaresan D, Burns L, Garber L, et al. Use of electronic health record access and audit logs to identify physician actions following noninterruptive alert opening: descriptive study. JMIR Med Inform 2019 Feb 07;7(1):e12650 [FREE Full text] [CrossRef] [Medline]
  54. MIER metrics. GitHub.   URL: https://github.com/jchen2017/MIER_metrics [accessed 2023-02-10]


BPA: Best Practice Advisory
C3I: Cancer Center Cessation Initiative
CDS: clinical decision support
e-alerts: electronic alerts
EHR: electronic health record
NCI: National Cancer Institute
TCCOE: Tobacco Control Center of Excellence


Edited by C Perrin; submitted 29.09.22; peer-reviewed by J Hefner; comments to author 21.10.22; revised version received 21.11.22; accepted 18.01.23; published 02.03.23

Copyright

©Jinying Chen, Sarah L Cutrona, Ajay Dharod, Stephanie C Bunch, Kristie L Foley, Brian Ostasiewski, Erica R Hale, Aaron Bridges, Adam Moses, Eric C Donny, Erin L Sutfin, Thomas K Houston, iDAPT Implementation Science Center for Cancer Control. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 02.03.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.