Published in Vol 9, No 10 (2021): October

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/32303.
Harnessing the Electronic Health Record and Computerized Provider Order Entry Data for Resource Management During the COVID-19 Pandemic: Development of a Decision Tree


Original Paper

1Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, United States

2Department of Advanced Analytics and Informatics, Children's Health, Dallas, TX, United States

3Division of Pediatric Emergency Medicine, Department of Pediatrics, University of Texas Southwestern Medical Center, Dallas, TX, United States

4Division of Pediatric Hospital Medicine, Department of Pediatrics, University of Texas Southwestern Medical Center, Dallas, TX, United States

5Clinical Informatics Center, University of Texas Southwestern Medical Center, Dallas, TX, United States

6Department of Pediatrics, University of Texas Southwestern Medical Center, Dallas, TX, United States

7Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, United States

8Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, United States

9Division of Infectious Diseases, Department of Pediatrics, University of Texas Southwestern Medical Center, Dallas, TX, United States

Corresponding Author:

Hung S Luu, PharmD, MD

Department of Pathology

University of Texas Southwestern Medical Center

1935 Medical District Drive

Dallas, TX, 75235

United States

Phone: 1 2144562168

Fax: 1 2144564713

Email: hung.luu@childrens.com


Background: The COVID-19 pandemic has resulted in shortages of diagnostic tests, personal protective equipment, hospital beds, and other critical resources.

Objective: We sought to improve the management of scarce resources by leveraging electronic health record (EHR) functionality, computerized provider order entry, clinical decision support (CDS), and data analytics.

Methods: Due to the complex eligibility criteria for COVID-19 tests and the EHR implementation–related challenges of ordering these tests, care providers have faced obstacles in selecting the appropriate test modality. As test choice is dependent upon specific patient criteria, we built a decision tree within the EHR to automate the test selection process by using a branching series of questions that linked clinical criteria to the appropriate SARS-CoV-2 test and triggered an EHR flag for patients who met our institutional persons under investigation criteria.

Results: The percentage of tests that had to be canceled and reordered due to errors in selecting the correct testing modality was 3.8% (23/608) before CDS implementation and 1% (262/26,643) after CDS implementation (P<.001). Patients for whom multiple tests were ordered during a 24-hour period accounted for 0.8% (5/608) and 0.3% (76/26,643) of pre- and post-CDS implementation orders, respectively (P=.03). Nasopharyngeal molecular assay results were positive in 3.4% (826/24,170) of patients who were classified as asymptomatic and 10.9% (1421/13,074) of symptomatic patients (P<.001). Positive tests were more frequent among asymptomatic patients with a history of exposure to COVID-19 (36/283, 12.7%) than among asymptomatic patients without such a history (790/23,887, 3.3%; P<.001).

Conclusions: The leveraging of EHRs and our CDS algorithm resulted in a decreased incidence of order entry errors and the appropriate flagging of persons under investigation. These interventions optimized reagent and personal protective equipment usage. Data regarding symptoms and COVID-19 exposure status that were collected by using the decision tree correlated with the likelihood of positive test results, suggesting that clinicians appropriately used the questions in the decision tree algorithm.

JMIR Med Inform 2021;9(10):e32303

doi:10.2196/32303

Keywords



COVID-19 is caused by SARS-CoV-2 and has quickly emerged as a global pandemic since its initial description in December 2019 [1]. Increased testing and the isolation of patients with COVID-19 are important means of limiting the spread of infection. Many laboratories in the United States have rapidly expanded their testing capabilities [2]. As a result, the overall testing capacity in the United States is substantially larger than what the Centers for Disease Control and Prevention and state health agencies were able to provide at the start of the pandemic. Testing shortages, however, persisted throughout 2020 and, to a lesser extent, into 2021 due to inadequate supplies of collection swabs, viral transport media, RNA extraction reagents, and other reagents and consumables [3,4]. Institutions have had to prioritize testing by taking into account illness severity, how quickly results are needed, bed availability, and staffing needs [4].

Electronic health records (EHRs) and computerized provider order entry (CPOE) systems offer the potential to reduce the number of medical errors and improve care quality by facilitating communication, providing access to information, monitoring patients, providing decision support, and enhancing clinicians’ situational awareness [5-7]. However, EHRs can also inadvertently result in clinicians introducing new errors, overlooking existing orders, and duplicating work [8-10]. Apart from the need to reduce costs, preventing the duplicate testing of patients for COVID-19 is essential for conserving existing testing supplies and maximizing the number of patients who can be tested.

Although the availability of testing is important, so is the timely dissemination of test results to care providers so that valuable hospital resources, such as limited supplies of personal protective equipment (PPE), can be allocated effectively [4]. Testing capacities have increased since the early days of the pandemic, but the proliferation of different testing platforms and methodologies has led to variations in test turnaround times and assay sensitivity. Commercial vendors have produced high-throughput, cartridge-based instruments that promise shorter testing turnaround times; however, the demand for these instruments currently exceeds the available supply [4].

To meet the testing needs of our patient population despite equipment shortages, institutions such as our pediatric health care system had to assemble a variety of COVID-19 testing modalities with varying performance characteristics. Matching testing modalities to the appropriate clinical scenario was a challenge. Some institutions developed decision-making algorithms to stratify their patient population into risk groupings [11]. Herein, we describe and evaluate the CPOE clinical decision support (CDS) tools that were developed to optimize the ordering of COVID-19 tests; the EHR functionalities that were leveraged to manage persons under investigation (PUIs); and the data analysis tools that were essential for monitoring changing variables, such as ordering patterns and available reagent supplies.


Setting and Institutional Approach to Managing the COVID-19 Pandemic

Our academically affiliated pediatric health care system in North Texas consists of 3 acute care hospitals that are licensed for a total of 601 beds and 24 ambulatory specialty care centers. Together, these facilities care for more than 227,000 unique patients per year and have provided services, including more than 19,600 surgeries and 107,800 emergency department visits [12]. Our health system’s efforts in preparing for patients with COVID-19 began early in 2020 and included the activation of the Hospital Incident Command Structure on March 5. A sick isolation unit was opened on March 23 for the management of patients who did not require critical care and were either suspected of SARS-CoV-2 infection—designated as PUIs—or confirmed to be infected. The first positive SARS-CoV-2 test result for a patient in our system was received later that month (March 31). With the activation of the Hospital Incident Command Structure, we recognized that the pandemic would require an organized, sustainable, and adaptable approach to caring for children with COVID-19 while minimizing staff exposure and optimizing the use of PPE and testing reagents and supplies. In this study, we describe and evaluate tools that were developed within the EHR and were vital components of this approach.

As COVID-19 spread across the world and within the United States, the epidemiology of the disease morphed over time. First, cases were seen predominantly among patients who had been exposed to the disease during recent travel. Afterward, the disease began to spread within communities, but most new infections were still identified among individuals who had contact with a limited number of confirmed local cases. Finally, widespread community transmission developed, and many cases could no longer be reliably related to a known exposure or travel history [13-15]. In early 2020, the criteria recommended by the Centers for Disease Control and Prevention for identifying a person as a PUI changed several times [16,17]. Reflecting the changing disease epidemiology, these PUI definitions, which had initially focused on symptomatic individuals with a history of travel to Wuhan, China, or a history of contact with a laboratory-confirmed case of COVID-19, were later expanded to include travel from mainland China, travel from affected geographic areas within the United States, and, finally, even individuals with no known exposure risk factors [16]. Following the initial pandemic period, during which SARS-CoV-2 testing was available at our institution only through public health laboratories, the options for testing increased, first through offerings from commercial reference laboratories and then with the launch of an internal, laboratory-developed test with a turnaround time of approximately 24 hours. Later, our laboratory implemented commercial rapid testing platforms that offered further improvements in turnaround times for a limited number of specimens, depending on the availability of the required kits (Table 1).

Table 1. The SARS-CoV-2 assays implemented.

Modified CDCa SARS-CoV-2 Assay (laboratory-developed test)
Analyte: RNA. Sample collection: NPf swab in UTMg. SARS-CoV-2 target: nucleocapsid gene. SARS-CoV-2 LoDi: 260 copies/mL. Other target(s): none. Instrument(s): EMAG (extraction; bioMérieux SA) and ABIk 7500 (polymerase chain reaction; Thermo Fisher Scientific). Maximum throughputl: 150 samples/8-hour shift (extraction and polymerase chain reaction). Time to results, mean (SD)m: 0.79 (0.85) days.

BioFire Respiratory Panel 2.1 (bioMérieux SA)
Analyte: RNA. Sample collection: NP swab in UTM. SARS-CoV-2 target: membrane gene and surface gene. SARS-CoV-2 LoD: 160 copies/mL. Other target(s): 21 additional viruses and bacteria. Instrument(s): FilmArray Torch System (bioMérieux SA). Maximum throughput: <1 hour/test/instrument module. Time to results, mean (SD): 70 (17) min.

Xpert Xpress SARS-CoV-2 (Cepheid)
Analyte: RNA. Sample collection: NP swab in UTM. SARS-CoV-2 target: envelope gene and nucleocapsid 2 gene. SARS-CoV-2 LoD: 250 copies/mL. Other target(s): none. Instrument(s): GeneXpert XVI (Cepheid). Maximum throughput: <1 hour/test/instrument module. Time to results, mean (SD): 77 (29) min.

SARSb Antigen FIAc (Quidel Corporation)
Analyte: antigen. Sample collection: anterior nares swab. SARS-CoV-2 target: nucleocapsid protein. SARS-CoV-2 LoD: 113 TCID50j/mL. Other target(s): none. Instrument(s): Sofia 2 (Quidel Corporation). Maximum throughput: 20 min/test/instrument module. Time to results, mean (SD): 27 (5) min.

Alinity m SARS-CoV-2 Assay (Abbott Laboratories)d
Analyte: RNA. Sample collection: NP swab in UTM. SARS-CoV-2 target: nucleocapsid gene and RdRph gene. SARS-CoV-2 LoD: 100 copies/mL. Other target(s): none. Instrument(s): Alinity m System (Abbott Laboratories). Maximum throughput: 300 tests/8-hour shift. Time to results, mean (SD): 0.53 (0.35) days.

Cobas SARS-CoV-2 (Roche Holding AG)e
Analyte: RNA. Sample collection: NP swab in UTM. SARS-CoV-2 target: envelope gene and RdRp gene. SARS-CoV-2 LoD: 0.003 TCID50/mL. Other target(s): none. Instrument(s): Cobas 6800 (Roche Holding AG). Maximum throughput: 864 tests/8-hour shift. Time to results, mean (SD): 2.03 (1.56) days.

aCDC: Centers for Disease Control and Prevention.

bSARS: severe acute respiratory syndrome.

cFIA: fluorescent immunoassay.

dThe assay was performed at reference lab 1.

eThe assay was performed at reference lab 2.

fNP: nasopharyngeal.

gUTM: universal transport medium.

hRdRp: RNA-dependent RNA polymerase.

iLoD: limit of detection (the LoD shown is either the lowest reported [highest sensitivity] value on the package insert or the lowest value observed in the laboratory).

jTCID50: median tissue culture infectious dose.

kABI: Applied Biosystems.

lMaximum throughput assumes sufficient reagents. Maximum throughput volumes were not achieved for most platforms due to limited reagent allocations.

mThe time from specimen (primary orders) or order (add-on orders) receipt in the lab to result reporting. This includes transport to outside labs (send-out testing only), laboratory processing, sample preparation, instrument time, and result reporting.

In response to the COVID-19 pandemic, new institutional policies and procedures were instituted in parallel with the evolving understanding of the disease’s epidemiology, the illness, and SARS-CoV-2 transmission. These changes included the adoption (on April 28, 2020) of universal SARS-CoV-2 testing for all patients who were admitted through the emergency department or directly to inpatient floors and the intensive care unit. At first, rapid testing was prioritized for patients with fevers or respiratory symptoms and those who had close contact with individuals with SARS-CoV-2 infection, while other patients were tested by using the laboratory-developed test. This strategy directed limited rapid testing resources toward patients with the highest likelihood of infection but delayed the identification of asymptomatic positive cases, which represent a considerable portion of SARS-CoV-2 infections in children. As rapid testing became increasingly available, it was subsequently deployed for all admitted patients.

To optimize the use of resources, such as negative pressure rooms and PPE, we developed a policy for aerosol-generating procedures (AGPs). The policy governed the performance of AGPs, including any preceding SARS-CoV-2 testing and PPE requirements, in a systematic manner that was driven by patients’ symptoms, their COVID-19 status (if known), the prevalence of infection in the community, and the classification of AGPs into 2 risk tiers. SARS-CoV-2 testing was initially required in advance for all patients undergoing scheduled surgery, and the empiric use of PPE, including N95 respirators, was reserved for urgent or emergent cases when testing was not feasible. As community spread increased and access to rapid testing improved, the testing requirement was extended to any urgent surgical procedures for which sufficient time was available.

EHR Decision Tree for SARS-CoV-2 Test Order Placement

Given the scarcity of testing resources and the growing demand during the early phase of the pandemic, formal criteria for SARS-CoV-2 testing were developed at our institution through consensus among physician and clinical laboratory leaders. Prior to the pandemic, our institution neither restricted the ordering of assays for non–SARS-CoV-2 respiratory viruses nor systematically collected data on the reasons for ordering such tests. Developing an ordering system that would be intuitive for clinicians to use and would capture data to guide the prioritization of orders and subsequent revisions to ordering indications was therefore an important priority. However, the criteria for ordering specific COVID-19 tests were complex, and the associated order metadata were frequently revised as new clinical scenarios were incorporated and new testing options became available. Implementing the detailed ordering criteria in the EHR posed a challenge that grew with the number of available testing options. More importantly, the growing list of testing indications was a hard-to-navigate obstacle for care providers who needed to place orders. To ease the burden of ordering the correct test from a long list of choices, we built a decision tree within the EHR that automated the selection process based on answers provided to a branching set of hierarchical questions. This decision tree (Figure 1) was first implemented on April 28, 2020, and was frequently modified during the early response to the pandemic.

Figure 1. Electronic health record decision tree for ordering SARS-CoV-2 tests. This flow diagram shows the branching set of hierarchical questions that resulted in the capture of data for test prioritization and symptom status identification. LDT: laboratory-developed test; PUI: person under investigation; RP2.1: BioFire Respiratory Panel 2.1.
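To make the branching logic concrete, the following is a minimal sketch (in Python) of how a series of order-entry questions could map clinician answers to a test choice and a PUI flag. The field names, test labels, and simplified branches are illustrative assumptions, not the institution's actual EHR build, which contained many more indications (Figure 1).

```python
# Illustrative sketch of a branching order-entry questionnaire; field names,
# test labels, and branch logic are hypothetical, not the production EHR build.
from dataclasses import dataclass

@dataclass
class OrderDecision:
    test: str        # which assay to order
    flag_pui: bool   # whether the patient should be flagged as a PUI

def select_covid_order(symptomatic: bool,
                       known_exposure: bool,
                       admission_or_preprocedural: bool,
                       rapid_testing_available: bool) -> OrderDecision:
    """Walk a simplified branching series of questions, loosely following Figure 1."""
    if symptomatic or known_exposure:
        # Symptomatic or exposed patients are flagged as PUIs and prioritized
        # for rapid molecular testing when kits are available.
        test = ("Rapid NP molecular assay" if rapid_testing_available
                else "Laboratory-developed NP molecular assay")
        return OrderDecision(test=test, flag_pui=True)
    if admission_or_preprocedural:
        # Routine screening of asymptomatic, unexposed patients does not imply
        # clinical suspicion, so no PUI flag is set.
        return OrderDecision(test="Laboratory-developed NP molecular assay",
                             flag_pui=False)
    # Indications outside the approved list are routed for infectious
    # diseases approval rather than ordered directly.
    return OrderDecision(test="Requires Infectious Diseases approval",
                         flag_pui=False)

# Example: an asymptomatic admission screen with no known exposure.
print(select_covid_order(symptomatic=False, known_exposure=False,
                         admission_or_preprocedural=True,
                         rapid_testing_available=True))
```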

PUI Flagging in the EHR

In addition to linking clinical indications to the appropriate SARS-CoV-2 test, the ordering process required setting a flag in the EHR for any patient who met our institutional PUI criteria. The flag alerted health care personnel to a patient’s PUI status and to the need for PPE beyond standard precautions, including N95 respirators or powered air-purifying respirators, when caring for these patients.

Testing for an infectious disease usually implies a clinical index of suspicion that, in itself, may justify flagging patients in the EHR for possible infection with that disease. In the case of COVID-19, however, institutional policies required SARS-CoV-2 testing upon admission or before surgery for all patients, even in the absence of symptoms or exposure, thereby rendering the presence of an ordered test functionally meaningless as an indicator of clinical suspicion. Although some patients may have been asymptomatic carriers and thus could have exposed the workforce, the pretest probability of infection in such patients was not expected to exceed that of the general population. The universal use of N95 respirators for all health care encounters during the pandemic was neither recommended nor feasible, given the limited supplies. Therefore, our institution decided that patients without compatible symptoms or recent exposure to SARS-CoV-2 would not be designated as PUIs, even when routine testing was required by institutional screening protocols. Consequently, in addition to guiding the selection of the correct SARS-CoV-2 test, the decision tree needed to assign the appropriate PUI status to each patient based on the indication for testing.

The introduction of additional testing modalities that were less sensitive than molecular testing of nasopharyngeal samples presented another challenge. Although positive results from these less sensitive assays were considered reliable, negative results were not and required confirmation with a more sensitive molecular test. Accordingly, the EHR rules for clearing PUI flags were constructed to require a negative molecular test from a nasopharyngeal sample, even if the flag had originally been triggered by an order for a less sensitive screening test.
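As an illustration of this clearance rule, the sketch below clears a PUI flag only when a negative molecular result from a nasopharyngeal sample is present and no positive result exists. The result fields and method labels are hypothetical assumptions, not the actual EHR rule definitions.

```python
# Illustrative sketch of the PUI flag-clearance rule: a flag is cleared only
# by a negative molecular result from a nasopharyngeal (NP) sample, even if
# the flag was triggered by a less sensitive screening test. Field names and
# method labels are hypothetical.
from typing import Iterable

MOLECULAR_NP_METHODS = {"LDT PCR (NP)", "Rapid PCR (NP)", "Reference lab PCR (NP)"}

def can_clear_pui_flag(results: Iterable[dict]) -> bool:
    """True only if a negative NP molecular result exists and no result is positive."""
    has_negative_np_molecular = False
    for r in results:
        if r["result"] == "positive":
            return False  # any positive result keeps the flag in place
        if r["method"] in MOLECULAR_NP_METHODS and r["result"] == "negative":
            has_negative_np_molecular = True
    return has_negative_np_molecular

# A negative antigen screen alone is not sufficient to clear the flag.
print(can_clear_pui_flag([{"method": "Antigen FIA (anterior nares)",
                           "result": "negative"}]))            # False
print(can_clear_pui_flag([{"method": "Antigen FIA (anterior nares)",
                           "result": "negative"},
                          {"method": "LDT PCR (NP)",
                           "result": "negative"}]))            # True
```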

A flagging system was also created for displaying results from SARS-CoV-2 tests that had been performed at outside facilities with interoperable EHRs. Such outside test results were either flagged as reliable and approved by our system’s laboratory as equivalent to internal testing results (via the Happy Together EHR collaborative, which includes Children’s Health, Parkland Hospital, and University of Texas Southwestern Medical Center) or flagged as results for which equivalence to internal testing could not be established. Whether flagged patients were being seen in the emergency department or were directly admitted to the wards, the availability of this information allowed bedside physicians to avoid unnecessary SARS-CoV-2 testing, thereby minimizing the waste of limited testing resources.

EHR Tools and the Maintenance of PPE Supplies

Like many US health care institutions, early in the pandemic, we recognized the potential for a shortfall in the critical PPE supplies required for the care of patients with COVID-19, including N95 respirators. Providing appropriate protection to health care workers while minimizing PPE consumption made the accurate identification and flagging of PUIs essential. Our supply of N95 respirators reached a nadir in late March—less than 14 days’ worth of stock on hand overall and less than 7 days’ worth of supply for the scarcest respirator size—but subsequently recovered. Although multiple concurrent strategies, including UV reprocessing and the enhanced scrutiny of N95 respirator usage, also contributed to the successful management of this shortfall, the proper assignment of PUI statuses was a critical component in the struggle to reduce PPE use. Improvements in the national supply of N95 respirators have since reduced the acute importance of these considerations, but the strategies developed during the COVID-19 pandemic for managing limited PPE supplies will be beneficial approaches to dealing with future resource challenges.
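For context, the "days of supply on hand" figures cited above are conventionally computed as current inventory divided by average daily consumption. The short sketch below illustrates the arithmetic with made-up numbers rather than institutional data.

```python
# Hedged illustration of the "days of supply on hand" metric: inventory
# divided by average daily consumption. The figures used are hypothetical.
def days_of_supply(units_on_hand: int, avg_daily_use: float) -> float:
    return units_on_hand / avg_daily_use

# Example with hypothetical figures for one respirator size.
print(f"{days_of_supply(units_on_hand=350, avg_daily_use=60):.1f} days")  # ~5.8 days
```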


SARS-CoV-2 Test Ordering Metrics

The frequencies with which orders for SARS-CoV-2 tests needed to be revised due to user error or had to be repeated were used as measures for the impact of the CDS tools. The percentage of tests that were canceled and reordered due to errors in selecting the correct testing modality was 3.8% (23/608) prior to CDS implementation and 1% (262/26,643) after the implementation of CDS (Fisher exact test: P<.001). The percentages of patients for whom multiple tests were ordered during a 24-hour period were 0.8% (5/608) and 0.3% (76/26,643) prior to and after CDS implementation, respectively, as of October 31, 2020 (Fisher exact test: P=.03).
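These comparisons can be reproduced from the counts quoted above with a two-sided Fisher exact test; the sketch below uses scipy for illustration, as the statistical software used for the original analysis is not specified here. The same approach applies to the positivity comparisons reported in the next section.

```python
# Reproducing the pre- vs post-CDS comparisons with a two-sided Fisher exact
# test, using the counts quoted in the text. Shown for illustration only.
from scipy.stats import fisher_exact

# Canceled-and-reordered tests: 23/608 before CDS vs 262/26,643 after CDS.
cancel_table = [[23, 608 - 23], [262, 26_643 - 262]]
_, p_cancel = fisher_exact(cancel_table)

# Duplicate orders within 24 hours: 5/608 before vs 76/26,643 after.
dup_table = [[5, 608 - 5], [76, 26_643 - 76]]
_, p_dup = fisher_exact(dup_table)

print(f"Cancel/reorder: P = {p_cancel:.3g}")  # reported as P<.001
print(f"Duplicate orders: P = {p_dup:.2f}")   # reported as P=.03
```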

SARS-CoV-2 Infection Frequency

If the information captured by the decision tree regarding the assignment of SARS-CoV-2 test modalities and PUI statuses accurately reflected the risk of infection, it would be expected that the incidence of positive test results would vary accordingly. Patients were classified as symptomatic or asymptomatic via the decision tree based on the presence or absence of a fever without an identified source or the presence of respiratory symptoms. Consistent with our expectations, the observed frequency of positive nasopharyngeal molecular assays for asymptomatic patients (826/24,170, 3.4%; Table 2) was significantly lower (Fisher exact test: P<.001) than that frequency for symptomatic patients (1421/13,074, 10.9%). Likewise, the incidence of positive test results was higher among asymptomatic patients with a history of exposure to an individual with COVID-19 (36/283, 12.7%) than among asymptomatic patients without such an exposure history (790/23,887, 3.3%; Fisher exact test: P<.001).

Table 2. SARS-CoV-2 testing volumes and results by ordering indication. For each testing indication categorya, values are shown as testing volumeb (N); positive testsb, n (%).

Asymptomatic patientsc: 24,170; 826 (3.4)
  Preprocedural screeningd: 12,864; 428 (3.3)
  Admission screening: 10,625; 329 (3.1)
  Screening before behavioral health placement: 398; 33 (8.3)
  Admission screening of asymptomatic patients with a history of close contact with an individual with COVID-19: 283; 36 (12.7)

Symptomatic patients: 13,074; 1421 (10.9)
  Admission screening or hospitalized patients: 5573; 433 (7.8)
  Preprocedural screeningd: 298; 31 (10.4)
  Outpatients with risk factors for severe illness: 307; 48 (15.6)
  Lower respiratory tract disease without an alternative explanatione: 30; 3 (10)
  Symptomatic patient with a history of close contact with an individual with COVID-19e: 3; 0 (0)
  Symptomatic patient without other specified criteria: 6863; 906 (13.2)

Symptom status not specified: 15,341; 1146 (7.5)
  Preprocedural screeningd: 6796; 177 (2.6)
  Unrestricted send-out testing: 5330; 791 (14.8)
  Testing approved by the Division of Infectious Diseases: 535; 72 (13.5)
  Patient screening after health care exposure: 89; 2 (2.2)
  Unclassified testing: 2591; 104 (4)

Total testing: 52,585; 3393 (6.5)

aThe testing indication categories listed summarize a larger number of actual indications displayed in the electronic health record, which were dynamically modified over the course of the pandemic.

bTesting data cover the period from March 13, 2020, through March 24, 2021.

cPatients without fevers and without respiratory symptoms were classified as asymptomatic.

dIncludes testing before surgery and other qualifying aerosol-generating procedures.

eThese criteria were used only briefly during the early phase of the pandemic, after which test eligibility was expanded to include all symptomatic patients and these specific criteria were no longer needed.

Another group of asymptomatic patients for whom we observed a significantly increased incidence of positive SARS-CoV-2 test results included patients awaiting behavioral health placement (33/398, 8.3%; other asymptomatic patients without a history of COVID-19 exposure: 757/23,489, 3.2%; P<.001). The reason for this increased positivity rate is unclear, but some of these patients likely had a history of prior infection and were referred to our facilities for repeated testing before behavioral health placement to assess for viral clearance. Furthermore, the behavior patterns of these patients may have included decreased adherence to prevention measures such as mask wearing and social distancing, which placed them at an increased infection risk.

When resources were most limited, testing for symptomatic patients was initially targeted toward those who (1) required hospitalization, (2) had comorbid conditions that increased their risk of developing serious illness, (3) had a history of COVID-19 exposure, or (4) had a lower respiratory tract infection without another explanation. As the availability of test reagents improved, test eligibility was expanded to include all symptomatic patients, and several of these more specific indications were retired. However, clinicians continued to use the decision tree to identify hospitalized patients and those with risk factors for severe illness so that such patients could be prioritized for rapid testing. All symptomatic patients were designated as PUIs, even when the decision tree did not require more detailed information.

Symptom status was not captured for a subset of test orders (15,341/52,585, 29.2%). Many of these tests were either sent out to off-site laboratories for nonhospitalized patients or collected as screening tests several days in advance of a scheduled procedure. In the first case, symptomatic patients were instructed to isolate at home pending the test result. In the second case, presurgical screening results were generally available by the time patients returned for surgery. The empiric assignment of PUI status in the EHR at the time of testing was therefore not prioritized for these patients. Since September 2020, however, implementation improvements have resulted in symptom information being captured consistently for ≥80% of tested patients each month.
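A monthly capture rate of this kind can be monitored from order-level extracts; the following sketch illustrates one way to compute it, using hypothetical field names rather than the actual EHR data model.

```python
# A minimal sketch of monitoring monthly symptom-status capture from
# order-level extracts. Field names ("ordered_on", "symptom_status") are
# hypothetical; the paper reports capture of >=80% per month from Sept 2020.
from collections import defaultdict
from datetime import date

def monthly_capture_rate(orders: list[dict]) -> dict[str, float]:
    """Fraction of orders per month with a recorded symptom status."""
    totals: dict[str, int] = defaultdict(int)
    captured: dict[str, int] = defaultdict(int)
    for o in orders:
        month = o["ordered_on"].strftime("%Y-%m")
        totals[month] += 1
        if o.get("symptom_status") in {"symptomatic", "asymptomatic"}:
            captured[month] += 1
    return {m: captured[m] / totals[m] for m in sorted(totals)}

sample = [
    {"ordered_on": date(2020, 9, 2), "symptom_status": "asymptomatic"},
    {"ordered_on": date(2020, 9, 15), "symptom_status": None},
    {"ordered_on": date(2020, 10, 1), "symptom_status": "symptomatic"},
]
print(monthly_capture_rate(sample))  # {'2020-09': 0.5, '2020-10': 1.0}
```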

To manage rare or unanticipated circumstances, our testing algorithm allowed physicians in the Division of Infectious Diseases to authorize testing for patients whose testing indications fell outside those approved and implemented in the EHR. Once off-site testing became unrestricted, this approval option was used primarily for requests for locally performed tests that offered a shorter turnaround time or for patients with clinical indications that favored a specific testing platform. This approval route was needed for only 1% (535/52,585) of orders, indicating that the decision tree effectively managed the large majority of scenarios and prevented the approval activity from becoming an excessive burden on the physicians tasked with evaluating these nonstandard requests. The yield of positive results from tests approved by infectious disease physicians was high (72/535, 13.5%), as was the frequency of positive results among unrestricted send-out tests (791/5330, 14.8%). These high rates of positive results suggest that clinicians were applying appropriate judgment when selecting patients for testing through these ordering options.


Principal Findings

During the period following the implementation of CDS for SARS-CoV-2 test ordering, we documented improvements in the number of cancelled and reordered tests as well as decreases in the number of patients who underwent unnecessary duplicate testing. The goals of CPOE systems include submitting appropriate and efficient orders for patients [5]. Based on our data, it can be argued that this was indeed accomplished by using the decision tree for SARS-CoV-2 test ordering to help clinicians navigate the complex test eligibility criteria. However, the implementation of CPOE and CDS systems has been found to provoke strong emotions in care providers, with negative emotions being the most prevalent. In addition to contributing to the stressors that care providers already face, poorly implemented CDSs can fail if they are too cumbersome to be used as intended [18]. A successful CDS system needs to (1) provide clinicians with the best available knowledge when needed, (2) be highly adopted, (3) be effectively used, and (4) result in continuous improvements in knowledge [19].

Evaluating the effective adoption of CDS can be difficult, as care providers always have the option of selecting criteria randomly in order to complete the ordering process. When evaluating the positivity rates for the patient groups that were defined by the decision tree algorithm, we found statistically significant differences (as expected) in rates of SARS-CoV-2 test positivity between asymptomatic and symptomatic patients and between asymptomatic patients without a history of exposure to SARS-CoV-2 and asymptomatic patients with a history of such exposure. These findings suggest that clinicians appropriately used the questions in the CDS algorithm to help triage patients.

Limitations

Our study has several limitations. First, this was an observational study and not a randomized controlled trial. Therefore, other interventions and institutional changes could have explained the decrease in order error rates. Second, the period prior to the implementation of CDS was relatively brief; during this period, a comparatively lower volume of testing was performed. Third, the decision tree was continually modified over time; new indications, such as patients awaiting behavioral health placement, were added relatively late into the pandemic. Some of the positivity rates that were observed in particular patient cohorts could have been influenced by fluctuations in the infection rate within the community.

Conclusions

The leveraging of the EHR and implementation of the decision support algorithm resulted in the decreased incidence of order entry errors, including decreases in the percentage of cancelled and reordered SARS-CoV-2 tests and the rate of duplicate testing, and the appropriate flagging of PUIs. Collectively, these interventions optimized reagent and PPE usage and protected health care workers. The data gathered through the decision tree could be used to predict differences in the likelihood of positive test results for distinct categories of patients, suggesting that clinicians appropriately used the questions in the decision tree algorithm.

Conflicts of Interest

LMF is an unpaid advisory board member for Avsana Labs and has received grant funding for an investigator-initiated study from Biofire Diagnostics.

  1. Pneumonia of unknown cause – China. World Health Organization. 2020 Jan 05.   URL: https://www.who.int/emergencies/disease-outbreak-news/item/2020-DON229 [accessed 2021-08-16]
  2. Zitek T. The appropriate use of testing for COVID-19. West J Emerg Med 2020 Apr 13;21(3):470-472 [FREE Full text] [CrossRef] [Medline]
  3. Beeching NJ, Fletcher TE, Beadsworth MBJ. Covid-19: testing times. BMJ 2020 Apr 08;369:m1403. [CrossRef] [Medline]
  4. Babiker A, Myers CW, Hill CE, Guarner J. SARS-CoV-2 testing. Am J Clin Pathol 2020 May 05;153(6):706-708 [FREE Full text] [CrossRef] [Medline]
  5. Payne TH, Hoey PJ, Nichol P, Lovis C. Preparation and use of preconstructed orders, order sets, and order menus in a computerized provider order entry system. J Am Med Inform Assoc 2003;10(4):322-329 [FREE Full text] [CrossRef] [Medline]
  6. Horng S, Joseph JW, Calder S, Stevens JP, O'Donoghue AL, Safran C, et al. Assessment of unintentional duplicate orders by emergency department clinicians before and after implementation of a visual aid in the electronic health record ordering system. JAMA Netw Open 2019 Dec 02;2(12):e1916499 [FREE Full text] [CrossRef] [Medline]
  7. Westbrook JI, Li L, Raban MZ, Baysari MT, Mumford V, Prgomet M, et al. Stepped-wedge cluster randomised controlled trial to assess the effectiveness of an electronic medication management system to reduce medication errors, adverse drug events and average length of stay at two paediatric hospitals: a study protocol. BMJ Open 2016 Oct 21;6(10):e011811 [FREE Full text] [CrossRef] [Medline]
  8. Magid S, Forrer C, Shaha S. Duplicate orders: an unintended consequence of computerized provider/physician order entry (CPOE) implementation: analysis and mitigation strategies. Appl Clin Inform 2012 Oct 17;3(4):377-391 [FREE Full text] [CrossRef] [Medline]
  9. Wetterneck TB, Walker JM, Blosky MA, Cartmill RS, Hoonakker P, Johnson MA, et al. Factors contributing to an increase in duplicate medication order errors after CPOE implementation. J Am Med Inform Assoc 2011;18(6):774-782 [FREE Full text] [CrossRef] [Medline]
  10. Tsou AY, Lehmann CU, Michel J, Solomon R, Possanza L, Gandhi T. Safe practices for copy and paste in the EHR. Systematic review, recommendations, and novel model for health IT collaboration. Appl Clin Inform 2017 Jan 11;8(1):12-34 [FREE Full text] [CrossRef] [Medline]
  11. Feketea GM, Vlacha V. A decision-making algorithm for children with suspected coronavirus disease 2019. JAMA Pediatr 2020 Dec 01;174(12):1220-1222. [CrossRef] [Medline]
  12. 2020 key metrics. Children's Health.   URL: https://www.childrens.com/footer/about/our-system/key-metrics [accessed 2021-08-16]
  13. Schuchat A, CDC COVID-19 Response Team. Public health response to the initiation and spread of pandemic COVID-19 in the United States, February 24-April 21, 2020. MMWR Morb Mortal Wkly Rep 2020 May 08;69(18):551-556 [FREE Full text] [CrossRef] [Medline]
  14. Oster AM, Kang GJ, Cha AE, Beresovsky V, Rose CE, Rainisch G, et al. Trends in number and distribution of COVID-19 hotspot counties - United States, March 8-July 15, 2020. MMWR Morb Mortal Wkly Rep 2020 Aug 21;69(33):1127-1132 [FREE Full text] [CrossRef] [Medline]
  15. Myers JF, Snyder RE, Porse CC, Tecle S, Lowenthal P, Danforth ME, Traveler Monitoring Team. Identification and monitoring of international travelers during the initial phase of an outbreak of COVID-19 - California, February 3-March 17, 2020. MMWR Morb Mortal Wkly Rep 2020 May 15;69(19):599-602 [FREE Full text] [CrossRef] [Medline]
  16. McGovern OL, Stenger M, Oliver SE, Anderson TC, Isenhour C, Mauldin MR, et al. Demographic, clinical, and epidemiologic characteristics of persons under investigation for coronavirus disease 2019-United States, January 17-February 29, 2020. PLoS One 2021 Apr 15;16(4):e0249901. [CrossRef] [Medline]
  17. Updated guidance on evaluating and testing persons for coronavirus disease 2019 (COVID-19). Centers for Disease Control and Prevention.   URL: https://emergency.cdc.gov/han/2020/han00429.asp [accessed 2021-08-16]
  18. Sittig DF, Krall M, Kaalaas-Sittig J, Ash JS. Emotional aspects of computer-based provider order entry: a qualitative study. J Am Med Inform Assoc 2005;12(5):561-567 [FREE Full text] [CrossRef] [Medline]
  19. Middleton B, Sittig DF, Wright A. Clinical decision support: a 25 year retrospective and a 25 year vision. Yearb Med Inform 2016 Aug 02;Suppl 1(Suppl 1):S103-S116 [FREE Full text] [CrossRef] [Medline]


AGP: aerosol-generating procedure
CDS: clinical decision support
CPOE: computerized provider order entry
EHR: electronic health record
PPE: personal protective equipment
PUI: person under investigation


Edited by G Eysenbach; submitted 22.07.21; peer-reviewed by J Walsh; comments to author 13.08.21; revised version received 18.08.21; accepted 19.09.21; published 18.10.21

Copyright

©Hung S Luu, Laura M Filkins, Jason Y Park, Dinesh Rakheja, Jefferson Tweed, Christopher Menzies, Vincent J Wang, Vineeta Mittal, Christoph U Lehmann, Michael E Sebert. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 18.10.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.