Published on 22.01.19 in Vol 7, No 1 (2019): Jan-Mar

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/12591.
Predicting Appropriate Hospital Admission of Emergency Department Patients with Bronchiolitis: Secondary Analysis


Original Paper

1Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, United States

2Department of Pediatrics, University of Utah, Salt Lake City, UT, United States

3Care Transformation, Intermountain Healthcare, Salt Lake City, UT, United States

Corresponding Author:

Gang Luo, PhD

Department of Biomedical Informatics and Medical Education

University of Washington

Building C, Box 358047

850 Republican Street

Seattle, WA, 98195

United States

Phone: 1 206 221 4596

Fax: 1 206 221 2671

Email: gangluo@cs.wisc.edu


Background: In children below the age of 2 years, bronchiolitis is the most common reason for hospitalization. Each year in the United States, bronchiolitis causes 287,000 emergency department visits, 32%-40% of which result in hospitalization. Due to a lack of evidence and objective criteria for managing bronchiolitis, clinicians often make emergency department disposition decisions on hospitalization or discharge to home subjectively, leading to large practice variation. Our recent study provided the first operational definition of appropriate hospital admission for emergency department patients with bronchiolitis and showed that 6.08% of emergency department disposition decisions for bronchiolitis were inappropriate. An accurate model for predicting appropriate hospital admission can guide emergency department disposition decisions for bronchiolitis and improve outcomes, but has not been developed thus far.

Objective: The objective of this study was to develop a reasonably accurate model for predicting appropriate hospital admission for emergency department patients with bronchiolitis.

Methods: Using Intermountain Healthcare data from 2011-2014, we developed the first machine learning classification model to predict appropriate hospital admission for emergency department patients with bronchiolitis.

Results: Our model achieved an accuracy of 90.66% (3242/3576, 95% CI: 89.68-91.64), a sensitivity of 92.09% (1083/1176, 95% CI: 90.33-93.56), a specificity of 89.96% (2159/2400, 95% CI: 88.69-91.17), and an area under the receiver operating characteristic curve of 0.960 (95% CI: 0.954-0.966). We identified possible improvements to the model to guide future research on this topic.

Conclusions: Our model has good accuracy for predicting appropriate hospital admission for emergency department patients with bronchiolitis. With further improvement, our model could serve as a foundation for building decision-support tools to guide disposition decisions for children with bronchiolitis presenting to emergency departments.

International Registered Report Identifier (IRRID): RR2-10.2196/resprot.5155

JMIR Med Inform 2019;7(1):e12591

doi:10.2196/12591




Bronchiolitis refers to inflammation of the bronchioles, the smallest air passages in the lungs, mainly seen in children below the age of 2 years [1]. More than one-third of children in the United States have been diagnosed with bronchiolitis by the age of 2 years [1]. In children below the age of 2 years, bronchiolitis is responsible for 16% of the hospitalizations and is the most common reason for hospitalization [2-5]. In the United States, bronchiolitis annually leads to approximately 287,000 emergency department (ED) visits [6], 128,000 hospitalizations [2], and US $1.73 billion in total inpatient costs (2009) [2].

About 32%-40% of ED visits for bronchiolitis result in hospitalization [7-9]. Current clinical guidelines for bronchiolitis [10,11] acknowledge that due to a lack of evidence and objective criteria for managing bronchiolitis, clinicians often make ED disposition decisions of hospitalization or discharge to home subjectively [4,12]. This uncertainty in bronchiolitis management leads to large practice variation [3,12-23], increased iatrogenic risk, suboptimal outcomes, and wasted healthcare resources resulting from unnecessary admissions and unsafe discharges [15,21,24]. Approximately 10% of infants with bronchiolitis experience adverse events during hospital stay [25]. By examining the distributions of multiple relevant attributes of ED visits for bronchiolitis and using a data-driven method to determine two threshold values, we recently developed the first operational definition of appropriate hospital admission for ED patients with bronchiolitis [26]. Appropriate admissions cover both necessary admissions (actual admissions that are necessary) and unsafe discharges (Figure 1). Appropriate ED discharges cover both safe discharges and unnecessary admissions. Unsafe discharges are defined based on early ED returns. Unnecessary admissions are defined based on brief exposure to certain major medical interventions (Figure 1). Brief exposure was defined as exposure of ≤6 hours, with the threshold value of 6 hours chosen conservatively based on the median duration of major medical interventions received by a subset of patients who tended to have been admitted unnecessarily. Based on the operational definition, we showed that 6.08% of ED disposition decisions for bronchiolitis were inappropriate [26].

Thus far, several models have been built for predicting hospital admission in ED patients with bronchiolitis [7-9,27-29]. As our review paper [30] pointed out, these models have low accuracy and incorrectly assume that actual ED disposition decisions are always appropriate. An accurate model for predicting appropriate hospital admission can guide ED disposition decisions for bronchiolitis and improve outcomes. This model, which is yet to be built, would be particularly useful for less experienced clinicians, including junior clinicians and those in general practice who attend to children infrequently [31]. The objective of this study was to build the first model to predict appropriate hospital admission for ED patients with bronchiolitis. The dependent variable of the appropriate ED disposition decision is categorical and has two possible values: appropriate admission and appropriate ED discharge. Accordingly, the model uses clinical and administrative data to conduct binary classification.

Figure 1. The operational definition of appropriate hospital admission for emergency department patients with bronchiolitis.

Study Design and Ethical Approval

In this study, we performed a secondary analysis of retrospective data. The Institutional Review Boards of University of Washington Medicine, the University of Utah, and Intermountain Healthcare reviewed and approved this study and waived the need for informed consent for all patients.

Patient Population

Our patient cohort consisted of children below the age of 2 years who visited the ED for bronchiolitis in 2013-2014 at any of the 22 Intermountain Healthcare hospitals. Intermountain Healthcare is the largest healthcare system in Utah, with 22 hospitals and 185 clinics delivering ~85% of pediatric care in Utah [32]. Similar to our previous paper [26], we adopted the approach used in Flaherman et al [33-35] to identify as many ED visits for bronchiolitis as possible. This approach included patients with an ED or hospital International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) primary discharge diagnosis code of bronchiolitis or bronchitis (466.x), viral pneumonia (480.x), adenoviral infection (079.0), rhinovirus infection (079.3), respiratory infection due to influenza (487.0 or 487.1), respiratory syncytial virus (079.6), H1N1 influenza (488.1, 488.11, or 488.12), influenza due to identified avian influenza virus (488, 488.0, 488.01, or 488.02), or influenza due to novel influenza A (488.81 or 488.82). Any of these discharge diagnosis codes, rather than only the discharge diagnosis code of bronchiolitis, could be assigned to an ED visit for bronchiolitis. In addition, this approach included all patients with any of the abovementioned codes as a nonprimary diagnosis code, as long as the ICD-9-CM primary diagnosis code was any of the following: apnea (786.03), shortness of breath (786.05), tachypnea (786.06), wheezing (786.07), other respiratory abnormalities (786.09), cough (786.2), fever (780.60 or 780.61), acute nasopharyngitis (460), acute upper respiratory infections (465.x), other specified viral infection (079.89), urinary tract infection (599.0), pneumonia unspecified organism (486), unspecified viral infection (079.99), volume depletion (276.5x), or respiratory failure (518.81 or 518.82) [26]. The ED visits for bronchiolitis captured by this approach in 2013-2014 are the focus of our study.
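To make the code-based cohort selection concrete, the minimal Python sketch below shows one way such ICD-9-CM code sets could be applied to visit records. It is only an illustration under assumed names (for example, primary_dx and other_dx), not the authors' actual extraction logic; codes written with ".x" in the text are handled as prefixes.

    # Illustrative sketch (not the study's extraction code): flag an ED visit for
    # bronchiolitis using the ICD-9-CM code sets listed above.
    BRONCHIOLITIS_PREFIXES = ("466", "480")           # bronchiolitis/bronchitis, viral pneumonia
    BRONCHIOLITIS_CODES = {"079.0", "079.3", "487.0", "487.1", "079.6",
                           "488.1", "488.11", "488.12", "488", "488.0",
                           "488.01", "488.02", "488.81", "488.82"}
    SUPPORTING_PRIMARY_PREFIXES = ("465", "276.5")    # acute upper respiratory infections, volume depletion
    SUPPORTING_PRIMARY_CODES = {"786.03", "786.05", "786.06", "786.07", "786.09",
                                "786.2", "780.60", "780.61", "460", "079.89",
                                "599.0", "486", "079.99", "518.81", "518.82"}

    def is_bronchiolitis_code(code: str) -> bool:
        # A code matches if it is listed explicitly or its prefix is a ".x" family.
        return code in BRONCHIOLITIS_CODES or code.split(".")[0] in BRONCHIOLITIS_PREFIXES

    def is_bronchiolitis_visit(primary_dx: str, other_dx: list) -> bool:
        """True if the primary diagnosis is a bronchiolitis-related code, or if a
        bronchiolitis-related code appears as a nonprimary diagnosis while the
        primary diagnosis belongs to the supporting code list."""
        if is_bronchiolitis_code(primary_dx):
            return True
        primary_is_supporting = (primary_dx in SUPPORTING_PRIMARY_CODES or
                                 primary_dx.startswith(SUPPORTING_PRIMARY_PREFIXES))
        return primary_is_supporting and any(is_bronchiolitis_code(c) for c in other_dx)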

Data Set

From Intermountain Healthcare’s enterprise data warehouse, we extracted a clinical and administrative data set containing information of our patient cohort’s inpatient stays, ED visits, and outpatient visits at Intermountain Healthcare in 2011-2014. Our patient cohort included children below the age of 2 years who visited the Intermountain Healthcare ED for bronchiolitis in 2013-2014. By starting the data set in 2011, we ensured that for each ED visit by a target patient in 2013-2014, the data set included the patient’s complete prior medical history recorded within Intermountain Healthcare and necessary for computing features (also known as independent variables).

Features

The 35 candidate patient features fall into two disjoint categories. Category 1 includes all known predictors of hospital admission in ED patients with bronchiolitis, which were consistently recorded at Intermountain Healthcare facilities and available as structured attributes in our data set [30,31]. These 15 predictors are age in days, gender, heart rate, respiratory rate, peripheral capillary oxygen saturation (SpO2), temperature, coinfection, rhinovirus infection, enterovirus infection, history of bronchopulmonary dysplasia, history of eczema, prior intubation, prior hospitalization, prematurity, and dehydration. For any vital sign that was recorded more than once during the ED visit, we used its last value as its feature value. Among all recorded values, the last value most closely reflected the patient’s status at the time of ED disposition.

Category 2 consists of 20 features suggested by our team’s clinical experts BLS, MDJ, and FLN: race, ethnicity, insurance category (public, private, or self-paid or charity), the ED visit’s acuity level (resuscitation, emergent, urgent, semiurgent, or nonurgent), chief complaint, number of consults during the ED visit, number of laboratory tests ordered during the ED visit, number of radiology studies ordered during the ED visit, number of X-rays ordered during the ED visit, length of ED stay in minutes, hour of ED disposition, whether the patient is up-to-date with his/her immunizations, diastolic blood pressure, systolic blood pressure, weight, wheezing (none, expiratory, inspiratory and expiratory, or diminished breath sounds), retractions (none, one location, two locations, or three or more locations), respiratory syncytial virus infection, language barrier to learning, and whether the patient has any other barrier to learning. If wheezing or retractions was recorded more than once during the ED visit, we used its last value as the feature value; among all recorded values, the last one most closely reflected the patient’s status at the time of ED disposition.

Based on their timestamps, all candidate feature values were available as structured attributes in our data set before the time of ED disposition. We used these features to build predictive models.

Data Analysis

Data Preparation

For each ED visit by a patient below the age of 2 years for bronchiolitis in 2013-2014, we used our previously developed operational definition of appropriate admission [26] (Figure 1) to compute the dependent variable’s value. For each numerical feature, we examined the data distribution, used the upper and lower bounds given by our team’s ED expert MDJ to identify invalid values, and replaced each invalid value with a null value. All temperatures <80°F or >110°F, all weights >50 pounds, all systolic blood pressure values of 0, all SpO2 values >100%, all respiratory rates >120 breaths/minute, and all heart rates <30 or >300 beats/minute were regarded as physiologically impossible and invalid. To ensure that all data were on the same scale, we standardized each numerical feature by first subtracting its mean and then dividing by its SD. We focused on 2 years of data for ED visits for bronchiolitis (2013-2014). Data from the first year (2013) were used to train predictive models. Data from the second year (2014) were used to evaluate model performance, reflecting use in practice.
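As an illustration of these preparation steps, the following minimal Python sketch applies the validity bounds above, replaces invalid values with nulls, standardizes the numerical features, and splits the visits by year. The DataFrame, column names, and file name are hypothetical; this is not the study’s actual pipeline, which was built on Intermountain Healthcare’s enterprise data warehouse.

    # Minimal sketch (hypothetical column and file names) of the data preparation
    # steps described above; not the study's actual pipeline.
    import numpy as np
    import pandas as pd

    # Expert-supplied validity bounds from the text; values outside these ranges
    # are treated as physiologically impossible and replaced with nulls.
    VALID_RANGES = {
        "temperature_f": (80, 110),       # <80°F or >110°F invalid
        "weight_lb": (None, 50),          # >50 pounds invalid
        "systolic_bp": (1, None),         # a value of 0 invalid
        "spo2_pct": (None, 100),          # >100% invalid
        "respiratory_rate": (None, 120),  # >120 breaths/minute invalid
        "heart_rate": (30, 300),          # <30 or >300 beats/minute invalid
    }

    def clean_and_standardize(visits: pd.DataFrame) -> pd.DataFrame:
        df = visits.copy()
        for col, (lo, hi) in VALID_RANGES.items():
            if lo is not None:
                df.loc[df[col] < lo, col] = np.nan
            if hi is not None:
                df.loc[df[col] > hi, col] = np.nan
        # Standardize each numerical feature: subtract its mean, divide by its SD.
        # (In deployment, the training-set mean and SD would be applied to new data.)
        cols = list(VALID_RANGES)
        df[cols] = (df[cols] - df[cols].mean()) / df[cols].std()
        return df

    visits = clean_and_standardize(pd.read_csv("ed_visits.csv"))  # hypothetical file
    train = visits[visits["visit_year"] == 2013]  # model training
    test = visits[visits["visit_year"] == 2014]   # performance evaluation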

Performance Metrics

As shown in Table 1 and the formulas below, we used six standard metrics to measure model performance: accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). For instance, false negative (FN) is the number of appropriate admissions that the model incorrectly predicts to be ED discharges. Sensitivity measures the proportion of appropriate admissions that the model identifies. Specificity measures the proportion of appropriate ED discharges that the model identifies.

Accuracy=(TP+TN)/(TP+TN+FP+FN)
Sensitivity=TP/(TP+FN)
Specificity=TN/(TN+FP)
PPV=TP/(TP+FP)
NPV=TN/(TN+FN)

TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
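As a concrete illustration, the following Python sketch computes these six metrics from the test-set labels, predicted labels, and predicted probabilities; scikit-learn is used here only for illustration, since the study itself used Weka.

    # Sketch: the six performance metrics computed from the error matrix and the
    # predicted probabilities (labels encoded as 1 = appropriate admission,
    # 0 = appropriate ED discharge).
    from sklearn.metrics import confusion_matrix, roc_auc_score

    def performance_metrics(y_true, y_pred, y_prob):
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
        return {
            "accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "auc": roc_auc_score(y_true, y_prob),
        }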

For the six performance metrics, we conducted 1000-fold bootstrap analysis [36] to compute their 95% CIs. On each bootstrap sample of the 2014 data, we computed our model’s performance metrics. For each of the six performance metrics, the 2.5th and 97.5th percentiles in the 1000 bootstrap samples specified its 95% CI.
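The bootstrap analysis can be sketched as follows: resample the 2014 test set with replacement 1000 times, recompute each metric on every resample, and take the 2.5th and 97.5th percentiles. The sketch reuses the performance_metrics function defined above and is an illustration rather than the study’s exact code.

    # Sketch of the 1000-fold bootstrap for 95% CIs of the performance metrics.
    import numpy as np

    def bootstrap_cis(y_true, y_pred, y_prob, n_boot=1000, seed=0):
        rng = np.random.default_rng(seed)
        y_true, y_pred, y_prob = map(np.asarray, (y_true, y_pred, y_prob))
        names = ("accuracy", "sensitivity", "specificity", "ppv", "npv", "auc")
        samples = {name: [] for name in names}
        for _ in range(n_boot):
            idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
            for name, value in performance_metrics(y_true[idx], y_pred[idx], y_prob[idx]).items():
                samples[name].append(value)
        # The 2.5th and 97.5th percentiles of the bootstrap distribution give the 95% CI.
        return {name: (np.percentile(vals, 2.5), np.percentile(vals, 97.5))
                for name, vals in samples.items()}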

Table 1. The error matrix.
Class | Appropriate admission | Appropriate emergency department discharge
Predicted admission | True positive | False positive
Predicted emergency department discharge | False negative | True negative

To show the sensitivity-specificity tradeoff, we plotted the receiver operating characteristic curve. The calibration of a model refers to how well the predicted probabilities of appropriate admission match with the fractions of appropriate admissions in subgroups of ED visits for bronchiolitis. To show model calibration, we drew a calibration plot [36]. A perfect calibration curve would coincide with the diagonal line. In addition, we used the Hosmer-Lemeshow goodness-of-fit test [36] to evaluate model calibration.
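For readers who want to reproduce the calibration assessment, the sketch below groups the test-set predictions into deciles of predicted probability, reports the mean predicted probability and observed fraction of appropriate admissions per decile (the quantities plotted in a calibration curve), and computes the Hosmer-Lemeshow statistic. It is a simplified illustration; the study’s analysis followed Steyerberg [36].

    # Sketch: calibration by decile and the Hosmer-Lemeshow goodness-of-fit test.
    import numpy as np
    from scipy.stats import chi2

    def calibration_by_decile(y_true, y_prob, n_groups=10):
        y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
        groups = np.array_split(np.argsort(y_prob), n_groups)  # deciles of predicted probability
        # (mean predicted probability, observed fraction of appropriate admissions) per decile
        return [(y_prob[g].mean(), y_true[g].mean()) for g in groups]

    def hosmer_lemeshow(y_true, y_prob, n_groups=10):
        y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
        groups = np.array_split(np.argsort(y_prob), n_groups)
        stat = 0.0
        for g in groups:
            n, observed, expected = len(g), y_true[g].sum(), y_prob[g].sum()
            if 0 < expected < n:
                stat += (observed - expected) ** 2 / (expected * (1 - expected / n))
        return stat, chi2.sf(stat, df=n_groups - 2)  # small P values indicate miscalibration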

Classification Algorithms

We used Weka [37], a widely used open-source machine learning and data mining toolkit, to build machine learning classification models. Machine learning studies computer algorithms that learn from data, such as random forest, support vector machine, and neural network; such algorithms have won most data science competitions [38]. Weka integrates many commonly used machine learning algorithms and feature-selection techniques. We considered all 39 machine learning classification algorithms in the standard Weka package and used our previously developed automatic machine learning model selection method [39] with the 2013 training data to automatically select the algorithm, feature-selection technique, and hyperparameter values among all the applicable ones. In a machine learning algorithm, hyperparameters are the parameters whose values are traditionally set manually by the machine learning software user before model training; an example is the number of decision trees used in a random forest classifier. Our automatic model selection method [39] uses the Bayesian optimization (also known as response surface) methodology to explore numerous combinations of algorithm, feature-selection technique, and hyperparameter values and performs three-fold cross-validation to select the final combination that maximizes the AUC. Compared with the other five performance metrics (accuracy, sensitivity, specificity, PPV, and NPV), AUC has the advantage of not depending on the cutoff threshold for deciding between predicted admission and predicted ED discharge.
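The general idea of this selection process can be illustrated with the simplified Python sketch below, which searches hyperparameter values for a random forest and keeps the combination maximizing the three-fold cross-validated AUC on the training data. The actual study searched across all 39 Weka algorithms and feature-selection techniques with its Bayesian-optimization-based method [39]; scikit-learn, the random search, and the hyperparameter grid here are stand-ins, and X_train_2013 and y_train_2013 are hypothetical arrays.

    # Simplified stand-in for the automatic model selection process: search
    # hyperparameter values and keep the combination that maximizes the
    # three-fold cross-validated AUC on the 2013 training data.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.impute import SimpleImputer
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.pipeline import Pipeline

    pipeline = Pipeline([
        # Weka's random forest handles missing feature values natively; simple
        # median imputation is used here as a simplification.
        ("impute", SimpleImputer(strategy="median")),
        ("rf", RandomForestClassifier(random_state=0)),
    ])
    param_distributions = {
        "rf__n_estimators": [100, 300, 500],          # example hyperparameter: number of trees
        "rf__max_features": ["sqrt", "log2", None],
        "rf__min_samples_leaf": [1, 2, 5, 10],
    }
    search = RandomizedSearchCV(pipeline, param_distributions, n_iter=20,
                                scoring="roc_auc", cv=3, random_state=0)
    search.fit(X_train_2013, y_train_2013)            # hypothetical 2013 training arrays
    model = search.best_estimator_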

Demographic and Clinical Characteristics of the Patient Cohort

Tables 2 and 3 show the demographic and clinical characteristics of our patient cohort: children below the age of 2 years who visited the ED for bronchiolitis in 2013 and 2014, respectively. The characteristics are mostly similar between both years. About 40.78% (1640/4022) and 38.26% (1368/3576) of ED visits for bronchiolitis ended in hospitalization in 2013 and 2014, respectively. About 35.80% (1440/4022) and 32.89% (1176/3576) of ED visits for bronchiolitis were deemed to be appropriate hospital admissions in 2013 and 2014, respectively.

Based on the χ2 two-sample test, for the 2013 data, the ED visits discharged to home and those ending in hospitalization showed no significant difference in the distribution of gender (P=.49) and significant differences in the distributions of race (P<.001), ethnicity (P=.01), and insurance category (P<.001). For the 2014 data, the two groups showed no significant differences in the distributions of gender (P=.94) and race (P=.61) and significant differences in the distributions of ethnicity (P<.001) and insurance category (P<.001). Based on the Cochran-Armitage trend test [41], for both the 2013 and 2014 data, the two groups showed significantly different distributions of age (P<.001).

Table 2. Demographic and clinical characteristics of children under the age of 2 years who visited the emergency department at Intermountain Healthcare hospitals for bronchiolitis in 2013.
Characteristic | Emergency department visits (N=4022), n (%) | Emergency department visits discharged to home (N=2382), n (%) | Emergency department visits ending in hospitalization (N=1640), n (%)
Age
  <2 months | 518 (12.88) | 211 (8.86) | 307 (18.72)
  2 to <12 months | 2424 (60.27) | 1498 (62.89) | 926 (56.46)
  12 to 24 months | 1080 (26.85) | 673 (28.25) | 407 (24.82)
Gender
  Male | 2369 (58.90) | 1414 (59.36) | 955 (58.23)
  Female | 1653 (41.10) | 968 (40.64) | 685 (41.77)
Race
  American Indian or Alaska Native | 51 (1.27) | 26 (1.09) | 25 (1.52)
  Asian | 49 (1.22) | 20 (0.84) | 29 (1.77)
  Black or African American | 124 (3.08) | 78 (3.27) | 46 (2.80)
  Native Hawaiian or other Pacific Islander | 321 (7.98) | 160 (6.72) | 161 (9.82)
  White | 2940 (73.10) | 1784 (74.90) | 1156 (70.49)
  Unknown or not reported | 537 (13.35) | 314 (13.18) | 223 (13.60)
Ethnicity
  Hispanic | 1321 (32.84) | 826 (34.68) | 495 (30.18)
  Non-Hispanic | 2687 (66.81) | 1549 (65.03) | 1138 (69.39)
  Unknown or not reported | 14 (0.35) | 7 (0.29) | 7 (0.43)
Insurance
  Private | 2436 (60.57) | 1338 (56.17) | 1098 (66.95)
  Public | 1422 (35.36) | 933 (39.17) | 489 (29.82)
  Self-paid or charity | 164 (4.08) | 111 (4.66) | 53 (3.23)
Asthma | 207 (5.15) | 72 (3.02) | 135 (8.23)
Chronic complex condition [40] | 296 (7.36) | 60 (2.52) | 236 (14.39)
Table 3. Demographic and clinical characteristics of children under the age of 2 years who visited the emergency department at Intermountain Healthcare hospitals for bronchiolitis in 2014.
Characteristic | Emergency department visits (N=3576), n (%) | Emergency department visits discharged to home (N=2208), n (%) | Emergency department visits ending in hospitalization (N=1368), n (%)
Age
  <2 months | 454 (12.70) | 186 (8.42) | 268 (19.59)
  2 to <12 months | 2079 (58.14) | 1379 (62.45) | 700 (51.17)
  12 to 24 months | 1043 (29.17) | 643 (29.12) | 400 (29.24)
Gender
  Male | 2059 (57.58) | 1273 (57.65) | 786 (57.46)
  Female | 1517 (42.42) | 935 (42.35) | 582 (42.54)
Race
  American Indian or Alaska Native | 47 (1.31) | 31 (1.40) | 16 (1.17)
  Asian | 68 (1.90) | 40 (1.81) | 28 (2.05)
  Black or African American | 104 (2.91) | 70 (3.17) | 34 (2.49)
  Native Hawaiian or other Pacific Islander | 284 (7.94) | 180 (8.15) | 104 (7.60)
  White | 2795 (78.16) | 1708 (77.36) | 1087 (79.46)
  Unknown or not reported | 278 (7.77) | 179 (8.11) | 99 (7.24)
Ethnicity
  Hispanic | 1071 (29.95) | 727 (32.93) | 344 (25.15)
  Non-Hispanic | 2484 (69.46) | 1464 (66.30) | 1020 (74.56)
  Unknown or not reported | 21 (0.59) | 17 (0.77) | 4 (0.29)
Insurance
  Private | 2175 (60.82) | 1241 (56.20) | 934 (68.27)
  Public | 1256 (35.12) | 860 (38.95) | 396 (28.95)
  Self-paid or charity | 145 (4.05) | 107 (4.85) | 38 (2.78)
Asthma | 210 (5.87) | 67 (3.03) | 143 (10.45)
Chronic complex condition [40] | 252 (7.05) | 43 (1.94) | 209 (15.28)

Our automatic machine learning model selection method [39] chose the random forest classification algorithm. Random forest can naturally handle missing feature values. Our model was built using this algorithm and the 33 features shown in Table 4. These features are sorted in descending order of their importance values, which were automatically computed by the random forest algorithm in Weka based on average impurity decrease. In general, the features related to the patient’s history are ranked lower than those reflecting the patient’s status in the current ED visit. This intuitively makes medical sense. Two candidate patient features—ethnicity and the ED visit’s acuity level—were not used in our model because they did not increase the model’s accuracy.
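For illustration, a ranking analogous to Table 4 could be produced as in the sketch below, using scikit-learn’s mean-decrease-in-impurity importances as a stand-in for Weka’s average impurity decrease; the fitted model and the feature_names list are assumed to come from the earlier model selection sketch.

    # Sketch: rank features by mean impurity decrease, the notion of importance
    # reported in Table 4 (scikit-learn analogue of Weka's average impurity decrease).
    import pandas as pd

    rf = model.named_steps["rf"]  # fitted random forest from the model selection sketch
    importances = pd.Series(rf.feature_importances_, index=feature_names)  # feature_names assumed
    print(importances.sort_values(ascending=False))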

Figure 2 shows the receiver operating characteristic curve of our model. Weka uses 50% as its default probability cutoff threshold for making binary classifications. Table 5 shows the error matrix of our model. Table 6 compares our model and the ED clinician’s disposition decision. Our model achieved an accuracy of 90.66% (3242/3576; 95% CI: 89.68-91.64), a sensitivity of 92.09% (1083/1176; 95% CI: 90.33-93.56), a specificity of 89.96% (2159/2400; 95% CI: 88.69-91.17), an AUC of 0.960 (95% CI: 0.954-0.966), a PPV of 81.80% (1083/1324; 95% CI: 79.67-83.80), and an NPV of 95.87% (2159/2252; 95% CI: 95.00-96.65). If we removed the insurance category feature, our model achieved a lower accuracy of 90.32% (3230/3576; 95% CI: 89.37-91.28), a lower sensitivity of 90.22% (1061/1176; 95% CI: 88.30-91.79), a specificity of 90.38% (2169/2400; 95% CI: 89.15-91.57), an AUC of 0.960 (95% CI: 0.955-0.966), a PPV of 82.12% (1061/1292; 95% CI: 79.94-84.15), and a lower NPV of 94.97% (2169/2284; 95% CI: 93.97-95.78). In comparison, the ED clinician’s disposition decision achieved an accuracy of 93.68% (3350/3576; 95% CI: 92.87-94.49), a sensitivity of 98.55% (1159/1176; 95% CI: 97.85-99.24), a specificity of 91.29% (2191/2400; 95% CI: 90.05-92.46), an AUC of 0.949 (95% CI: 0.942-0.956), a PPV of 84.72% (1159/1368; 95% CI: 82.83-86.69), and an NPV of 99.23% (2191/2208; 95% CI: 98.86-99.59).

Table 4. Features used in our model and their importance.
Feature | Importance based on average impurity decrease
Hour of EDa disposition | 0.42
Age in days | 0.40
Whether the patient has any other barrier to learning | 0.39
Length of ED stay in minutes | 0.38
Number of laboratory tests ordered during the ED visit | 0.37
Heart rate | 0.37
Diastolic blood pressure | 0.36
Gender | 0.35
Temperature | 0.35
Respiratory rate | 0.34
Number of radiology studies ordered during the ED visit | 0.34
Insurance category | 0.34
Number of X-rays ordered during the ED visit | 0.34
Systolic blood pressure | 0.34
Weight | 0.33
Chief complaint | 0.32
SpO2b | 0.32
Wheezing | 0.32
Retractions | 0.29
Number of consults during the ED visit | 0.28
Whether the patient is up-to-date with his/her immunizations | 0.27
Race | 0.27
Enterovirus infection | 0.25
Respiratory syncytial virus infection | 0.24
Coinfection | 0.24
Prior hospitalization | 0.22
Prior intubation | 0.22
Dehydration | 0.20
Language barrier to learning | 0.20
Rhinovirus infection | 0.20
Prematurity | 0.18
History of bronchopulmonary dysplasia | 0.16
History of eczema | 0.15

aED: emergency department.

bSpO2: peripheral capillary oxygen saturation.

Figure 2. The receiver operating characteristic curve of our model.
Table 5. The error matrix of our predictive model.
Class | Appropriate admission | Appropriate emergency department discharge
Predicted admission | 1083 | 241
Predicted emergency department discharge | 93 | 2159
Table 6. A comparison of our model and the emergency department clinician’s disposition decision.

Model | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUCa | PPVb (%) | NPVc (%)
Our model | 90.66 | 92.09 | 89.96 | 0.960 | 81.80 | 95.87
The emergency department clinician’s disposition decision | 93.68 | 98.55 | 91.29 | 0.949 | 84.72 | 99.23

aAUC: area under the receiver operating characteristic curve.

bPPV: positive predictive value.

cNPV: negative predictive value.

Figure 3. The calibration plot of our model by decile of predicted probability of appropriate admission.

Figure 3 shows the calibration plot of our model by decile of predicted probability of appropriate admission. The Hosmer-Lemeshow test indicated imperfect agreement between the predicted probabilities and the actual outcomes (P<.001). When the predicted probability is <0.5, our model tends to overestimate the actual probability. When the predicted probability is >0.5, our model tends to underestimate the actual probability.


Principal Results

We developed the first machine learning classification model to accurately predict appropriate hospital admission for ED patients with bronchiolitis. Our model is a significant improvement over the previous models for predicting hospital admission in ED patients with bronchiolitis [7-9,27-29]. Our model has good accuracy: five of the six performance metrics achieved a value ≥90% and the remaining metric achieved a value >80%. Although our model attained a 3.02% lower accuracy than Intermountain Healthcare clinicians’ ED disposition decisions (90.66% vs 93.68%), we still view it as a step forward with great potential. Our model can output the prediction result for a new patient within 0.01 seconds. With further improvement to boost its accuracy and automatically explain its prediction results [42,43], our model could be integrated into an electronic health record system and serve as the basis of a decision-support tool to help make appropriate ED disposition decisions for bronchiolitis. A clinician could then use the model’s output as a point of reference when considering the disposition decision. This could provide value, improve outcomes, and reduce health care costs for bronchiolitis, regardless of whether our future final model achieves a higher accuracy than Intermountain Healthcare clinicians’ ED disposition decisions. Our confidence in this model stems from the following considerations:

  1. Intermountain Healthcare has several collaborative partnerships among its EDs and hospitals to facilitate coordination of pediatric specialty care and has completed multiple quality-improvement projects for bronchiolitis management. About 52.16% (3963/7598) of ED visits for bronchiolitis within Intermountain Healthcare occur at a tertiary pediatric hospital with an ED staffed by pediatric-specific clinicians. On average, the ED disposition decisions for bronchiolitis made at Intermountain Healthcare could be more accurate than those made at some other healthcare systems, especially systems with general practice physicians or fewer pediatricians working in their EDs. Our model could be valuable for those systems if it reaches a higher accuracy than the clinicians’ ED disposition decisions made at those systems. There is some evidence indicating this possibility. Most inappropriate ED disposition decisions are unnecessary admissions [26]. In our data set, 14.36% of hospital admissions from the ED were deemed unnecessary [26]. In the literature [44,45], this percentage is reported to be larger (20%-29%). To understand our model’s value for other systems, additional studies need to be conducted using data from those systems. This is an interesting area for future work.
  2. Figure 4 shows the degree of missing values of each feature with missing values. Figure 5 shows the probability mass function of the number of features with missing values in each data instance. In our data set, several attributes have numerous missing values because those values were either recorded on paper or occasionally undocumented and therefore were not available in Intermountain Healthcare’s electronic health record system. In particular, wheezing and retractions values were missing for 73.56% (5589/7598) of ED visits for bronchiolitis. Systolic and diastolic blood pressure values were missing for 46.49% (3532/7598) of ED visits for bronchiolitis. This could lower the model’s accuracy. In the future, these attributes are expected to be recorded more completely in Intermountain Healthcare’s newly implemented Cerner-based electronic health record system. After retraining our model on more complete Intermountain Healthcare data from future years, we would expect its accuracy to increase. In addition, multiple other healthcare systems like Seattle Children’s Hospital have been using the Cerner electronic health record system to record these attributes relatively completely for many years. Our model could possibly achieve a higher accuracy if trained with data from those systems. Both of these areas are interesting for future work.
  3. When making ED disposition decisions for bronchiolitis, clinicians often face some level of uncertainty and would prefer to obtain a second opinion by a reasonably accurate predictive model, particularly if some technique is used to automatically explain the model’s prediction results. For this purpose, we can use our prior method [42,43] to automatically provide rule-based explanations for any machine learning model’s classification results with no accuracy loss.

When reporting the performance metrics, we used the default cutoff threshold chosen by Weka to decide between predicted admission and predicted ED discharge. Different health care systems could emphasize different performance metrics and assign different weights to FPs and FNs. As is generally the case in predictive modeling, a health care system can always adjust the cutoff threshold based on its preferences.
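As a sketch of such an adjustment, a system that weighs missed appropriate admissions (FNs) more heavily than unnecessary predicted admissions (FPs) could lower the threshold below the 50% default, as below; the threshold value is an arbitrary example, and the model, test arrays, and performance_metrics function reuse hypothetical names from the earlier sketches.

    # Sketch: applying a system-specific probability cutoff instead of the 50% default.
    threshold = 0.35                                    # example value, not from the study
    y_prob = model.predict_proba(X_test_2014)[:, 1]     # predicted probability of appropriate admission
    y_pred = (y_prob >= threshold).astype(int)          # 1 = predicted admission, 0 = predicted ED discharge
    print(performance_metrics(y_test_2014, y_pred, y_prob))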

Figure 4. The degree of missing values of each feature with missing values. SpO2: peripheral capillary oxygen saturation.
Figure 5. The probability mass function of the number of features with missing values in each data instance.

Comparison With Prior Work

Previously, researchers constructed several models to predict hospital admission in ED patients with bronchiolitis [7-9,27-29]. Table 7 compares these previous models with our model. Compared to our model, which predicts the appropriate ED disposition decision, the previous models are less accurate and incorrectly assume that actual ED disposition decisions are always appropriate. Our model uses data from more patients, more predictive features, and a more sophisticated classification algorithm than the previous models. As is generally the case in predictive modeling, all of these factors help improve our model’s accuracy.

Some aspects of our findings are similar to those of previous studies. In our data set, 39.59% (3008/7598) of ED visits for bronchiolitis ended in hospitalization. This percentage is within the 32%-40% range of hospital admission rates for ED visits for bronchiolitis reported in the literature [7-9].

Limitations

This study has several limitations. First, it used data from a single health care system, Intermountain Healthcare, and did not test the generalizability of the results. Future studies should validate our predictive model using data from other healthcare systems. Nevertheless, we are reasonably confident in our results, as our study was conducted in a realistic setting for finding factors generalizable to other US healthcare systems. “Intermountain Healthcare is a large healthcare system with EDs at 22 heterogeneous hospitals spread over a large geographic area, ranging from community metropolitan and rural hospitals attended by general practitioners and family doctors with constrained pediatric resources to tertiary care children’s and general hospitals in urban areas attended by sub-specialists. Each hospital has a different patient population, geographic location, staff composition, scope of services, and cultural background” [26].

Table 7. A comparison of our model and several previous models for predicting emergency department disposition decisions for bronchiolitis.
Model | EDa visits (n) | Method for building the model | Features included in the final model | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUCb | PPVc (%) | NPVd (%)
Our model | 7599 | Random forest | As listed in the Results section | 90.66 | 92.09 | 89.96 | 0.960 | 81.80 | 95.87
Walsh et al [27] | 119 | Neural network ensemble | Age, respiratory rate after initial treatment, heart rate before initial treatment, oxygen saturation before and after initial treatment, dehydration, maternal smoking, increased work of breathing, poor feeding, wheezes only without associated crackles, entry temperature, and presence of both crackles and wheezes | 81 | 78 | 82 | —e | 68 | 89
Marlais et al [7] | 449 | Scoring system | Age, respiratory rate, heart rate, oxygen saturation, and duration of symptoms | — | 74 | 77 | 0.81 | 67 | 83
Destino et al [28] | 195 | Single variable | The Children’s Hospital of Wisconsin respiratory score | — | 65 | 65 | 0.68 | — | —
Laham et al [8] | 101 | Logistic regression | Age, need for intravenous fluids, hypoxia, and nasal wash lactate dehydrogenase concentration | 80 | 81 | 77 | 0.87 | 88 | 66
Corneli et al [9] | 598 | Decision tree | Oxygen saturation, the Respiratory Distress Assessment Instrument score computed from wheezing and retractions, and respiratory rate | — | 56 | 74 | — | — | —
Walsh et al [29] | 300 | Logistic regression | Age, dehydration, increased work of breathing, and heart rate | — | 91 | 83 | — | 62 | —

aED: emergency department

bAUC: area under the receiver operating characteristic curve

cPPV: positive predictive value

dNPV: negative predictive value

eThe performance metric is unreported in the original paper describing the model.

Second, despite being an integrated healthcare system, Intermountain Healthcare does not have complete clinical and administrative data on all of its patients. Our data set lacked information on patients’ health care use at non-Intermountain Healthcare facilities. Including data from those facilities may lead to different results, but we do not expect it to significantly change our findings, because Intermountain Healthcare delivers ~85% of pediatric care in Utah [32]. Hence, our data set is reasonably complete with regard to capturing health care use among bronchiolitis patients in Utah.

Third, our operational definition of appropriate hospital admission is imperfect and excludes factors such as availability of patient transportation, preference of the patient’s parents, and hour of ED disposition [26]. Many of these factors are often undocumented in patient records. For some hospital admissions from the ED that were deemed unnecessary based on our operational definition, the original admission decisions could have been made because of these factors.

Finally, besides the features used in the study, other features could help improve the model’s accuracy. Finding new predictive features is an interesting area for future work.

Conclusions

Our model can predict appropriate hospital admission for ED patients with bronchiolitis with good accuracy. In particular, our model achieved an AUC of 0.960. An AUC≥0.9 is considered outstanding discrimination [46]. With further improvement, our model could be integrated into an electronic health record system to provide personalized real-time decision support for making ED disposition decisions for bronchiolitis, which could help standardize care and improve outcomes for bronchiolitis.

Acknowledgments

We thank Farrant Sakaguchi, Michael Mundorff, Karen Valentine, Chris Benitez, JoAnn Banks, Bart Dodds, Xiaoming Sheng, and Jim Bradshaw for their helpful discussions and help in retrieving the Intermountain Healthcare data set. GL, BLS, MDJ, FLN, and SH were partially supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under Award Number R21HL128875. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Authors' Contributions

GL was mainly responsible for the paper. He conceptualized and designed the study, performed the literature review and data analysis, and wrote the paper. BLS, MDJ, and FLN provided feedback on various medical issues, contributed to conceptualizing the presentation, and revised the paper. SH took part in retrieving the Intermountain Healthcare data set and interpreting its detected peculiarities.

Conflicts of Interest

None declared.

  1. Zorc JJ, Hall CB. Bronchiolitis: recent evidence on diagnosis and management. Pediatrics 2010 Feb;125(2):342-349. [CrossRef] [Medline]
  2. Hasegawa K, Tsugawa Y, Brown DFM, Mansbach JM, Camargo CA. Trends in bronchiolitis hospitalizations in the United States, 2000-2009. Pediatrics 2013 Jul;132(1):28-36 [FREE Full text] [CrossRef] [Medline]
  3. Mansbach JM, Emond JA, Camargo CA. Bronchiolitis in US emergency departments 1992 to 2000: epidemiology and practice variation. Pediatr Emerg Care 2005 Apr;21(4):242-247. [Medline]
  4. Parker MJ, Allen U, Stephens D, Lalani A, Schuh S. Predictors of major intervention in infants with bronchiolitis. Pediatr Pulmonol 2009 Apr;44(4):358-363. [CrossRef] [Medline]
  5. Shay DK, Holman RC, Newman RD, Liu LL, Stout JW, Anderson LJ. Bronchiolitis-associated hospitalizations among US children, 1980-1996. JAMA 1999 Oct 20;282(15):1440-1446. [Medline]
  6. Hasegawa K, Tsugawa Y, Brown DFM, Mansbach JM, Camargo CA. Temporal trends in emergency department visits for bronchiolitis in the United States, 2006 to 2010. Pediatr Infect Dis J 2014 Jan;33(1):11-18 [FREE Full text] [CrossRef] [Medline]
  7. Marlais M, Evans J, Abrahamson E. Clinical predictors of admission in infants with acute bronchiolitis. Arch Dis Child 2011 Jul;96(7):648-652. [CrossRef] [Medline]
  8. Laham FR, Trott AA, Bennett BL, Kozinetz CA, Jewell AM, Garofalo RP, et al. LDH concentration in nasal-wash fluid as a biochemical predictor of bronchiolitis severity. Pediatrics 2010 Feb;125(2):e225-e233 [FREE Full text] [CrossRef] [Medline]
  9. Corneli HM, Zorc JJ, Holubkov R, Bregstein JS, Brown KM, Mahajan P, Bronchiolitis Study Group for the Pediatric Emergency Care Applied Research Network. Bronchiolitis: clinical characteristics associated with hospitalization and length of stay. Pediatr Emerg Care 2012 Feb;28(2):99-103. [CrossRef] [Medline]
  10. Scottish Intercollegiate Guidelines Network. Bronchiolitis in children: a national clinical guideline. Guideline Central   URL: https://www.guidelinecentral.com/summaries/bronchiolitis-in-children-a-national-clinical-guideline/ [accessed 2019-01-17] [WebCite Cache]
  11. AHRQ - Agency for Healthcare Research and Quality: Advancing Excellence in Health Care. Cincinnati Children's Hospital Medical Center: Evidence-based care guideline for management of first time episode bronchiolitis in infants less than 1 year of age   URL: http://www.guideline.gov/content.aspx?id=34411 [accessed 2019-01-17] [WebCite Cache]
  12. Brand PL, Vaessen-Verberne AA. Differences in management of bronchiolitis between hospitals in The Netherlands. Dutch Paediatric Respiratory Society. Eur J Pediatr 2000 May;159(5):343-347. [Medline]
  13. Ducharme FM. Management of acute bronchiolitis. BMJ 2011 Apr 06;342:d1658. [CrossRef] [Medline]
  14. Behrendt CE, Decker MD, Burch DJ, Watson PH. International variation in the management of infants hospitalized with respiratory syncytial virus. International RSV Study Group. Eur J Pediatr 1998 Mar;157(3):215-220. [Medline]
  15. Chamberlain JM, Patel KM, Pollack MM. Association of emergency department care factors with admission and discharge decisions for pediatric patients. J Pediatr 2006 Nov;149(5):644-649. [CrossRef] [Medline]
  16. Johnson DW, Adair C, Brant R, Holmwood J, Mitchell I. Differences in admission rates of children with bronchiolitis by pediatric and general emergency departments. Pediatrics 2002 Oct;110(4):e49. [Medline]
  17. Mallory MD, Shay DK, Garrett J, Bordley WC. Bronchiolitis management preferences and the influence of pulse oximetry and respiratory rate on the decision to admit. Pediatrics 2003 Jan;111(1):e45-e51. [Medline]
  18. Plint AC, Johnson DW, Wiebe N, Bulloch B, Pusic M, Joubert G, et al. Practice variation among pediatric emergency departments in the treatment of bronchiolitis. Acad Emerg Med 2004 Apr;11(4):353-360 [FREE Full text] [Medline]
  19. Vogel AM, Lennon DR, Harding JE, Pinnock RE, Graham DA, Grimwood K, et al. Variations in bronchiolitis management between five New Zealand hospitals: can we do better? J Paediatr Child Health 2003;39(1):40-45. [Medline]
  20. Willson DF, Horn SD, Hendley JO, Smout R, Gassaway J. Effect of practice variation on resource utilization in infants hospitalized for viral lower respiratory illness. Pediatrics 2001 Oct;108(4):851-855. [Medline]
  21. Willson DF, Jiao JH, Hendley JO, Donowitz L. Invasive monitoring in infants with respiratory syncytial virus infection. J Pediatr 1996 Mar;128(3):357-362. [Medline]
  22. Wang EE, Law BJ, Boucher FD, Stephens D, Robinson JL, Dobson S, et al. Pediatric Investigators Collaborative Network on Infections in Canada (PICNIC) study of admission and management variation in patients hospitalized with respiratory syncytial viral lower respiratory tract infection. J Pediatr 1996 Sep;129(3):390-395. [Medline]
  23. Christakis DA, Cowan CA, Garrison MM, Molteni R, Marcuse E, Zerr DM. Variation in inpatient diagnostic testing and management of bronchiolitis. Pediatrics 2005 Apr;115(4):878-884. [CrossRef] [Medline]
  24. Mansbach JM, Clark S, Christopher NC, LoVecchio F, Kunz S, Acholonu U, et al. Prospective multicenter study of bronchiolitis: predicting safe discharges from the emergency department. Pediatrics 2008 Apr;121(4):680-688. [CrossRef] [Medline]
  25. McBride SC, Chiang VW, Goldmann DA, Landrigan CP. Preventable adverse events in infants hospitalized with bronchiolitis. Pediatrics 2005 Sep;116(3):603-608. [CrossRef] [Medline]
  26. Luo G, Johnson MD, Nkoy FL, He S, Stone BL. Appropriateness of Hospital Admission for Emergency Department Patients with Bronchiolitis: Secondary Analysis. JMIR Med Inform 2018 Nov 05;6(4):e10498 [FREE Full text] [CrossRef] [Medline]
  27. Walsh P, Cunningham P, Rothenberg SJ, O'Doherty S, Hoey H, Healy R. An artificial neural network ensemble to predict disposition and length of stay in children presenting with bronchiolitis. Eur J Emerg Med 2004 Oct;11(5):259-264. [Medline]
  28. Destino L, Weisgerber MC, Soung P, Bakalarski D, Yan K, Rehborg R, et al. Validity of respiratory scores in bronchiolitis. Hosp Pediatr 2012 Oct;2(4):202-209 [FREE Full text] [Medline]
  29. Walsh P, Rothenberg SJ, O'Doherty S, Hoey H, Healy R. A validated clinical model to predict the need for admission and length of stay in children with acute bronchiolitis. Eur J Emerg Med 2004 Oct;11(5):265-272. [Medline]
  30. Luo G, Nkoy FL, Gesteland PH, Glasgow TS, Stone BL. A systematic review of predictive modeling for bronchiolitis. Int J Med Inform 2014 Oct;83(10):691-714. [CrossRef] [Medline]
  31. Luo G, Stone BL, Johnson MD, Nkoy FL. Predicting Appropriate Admission of Bronchiolitis Patients in the Emergency Department: Rationale and Methods. JMIR Res Protoc 2016 Mar 07;5(1):e41 [FREE Full text] [CrossRef] [Medline]
  32. Byington CL, Reynolds CC, Korgenski K, Sheng X, Valentine KJ, Nelson RE, et al. Costs and infant outcomes after implementation of a care process model for febrile infants. Pediatrics 2012 Jul;130(1):e16-e24 [FREE Full text] [CrossRef] [Medline]
  33. Flaherman VJ, Ragins AI, Li SX, Kipnis P, Masaquel A, Escobar GJ. Frequency, duration and predictors of bronchiolitis episodes of care among infants ≥32 weeks gestation in a large integrated healthcare system: a retrospective cohort study. BMC Health Serv Res 2012 Jun 08;12:144 [FREE Full text] [CrossRef] [Medline]
  34. Mittal V, Darnell C, Walsh B, Mehta A, Badawy M, Morse R, et al. Inpatient bronchiolitis guideline implementation and resource utilization. Pediatrics 2014 Mar;133(3):e730-e737 [FREE Full text] [CrossRef] [Medline]
  35. Sandweiss DR, Mundorff MB, Hill T, Wolfe D, Greene T, Andrews S, et al. Decreasing hospital length of stay for bronchiolitis by using an observation unit and home oxygen therapy. JAMA Pediatr 2013 May;167(5):422-428. [CrossRef] [Medline]
  36. Steyerberg EW. Clinical Prediction Models: A Practical Approach To Development, Validation, And Updating (statistics For Biology And Health). New York, NY: Springer-Verlag; 2009.
  37. Witten IH, Frank E, Hall MA, Pal CJ. Data Mining: Practical Machine Learning Tools and Techniques. Burlington, MA: Morgan Kaufmann; 2016.
  38. Kaggle.   URL: https://www.kaggle.com/ [accessed 2019-01-17] [WebCite Cache]
  39. Zeng X, Luo G. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection. Health Inf Sci Syst 2017 Dec;5(1):2 [FREE Full text] [CrossRef] [Medline]
  40. Feudtner C, Feinstein JA, Zhong W, Hall M, Dai D. Pediatric complex chronic conditions classification system version 2: updated for ICD-10 and complex medical technology dependence and transplantation. BMC Pediatr 2014 Aug 08;14:199 [FREE Full text] [CrossRef] [Medline]
  41. Agresti A. Categorical Data Analysis. Hoboken, NJ: Wiley; 2012.
  42. Luo G. Automatically explaining machine learning prediction results: a demonstration on type 2 diabetes risk prediction. Health Inf Sci Syst 2016;4:2 [FREE Full text] [CrossRef] [Medline]
  43. Luo G. A roadmap for semi-automatically extracting predictive and clinically meaningful temporal features from medical data for predictive modeling. Global Transitions 2019;1(1) (forthcoming).
  44. Shaw KN, Bell LM, Sherman NH. Outpatient assessment of infants with bronchiolitis. Am J Dis Child 1991 Feb;145(2):151-155. [Medline]
  45. Shafik MH, Seoudi TMM, Raway TS, Al Harbash NZ, Ahmad MMA, Al Mutairi HF. Appropriateness of pediatric hospitalization in a general hospital in Kuwait. Med Princ Pract 2012;21(6):516-521 [FREE Full text] [CrossRef] [Medline]
  46. Hosmer Jr DW, Lemeshow S, Sturdivant RX. Applied Logistic Regression. Hoboken, NJ: Wiley; 2013.


AUC: area under the receiver operating characteristic curve
ED: emergency department
FN: false negative
FP: false positive
ICD-9-CM: International Classification of Diseases, Ninth Revision, Clinical Modification
NPV: negative predictive value
PPV: positive predictive value
SpO2: peripheral capillary oxygen saturation
TN: true negative
TP: true positive


Edited by G Eysenbach; submitted 23.10.18; peer-reviewed by K Chen, P Giabbanelli, J op den Buijs, R Rivas; comments to author 21.11.18; revised version received 27.11.18; accepted 12.12.18; published 22.01.19

Copyright

©Gang Luo, Bryan L Stone, Flory L Nkoy, Shan He, Michael D Johnson. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 22.01.2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.