Published on 22.01.19 in Vol 7, No 1 (2019): Jan-Mar
Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/12591, first published Oct 23, 2018.
Predicting Appropriate Hospital Admission of Emergency Department Patients with Bronchiolitis: Secondary Analysis
Background: In children below the age of 2 years, bronchiolitis is the most common reason for hospitalization. Each year in the United States, bronchiolitis causes 287,000 emergency department visits, 32%-40% of which result in hospitalization. Due to a lack of evidence and objective criteria for managing bronchiolitis, clinicians often make emergency department disposition decisions on hospitalization or discharge to home subjectively, leading to large practice variation. Our recent study provided the first operational definition of appropriate hospital admission for emergency department patients with bronchiolitis and showed that 6.08% of emergency department disposition decisions for bronchiolitis were inappropriate. An accurate model for predicting appropriate hospital admission can guide emergency department disposition decisions for bronchiolitis and improve outcomes, but has not been developed thus far.
Objective: The objective of this study was to develop a reasonably accurate model for predicting appropriate hospital admission.
Methods: Using Intermountain Healthcare data from 2011-2014, we developed the first machine learning classification model to predict appropriate hospital admission for emergency department patients with bronchiolitis.
Results: Our model achieved an accuracy of 90.66% (3242/3576, 95% CI: 89.68-91.64), a sensitivity of 92.09% (1083/1176, 95% CI: 90.33-93.56), a specificity of 89.96% (2159/2400, 95% CI: 88.69-91.17), and an area under the receiver operating characteristic curve of 0.960 (95% CI: 0.954-0.966). We identified possible improvements to the model to guide future research on this topic.
Conclusions: Our model has good accuracy for predicting appropriate hospital admission for emergency department patients with bronchiolitis. With further improvement, our model could serve as a foundation for building decision-support tools to guide disposition decisions for children with bronchiolitis presenting to emergency departments.
International Registered Report Identifier (IRRID): RR2-10.2196/resprot.5155
JMIR Med Inform 2019;7(1):e12591
- appropriate hospital admission;
- emergency department;
- predictive model;
- machine learning
Bronchiolitis refers to inflammation of the bronchioles, the smallest air passages in the lungs, and is mainly seen in children below the age of 2 years. More than one-third of children in the United States have been diagnosed with bronchiolitis by the age of 2 years [ ]. In children below the age of 2 years, bronchiolitis is responsible for 16% of hospitalizations and is the most common reason for hospitalization [ - ]. In the United States, bronchiolitis annually leads to approximately 287,000 emergency department (ED) visits [ ], 128,000 hospitalizations [ ], and US $1.73 billion in total inpatient costs (2009) [ ].
About 32%-40% of ED visits for bronchiolitis result in hospitalization [- ]. Current clinical guidelines for bronchiolitis [ , ] acknowledge that due to a lack of evidence and objective criteria for managing bronchiolitis, clinicians often make ED disposition decisions of hospitalization or discharge to home subjectively [ , ]. This uncertainty in bronchiolitis management leads to large practice variation [ , - ], increased iatrogenic risk, suboptimal outcomes, and wasted healthcare resources resulting from unnecessary admissions and unsafe discharges [ , , ]. Approximately 10% of infants with bronchiolitis experience adverse events during hospital stay [ ]. By examining the distributions of multiple relevant attributes of ED visits for bronchiolitis and using a data-driven method to determine two threshold values, we recently developed the first operational definition of appropriate hospital admission for ED patients with bronchiolitis [ ]. Appropriate admissions cover both necessary admissions (actual admissions that are necessary) and unsafe discharges ( ). Appropriate ED discharges cover both safe discharges and unnecessary admissions. Unsafe discharges are defined based on early ED returns. Unnecessary admissions are defined based on brief exposure to certain major medical interventions ( ). Brief exposure was defined as exposure of ≤6 hours, with the threshold value of 6 hours chosen conservatively based on the median duration of major medical interventions received by a subset of patients who tended to have been admitted unnecessarily. Based on the operational definition, we showed that 6.08% of ED disposition decisions for bronchiolitis were inappropriate [ ].
Thus far, several models have been built for predicting hospital admission in ED patients with bronchiolitis [- , - ]. As our review paper [ ] pointed out, these models have low accuracy and incorrectly assume that actual ED disposition decisions are always appropriate. An accurate model for predicting appropriate hospital admission can guide ED disposition decisions for bronchiolitis and improve outcomes. This model, which is yet to be built, would be particularly useful for less experienced clinicians, including junior clinicians and those in general practice who attend to children infrequently [ ]. The objective of this study was to build the first model to predict appropriate hospital admission for ED patients with bronchiolitis. The dependent variable of the appropriate ED disposition decision is categorical and has two possible values: appropriate admission and appropriate ED discharge. Accordingly, the model uses clinical and administrative data to conduct binary classification.
Study Design and Ethical Approval
In this study, we performed secondary analysis of retrospective data. The Institutional Review Boards of the University of Washington Medicine, University of Utah, and Intermountain Healthcare reviewed and approved this study and waived the need for informed consent for all patients.
Our patient cohort consisted of children below the age of 2 years who visited the ED for bronchiolitis in 2013-2014 at any of the 22 Intermountain Healthcare hospitals. Intermountain Healthcare is the largest healthcare system in Utah, with 22 hospitals and 185 clinics delivering ~85% of pediatric care in Utah . Similar to our previous paper [ ], we adopted the approach used in Flaherman et al [ - ] to identify as many ED visits for bronchiolitis as possible. This approach included patients with an ED or hospital International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) primary discharge diagnosis code of bronchiolitis or bronchitis (466.x), viral pneumonia (480.x), adenoviral infection (079.0), rhinovirus infection (079.3), respiratory infection due to influenza (487.0 or 487.1), respiratory syncytial virus (079.6), H1N1 influenza (488.1, 488.11, or 488.12), influenza due to identified avian influenza virus (488, 488.0, 488.01, or 488.02), or influenza due to novel influenza A (488.81 or 488.82). Any of these discharge diagnosis codes, rather than only the discharge diagnosis code of bronchiolitis, could be assigned to an ED visit for bronchiolitis. In addition, this approach included all patients with any of the abovementioned codes as a nonprimary diagnosis code, as long as the ICD-9-CM primary diagnosis code was any of the following: apnea (786.03), shortness of breath (786.05), tachypnea (786.06), wheezing (786.07), other respiratory abnormalities (786.09), cough (786.2), fever (780.60 or 780.61), acute nasopharyngitis (460), acute upper respiratory infections (465.x), other specified viral infection (079.89), urinary tract infection (599.0), pneumonia unspecified organism (486), unspecified viral infection (079.99), volume depletion (276.5x), or respiratory failure (518.81 or 518.82) [ ]. The ED visits for bronchiolitis captured by this approach in 2013-2014 are the focus of our study.
From Intermountain Healthcare’s enterprise data warehouse, we extracted a clinical and administrative data set containing information of our patient cohort’s inpatient stays, ED visits, and outpatient visits at Intermountain Healthcare in 2011-2014. Our patient cohort included children below the age of 2 years who visited the Intermountain Healthcare ED for bronchiolitis in 2013-2014. By starting the data set in 2011, we ensured that for each ED visit by a target patient in 2013-2014, the data set included the patient’s complete prior medical history recorded within Intermountain Healthcare and necessary for computing features (also known as independent variables).
The 35 candidate patient features fall into two disjoint categories. Category 1 includes all known predictors of hospital admission in ED patients with bronchiolitis, which were consistently recorded at Intermountain Healthcare facilities and available as structured attributes in our data set [, ]. These 15 predictors are age in days, gender, heart rate, respiratory rate, peripheral capillary oxygen saturation (SpO2), temperature, coinfection, rhinovirus infection, enterovirus infection, history of bronchopulmonary dysplasia, history of eczema, prior intubation, prior hospitalization, prematurity, and dehydration. For any vital sign that was recorded more than once during the ED visit, we used its last value as its feature value. Among all recorded values, the last value most closely reflected the patient’s status at the time of ED disposition.
Category 2 consists of 20 features suggested by our team’s clinical experts BLS, MDJ, and FLN: race, ethnicity, insurance category (public, private, or self-paid or charity), the ED visit’s acuity level (resuscitation, emergent, urgent, semiurgent, or nonurgent), chief complaint, number of consults during the ED visit, number of laboratory tests ordered during the ED visit, number of radiology studies ordered during the ED visit, number of X-rays ordered during the ED visit, length of ED stay in minutes, hour of ED disposition, whether the patient is up-to-date with his/her immunizations, diastolic blood pressure, systolic blood pressure, weight, wheezing (none, expiratory, inspiratory and expiratory, or diminished breath sounds), retractions (none, one location, two locations, or three or more locations), respiratory syncytial virus infection, language barrier to learning, and whether the patient has any other barrier to learning. For either attribute of wheezing and retractions that was recorded more than once during the ED visit, we used its last value as its feature value. Among all recorded values, the last value most closely reflected the patient’s status at ED disposition time.
Based on the timestamp, all candidate features were available as structured attributes in our data set before the time of ED disposition. We used these features to build predictive models.
For each ED visit by a patient below the age of 2 years for bronchiolitis in 2013-2014, we used our previously developed operational definition of appropriate admission to compute the dependent variable’s value. For each numerical feature, we examined the data distribution, used the upper and lower bounds given by our team’s ED expert MDJ to identify invalid values, and replaced each invalid value with a null value. All temperatures <80°F or >110°F, all weights >50 pounds, all systolic blood pressure values of 0, all SpO2 values >100%, all respiratory rates >120 breaths/minute, and all heart rates <30 or >300 beats/minute were regarded as physiologically impossible and invalid. To ensure that all data were on the same scale, we standardized each numerical feature by first subtracting its mean and then dividing by its SD. We focused on 2 years of data for ED visits for bronchiolitis (2013-2014). Data from the first year (2013) were used to train predictive models. Data from the second year (2014) were used to evaluate model performance, reflecting use in practice.
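The cleaning and standardization steps above can be sketched in Python. The bounds come from the paragraph above; the feature names and helper functions are illustrative, not the study's actual code.

```python
# Sketch of the preprocessing described above: replace physiologically
# impossible vital-sign values with nulls using the expert-specified bounds,
# then z-score standardize each numerical feature.
import math

# Validity bounds from the paper; the keys are illustrative feature names.
# (Systolic blood pressure values of 0 were also treated as invalid.)
BOUNDS = {
    "temperature_f": (80.0, 110.0),    # degrees Fahrenheit
    "weight_lb": (0.0, 50.0),          # pounds; >50 regarded as invalid
    "spo2_pct": (0.0, 100.0),          # SpO2 >100% invalid
    "respiratory_rate": (0.0, 120.0),  # breaths/minute
    "heart_rate": (30.0, 300.0),       # beats/minute
}

def clean(value, feature):
    """Return the value, or None if it lies outside the bounds."""
    lo, hi = BOUNDS[feature]
    if value is None or not (lo <= value <= hi):
        return None
    return value

def standardize(values):
    """Z-score a list of numbers, ignoring None (missing) entries.

    Assumes the feature is not constant (nonzero SD).
    """
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    sd = math.sqrt(sum((v - mean) ** 2 for v in present) / len(present))
    return [None if v is None else (v - mean) / sd for v in values]
```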
As shown in the formulas below, we used six standard metrics to measure model performance: accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). For instance, false negative (FN) is the number of appropriate admissions that the model incorrectly predicts to be ED discharges. Sensitivity measures the proportion of appropriate admissions that the model identifies; specificity measures the proportion of appropriate ED discharges that the model identifies.
TP is the number of true positives, TN the number of true negatives, and FP the number of false positives. The formulas are: accuracy = (TP + TN) / (TP + TN + FP + FN), sensitivity = TP / (TP + FN), specificity = TN / (TN + FP), PPV = TP / (TP + FP), and NPV = TN / (TN + FN).
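These definitions translate directly into code. The sketch below is ours, not the study's; the example counts (TP=1083, FN=93, TN=2159, FP=241) are implied by the sensitivity and specificity fractions reported in the Results section.

```python
# Minimal sketch: five of the six performance metrics computed from
# confusion-matrix counts. AUC is not computable from counts alone; it
# requires ranking the predicted probabilities.
def metrics(tp, tn, fp, fn):
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # appropriate admissions identified
        "specificity": tn / (tn + fp),   # appropriate ED discharges identified
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

Plugging in the counts implied by the 2014 results reproduces the reported percentages (90.66% accuracy, 92.09% sensitivity, and so on).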
For the six performance metrics, we conducted 1000-fold bootstrap analysis to compute their 95% CIs. On each bootstrap sample of the 2014 data, we computed our model’s performance metrics. For each of the six performance metrics, the 2.5th and 97.5th percentiles across the 1000 bootstrap samples specified its 95% CI.
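The percentile-bootstrap procedure can be sketched as follows; function names are illustrative, and a real analysis would resample the full set of test-set predictions.

```python
# Sketch of the percentile bootstrap described above: resample the test-set
# predictions with replacement, recompute the metric on each sample, and take
# the 2.5th and 97.5th percentiles as the 95% CI.
import random

def bootstrap_ci(y_true, y_pred, metric, n_boot=1000, seed=0):
    rng = random.Random(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        stats.append(metric([y_true[i] for i in idx],
                            [y_pred[i] for i in idx]))
    stats.sort()
    # 2.5th and 97.5th percentiles of the bootstrap distribution
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot) - 1]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```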
To show the sensitivity-specificity tradeoff, we plotted the receiver operating characteristic curve. The calibration of a model refers to how well the predicted probabilities of appropriate admission match the fractions of appropriate admissions in subgroups of ED visits for bronchiolitis. To show model calibration, we drew a calibration plot; a perfect calibration curve would coincide with the diagonal line. In addition, we used the Hosmer-Lemeshow goodness-of-fit test [ ] to evaluate model calibration.
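A decile-style calibration check like the one underlying a calibration plot can be sketched as follows. This is an informal analogue written by us, not the Hosmer-Lemeshow statistic itself, and the function name is illustrative.

```python
# Sketch: bin visits by predicted probability of appropriate admission and
# compare each bin's mean predicted probability with its observed fraction of
# appropriate admissions. A well-calibrated model has mean_pred close to
# frac_obs in every bin (points near the diagonal of the calibration plot).
def calibration_bins(y_true, y_prob, n_bins=10):
    pairs = sorted(zip(y_prob, y_true))      # sort by predicted probability
    size = len(pairs) // n_bins
    bins = []
    for b in range(n_bins):
        chunk = pairs[b * size:(b + 1) * size] if b < n_bins - 1 else pairs[b * size:]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        frac_obs = sum(t for _, t in chunk) / len(chunk)
        bins.append((mean_pred, frac_obs))
    return bins
```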
We used Weka, a widely used open-source machine learning and data mining toolkit, to build machine learning classification models. Machine learning studies computer algorithms that learn from data, such as random forest, support vector machine, and neural network, and has won most data science competitions [ ]. Weka integrates many commonly used machine learning algorithms and feature-selection techniques. We considered all 39 machine learning classification algorithms in the standard Weka package and used our previously developed automatic machine learning model selection method [ ] and the 2013 training data to automatically select the algorithm, feature-selection technique, and hyperparameter values among all applicable ones. In a machine learning algorithm, hyperparameters are the parameters whose values are typically set manually by the machine learning software user before model training; an example is the number of decision trees used in a random forest classifier. Our automatic machine learning model selection method [ ] uses the Bayesian optimization (also known as response surface) methodology to automatically explore numerous combinations of algorithm, feature-selection technique, and hyperparameter values, and performs three-fold cross-validation to select the final combination maximizing the AUC. Compared to the other five performance metrics (accuracy, sensitivity, specificity, PPV, and NPV), AUC has the advantage of not relying on the cutoff threshold for deciding between predicted admission and predicted ED discharge.
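The study used Weka's implementations; purely as an illustration, the inner cross-validation loop that scores each candidate configuration by AUC can be sketched in plain Python. The `train` callables and the candidate list are stand-ins for Weka's algorithms, feature-selection techniques, and hyperparameter values; nothing here is the study's actual code.

```python
# Sketch of the selection loop described above: estimate each candidate
# configuration's AUC by k-fold cross-validation and keep the best one.
def auc(y_true, y_score):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def cv_auc(data, labels, train, k=3):
    """Mean AUC over k folds; assumes every fold contains both classes."""
    scores = []
    for fold in range(k):
        test_idx = set(range(fold, len(data), k))
        tr = [(x, y) for i, (x, y) in enumerate(zip(data, labels)) if i not in test_idx]
        te = [(x, y) for i, (x, y) in enumerate(zip(data, labels)) if i in test_idx]
        model = train([x for x, _ in tr], [y for _, y in tr])
        scores.append(auc([y for _, y in te], [model(x) for x, _ in te]))
    return sum(scores) / k

def select_best(candidates, data, labels):
    """candidates: list of (name, train_fn); returns the name maximizing CV AUC."""
    return max(candidates, key=lambda c: cv_auc(data, labels, c[1]))[0]
```

A full Bayesian-optimization search would propose new candidate configurations adaptively instead of scoring a fixed list, but the cross-validated AUC objective is the same.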
Demographic and Clinical Characteristics of the Patient Cohort
The tables below show the demographic and clinical characteristics of our patient cohort: children below the age of 2 years who visited the ED for bronchiolitis in 2013 and 2014, respectively. The characteristics are mostly similar between the two years. About 40.78% (1640/4022) and 38.26% (1368/3576) of ED visits for bronchiolitis ended in hospitalization in 2013 and 2014, respectively. About 35.80% (1440/4022) and 32.89% (1176/3576) of ED visits for bronchiolitis were deemed to be appropriate hospital admissions in 2013 and 2014, respectively.
Based on the χ2 two-sample test, for the 2013 data, the ED visits discharged to home and those ending in hospitalization showed the same distribution for gender (P=.49) and different distributions for race (P<.001), ethnicity (P=.01), and insurance category (P<.001). For the 2014 data, the ED visits discharged to home and those ending in hospitalization showed the same distribution for gender (P=.94) and race (P=.61) and different distributions for ethnicity (P<.001) and insurance category (P<.001). Based on the Cochran-Armitage trend test, for both the 2013 and 2014 data, the ED visits discharged to home and those ending in hospitalization showed different distributions for age (P<.001).
| Characteristic | Emergency department visits (N=4022), n (%) | Emergency department visits discharged to home (N=2382), n (%) | Emergency department visits ending in hospitalization (N=1640), n (%) |
|---|---|---|---|
| **Age** | | | |
| <2 months | 518 (12.88) | 211 (8.86) | 307 (18.72) |
| 2 to <12 months | 2424 (60.27) | 1498 (62.89) | 926 (56.46) |
| 12 to 24 months | 1080 (26.85) | 673 (28.25) | 407 (24.82) |
| **Gender** | | | |
| Male | 2369 (58.90) | 1414 (59.36) | 955 (58.23) |
| Female | 1653 (41.10) | 968 (40.64) | 685 (41.77) |
| **Race** | | | |
| American Indian or Alaska Native | 51 (1.27) | 26 (1.09) | 25 (1.52) |
| Asian | 49 (1.22) | 20 (0.84) | 29 (1.77) |
| Black or African American | 124 (3.08) | 78 (3.27) | 46 (2.80) |
| Native Hawaiian or other Pacific Islander | 321 (7.98) | 160 (6.72) | 161 (9.82) |
| White | 2940 (73.10) | 1784 (74.90) | 1156 (70.49) |
| Unknown or not reported | 537 (13.35) | 314 (13.18) | 223 (13.60) |
| **Ethnicity** | | | |
| Hispanic | 1321 (32.84) | 826 (34.68) | 495 (30.18) |
| Non-Hispanic | 2687 (66.81) | 1549 (65.03) | 1138 (69.39) |
| Unknown or not reported | 14 (0.35) | 7 (0.29) | 7 (0.43) |
| **Insurance category** | | | |
| Private | 2436 (60.57) | 1338 (56.17) | 1098 (66.95) |
| Public | 1422 (35.36) | 933 (39.17) | 489 (29.82) |
| Self-paid or charity | 164 (4.08) | 111 (4.66) | 53 (3.23) |
| **Comorbidity** | | | |
| Asthma | 207 (5.15) | 72 (3.02) | 135 (8.23) |
| Chronic complex condition | 296 (7.36) | 60 (2.52) | 236 (14.39) |
| Characteristic | Emergency department visits (N=3576), n (%) | Emergency department visits discharged to home (N=2208), n (%) | Emergency department visits ending in hospitalization (N=1368), n (%) |
|---|---|---|---|
| **Age** | | | |
| <2 months | 454 (12.70) | 186 (8.42) | 268 (19.59) |
| 2 to <12 months | 2079 (58.14) | 1379 (62.45) | 700 (51.17) |
| 12 to 24 months | 1043 (29.17) | 643 (29.12) | 400 (29.24) |
| **Gender** | | | |
| Male | 2059 (57.58) | 1273 (57.65) | 786 (57.46) |
| Female | 1517 (42.42) | 935 (42.35) | 582 (42.54) |
| **Race** | | | |
| American Indian or Alaska Native | 47 (1.31) | 31 (1.40) | 16 (1.17) |
| Asian | 68 (1.90) | 40 (1.81) | 28 (2.05) |
| Black or African American | 104 (2.91) | 70 (3.17) | 34 (2.49) |
| Native Hawaiian or other Pacific Islander | 284 (7.94) | 180 (8.15) | 104 (7.60) |
| White | 2795 (78.16) | 1708 (77.36) | 1087 (79.46) |
| Unknown or not reported | 278 (7.77) | 179 (8.11) | 99 (7.24) |
| **Ethnicity** | | | |
| Hispanic | 1071 (29.95) | 727 (32.93) | 344 (25.15) |
| Non-Hispanic | 2484 (69.46) | 1464 (66.30) | 1020 (74.56) |
| Unknown or not reported | 21 (0.59) | 17 (0.77) | 4 (0.29) |
| **Insurance category** | | | |
| Private | 2175 (60.82) | 1241 (56.20) | 934 (68.27) |
| Public | 1256 (35.12) | 860 (38.95) | 396 (28.95) |
| Self-paid or charity | 145 (4.05) | 107 (4.85) | 38 (2.78) |
| **Comorbidity** | | | |
| Asthma | 210 (5.87) | 67 (3.03) | 143 (10.45) |
| Chronic complex condition | 252 (7.05) | 43 (1.94) | 209 (15.28) |
Our automatic machine learning model selection method chose the random forest classification algorithm. Random forest can naturally handle missing feature values. Our model was built using this algorithm and 33 features, the highest-ranked of which are shown in the table below. The features are sorted in descending order of their importance values, which were automatically computed by the random forest algorithm in Weka based on average impurity decrease. In general, the features related to the patient’s history are ranked lower than those reflecting the patient’s status during the current ED visit, which makes intuitive medical sense. Two candidate patient features, ethnicity and the ED visit’s acuity level, were not used in our model because they did not increase the model’s accuracy.
We plotted the receiver operating characteristic curve of our model, computed its error matrix, and compared our model with the ED clinician’s disposition decision. Weka uses 50% as its default probability cutoff threshold for making binary classifications. Our model achieved an accuracy of 90.66% (3242/3576; 95% CI: 89.68-91.64), a sensitivity of 92.09% (1083/1176; 95% CI: 90.33-93.56), a specificity of 89.96% (2159/2400; 95% CI: 88.69-91.17), an AUC of 0.960 (95% CI: 0.954-0.966), a PPV of 81.80% (1083/1324; 95% CI: 79.67-83.80), and an NPV of 95.87% (2159/2252; 95% CI: 95.00-96.65). If we removed the insurance category feature, our model achieved a lower accuracy of 90.32% (3230/3576; 95% CI: 89.37-91.28), a lower sensitivity of 90.22% (1061/1176; 95% CI: 88.30-91.79), a specificity of 90.38% (2169/2400; 95% CI: 89.15-91.57), an AUC of 0.960 (95% CI: 0.955-0.966), a PPV of 82.12% (1061/1292; 95% CI: 79.94-84.15), and a lower NPV of 94.97% (2169/2284; 95% CI: 93.97-95.78). In comparison, the ED clinician’s disposition decision achieved an accuracy of 93.68% (3350/3576; 95% CI: 92.87-94.49), a sensitivity of 98.55% (1159/1176; 95% CI: 97.85-99.24), a specificity of 91.29% (2191/2400; 95% CI: 90.05-92.46), an AUC of 0.949 (95% CI: 0.942-0.956), a PPV of 84.72% (1159/1368; 95% CI: 82.83-86.69), and an NPV of 99.23% (2191/2208; 95% CI: 98.86-99.59).
| Feature | Importance based on average impurity decrease |
|---|---|
| Hour of EDa disposition | 0.42 |
| Age in days | 0.40 |
| Whether the patient has any other barrier to learning | 0.39 |
| Length of ED stay in minutes | 0.38 |
| Number of laboratory tests ordered during the ED visit | 0.37 |
| Diastolic blood pressure | 0.36 |
| Number of radiology studies ordered during the ED visit | 0.34 |
| Number of X-rays ordered during the ED visit | 0.34 |
| Systolic blood pressure | 0.34 |
| Number of consults during the ED visit | 0.28 |
| Whether the patient is up-to-date with his/her immunizations | 0.27 |
| Respiratory syncytial virus infection | 0.24 |
| Language barrier to learning | 0.20 |
| History of bronchopulmonary dysplasia | 0.16 |
| History of eczema | 0.15 |
aED: emergency department.
bSpO2: peripheral capillary oxygen saturation.
| | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUCa | PPVb (%) | NPVc (%) |
|---|---|---|---|---|---|---|
| The emergency department clinician’s disposition decision | 93.68 | 98.55 | 91.29 | 0.949 | 84.72 | 99.23 |
aAUC: area under the receiver operating characteristic curve.
bPPV: positive predictive value.
cNPV: negative predictive value.
We plotted the calibration of our model by decile of predicted probability of appropriate admission. The Hosmer-Lemeshow test showed imperfect agreement between the predicted probabilities and the actual outcomes (P<.001). When the predicted probability is <0.5, our model tends to overestimate the actual probability; when the predicted probability is >0.5, it tends to underestimate the actual probability.
We developed the first machine learning classification model to accurately predict appropriate hospital admission for ED patients with bronchiolitis. Our model is a significant improvement over the previous models for predicting hospital admission in ED patients with bronchiolitis [- , - ]. Our model has good accuracy: five of the six performance metrics achieved values ≥90%, and the remaining metric achieved a value >80%. Although our model attained a 3.02% lower accuracy than Intermountain Healthcare clinicians’ ED disposition decisions (90.66% vs 93.68%), we still view it as a step forward with great potential. Our model can output the prediction result for a new patient within 0.01 seconds. With further improvement to boost its accuracy and automatically explain its prediction results [ , ], our model could be integrated into an electronic health record system and serve as the basis of a decision-support tool to help make appropriate ED disposition decisions for bronchiolitis. A clinician could then use the model’s output as a point of reference when considering the disposition decision. This could provide value, improve outcomes, and reduce healthcare costs for bronchiolitis, regardless of whether our future final model achieves a higher accuracy than Intermountain Healthcare clinicians’ ED disposition decisions. Our confidence in this model stems from the following considerations:
- Intermountain Healthcare has several collaborative partnerships among its EDs and hospitals to facilitate coordination of pediatric specialty care and has completed multiple quality-improvement projects for bronchiolitis management. About 52.16% (3963/7598) of ED visits for bronchiolitis within Intermountain Healthcare occur at a tertiary pediatric hospital with an ED staffed by pediatric-specific clinicians. On average, the ED disposition decisions for bronchiolitis made at Intermountain Healthcare could be more accurate than those made at some other healthcare systems, especially those systems with general practice physicians or fewer pediatricians working in their EDs. Our model can be valuable for those systems, if it reaches a higher accuracy than the clinicians’ ED disposition decisions made at those systems. There is some evidence indicating this possibility. Most inappropriate ED disposition decisions are unnecessary admissions [ ]. In our data set, 14.36% of hospital admissions from the ED were deemed unnecessary [ ]. In the literature [ , ], this percentage is reported to be larger (20%-29%). To understand our model’s value for other systems, additional studies need to be conducted using data of those systems. This is an interesting area for future work.
- Several attributes in our data set have numerous missing values (we examined the degree of missingness of each affected feature and the distribution of the number of features with missing values per data instance), because those values were either recorded on paper or occasionally undocumented and therefore were not available in Intermountain Healthcare’s electronic health record system. In particular, wheezing and retractions values were missing for 73.56% (5589/7598) of ED visits for bronchiolitis, and systolic and diastolic blood pressure values were missing for 46.49% (3532/7598). This could lower the model’s accuracy. In the future, these attributes are expected to be recorded more completely in Intermountain Healthcare’s newly implemented Cerner-based electronic health record system; after retraining our model on more complete Intermountain Healthcare data from future years, we would expect its accuracy to increase. In addition, multiple other healthcare systems, such as Seattle Children’s Hospital, have been using the Cerner electronic health record system to record these attributes relatively completely for many years, and our model could possibly achieve a higher accuracy if trained with data from those systems. Both of these areas are interesting for future work.
- When making ED disposition decisions for bronchiolitis, clinicians often face some level of uncertainty and would prefer to obtain a second opinion by a reasonably accurate predictive model, particularly if some technique is used to automatically explain the model’s prediction results. For this purpose, we can use our prior method [ , ] to automatically provide rule-based explanations for any machine learning model’s classification results with no accuracy loss.
When reporting the performance metrics, we used the default cutoff threshold that Weka chose to decide between predicted admission and predicted ED discharge. Different health care systems could emphasize different performance metrics and assign different weights to FPs and FNs. As is the case with predictive modeling in general, a health care system can always adjust the cutoff threshold based on its preferences.
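Adjusting the cutoff threshold trades sensitivity against specificity; a minimal sketch, with illustrative probabilities and labels rather than the study's data:

```python
# Sketch: lowering the threshold flags more visits as appropriate admissions
# (higher sensitivity, lower specificity); raising it does the reverse.
def classify(y_prob, threshold=0.5):
    """Predict admission (1) when the predicted probability reaches the threshold."""
    return [int(p >= threshold) for p in y_prob]

def sens_spec(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

A system that weighs missed admissions (FNs) more heavily than unnecessary ones (FPs) would pick a lower threshold, and vice versa.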
Comparison With Prior Work
Previously, researchers constructed several models to predict hospital admission in ED patients with bronchiolitis [- , - ]. The table below compares these previous models with ours. Compared to our model, which predicts the appropriate ED disposition decision, the previous models are less accurate and incorrectly assume that actual ED disposition decisions are always appropriate. Our model uses data from more patients, more predictive features, and a more sophisticated classification algorithm than the previous models. As is the case with predictive modeling in general, all of these factors help improve our model’s accuracy.
Some aspects of our findings are similar to those of previous studies. In our data set, 39.59% (3008/7598) of ED visits for bronchiolitis ended in hospitalization. This percentage is within the 32%-40% range of hospital admission rates for ED visits for bronchiolitis reported in the literature [- ].
This study has several limitations. First, it used data from a single health care system, Intermountain Healthcare, and did not test the generalizability of the results. In the future, studies should validate our predictive models using data from other healthcare systems. We are reasonably confident in our results, as our study was conducted in a realistic setting for finding factors generalizable to other US healthcare systems. “Intermountain Healthcare is a large healthcare system with EDs at 22 heterogeneous hospitals spread over a large geographic area, ranging from community metropolitan and rural hospitals attended by general practitioners and family doctors with constrained pediatric resources to tertiary care children’s and general hospitals in urban areas attended by sub-specialists. Each hospital has a different patient population, geographic location, staff composition, scope of services, and cultural background”.
| Model | EDa visits (n) | Method for building the model | Features included in the final model | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUCb | PPVc (%) | NPVd (%) |
|---|---|---|---|---|---|---|---|---|---|
| Our model | 7599 | Random forest | As listed in the Results section | 90.66 | 92.09 | 89.96 | 0.960 | 81.80 | 95.87 |
| Walsh et al | 119 | Neural network ensemble | Age, respiratory rate after initial treatment, heart rate before initial treatment, oxygen saturation before and after initial treatment, dehydration, maternal smoking, increased work of breathing, poor feeding, wheezes only without associated crackles, entry temperature, and presence of both crackles and wheezes | 81 | 78 | 82 | —e | 68 | 89 |
| Marlais et al | 449 | Scoring system | Age, respiratory rate, heart rate, oxygen saturation, and duration of symptoms | — | 74 | 77 | 0.81 | 67 | 83 |
| Destino et al | 195 | Single variable | The Children’s Hospital of Wisconsin respiratory score | — | 65 | 65 | 0.68 | — | — |
| Laham et al | 101 | Logistic regression | Age, need for intravenous fluids, hypoxia, and nasal wash lactate dehydrogenase concentration | | | | | | |
| Corneli et al | 598 | Decision tree | Oxygen saturation, the Respiratory Distress Assessment Instrument score computed from wheezing and retractions, and respiratory rate | — | 56 | 74 | — | — | — |
| Walsh et al | 300 | Logistic regression | Age, dehydration, increased work of breathing, and heart rate | — | 91 | 83 | — | 62 | — |
aED: emergency department
bAUC: area under the receiver operating characteristic curve
cPPV: positive predictive value
dNPV: negative predictive value
eThe performance metric is unreported in the original paper describing the model.
Second, despite being an integrated healthcare system, Intermountain Healthcare does not have complete clinical and administrative data on all of its patients. Our data set lacked information on patients’ health care use that occurred at non-Intermountain Healthcare facilities. Inclusion of data from those facilities may lead to different results, but we do not expect it to significantly change our findings: Intermountain Healthcare delivers ~85% of pediatric care in Utah. Hence, our data set is reasonably complete with regard to capturing health care use among bronchiolitis patients in Utah.
Third, our operational definition of appropriate hospital admission is imperfect and excludes factors such as availability of patient transportation, preference of the patient’s parents, and hour of ED disposition. Many of these factors are often undocumented in patient records. For some hospital admissions from the ED that were deemed unnecessary based on our operational definition, the original admission decisions could have been made because of these factors.
Finally, besides the features used in this study, other features could help improve the model's accuracy. Identifying new predictive features is an interesting direction for future work.
Our model can predict appropriate hospital admission for ED patients with bronchiolitis with good accuracy. In particular, our model achieved an AUC of 0.960; an AUC ≥0.9 is considered to indicate outstanding discrimination. With further improvement, our model could be integrated into an electronic health record system to provide personalized, real-time decision support for ED disposition decisions for bronchiolitis, which could help standardize care and improve outcomes.
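For readers less familiar with these performance metrics, the following minimal sketch shows how the reported figures are defined. The confusion-matrix counts are derived from the fractions in the Results (1083/1176 sensitivity, 2159/2400 specificity, 3242/3576 accuracy); the AUC helper and its inputs are purely illustrative and are not the study's data or code.

```python
def confusion_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case
    receives a higher predicted score than a randomly chosen negative case
    (ties count as half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Counts implied by the Results section:
# TP = 1083, FN = 1176 - 1083 = 93, TN = 2159, FP = 2400 - 2159 = 241
acc, sens, spec = confusion_metrics(tp=1083, fp=241, tn=2159, fn=93)
print(round(acc, 4), round(sens, 4), round(spec, 4))  # 0.9066 0.9209 0.8996
```

This reproduces the reported accuracy (3242/3576), sensitivity (1083/1176), and specificity (2159/2400) from the underlying confusion-matrix counts.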
We thank Farrant Sakaguchi, Michael Mundorff, Karen Valentine, Chris Benitez, JoAnn Banks, Bart Dodds, Xiaoming Sheng, and Jim Bradshaw for their helpful discussions and help in retrieving the Intermountain Healthcare data set. GL, BLS, MDJ, FLN, and SH were partially supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under Award Number R21HL128875. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
GL was mainly responsible for the paper. He conceptualized and designed the study, performed the literature review and data analysis, and wrote the paper. BLS, MDJ, and FLN provided feedback on various medical issues, contributed to conceptualizing the presentation, and revised the paper. SH took part in retrieving the Intermountain Healthcare data set and in interpreting peculiarities detected in it.
Conflicts of Interest
- Zorc JJ, Hall CB. Bronchiolitis: recent evidence on diagnosis and management. Pediatrics 2010 Feb;125(2):342-349. [CrossRef] [Medline]
- Hasegawa K, Tsugawa Y, Brown DFM, Mansbach JM, Camargo CA. Trends in bronchiolitis hospitalizations in the United States, 2000-2009. Pediatrics 2013 Jul;132(1):28-36 [FREE Full text] [CrossRef] [Medline]
- Mansbach JM, Emond JA, Camargo CA. Bronchiolitis in US emergency departments 1992 to 2000: epidemiology and practice variation. Pediatr Emerg Care 2005 Apr;21(4):242-247. [Medline]
- Parker MJ, Allen U, Stephens D, Lalani A, Schuh S. Predictors of major intervention in infants with bronchiolitis. Pediatr Pulmonol 2009 Apr;44(4):358-363. [CrossRef] [Medline]
- Shay DK, Holman RC, Newman RD, Liu LL, Stout JW, Anderson LJ. Bronchiolitis-associated hospitalizations among US children, 1980-1996. JAMA 1999 Oct 20;282(15):1440-1446. [Medline]
- Hasegawa K, Tsugawa Y, Brown DFM, Mansbach JM, Camargo CA. Temporal trends in emergency department visits for bronchiolitis in the United States, 2006 to 2010. Pediatr Infect Dis J 2014 Jan;33(1):11-18 [FREE Full text] [CrossRef] [Medline]
- Marlais M, Evans J, Abrahamson E. Clinical predictors of admission in infants with acute bronchiolitis. Arch Dis Child 2011 Jul;96(7):648-652. [CrossRef] [Medline]
- Laham FR, Trott AA, Bennett BL, Kozinetz CA, Jewell AM, Garofalo RP, et al. LDH concentration in nasal-wash fluid as a biochemical predictor of bronchiolitis severity. Pediatrics 2010 Feb;125(2):e225-e233 [FREE Full text] [CrossRef] [Medline]
- Corneli HM, Zorc JJ, Holubkov R, Bregstein JS, Brown KM, Mahajan P, Bronchiolitis Study Group for the Pediatric Emergency Care Applied Research Network. Bronchiolitis: clinical characteristics associated with hospitalization and length of stay. Pediatr Emerg Care 2012 Feb;28(2):99-103. [CrossRef] [Medline]
- Scottish Intercollegiate Guidelines Network (SIGN). Bronchiolitis in children: a national clinical guideline URL: https://www.guidelinecentral.com/summaries/bronchiolitis-in-children-a-national-clinical-guideline/ [accessed 2019-01-17] [WebCite Cache]
- Agency for Healthcare Research and Quality. Cincinnati Children's Hospital Medical Center: evidence-based care guideline for management of first time episode bronchiolitis in infants less than 1 year of age URL: http://www.guideline.gov/content.aspx?id=34411 [accessed 2019-01-17] [WebCite Cache]
- Brand PL, Vaessen-Verberne AA. Differences in management of bronchiolitis between hospitals in The Netherlands. Dutch Paediatric Respiratory Society. Eur J Pediatr 2000 May;159(5):343-347. [Medline]
- Ducharme FM. Management of acute bronchiolitis. BMJ 2011 Apr 06;342:d1658. [CrossRef] [Medline]
- Behrendt CE, Decker MD, Burch DJ, Watson PH. International variation in the management of infants hospitalized with respiratory syncytial virus. International RSV Study Group. Eur J Pediatr 1998 Mar;157(3):215-220. [Medline]
- Chamberlain JM, Patel KM, Pollack MM. Association of emergency department care factors with admission and discharge decisions for pediatric patients. J Pediatr 2006 Nov;149(5):644-649. [CrossRef] [Medline]
- Johnson DW, Adair C, Brant R, Holmwood J, Mitchell I. Differences in admission rates of children with bronchiolitis by pediatric and general emergency departments. Pediatrics 2002 Oct;110(4):e49. [Medline]
- Mallory MD, Shay DK, Garrett J, Bordley WC. Bronchiolitis management preferences and the influence of pulse oximetry and respiratory rate on the decision to admit. Pediatrics 2003 Jan;111(1):e45-e51. [Medline]
- Plint AC, Johnson DW, Wiebe N, Bulloch B, Pusic M, Joubert G, et al. Practice variation among pediatric emergency departments in the treatment of bronchiolitis. Acad Emerg Med 2004 Apr;11(4):353-360 [FREE Full text] [Medline]
- Vogel AM, Lennon DR, Harding JE, Pinnock RE, Graham DA, Grimwood K, et al. Variations in bronchiolitis management between five New Zealand hospitals: can we do better? J Paediatr Child Health 2003;39(1):40-45. [Medline]
- Willson DF, Horn SD, Hendley JO, Smout R, Gassaway J. Effect of practice variation on resource utilization in infants hospitalized for viral lower respiratory illness. Pediatrics 2001 Oct;108(4):851-855. [Medline]
- Willson DF, Jiao JH, Hendley JO, Donowitz L. Invasive monitoring in infants with respiratory syncytial virus infection. J Pediatr 1996 Mar;128(3):357-362. [Medline]
- Wang EE, Law BJ, Boucher FD, Stephens D, Robinson JL, Dobson S, et al. Pediatric Investigators Collaborative Network on Infections in Canada (PICNIC) study of admission and management variation in patients hospitalized with respiratory syncytial viral lower respiratory tract infection. J Pediatr 1996 Sep;129(3):390-395. [Medline]
- Christakis DA, Cowan CA, Garrison MM, Molteni R, Marcuse E, Zerr DM. Variation in inpatient diagnostic testing and management of bronchiolitis. Pediatrics 2005 Apr;115(4):878-884. [CrossRef] [Medline]
- Mansbach JM, Clark S, Christopher NC, LoVecchio F, Kunz S, Acholonu U, et al. Prospective multicenter study of bronchiolitis: predicting safe discharges from the emergency department. Pediatrics 2008 Apr;121(4):680-688. [CrossRef] [Medline]
- McBride SC, Chiang VW, Goldmann DA, Landrigan CP. Preventable adverse events in infants hospitalized with bronchiolitis. Pediatrics 2005 Sep;116(3):603-608. [CrossRef] [Medline]
- Luo G, Johnson MD, Nkoy FL, He S, Stone BL. Appropriateness of Hospital Admission for Emergency Department Patients with Bronchiolitis: Secondary Analysis. JMIR Med Inform 2018 Nov 05;6(4):e10498 [FREE Full text] [CrossRef] [Medline]
- Walsh P, Cunningham P, Rothenberg SJ, O'Doherty S, Hoey H, Healy R. An artificial neural network ensemble to predict disposition and length of stay in children presenting with bronchiolitis. Eur J Emerg Med 2004 Oct;11(5):259-264. [Medline]
- Destino L, Weisgerber MC, Soung P, Bakalarski D, Yan K, Rehborg R, et al. Validity of respiratory scores in bronchiolitis. Hosp Pediatr 2012 Oct;2(4):202-209 [FREE Full text] [Medline]
- Walsh P, Rothenberg SJ, O'Doherty S, Hoey H, Healy R. A validated clinical model to predict the need for admission and length of stay in children with acute bronchiolitis. Eur J Emerg Med 2004 Oct;11(5):265-272. [Medline]
- Luo G, Nkoy FL, Gesteland PH, Glasgow TS, Stone BL. A systematic review of predictive modeling for bronchiolitis. Int J Med Inform 2014 Oct;83(10):691-714. [CrossRef] [Medline]
- Luo G, Stone BL, Johnson MD, Nkoy FL. Predicting Appropriate Admission of Bronchiolitis Patients in the Emergency Department: Rationale and Methods. JMIR Res Protoc 2016 Mar 07;5(1):e41 [FREE Full text] [CrossRef] [Medline]
- Byington CL, Reynolds CC, Korgenski K, Sheng X, Valentine KJ, Nelson RE, et al. Costs and infant outcomes after implementation of a care process model for febrile infants. Pediatrics 2012 Jul;130(1):e16-e24 [FREE Full text] [CrossRef] [Medline]
- Flaherman VJ, Ragins AI, Li SX, Kipnis P, Masaquel A, Escobar GJ. Frequency, duration and predictors of bronchiolitis episodes of care among infants ≥32 weeks gestation in a large integrated healthcare system: a retrospective cohort study. BMC Health Serv Res 2012 Jun 08;12:144 [FREE Full text] [CrossRef] [Medline]
- Mittal V, Darnell C, Walsh B, Mehta A, Badawy M, Morse R, et al. Inpatient bronchiolitis guideline implementation and resource utilization. Pediatrics 2014 Mar;133(3):e730-e737 [FREE Full text] [CrossRef] [Medline]
- Sandweiss DR, Mundorff MB, Hill T, Wolfe D, Greene T, Andrews S, et al. Decreasing hospital length of stay for bronchiolitis by using an observation unit and home oxygen therapy. JAMA Pediatr 2013 May;167(5):422-428. [CrossRef] [Medline]
- Steyerberg EW. Clinical Prediction Models: A Practical Approach To Development, Validation, And Updating (statistics For Biology And Health). New York, NY: Springer-Verlag; 2009.
- Witten IH, Frank E, Hall MA, Pal CJ. Data Mining: Practical Machine Learning Tools and Techniques. Burlington, MA: Morgan Kaufmann; 2016.
- Kaggle. URL: https://www.kaggle.com/ [accessed 2019-01-17] [WebCite Cache]
- Zeng X, Luo G. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection. Health Inf Sci Syst 2017 Dec;5(1):2 [FREE Full text] [CrossRef] [Medline]
- Feudtner C, Feinstein JA, Zhong W, Hall M, Dai D. Pediatric complex chronic conditions classification system version 2: updated for ICD-10 and complex medical technology dependence and transplantation. BMC Pediatr 2014 Aug 08;14:199 [FREE Full text] [CrossRef] [Medline]
- Agresti A. Categorical Data Analysis. Hoboken, NJ: Wiley; 2012.
- Luo G. Automatically explaining machine learning prediction results: a demonstration on type 2 diabetes risk prediction. Health Inf Sci Syst 2016;4:2 [FREE Full text] [CrossRef] [Medline]
- Luo G. A roadmap for semi-automatically extracting predictive and clinically meaningful temporal features from medical data for predictive modeling. Global Transitions 2019;1(1) (forthcoming).
- Shaw KN, Bell LM, Sherman NH. Outpatient assessment of infants with bronchiolitis. Am J Dis Child 1991 Feb;145(2):151-155. [Medline]
- Shafik MH, Seoudi TMM, Raway TS, Al Harbash NZ, Ahmad MMA, Al Mutairi HF. Appropriateness of pediatric hospitalization in a general hospital in Kuwait. Med Princ Pract 2012;21(6):516-521 [FREE Full text] [CrossRef] [Medline]
- Hosmer Jr DW, Lemeshow S, Sturdivant RX. Applied Logistic Regression. Hoboken, NJ: Wiley; 2013.
AUC: area under the receiver operating characteristic curve
ED: emergency department
FN: false negative
FP: false positive
ICD-9-CM: International Classification of Diseases, Ninth Revision, Clinical Modification
NPV: negative predictive value
PPV: positive predictive value
SpO2: peripheral capillary oxygen saturation
TN: true negative
TP: true positive
Edited by G Eysenbach; submitted 23.10.18; peer-reviewed by K Chen, P Giabbanelli, J op den Buijs, R Rivas; comments to author 21.11.18; revised version received 27.11.18; accepted 12.12.18; published 22.01.19
©Gang Luo, Bryan L Stone, Flory L Nkoy, Shan He, Michael D Johnson. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 22.01.2019.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.