Abstract
Background: The shortage of pediatric medical resources and overcrowding in children’s hospitals are severe issues in China. Accurately predicting waiting times can help optimize hospital operational efficiency.
Objective: This study aims to develop machine learning models to predict waiting times for various laboratory and radiology examinations at a pediatric hospital.
Methods: Time stamp data from laboratory and radiology examinations were retrospectively collected from the pediatric hospital information system between November 1, 2024, and March 13, 2025. In total, 2 queue-related and 4 time-based features were extracted using queue theory. Linear regression and 8 machine learning models were trained to predict waiting times for each medical task. Hyperparameters were fine-tuned using randomized search and 10-fold cross-validation, and the bootstrap method was used for model evaluation. Mean absolute error, mean square error, root mean square error, and the coefficient of determination (R²) were used as evaluation metrics. Shapley additive explanations values were used to assess feature importance.
Results: A total of 230,864 time-stamped records were included after data preprocessing. The median waiting time was 4.817 (IQR 1.867-12.050) minutes across all medical tasks. Waiting times for radiology examinations were generally longer than those for laboratory tests. Tree-based algorithms, such as random forest and classification and regression trees, performed best in predicting laboratory test waiting times, with R² values ranging from mean 0.880 (SD 0.003) to mean 0.934 (SD 0.003). However, the machine learning models did not perform well in predicting radiology examination waiting times, with R² ranging from mean 0.114 (SD 0.005) to mean 0.719 (SD 0.004). Feature importance analysis revealed that queue-related predictors, especially the number of queuing patients, were the most important in predicting waiting times.
Conclusions: Task-specific prediction models are more appropriate for accurately predicting waiting times across various medical tasks. Guided by queue theory principles, we developed machine learning models for the waiting time prediction of each medical task and highlighted the importance of queue-related predictors.
doi:10.2196/77297
Introduction
Background
Pediatric health care resources are considerably more scarce compared to other medical specialties. From 2015 to 2020, the growth rate of categorical pediatric residency positions in the United States was only 7%, which was far slower than the growth rate of other specialties during that time [
]. In China, the shortage of pediatric medical resources is much more severe. According to national statistical data [ ], the number of pediatricians per child in China is approximately half that in the United States. Furthermore, China has a very unequal distribution of pediatric experts, with a heavy concentration in developed metropolitan areas [ ]. This leads to patient clustering and increased overcrowding in tertiary pediatric hospitals. A typical pediatric medical visit consists of multiple steps, including consultation, radiology examination or laboratory tests, and medication dispensing. Excessive waiting times at each of these stages not only reduce patient satisfaction [ ] but may also contribute to adverse clinical outcomes [ , ]. Therefore, accurately predicting waiting times and improving operational effectiveness in pediatric hospitals is of critical importance.

Queue management systems in hospitals can enhance operational efficiency through features such as real-time queue updates, automated notifications, and online appointment scheduling [
]. Prior research has examined queue-related issues using techniques such as time series analysis (TSA) [ ], discrete event simulation [ ], and queue theory (QT) [ ]. These methods simulate queue dynamics under various parameters to assess operational performance and identify optimal resource allocation strategies. However, several obstacles must be overcome before they can be used in actual health care settings. The application of these approaches is limited in practice because health care queues often deviate from the fundamental assumptions that underlie them, such as known initial probability distributions, exponentially distributed waiting times, or stationary queue states [ ]. As a result, these models frequently fail to capture the variability and fluctuations inherent in actual queue dynamics, reducing their accuracy in predicting waiting times.

Machine learning models are developed using data-driven approaches and usually do not require restrictive assumptions. To overcome the drawbacks of conventional techniques, machine learning offers a viable alternative for forecasting waiting times in hospital queues. In previous studies, Chen et al [
] applied an improved random forest algorithm to predict patient waiting times for each treatment task, such as blood tests, computed tomography (CT) scans, and pharmacy dispensing, and the hospital queuing-recommendation system based on the prediction algorithm reduced waiting times by guiding patient flow. Similarly, Lin et al [ ] evaluated the performance of several machine learning algorithms for waiting time prediction and selected the best-performing model for prediction in a pediatric ophthalmology outpatient clinic. Chocron et al [ ] compared machine learning algorithms with classical QT models for waiting time prediction and found that machine learning models produced better predictions than traditional methods in complex real-world scenarios. Therefore, machine learning techniques provide a more reliable way to capture variations in actual queue dynamics and improve the accuracy of waiting time prediction. However, most studies have concentrated on emergency departments (EDs) [ - ] or radiology departments [ ], and multitask queues in pediatric hospitals have received less attention. Nevertheless, the operational effectiveness of different medical task windows varies, and research focusing on a specific queue may not adequately capture the dynamics of other hospital medical tasks. A comprehensive investigation of waiting time prediction for different medical tasks can offer deeper insights into the use of medical resources and can serve as an alternative method to support medical resource optimization.

Objectives
This study investigates the dynamic characteristics of multiple medical task queues using real-world data from a large pediatric hospital in northern China. Incorporating principles from QT for feature selection, we extracted key features from actual hospital queue data and then applied machine learning techniques to predict waiting times after patient check-in across various medical tasks. The predictive performance of different machine learning models was systematically evaluated. Additionally, feature importance analysis was conducted to assess the contribution of individual predictors and to provide insights for optimizing hospital operational efficiency. We also plan to deploy the best-performing model for waiting time prediction to improve the patient experience and help reduce hospital overcrowding in the future.
Methods
Problem Definition
During a hospital visit, patients usually have one or more medical tasks, which are frequently spread out over several hospital departments. For instance, a pediatric patient with an acute respiratory tract infection might undergo a series of steps, starting with a consultation in the respiratory department. After blood sampling and throat swab collection, the patient has an X-ray examination and then returns to the respiratory department when all diagnostic reports are ready and the physician has ordered medicine. Finally, the patient goes to the pharmacy to pick up the necessary medications.
As illustrated, there may be unforeseen waiting times at each step of the medical tasks, which can significantly affect the patient’s overall experience [
]. Furthermore, hospital operational efficiency may be lowered by disorderly queue arrangements. Targeted optimization of medical resource allocation based on the key factors influencing waiting time represents a practical and effective strategy. Accordingly, in actual hospital settings, predictive algorithms that estimate waiting times with high accuracy can provide queue notifications, helping to reduce patient anxiety.

Data Collection
This study retrospectively collected records of patients who visited the Capital Center for Children’s Health, Capital Medical University, between November 1, 2024, and March 13, 2025, for various laboratory tests and radiology examinations; the total raw dataset comprised 326,701 entries. The time stamps of these records were retrieved from the health information system for patients who attended the hospital during this period. The key information is summarized below.
Records name | Description |
Patient ID | A unique identifier assigned to each patient during their visit to the hospital |
Medical task name | The specific examination conducted during a patient’s visit, including throat swab, blood sampling, laboratory test for patients with fever, CT, MRI, X-ray, laryngoscopy, ultrasound, and echocardiography |
Visit category | The department from which the patient is referred, including outpatient and ED |
Service location | The specific location where the patient receives medical services |
Check-in time | The time when the patient completes check-in upon arriving for a specific medical task |
Sampling or examination start time | The time when the patient begins a specific medical examination: the start time of laboratory tests is recorded as the sampling time, while the start time of the radiology examination is recorded as the examination start time |
CT: computed tomography.
MRI: magnetic resonance imaging.
ED: emergency department.
Following the retrieval of all records, the data were arranged according to the practical medical tasks performed by the institution. In this hospital, the tasks for laboratory tests include throat swab at the first floor (throat swab–first floor) and the second floor (throat swab–second floor), blood sampling, and laboratory test for patients with fever (laboratory test–fever). The radiology examination includes CT, magnetic resonance imaging (MRI), X-ray, laryngoscopy, ultrasound, and echocardiography. Blood sampling, ultrasound, and echocardiography are further divided into outpatient and ED windows at this facility.
Ethical Considerations
All data used in our study were anonymized and deidentified and contained no identifiable personal information. Therefore, our research was exempt from the requirement of written informed consent and was approved by the Ethics Committee of the Capital Center for Children's Health, Capital Medical University (SHERLLM2024037).
Data Preprocessing
We define the waiting time as the duration from the patient’s check-in to the start of the sampling or examination, as follows:
Waiting time = sampling or examination start time − check-in time (1)
First, we removed records missing either the sampling or examination start time or the check-in time. Because the health information system contained numerous duplicate records, we then performed a deduplication procedure on the extracted data: if two records shared the same patient ID and sampling or examination start time, one was removed as a redundant duplicate. To avoid negative waiting times, we next removed records in which the check-in time was later than the sampling or examination start time. In addition, very long waiting time outliers may occur because some patients do not line up for the medical service window immediately after check-in, possibly leaving the hospital for a period or returning on a subsequent day to continue their tests or examinations. To address this issue, we applied an IQR method to denoise the data using the following criterion:
Waiting time > Q3 + 1.5 × IQR, where IQR = Q3 − Q1 (2)
Records with waiting times exceeding the upper fence (Q3 + 1.5 × IQR) were classified as noise and discarded. Model development was then conducted using the remaining data.
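The preprocessing steps above can be sketched in pandas as follows. This is a minimal illustration, and the column names (patient_id, checkin_time, start_time) are placeholders rather than the hospital information system's actual field names:

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the preprocessing pipeline: drop incomplete records,
    deduplicate, remove negative waiting times, and IQR-denoise.
    Column names are illustrative, not the real system's fields."""
    # 1. Drop records missing either time stamp.
    df = df.dropna(subset=["checkin_time", "start_time"])
    # 2. Deduplicate on patient ID + sampling/examination start time.
    df = df.drop_duplicates(subset=["patient_id", "start_time"])
    # 3. Waiting time in minutes; drop records with check-in after start.
    df = df.assign(
        waiting_time=(df["start_time"] - df["checkin_time"]).dt.total_seconds() / 60
    )
    df = df[df["waiting_time"] >= 0]
    # 4. IQR denoising: discard waiting times above Q3 + 1.5 * IQR.
    q1, q3 = df["waiting_time"].quantile([0.25, 0.75])
    upper = q3 + 1.5 * (q3 - q1)
    return df[df["waiting_time"] <= upper].reset_index(drop=True)
```

In practice, the IQR fence would be computed per medical task rather than on the pooled data, since waiting time distributions differ markedly across tasks.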
Feature Construction
Queue-length (QL) predictors and delay-history predictors are the 2 types of predictors used in previous QT studies to forecast waiting times [
]. While the latter makes predictions based on past waiting times, the former projects waiting time from the queue’s current state. As we have access to precise time stamps for every patient’s test or examination, we make predictions using QL predictors, which comprise the following 2 categories of features:
- Time-based features: the month, day, hour, and day of the week when a patient undergoes a specific medical task. All 4 indicators can be derived from the abovementioned records and were considered as potential predictors because patient volumes vary significantly across different periods.
- Queue-related features: the patient arrival rate per hour, calculated by counting the number of patients registering for a particular task each hour, and the number of queuing patients, that is, the number of patients who have checked in but have not yet started the sampling or examination.
In summary, we selected a concise set of 6 predictors: month, day, hour, day of the week, arrival rate, and the number of queuing patients.
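Under these definitions, all 6 predictors can be derived directly from the 2 time stamps. A sketch, again assuming the illustrative column names checkin_time and start_time:

```python
import pandas as pd

def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the 6 predictors from the check-in time stamps.
    Column names are illustrative, not the hospital system's actual fields."""
    out = df.copy()
    t = out["checkin_time"]
    # Time-based features: month, day, hour, and day of the week at check-in.
    out["month"] = t.dt.month
    out["day"] = t.dt.day
    out["hour"] = t.dt.hour
    out["day_of_week"] = t.dt.dayofweek
    # Arrival rate: number of patients checking in within the same clock hour.
    hour_bucket = t.dt.strftime("%Y-%m-%d %H")
    out["arrival_rate"] = hour_bucket.map(hour_bucket.value_counts())
    # Number of queuing patients: patients already checked in but not yet
    # started at the moment this patient checks in (O(n^2) sketch).
    out["queue_length"] = [
        int(((df["checkin_time"] < ci) & (df["start_time"] > ci)).sum())
        for ci in t
    ]
    return out
```

For a production system, the queue-length computation would be replaced by a sorted sweep over check-in and start events, since the quadratic scan above does not scale to hundreds of thousands of records.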
Model Development and Feature Importance Analysis
In our experiment, we use linear regression (LR) as a baseline model and 8 machine learning models for waiting time prediction, including k-nearest neighbor (KNN), support vector regression (SVR), classification and regression tree (CART), elastic net, random forest (RF), light gradient boosting machine (LightGBM), extreme gradient boosting (XGBoost), and multilayer perceptron (MLP). As the waiting time is a continuous variable, we evaluate the models using the following performance metrics: mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE), and the coefficient of determination (R²). Equations 3-6 show the formulas of MAE, MSE, RMSE, and R², where yᵢ is the actual waiting time, ŷᵢ is the predicted value, and ȳ is the mean waiting time.
MAE = (1/n) Σᵢ |yᵢ − ŷᵢ| (3)
MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)² (4)
RMSE = √[(1/n) Σᵢ (yᵢ − ŷᵢ)²] (5)
R² = 1 − Σᵢ (yᵢ − ŷᵢ)² / Σᵢ (yᵢ − ȳ)² (6)
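The 4 metrics in equations 3-6 can be computed directly; a minimal NumPy sketch:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, and R² as defined in equations 3-6."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = float(np.mean(np.abs(err)))               # equation 3
    mse = float(np.mean(err ** 2))                  # equation 4
    rmse = float(np.sqrt(mse))                      # equation 5
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                      # equation 6
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}
```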
A randomized search and 10-fold cross-validation were used to fine-tune each model’s hyperparameters. The performance of the models was evaluated using the abovementioned metrics, and the model with the best performance across these metrics was selected for prediction. To further evaluate the stability and reliability of the chosen model, a bootstrap approach was applied. MAE, MSE, RMSE, and R2 were calculated for each bootstrap sample by comparing the model’s predictions with the true values. The 95% CIs for each metric were derived to quantify the uncertainty in the model’s predictive performance. Feature contributions to the waiting time prediction were quantified using Shapley additive explanations (SHAP) values [
]. For tree-based models, SHAP values were computed directly; for others (KNN and SVR), a background dataset was used to simulate feature absence, allowing for marginal contribution estimation via the KernelExplainer function. The entire research process is shown in . All analyses were performed using Python (version 3.12; Python Software Foundation).
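The bootstrap evaluation described above can be sketched as follows: prediction-truth pairs are resampled with replacement, the metric is recomputed on each resample, and the 2.5th and 97.5th percentiles give a 95% CI. The resample count and seed are illustrative defaults, not necessarily the values used in the study:

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=1000, seed=0):
    """95% percentile CI for an evaluation metric via the bootstrap.

    `metric` is any callable (y_true, y_pred) -> float, e.g. MAE or R².
    """
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_true)
    # Resample (truth, prediction) pairs with replacement and recompute.
    stats = np.array([
        metric(y_true[idx], y_pred[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))
    ])
    return float(np.percentile(stats, 2.5)), float(np.percentile(stats, 97.5))
```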
Results
Basic Characteristics of the Medical Queue
From November 1, 2024, to March 13, 2025, a total of 230,864 records were included after preprocessing the raw data. The required tasks were classified into 2 main categories: laboratory tests, which included 5 types of tasks, and radiology examinations, which included 8 types of tasks. The most frequently performed laboratory test was blood sampling for outpatients (76,587/230,864, 33.17%), while the most common radiology examination was X-ray (31,125/230,864, 13.48%;
).
Tests and examinations | Tasks, n (%) | Time, range (min) | Time, median (IQR) |
Laboratory test | ||||
Throat swab—first floor | 26,314 (11.05) | 0.017‐28.4 | 1.617 (0.483‐5.799) | |
Throat swab—second floor | 13,296 (5.75) | 0.017‐26.967 | 1.467 (0.417‐6.971) | |
Blood sampling—ED | 23,558 (10.20) | 0.150‐5.533 | 1.650 (1.049‐2.483) | |
Blood sampling—outpatient | 76,587 (33.17) | 0.199‐25.5 | 3.225 (2.583‐10.700) | |
Laboratory test—fever | 9533 (4.13) | 0.017‐8.667 | 1.783 (1.017‐3.349) | |
Radiology examination | ||||
Ultrasound—ED | 22,601 (9.79) | 0.199‐48.050 | 10.883 (5.283‐20.717) | |
Ultrasound—outpatient | 8031 (3.48) | 0.517‐85.299 | 23.400 (12.750‐37.067) | |
Echocardiography—ED | 394 (0.17) | 0.033‐9.683 | 3.958 (3.054‐5.212) | |
Echocardiography—outpatient | 5067 (2.19) | 0.550‐7.983 | 3.200 (2.600‐4.167) | |
Laryngoscopy | 4856 (2.10) | 0.467‐43.267 | 11.975 (5.217‐19.771) |
X-ray | 31,125 (13.48) | 0.065‐6.566 | 5.106 (10.872‐21.852) |
MRI | 3324 (1.44) | 0.008‐80.157 | 8.389 (17.825‐33.262) |
CT | 6178 (2.68) | 0.002‐19.412 | 2.406 (3.933‐6.947) |
ED: emergency department.
MRI: magnetic resonance imaging.
CT: computed tomography.
In the laboratory test category, we observed that blood sampling–ED (median 1.650, IQR 1.049-2.483 min) had the narrowest waiting time range, while the task for blood sampling–outpatient (median 3.225, IQR 2.583-10.700 min) had a relatively extensive waiting time range (
). For radiology examinations, the range of waiting time for the echocardiography task was narrow (median 3.958, IQR 3.054-5.212 min for ED; median 3.200, IQR 2.600-4.167 min for outpatient). In contrast, outpatients had to wait much longer for an ultrasound (median 23.400, IQR 12.750-37.067 min; ). Compared with laboratory tests, patients experienced longer waiting times in most radiology examinations.
The distribution of waiting time across all medical tasks followed an exponential pattern (Figure S1 in
). We observed average arrival rate peaks for medical tasks at approximately 10:00 AM, 3:00 PM, and 8:00 PM. Similarly, 3 peaks were also present for the average number of queuing patients across different days of the week (Figures S2 and S3 in ). However, patient waiting times showed an earlier first peak at approximately 7:00 AM, with 2 lower peaks at approximately 12:00 PM and 8:00 PM (Figure S4 in ). In addition, the distribution of waiting time for different medical tasks before and after denoising is shown in Figure S5 in .
Model Evaluation
We used 8 different models for predicting waiting times before laboratory and radiology tasks, while LR was used as the baseline model. After hyperparameter optimization, the optimal performance of the selected models for different tasks is shown in Table S1 in
. In 6 medical task queues, RF demonstrated the best predictive performance, with RMSE ranging from 1.925 (SD 0.015) to 2.395 (SD 0.033) for laboratory tests and from 1.217 (SD 0.015) to 15.204 (SD 0.207) for radiology examinations. In addition, XGBoost demonstrated effective performance in predicting waiting times for the throat swab–first floor (RMSE: mean 2.395, SD 0.033; R²: mean 0.880, SD 0.003), echocardiography–ED (RMSE: mean 1.224, SD 0.048; R²: mean 0.612, SD 0.021), and CT (RMSE: mean 3.191, SD 0.043; R²: mean 0.382, SD 0.011) tasks, and CART, LightGBM, SVR, and KNN also achieved acceptable performance. All selected models outperformed the baseline LR model ( and ). The optimal hyperparameters identified through randomized search are provided in Table S2 in . In addition, the calibration and residual plots are provided in Figures S8-S18 in .

Feature Importance Analysis
On the basis of the QT principle, we ultimately selected the number of queuing patients, arrival rate, month, day, day of the week, and hour as independent features for predicting waiting time. To evaluate the contribution of each feature to the machine learning model, we computed SHAP values, which quantify the impact of individual features on the predicted waiting time. The mean SHAP value was used to rank each feature. The rankings of feature importance across different models are visualized in a heat map plot (
). Notably, for the majority of medical tasks, queue-related features, particularly the number of queuing patients, emerged as the most influential predictors of waiting time, and the arrival rate was another important feature for waiting time prediction. The details of the mean SHAP values are provided in Table S3 in .
Discussion
Principal Findings
This study collected patient data from the Capital Center for Children’s Health, Capital Medical University, spanning the period from November 1, 2024, to March 13, 2025, comprising a total of 230,864 records after data preprocessing. The distribution of waiting times for overall medical tasks followed an exponential pattern, with a median value of 4.817 minutes. Notably, peak periods in the average patient arrival rate and the average number of queuing patients were observed at 10:00 AM, 3:00 PM, and 8:00 PM. The waiting time exhibited a different pattern, with an earlier peak occurring at approximately 7:00 AM, along with 2 additional, smaller peaks around 12:00 PM and 8:00 PM, indicating the periods of highest congestion within the hospital. During model selection, tree-based algorithms such as RF, XGBoost, CART, and LightGBM demonstrated better accuracy in predicting waiting times for most medical task queues, while SVR and KNN also performed acceptably. Additionally, feature importance analysis revealed that the number of queuing patients was the most influential predictor of waiting time.
QT and TSA are classical approaches used to model and simulate queue dynamics in health care settings [
, ]. The application of QT requires key input parameters, including the average arrival rate (λ) and the average service rate (μ). Several queue metrics, including the average waiting time (Wq), the total time spent in the system (Ws), the average number of individuals in the system (Ls), and the average number of individuals waiting (Lq), can be calculated from these inputs. For example, in an M/M/1 queue, the average waiting time can be estimated as Wq = λ / [μ(μ − λ)] [ ]. By simulating the queuing status and adjusting service rates, different operational scenarios can be modeled for service optimization. However, QT has several limitations when applied to the prediction of waiting times in real-world hospital settings. First, as demonstrated in our study, QT assumes that patient arrivals follow a Poisson process and that service times conform to exponential distributions. While the overall waiting time across all tasks approximates an exponential distribution in our study, this assumption does not hold universally for each medical task queue (the distribution of waiting times for each task is provided in our GitHub repository). Second, QT estimates queue parameters from average arrival and service rates, which are dynamic in clinical practice. Because patient volume and service effectiveness are constantly changing, QT’s static assumptions are inadequate for simulating operating conditions in real time [ ]. Third, accurately measuring the actual service rates for each medical task is challenging. As a result, we had to rely on estimations based on prior experience, which likely reduced the precision of the modeled parameters. Additionally, TSA, which involves modeling sequences of observations indexed over time, has been applied in previous studies to forecast ED attendances [ , , - ]. While effective at capturing periodic trends in patient volumes, TSA is better suited to predicting daily volumes than individual-level waiting times.
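The textbook M/M/1 relationships referred to above can be made concrete in a few lines. These are the standard closed-form results for given λ and μ, not values estimated from the hospital data:

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Standard M/M/1 queue results for arrival rate lam and service rate mu.

    Requires lam < mu (a stable queue); units of the time metrics follow
    the units of 1/lam and 1/mu.
    """
    if lam >= mu:
        raise ValueError("M/M/1 requires lam < mu for a stable queue")
    rho = lam / mu                   # server utilization
    Ls = rho / (1 - rho)             # average number in the system
    Lq = rho ** 2 / (1 - rho)        # average number waiting in the queue
    Ws = 1 / (mu - lam)              # average total time in the system
    Wq = lam / (mu * (mu - lam))     # average waiting time in the queue
    return {"rho": rho, "Ls": Ls, "Lq": Lq, "Ws": Ws, "Wq": Wq}
```

Note that Little's law (Lq = λ · Wq) ties these quantities together; the machine learning models in this study replace these closed forms precisely because real hospital queues violate their Poisson and stationarity assumptions.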
Instead of depending on the strict assumptions of QT or TSA, machine learning models can directly use real-world waiting time data, enabling flexible, data-driven prediction.

This study assessed a variety of models for the waiting time prediction task to achieve the optimal predictive performance. In most scenarios, tree-based models performed better than other models and demonstrated strong predictive capability. The RMSE of the models in our study was lower than that reported in previous studies [
, , , ]. Strong predictive performance was reflected in high R² values for a number of tasks, including throat swab–second floor (R²: mean 0.934, SD 0.003), blood sampling–outpatient (R²: mean 0.893, SD 0.002), and throat swab–first floor (R²: mean 0.880, SD 0.003). As widely recognized, machine learning models are generally more effective than LR in capturing nonlinear relationships [ ], and tree-based models typically handle tabular data more efficiently than deep learning models [ ]. However, for several tasks, such as the MRI, CT, ultrasound–outpatient, and echocardiography–outpatient queues, we observed weak associations between the features and waiting time. We believe that the small sample sizes for these tasks may contribute to the poor model performance; a longer data collection period could help improve it. Additionally, the 6 features used in our model may not be sufficient for predicting waiting times across all medical tasks. Some medical tasks may have inherent complexities or variability in waiting times that are not fully captured by the currently available features. For example, the waiting time for outpatient ultrasound and MRI may depend on factors such as the age and urgency of the case or physician workload, which are difficult to quantify with the current feature set. While we aim to develop a lightweight prediction model with simplified features for easier deployment, it is clear that for certain tasks, feature engineering and the inclusion of additional variables, such as patient age, gender, department, and diagnosed diseases, are necessary to improve performance. These strategies can help improve the prediction models for underperforming tasks and support their future deployment.

Notably, the data used to train our predictive models were primarily derived from the first and fourth quarters of the year, when patient visit patterns differ from those observed in other periods.
According to surveillance data from the Chinese Center for Disease Control and Prevention, the prevalence of acute respiratory infections was elevated between November 2024 and January 2025 [
- ]. This seasonal trend aligns with epidemiological surveillance data from the United States, which indicate a predominance of respiratory tract infections among children during the winter months [ , ]. Throughout this period, pediatric attendance was consistently high from late December to January (Figure S19 in ), with January having the most visits (44,808/145,739, 30.75%). This phenomenon is probably caused by a confluence of environmental and school-related factors: lower temperatures and humidity promote viral stability and propagation, while greater interpersonal interaction in schools facilitates the spread of infectious illnesses. Together, these 2 factors accelerate children’s hospital visits. The most frequent combinations for pediatric patients who received multiple medical services during a single visit were related to acute respiratory infections, including throat swab → laryngoscopy → X-ray → echocardiography, throat swab → X-ray → blood sampling, and throat swab → blood sampling [ , ]. These findings imply that queue loads for various services are considerably impacted by fluctuations in illness occurrence. Although feature importance analysis suggests that the contribution of the month feature is not particularly large across different prediction models, this may be attributed to the short data collection period, which constrains the ability to capture long-term temporal trends. Therefore, seasonal demand fluctuations can also be considered when optimizing patient flow and resource allocation (Figure S20 in ). Strategies such as increasing the number of service windows during high-demand periods could alleviate queue burdens and improve operational efficiency. For example, at times when respiratory infections were most common, this institution opened a second throat swab window.

Limitations
This study has several limitations. First, our study is a single-center investigation that was mostly carried out at a pediatric hospital during the winter. The findings of this study might not apply to circumstances outside of a children’s hospital due to the unique characteristics of the patients at this facility. Second, although medicine dispensing is a critical component of the clinical workflow, it was excluded from the analysis since the pharmacy did not have a procedure for patients to check in, making it impossible to calculate waiting times. Lastly, waiting times for various medical tasks were predicted using numerous models, which complicated the overall predictive framework and posed challenges for real-world implementation.
Conclusions
In conclusion, the distribution of patient waiting times exhibits 3 distinct peak periods. However, waiting time patterns differ markedly across various medical task queues, each displaying unique characteristics that do not align with the overall trend. Consequently, developing task-specific predictive models for each medical task queue can enhance prediction accuracy. Feature importance analysis reveals that although queue-related features are the most influential in predicting patient waiting time, time-based features might also contribute meaningfully and should be considered to further optimize hospital operational efficiency.
Acknowledgments
This work was supported by the Chinese Academy of Medical Sciences Innovation Fund for Medical Sciences (2021-I2M-1-056) and the Capital Institute of Pediatrics Foundation for Youths (QN-2024-37).
Data Availability
The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.
Conflicts of Interest
None declared.
Distribution of waiting time, hyperparameters, and performance of the model.
DOCX File, 5724 KB

References
- Vinci RJ. The pediatric workforce: recent data trends, questions, and challenges for the future. Pediatrics. Jun 2021;147(6):e2020013292. [CrossRef] [Medline]
- Russ CM, Gao Y, Karpowicz K, et al. The pediatrician workforce in the United States and China. Pediatrics. Jun 1, 2023;151(6):e2022059143. [CrossRef] [Medline]
- Yan X, Yu J, Zhang P, Zhang J, Luo S, Yu Y. Innovative management strategies for addressing paediatric medical staff shortages in underdeveloped cities in developing countries. BMJ Lead. Oct 24, 2024:leader-2023-000894. [CrossRef] [Medline]
- Anderson RT, Camacho FT, Balkrishnan R. Willing to wait?: the influence of patient wait time on satisfaction with primary care. BMC Health Serv Res. Feb 28, 2007;7:31. [CrossRef] [Medline]
- Doan Q, Wong H, Meckler G, et al. The impact of pediatric emergency department crowding on patient and health care system outcomes: a multicentre cohort study. CMAJ. Jun 10, 2019;191(23):E627-E635. [CrossRef] [Medline]
- Kwak DS, Park J. Analysis of the prognosis outcomes and treatment delay among ST-segment elevation myocardial infarction patients in emergency department based on the presence of symptoms suggestive of COVID-19. Int J Health Policy Manag. 2024;13:8207. [CrossRef] [Medline]
- Clinic queue management system: enhancing patient experience and streamlining operations. Qtech, Queuing System Pte Ltd. URL: https://qtechqueueingsystem.com/clinic-queue-management-system/#real-time-queue-updates [Accessed 2025-04-28]
- Kadri F, Harrou F, Chaabane S, Tahon C. Time series modelling and forecasting of emergency department overcrowding. J Med Syst. Sep 2014;38(9):107. [CrossRef] [Medline]
- Montazeri M, Multmeier J, Novorol C, Upadhyay S, Wicks P, Gilbert S. Optimization of patient flow in urgent care centers using a digital tool for recording patient symptoms and history: simulation study. JMIR Form Res. May 21, 2021;5(5):e26402. [CrossRef] [Medline]
- Ang E, Kwasnick S, Bayati M, Plambeck EL, Aratow M. Accurate emergency department wait time prediction. M&SOM. Feb 2016;18(1):141-156. [CrossRef]
- Gajanan A. Reducing wait time prediction in hospital emergency room: lean analysis using a random forest model. University of Tennessee. URL: https://trace.tennessee.edu/utk_gradthes/4722 [Accessed 2025-09-08]
- Chen JG, Li KL, Tang Z, Bilal K, Li KQ. A parallel patient treatment time prediction algorithm and its applications in hospital queuing-recommendation in a big data environment. IEEE Access. 2016;4:1767-1783. [CrossRef]
- Lin WC, Goldstein IH, Hribar MR, Sanders DS, Chiang MF. Predicting wait times in pediatric ophthalmology outpatient clinic using machine learning. AMIA Annu Symp Proc. 2019;2019:1121-1128. [Medline]
- Chocron E, Cohen I, Feigin P. Delay prediction for managing multiclass service systems: an investigation of queueing theory and machine learning approaches. IEEE Trans Eng Manage. 2024;71:4469-4479. [CrossRef]
- Guo LL, Guo LY, Li J, et al. Characteristics and admission preferences of pediatric emergency patients and their waiting time prediction using electronic medical record data: retrospective comparative analysis. J Med Internet Res. Nov 1, 2023;25:e49605. [CrossRef] [Medline]
- Hemaya SAK, Locker TE. How accurate are predicted waiting times, determined upon a patient’s arrival in the emergency department? Emerg Med J. Apr 2012;29(4):316-318. [CrossRef] [Medline]
- Walker KJ, Jiarpakdee J, Loupis A, et al. Predicting ambulance patient wait times: a multicenter derivation and validation study. Ann Emerg Med. Jul 2021;78(1):113-122. [CrossRef] [Medline]
- Curtis C, Liu C, Bollerman TJ, Pianykh OS. Machine learning for predicting patient wait times and appointment delays. J Am Coll Radiol. Sep 2018;15(9):1310-1316. [CrossRef] [Medline]
- Camacho F, Anderson R, Safrit A, Jones AS, Hoffmann P. The relationship between patient’s perceived waiting time and office-based practice satisfaction. N C Med J. 2006;67(6):409-413. [Medline]
- Thiongane M, Chan W, L’Ecuyer P. New history-based delay predictors for service systems. Presented at: 2016 Winter Simulation Conference (WSC); Dec 11-14, 2016:425-436; Washington, DC, USA. [CrossRef]
- Lundberg SM, Lee S. A unified approach to interpreting model predictions. Presented at: 31st International Conference on Neural Information Processing Systems (NIPS’17); Dec 4-9, 2017:4768-4777; Red Hook, NY. Curran Associates Inc.
- Joseph JW. Queuing theory and modeling emergency department resource utilization. Emerg Med Clin North Am. Aug 2020;38(3):563-572. [CrossRef] [Medline]
- Juang WC, Huang SJ, Huang FD, Cheng PW, Wann SR. Application of time series analysis in modelling and forecasting emergency department visits in a medical centre in Southern Taiwan. BMJ Open. Dec 1, 2017;7(11):e018628. [CrossRef] [Medline]
- Yang H. Research on M/M/1 queue of exponential departure intensity. Math Probl Eng. Aug 19, 2022;2022:1-7. [CrossRef]
- Winston WL, Goldberg JB. Operations Research: Applications and Algorithms. Vol 3. Duxbury Press; 2004.
- Jones SA, Joy MP, Pearson J. Forecasting demand of emergency care. Health Care Manag Sci. Nov 2002;5(4):297-305. [CrossRef] [Medline]
- Abdel-Aal RE, Mangoud AM. Modeling and forecasting monthly patient volume at a primary health care clinic using univariate time-series analysis. Comput Methods Programs Biomed. Jun 1998;56(3):235-247. [CrossRef] [Medline]
- Milner PC. Ten-year follow-up of ARIMA forecasts of attendances at accident and emergency departments in the Trent region. Stat Med. Sep 30, 1997;16(18):2117-2125. [CrossRef] [Medline]
- Rastpour A, McGregor C. Predicting patient wait times by using highly deidentified data in mental health care: enhanced machine learning approach. JMIR Ment Health. Aug 9, 2022;9(8):e38428. [CrossRef] [Medline]
- Kuo YH, Chan NB, Leung JMY, et al. An integrated approach of machine learning and systems thinking for waiting time prediction in an emergency department. Int J Med Inform. Jul 2020;139:104143. [CrossRef] [Medline]
- Grinsztajn L, Oyallon E, Varoquaux G. Why do tree-based models still outperform deep learning on tabular data? arXiv. Preprint posted online on Jul 18, 2022. URL: https://arxiv.org/abs/2207.08815
- China National Influenza Center homepage. China National Influenza Center. URL: https://ivdc.chinacdc.cn/cnic/ [Accessed 2025-09-08]
- National sentinel surveillance of acute respiratory infectious diseases. Chinese Center for Disease Control and Prevention. URL: https://www.chinacdc.cn/jksj/jksj04_14275/ [Accessed 2025-09-08]
- Kang L, Huang JL. The impact of infectious disease outbreaks on the pediatric healthcare system and countermeasures. J Clin Pediatr. 2024;42(6):475-479. [CrossRef]
- Schrijver TV, Brand PLP, Bekhof J. Seasonal variation of diseases in children: a 6-year prospective cohort study in a general hospital. Eur J Pediatr. Apr 2016;175(4):457-464. [CrossRef] [Medline]
- Lipsett SC, Monuteaux MC, Fine AM. Seasonality of common pediatric infectious diseases. Pediatr Emerg Care. Feb 1, 2021;37(2):82-85. [CrossRef] [Medline]
- China Medicine Education Association Committee on Pediatrics, The Subspecialty Group of Respiratory Diseases, The Society of Pediatrics, Chinese Medical Association, Chinese Medical Doctor Association Committee on Respirology Pediatrics. Guideline for diagnosis, treatment and prevention of influenza in children (medical version, 2024). Chin J Appl Clin Pediatr. 2024;39(12):881-895. [CrossRef]
- National Health Commission of the People’s Republic of China. Guidelines for diagnosis and treatment of Mycoplasma pneumoniae pneumonia in children (2023 Edition). Chin J Ration Drug Use. 2023;3(330):16-24. [CrossRef]
Abbreviations
CART: classification and regression tree
CT: computed tomography
ED: emergency department
KNN: k-nearest neighbor
LightGBM: light gradient boosting machine
LR: linear regression
MAE: mean absolute error
MLP: multilayer perceptron
MRI: magnetic resonance imaging
MSE: mean square error
QL: queue length
QT: queue theory
RF: random forest
RMSE: root mean square error
SHAP: Shapley additive explanations
SVR: support vector regression
TSA: time series analysis
XGBoost: extreme gradient boosting
Edited by Arriel Benis; submitted 11.May.2025; peer-reviewed by Gbenga Adewumi, Pratik Shingru, Xun Ding; final revised version received 25.Jul.2025; accepted 04.Aug.2025; published 29.Sep.2025.
Copyright © Lin Lin Guo, Rui Tang, Jia Yang Wang, Si Zheng, Yin Zeng, Jun Hou, Mo Chen Dong, Jiao Li, Ying Cui. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 29.Sep.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.