Published in Vol 9, No 9 (2021): September

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/21990.
Forecasting the Requirement for Nonelective Hospital Beds in the National Health Service of the United Kingdom: Model Development Study

Original Paper

1NYU Grossman School of Medicine, New York, NY, United States

2Icahn School of Medicine at Mount Sinai, New York, NY, United States

3The Royal Bolton Hospital, Bolton, United Kingdom

4Methods Analytics, London, United Kingdom

5University of Exeter Business School, Exeter, United Kingdom

6Taunton & Somerset NHS Foundation trust, Taunton, United Kingdom

7Division of Healthcare Delivery Science, Department of Population Health, NYU Grossman School of Medicine, New York, NY, United States

Corresponding Author:

Simon Jones, PhD

Division of Healthcare Delivery Science

Department of Population Health

NYU Grossman School of Medicine

227 E 30th St

New York, NY, 10016

United States

Phone: 1 646 501 2905

Email: simon.jones@nyulangone.org


Background: Over the last decade, increasing numbers of emergency department attendances and an even greater increase in emergency admissions have placed severe strain on the bed capacity of the National Health Service (NHS) of the United Kingdom. The result has been overcrowded emergency departments with patients experiencing long wait times for admission to an appropriate hospital bed. Nevertheless, scheduling issues can still result in significant underutilization of bed capacity. Bed occupancy rates may not correlate well with bed availability. More accurate and reliable long-term prediction of bed requirements will help anticipate the future needs of a hospital’s catchment population, thus resulting in greater efficiencies and better patient care.

Objective: This study aimed to evaluate widely used automated time-series forecasting techniques to predict short-term daily nonelective bed occupancy at all trusts in the NHS. These techniques were used to develop a simple yet accurate national health system–level forecasting framework that can be utilized at a low cost and by health care administrators who do not have statistical modeling expertise.

Methods: Bed occupancy models that accounted for patterns in occupancy were created for each trust in the NHS. Daily nonelective midnight trust occupancy data from April 2011 to March 2017 for 121 NHS trusts were utilized to generate these models. Forecasts were generated using the three most widely used automated forecasting techniques: exponential smoothing; Seasonal Autoregressive Integrated Moving Average; and Trigonometric, Box-Cox transform, autoregressive moving average errors, and Trend and Seasonal components. The NHS Modernisation Agency’s recommended forecasting method prior to 2020 was also replicated.

Results: The accuracy of the models varied on the basis of the season during which occupancy was forecasted. For the summer season, percent root-mean-square error values for each model remained relatively stable across the 6 forecasted weeks. However, only the trend and seasonal components model (median error=2.45% for 6 weeks) outperformed the NHS Modernisation Agency’s recommended method (median error=2.63% for 6 weeks). In contrast, during the winter season, the percent root-mean-square error values increased as we forecasted further into the future. Exponential smoothing generated the most accurate forecasts (median error=4.91% over 4 weeks), but all models outperformed the NHS Modernisation Agency’s recommended method prior to 2020 (median error=8.5% over 4 weeks).

Conclusions: It is possible to create automated models, similar to those recently published by the NHS, which can be used at a hospital level for a large national health care system to predict nonelective bed admissions and thus schedule elective procedures.

JMIR Med Inform 2021;9(9):e21990

doi:10.2196/21990


Background and Rationale

Between 2011-2012 and 2019-2020, patient attendances at major (Type 1) emergency departments (EDs) in the National Health Service (NHS) of the United Kingdom increased by approximately 20%. There was an even greater increase in the number of patients admitted to hospital from the ED during that time: such admissions grew by more than one-third and now account for nearly three-fourths of all nonelective admitted patients (Figure 1).

This strain on the bed capacity of the NHS has resulted in overcrowding of EDs and long waits for patients before admission to an inpatient ward. By 2019-2020, over 3.2% of all patients in the ED remained there for more than 12 hours from their time of arrival [1]. Such long delays are known to cause poor patient outcomes, including an increase in all-cause 30-day mortality [2].

For health care systems to meet the increasing needs of the populations they serve, as well as to provide better care, it is imperative to optimize the allocation of existing health care resources, including hospital beds. Health forecasting, a relatively new area of forecasting, can facilitate this by providing health service providers with hospital bed occupancy forecasts that allow them to minimize risk and manage demand [3].

Current models, including NHS-recommended systems, predict hospital bed utilization with a significant degree of error and with marked variability among different hospitals. More accurate and reliable long-term prediction of bed requirements will facilitate the anticipation of a local population’s needs [4] with resulting gains in both efficiency and patient outcomes.

Many studies have attempted to conduct time-series analyses to forecast bed occupancy levels days or weeks in advance [5]. Most of these models use estimates of length of stay [6] or ED admissions [7-9]. We developed models using a more direct time-series–based approach, which utilizes historic admissions data without any identifying patient information, to model nonelective hospital bed requirements. Knowledge of future nonelective bed occupancy allows for proper scheduling of elective procedures to optimize both capacity and resource allocation [6]. In addition, no previous study, to our knowledge, has generated accurate models for an entire national health care system. In this study, we created a modeling framework that is automated and generalizable across the NHS of the United Kingdom. It can be used by administrators and key decision-makers, who have minimal knowledge of statistical techniques, to evaluate and respond more efficiently to clinical demand.

Figure 1. Growth in the admission rate of all nonelective admitted patients. ED: emergency department. Data source: NHS Hospital Episode Statistics for Accident & Emergency and for admitted patient care data, 2012-2020 [10].

Objectives

We aimed to (1) test and compare a set of widely used automated time-series forecasting techniques to predict short-term (up to 42 days) nonelective bed occupancy on a daily basis and (2) develop a simple yet accurate system-level forecasting and modeling framework that could be used to predict emergency bed occupancy during different seasonal patterns of admission. A summary of the study rationale and objectives is provided in Textbox 1.

Summary of the study objectives and rationale.

What is already known

Current statistical models that forecast bed occupancy days or weeks in advance across all trusts in the National Health Service (NHS) of the United Kingdom, including the previously recommended NHS method, predict hospital bed utilization with significant errors and variability among hospitals.

What this study adds

We created a modeling framework, following advanced forecasting techniques recommended by the NHS in January 2020, which generates similar forecasts of bed occupancy levels weeks in advance for all trusts in the NHS. In addition, because it is automated and generalizable, this model can be used by administrators and key decision-makers who have a minimal statistical background.

Textbox 1. Summary of the study objectives and rationale.

Methods

Data

Our data set contained daily nonelective midnight trust occupancy data from April 2011 to March 2017 for 121 NHS trusts located in each region of England. We acknowledge that this is not the same as peak occupancy, which often occurs in the middle of the working day. No personal information on patients or staff was provided. Since these data did not contain any identifying patient information, ethics approval from the institutional review board was not required. In addition, administrative data were utilized, and patients and the public were not involved in our study. We performed all analyses using RStudio (version 1.1.442, RStudio Inc). All generated forecasts accounted for patterns in occupancy (ie, seasonality) resulting from the day of the week being forecasted. One forecast also factored in the day of the year, the incidence of public holidays, and historical bed availability.

Study Design

Data preparation and analysis were performed in 4 steps for each forecasting technique employed (Figure 2).

Figure 2. Methodology employed to develop models and generate forecasted nonelective occupancy for each trust in the NHS. ES: exponential smoothing, NHS: National Health Service, SARIMA: Seasonal Autoregressive Integrated Moving Average, and TBATS: Trigonometric, Box-Cox transform, autoregressive moving average errors, and Trend and Seasonal components.

Data Curation

We extracted daily nonelective occupancy data for 121 trusts in the NHS. To limit our data to general and acute bed occupancy, we excluded admissions in which the consultant’s specialty was related to mental health, learning disabilities, or maternity. The first 10 days’ and last 20 days’ worth of occupancy data for each trust were removed from our data set to account for any edge effects creating inaccuracies in data reporting. The following supporting variables were also included, as they were likely to produce fluctuations, or seasonality, in bed occupancy [11]: (1) day of the week, (2) day of the year, (3) public holidays, and (4) historical bed availability.
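As an illustration only, the sketch below shows how this curation step might be implemented in R with dplyr and lubridate. The column names (specialty_group, trust_code, census_date, midnight_occupancy) and the uk_public_holidays vector are hypothetical placeholders, not the study's actual field names, and the historical bed availability variable (joined from a separate administrative source) is omitted.

```r
# Hypothetical data-curation sketch; column names are illustrative only.
library(dplyr)
library(lubridate)

curate_trust_occupancy <- function(raw, uk_public_holidays) {
  raw %>%
    # Restrict to general and acute activity: drop mental health,
    # learning disability, and maternity consultant specialties
    filter(!specialty_group %in% c("Mental health", "Learning disabilities", "Maternity")) %>%
    group_by(trust_code, census_date) %>%
    summarise(occupancy = sum(midnight_occupancy), .groups = "drop") %>%
    arrange(trust_code, census_date) %>%
    group_by(trust_code) %>%
    # Trim the first 10 and last 20 days per trust to avoid edge effects
    slice(11:(n() - 20)) %>%
    ungroup() %>%
    # Supporting variables likely to drive weekly and yearly seasonality
    mutate(day_of_week       = wday(census_date, label = TRUE),
           day_of_year       = yday(census_date),
           is_public_holiday = census_date %in% uk_public_holidays)
}
```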

Separation of Data

Each trust’s hospital occupancy data were divided into 2 seasonal data sets, one for summer and one for winter, given that hospitals are more heavily burdened during winter months, which may complicate the forecasting process. Each seasonal data set was further subsetted into a training data set that was used to develop the models and a validation data set that was used to cross-validate the models. The validation data set for the summer season contained the last 6 weeks of occupancy data, from mid-July to mid-August 2016, and that for winter contained the last 6 weeks of occupancy data, from February to mid-March 2017. The training data sets contained all remaining data from April 2011 up to the start date of the corresponding validation data set and were used to develop the models.
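A minimal sketch of this split for one trust follows: it simply holds out the final 42 days (6 weeks) of each seasonal series as the validation set. The objects summer_df and winter_df are hypothetical per-trust, per-season daily occupancy tables, ordered by date.

```r
# Hold out the last 6 weeks (42 days) of a seasonal series for validation.
split_last_6_weeks <- function(season_df, horizon = 42) {
  n <- nrow(season_df)
  list(train      = season_df[seq_len(n - horizon), ],
       validation = season_df[(n - horizon + 1):n, ])
}

summer_split <- split_last_6_weeks(summer_df)   # validation: mid-July to mid-August 2016
winter_split <- split_last_6_weeks(winter_df)   # validation: February to mid-March 2017
```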

Model Generation and Evaluation

We developed a set of models for each trust by using the three most widely used automated forecasting techniques: exponential smoothing (ES); Seasonal Autoregressive Integrated Moving Average (SARIMA); and Trigonometric, Box-Cox transform, autoregressive moving average errors, and Trend and Seasonal components (TBATS) (Table 1). Details regarding these models are provided in Multimedia Appendix 1. These models are in line with the NHS Modernisation Agency’s newly released overview of advanced forecasting techniques that can be used to model NHS services [12]. We also replicated the NHS Modernisation Agency’s previously recommended, albeit dated, forecasting method [13].
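The paper does not name its exact implementation, but all three techniques are available as automated functions in R's forecast package (ets, auto.arima, and tbats). The sketch below, which builds on the training series from the split sketch above, shows how weekly-seasonal models for one trust might be fitted and forecasted under that assumption.

```r
# Fit the three automated techniques to one trust's training series
# (assumes the forecast package and a weekly seasonal period of 7 days).
library(forecast)

y <- ts(summer_split$train$occupancy, frequency = 7)

fit_es     <- ets(y)                        # exponential smoothing (ETS state space)
fit_sarima <- auto.arima(y, seasonal = TRUE)
fit_tbats  <- tbats(y)

# Forecast up to 6 weeks (42 days) ahead
h <- 42
fc_es     <- forecast(fit_es, h = h)
fc_sarima <- forecast(fit_sarima, h = h)
fc_tbats  <- forecast(fit_tbats, h = h)

# Note: the SARIMA variant described in Table 1 also accounted for calendar
# effects and historical bed availability (eg, as regression covariates),
# which are omitted from this minimal sketch.
```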

Therefore, we developed 4 models for each of the 121 trusts in the NHS. A program was developed in R to automate the entire process, minimizing repetition and maximizing efficiency. Two checks were applied to each forecast: the Ljung-Box test (output in Multimedia Appendix 2), which detects residual patterns in the models’ errors that could be corrected with additional modeling parameters, and root-mean-square error (RMSE) values from cross-validation. Absolute RMSE values were then converted to percentage errors, representing the average prediction error irrespective of sign (Multimedia Appendix 2). The numerator was the median RMSE of the forecasting method, and the denominator was the total number of general and acute beds in the hospital; the denominator, therefore, was the same for all methods. A comparative analysis of forecast accuracy was performed by comparing forecasted daily nonelective occupancy with actual nonelective occupancy in the out-of-sample data set for each week forecasted.
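A sketch of these two checks, under the same forecast-package assumption and continuing from the fitting sketch above, is shown below. The lag used in the Ljung-Box test and the total_beds value (the trust's general and acute bed count) are illustrative placeholders.

```r
# Check residual autocorrelation with the Ljung-Box test
lb <- Box.test(residuals(fit_tbats), lag = 14, type = "Ljung-Box")
lb$p.value   # a small p-value suggests structure remains in the residuals

# Cross-validated RMSE against the out-of-sample (validation) occupancy,
# expressed as a percentage of the trust's general and acute bed count
acc          <- accuracy(fc_tbats, summer_split$validation$occupancy)
rmse         <- acc["Test set", "RMSE"]
percent_rmse <- 100 * rmse / total_beds
```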

Forecasts were generated for the summer and winter. Given that summer school holidays in the United Kingdom usually occur from late July until early September and NHS data for England suggest that winter pressures mostly last from early January until the end of March, we forecasted the time periods within those timeframes. Summer forecasts were obtained from July to mid-August 2016 and winter forecasts were obtained from mid-February to mid-March 2017. Forecasts were compared to each other as well as to those derived from the NHS Modernisation Agency’s recommended method.

We did not generate models using Prophet and artificial neural networks, which are 2 additional models recommended in the Modernisation Agency’s overview, because these cannot be automated and applied across multiple trusts [12].

Table 1. Descriptive characteristics of the automated time-series forecasting techniques used.

Exponential smoothing (ES)
- Statistical method employed: exponentially weighted sum of previous observations
- Seasonality taken into account: weekly
- Additional modifications performed: the Ljung-Box test provided information on residual patterns of error; this was addressed by creating a model of the residuals of the forecast

Seasonal Autoregressive Integrated Moving Average (SARIMA)
- Statistical method employed: combines autoregression and moving average models
- Seasonality taken into account: weekly, yearly, and monthly patterns, public holidays, and historical bed availability
- Additional modifications performed: forecasts were suspected to be more accurate for trusts that do not approach maximum occupancy (occupancy ≤95%), so percent occupancy was introduced as a seasonal component

Trigonometric, Box-Cox transform, autoregressive moving average errors, and Trend and Seasonal components (TBATS)
- Statistical method employed: state space reconstruction
- Seasonality taken into account: weekly
- Additional modifications performed: none

Method recommended by the NHS Modernisation Agency
- Statistical method employed: the forecast is the mean value of the past 6 weeks’ bed occupancy for the day of the week being forecasted
- Seasonality taken into account: weekly
- Additional modifications performed: none
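For comparison, the previously recommended NHS baseline described in the last block of Table 1 can be expressed in a few lines of R. The sketch below is our interpretation of that rule (a static mean of the 6 most recent observed values for the weekday being forecasted), not the Agency's own code, and it reuses the training series from the earlier sketches.

```r
# Baseline: forecast each day as the mean of the same weekday's occupancy
# over the 6 most recent observed weeks (occupancy is a daily vector).
nhs_baseline_forecast <- function(occupancy, h = 42) {
  n <- length(occupancy)
  sapply(seq_len(h), function(i) {
    same_weekday <- which((n + i - seq_len(n)) %% 7 == 0)  # observed days on that weekday
    mean(occupancy[tail(same_weekday, 6)])                 # mean of the last 6 of them
  })
}

fc_nhs <- nhs_baseline_forecast(summer_split$train$occupancy, h = 42)
```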

Results

A total of 484 models (n=4 per trust) were automatically developed using our modeling framework (Figure 3). Our Ljung-Box tests confirmed that residual autocorrelation was minimal, supporting our choice of models.

The accuracy of our models varied on the basis of the season during which we forecasted occupancy. Percent RMSE values for each model remained relatively stable across the 6 weeks forecasted in the summer, indicating that the summer period is predictable (Figure 4). In addition, only our TBATS model (median error=2.45% for 6 weeks) outperformed the NHS Modernisation Agency’s recommended method (median error=2.63% for 6 weeks). TBATS yielded a median error of 1.98% for the first forecasted week and 3.01% for the sixth, while the NHS Modernisation Agency’s recommended method yielded a median error of 2.32% for the first forecasted week and 3.17% for the sixth.

In contrast, percent RMSE values increased as we forecasted further into the future during winter (Figure 5). Therefore, our study suggests that we are only able to generate relatively accurate forecasts 4 weeks into the future during winter. Significant weather events and disease outbreaks may contribute to this unpredictability. However, as current weather forecasting methods are unable to predict significant events accurately beyond 10 days, accounting for weather beyond this is impractical. ES performed the best (median error=4.91% over 4 weeks), but all models outperformed the NHS Modernisation Agency’s recommended method (median error=8.5% over 4 weeks). ES yielded a median error of 2.17% for the first forecasted week and 9.38% for the fourth, while the NHS Modernisation Agency’s recommended method yielded a median error of 5.12% for the first forecasted week and 13.62% for the sixth.

Five or fewer trusts failed the Ljung-Box test of autocorrelation for the TBATS and SARIMA models, suggesting that these models could not be improved much further for accuracy. However, because a large proportion of trusts failed the test for the ES model (40% for summer forecasts and 42% for winter forecasts), we developed a TBATS model to forecast the residuals of our predictions and incorporated these forecasted residuals into our original model. This modification, however, did not significantly improve forecast accuracy. We also suspected that forecasts may be more accurate for trusts that do not reach their maximum bed availability (less than 95% of total beds occupied). Therefore, we subsetted trusts on the basis of maximum bed availability and generated separate SARIMA forecasts for each group. This improved forecast accuracy for each group in winter but had little effect on summer forecasts.
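A sketch of the ES refinement described above, again assuming the forecast package and the fitted objects from the earlier sketches, is shown below: the residuals of the fitted ES model are themselves modeled with TBATS, and the residual forecast is added back onto the ES point forecast. The 28-day horizon matches the 4-week winter window discussed in the Results.

```r
# Refine the ES forecast by modeling its residuals with TBATS and adding
# the residual forecast back onto the original point forecast.
fit_es  <- ets(y)
fit_res <- tbats(residuals(fit_es))

h <- 28
fc_refined <- forecast(fit_es, h = h)$mean + forecast(fit_res, h = h)$mean
```

The parallel SARIMA refinement (separate models for trusts that stay below roughly 95% occupancy) amounts to splitting trusts into groups before fitting and requires no change to the model code itself.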

Figure 3. Sample time series and TBATS forecasts generated for summer and winter for two trusts: Gateshead Health (A-C) and Mid Essex Hospital Services (D-F). (A) and (D) show plots of nonelective bed occupancy through the time period in the data set. In (B)-(C) and (E)-(F), the black lines represent the training data sets, the red lines represent the out-of-sample data sets, and the blue lines represent the occupancy forecasted by TBATS. TBATS: Trigonometric, Box-Cox transform, autoregressive moving average errors, and Trend and Seasonal components.
Figure 4. Distribution of error values for summer forecasts displayed by model (A) and by week (B). Outliers have been suppressed in (B) for better visualization of error spread. ES: exponential smoothing, NHS: National Health Service, RMS: root-mean-square, SARIMA: Seasonal Autoregressive Integrated Moving Average, TBATS: Trigonometric, Box-Cox transform, autoregressive moving average errors, and Trend and Seasonal components.
Figure 5. Distribution of error values for winter forecasts displayed by model (A) and by week (B). Outliers have been suppressed in (B) for better visualization of error spread. ES: exponential smoothing, NHS: National Health Service, RMS: root-mean-square, SARIMA: Seasonal Autoregressive Integrated Moving Average, TBATS: Trigonometric, Box-Cox transform, autoregressive moving average errors, and Trend and Seasonal components.

Discussion

Principal Findings

Our results show that it is possible to create automated models to predict nonelective bed admissions with a higher degree of accuracy and reliability than the method previously recommended by the NHS Modernisation Agency.

Utilization of the previously recommended NHS method has contributed to shortfalls in capacity and to procedures being canceled at the last minute (Figure 3). Although some of these problems are inevitable in such a large and diffusely managed system, the frequency of such cancellations can be reduced with improved forecasting methods such as the one described in this study.

Other groups that focused on hospitals in England have generated forecasting models for either a single site or a group of trusts, or for specific hospital services such as the emergency department, which is more easily predictable [14]. However, to our knowledge, none of them have utilized the data of the entire NHS to generate more accurate forecasting models [14]. Although individual, site-specific models have the potential to outperform national models, we believe that the majority of hospitals do not have the resources, time, or expertise required to generate their own predictive methodology. Therefore, a more universal and easily implemented—albeit slightly less accurate—modeling framework is preferable. Moreover, our models are automated and require minimal effort for consistent execution.

Even with a simplified approach and appropriate end-user education, several barriers to implementation could limit the use of the developed national forecasting models. In the NHS system, staffing rotas lack the flexibility required to reduce staff at short notice. While an incorrect forecast predicting increased demand would result in financial losses, an erroneous prediction of reduced demand could lead to adverse clinical events. Users are therefore likely to err on the side of caution and respond to forecasts of increased demand rather than to those predicting the reverse. Routine forecasting would therefore be likely to increase costs, at least in the initial phase.

If such modeling frameworks are to be incorporated into policy, it is essential to consider whether effective implementation is possible. A recent study of ED escalation plans [15] to support patient care in times of increased demand reported that there can be a significant gap between managerial intentions and actual implementation.

Limitations

Potential bias may have arisen from inaccurate data collection and reporting at a local level. In addition, our occupancy data were collected as a midnight census rather than during the day, when hospital occupancy peaks. This may limit the applicability of the model. Although our models took into account various temporal patterns, we did not explicitly consider meteorological conditions such as weather or air pollution, factors that could have an impact on predictive accuracy [16,17].

Another limitation of our study is that we were not able to model demand for outpatient services or elective procedures, which may have a significant impact on the availability of inpatient resources, including health care professionals themselves [18]. Nevertheless, this is an area that clinicians and managers can control to a large extent [13]. In addition, there is a need for caution when predicting the real-world implications of total bed utilization from models in which maximum capacity is approached, as small random effects may have unpredictable consequences.

As these models require automation, we have not utilized the most advanced predictive techniques available. The self-exciting threshold autoregressive model and artificial neural networks would be likely to produce more accurate predictions once fully optimized. We explored the self-exciting threshold autoregressive approach but ultimately rejected it because of the high degree of customization it requires for correct use, which would be impractical for hospital staff who have limited statistical knowledge or little time to acquire it.

Conclusions

There is no sign of an imminent reduction in the demand for hospital services. Therefore, improvements in the efficiency of health care resource utilization are of paramount importance. To our knowledge, this is the first study to generate accurate forecasting models for an entire health care system. In addition, our models are automated and require minimal effort to execute consistently and accurately; thus, they are in line with the NHS Modernisation Agency’s latest guidelines on advanced forecasting techniques [12]. With increased predictive accuracy of nonelective bed occupancy, hospital managers can produce more reliable elective procedure schedules. This increased efficiency should lead to better care for patients, together with a more consistent workflow pattern for health care staff. We believe that a similar methodology can be applied to hospital systems other than the NHS, including in other countries such as the United States, and we hope to apply these models more widely in the future.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Details regarding the forecasting techniques used in developing the current prediction model.

DOCX File , 13 KB

Multimedia Appendix 2

Supplementary tables.

DOCX File , 284 KB

  1. Hospital Episode Statistics Data Dictionary. NHS Digital. URL: https://digital.nhs.uk/data-and-information/data-tools-and-services/data-services/hospital-episode-statistics/hospital-episode-statistics-data-dictionary [accessed 2021-07-05]
  2. Jones SM, Swift S, Moulton C, Molyneux P, Black S, Mason N, et al. The association between delays to patient admission from the Emergency Department and all-cause 30-day mortality. Emergency Medicine Journal 2021 (forthcoming).
  3. Soyiri IN, Reidpath DD. An overview of health forecasting. Environ Health Prev Med 2013 Jan;18(1):1-9 [FREE Full text] [CrossRef] [Medline]
  4. Ordu M, Demir E, Tofallis C. A comprehensive modelling framework to forecast the demand for all hospital services. Int J Health Plann Manage 2019 Apr;34(2):e1257-e1271. [CrossRef] [Medline]
  5. Mackay M, Lee M. Choice of models for the analysis and forecasting of hospital beds. Health Care Manag Sci 2005 Aug;8(3):221-230. [CrossRef] [Medline]
  6. Kutafina E, Bechtold I, Kabino K, Jonas SM. Recursive neural networks in hospital bed occupancy forecasting. BMC Med Inform Decis Mak 2019 Mar 07;19(1):39 [FREE Full text] [CrossRef] [Medline]
  7. Hoot N, Epstein S, Allen T, Jones SS, Baumlin KM, Chawla N, et al. Forecasting emergency department crowding: an external, multicenter evaluation. Ann Emerg Med 2009 Oct;54(4):514-522.e19 [FREE Full text] [CrossRef] [Medline]
  8. Juang WC, Huang SJ, Huang FD, Cheng PW, Wann SR. Application of time series analysis in modelling and forecasting emergency department visits in a medical centre in Southern Taiwan. BMJ Open 2017 Dec 01;7(11):e018628 [FREE Full text] [CrossRef] [Medline]
  9. Jones SA, Joy MP, Pearson J. Forecasting demand of emergency care. Health Care Manag Sci 2002 Nov;5(4):297-305. [CrossRef] [Medline]
  10. Hospital Episode Statistics (HES). NHS Digital. URL: https://digital.nhs.uk/data-and-information/data-tools-and-services/data-services/hospital-episode-statistics [accessed 2021-09-29]
  11. Rotstein Z, Wilf-Miron R, Lavi B, Shahar A, Gabbay U, Noy S. The dynamics of patient visits to a public hospital ED: a statistical model. Am J Emerg Med 1997 Oct;15(6):596-599. [CrossRef] [Medline]
  12. NHS England and NHS Improvement. Advanced forecasting techniques. London: NHS England; 2020 Jan. URL: https://www.england.nhs.uk/wp-content/uploads/2020/01/advanced-forecasting-techniques.pdf [accessed 2021-08-31]
  13. Proudlove NC, Black S, Fletcher A. OR and the challenge to improve the NHS: modelling for insight and improvement in in-patient flows. Journal of the Operational Research Society 2017 Dec 21;58(2):145-158. [CrossRef]
  14. Champion R, Kinsman LD, Lee GA, Masman KA, May EA, Mills TM, et al. Forecasting emergency department presentations. Aust Health Rev 2007 Feb;31(1):83-90. [CrossRef] [Medline]
  15. Back J, Ross A, Duncan M, Jaye P, Henderson K, Anderson J. Emergency Department Escalation in Theory and Practice: A Mixed-Methods Study Using a Model of Organizational Resilience. Ann Emerg Med 2017 Nov;70(5):659-671 [FREE Full text] [CrossRef] [Medline]
  16. Jilani T, Housley G, Figueredo G, Tang PS, Hatton J, Shaw D. Short and Long term predictions of Hospital emergency department attendances. Int J Med Inform 2019 Sep;129:167-174. [CrossRef] [Medline]
  17. Poon CM, Wong ELY, Chau PYK, Yau SY, Yeoh EK. Management decision of hospital surge: assessing seasonal upsurge in inpatient medical bed occupancy rate among public acute hospitals in Hong Kong. QJM 2019 Jan 01;112(1):11-16. [CrossRef] [Medline]
  18. Luo L, Luo L, Zhang X, He X. Hospital daily outpatient visits forecasting using a combinatorial model based on ARIMA and SES models. BMC Health Serv Res 2017 Jul 10;17(1):469 [FREE Full text] [CrossRef] [Medline]


ED: emergency department
ES: exponential smoothing
NHS: National Health Service
RMSE: root-mean-square error
SARIMA: Seasonal Autoregressive Integrated Moving Average
TBATS: Trigonometric, Box-Cox transform, autoregressive moving average errors, and Trend and Seasonal components


Edited by G Eysenbach; submitted 30.06.20; peer-reviewed by L Genaro, M Adly, A Adly, A Adly, A Azzam; comments to author 06.11.20; revised version received 15.05.21; accepted 03.06.21; published 30.09.21

Copyright

©Kanan Shah, Akarsh Sharma, Chris Moulton, Simon Swift, Clifford Mann, Simon Jones. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 30.09.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.