Review
Abstract
Background: In the United States, national guidelines suggest that aggressive cancer care should be avoided in the final months of life. However, guideline compliance currently requires clinicians to make judgments based on their experience as to when a patient is nearing the end of their life. Machine learning (ML) algorithms may facilitate improved end-of-life care provision for patients with cancer by identifying patients at risk of short-term mortality.
Objective: This study aims to summarize the evidence for applying ML in ≤1-year cancer mortality prediction to assist with the transition to end-of-life care for patients with cancer.
Methods: We searched MEDLINE, Embase, Scopus, Web of Science, and IEEE to identify relevant articles. We included studies describing ML algorithms predicting ≤1-year mortality in patients with cancer. We used the prediction model risk of bias assessment tool to assess the quality of the included studies.
Results: We included 15 articles involving 110,058 patients in the final synthesis. Of the 15 studies, 12 (80%) had a high or unclear risk of bias. The model performance was good: the area under the receiver operating characteristic curve ranged from 0.72 to 0.92. We identified common issues leading to biased models, including using a single performance metric, incomplete reporting of or inappropriate modeling practice, and small sample size.
Conclusions: We found encouraging signs of ML performance in predicting short-term cancer mortality. Nevertheless, no included ML algorithms are suitable for clinical practice at the current stage because of the high risk of bias and uncertainty regarding real-world performance. Further research is needed to develop ML models using the modern standards of algorithm development and reporting.
doi:10.2196/33182
Keywords
Introduction
Background
Cancer therapies, including chemotherapy, immunotherapy, radiation, and surgery, aim to cure and reduce the risk of recurrence in early-stage disease and improve survival and quality of life for late-stage disease. However, cancer therapy is invariably associated with negative effects, including toxicity, comorbidities, financial burden, and social disruption. There is growing recognition that therapies are sometimes started too late, and many patients die while receiving active therapy [
- ]. For instance, a systematic review summarized that the percentage of patients with lung cancer receiving aggressive treatments during the last month of their life ranged from 6.4% to >50% [ ]. Another retrospective comparison study revealed that the proportion of patients with gynecologic cancer undergoing chemotherapy or invasive procedures in their last 3 months was significantly higher in 2011 to 2015 than in 2006 to 2010 [ ]. Research has shown that the aggressiveness of care at the end of life in patients with advanced cancers is associated with extra costs and a reduction in the quality of life for patients and their families [ , ].

In the United States, national guidelines state that gold standard cancer care should avoid the provision of aggressive care in the final months of life [
]. Avoiding aggressive care at the end of life currently requires clinicians to make judgments based on their experience as to when a patient is nearing the end of their life [ ]. Research has shown that these decisions are difficult to make because of a lack of scientific, objective evidence to support clinicians' judgment about when to initiate palliative care or related discussions [ , , ]. Thus, a decision support tool enabling the early identification of patients with cancer who may not benefit from aggressive care is needed to support better palliative care management and reduce clinicians' burden [ ].

In recent years, there have been substantial changes in both the type and quantity of patient data collected using electronic health records (EHR) and in the sophistication and availability of the techniques used to learn the complex patterns within those data. By learning these patterns, it is possible to make predictions about individual patients' future health states [
]. The process of creating accurate predictions from patterns evident in past data is referred to as machine learning (ML), a branch of artificial intelligence research [ ]. There has been growing enthusiasm for the development of ML algorithms to address clinical problems. Using ML to create robust, individualized predictions of clinical outcomes, such as the risk of short-term mortality [ , ], may improve care by allowing clinical teams to adjust care plans in anticipation of a forecasted event. Such predictions have been shown to be acceptable for use in clinical practice [ ] and may one day become a fundamental aspect of clinical practice.

ML applications have been developed to support mortality predictions for a variety of populations, including but not limited to patients with traumatic brain injury, coronavirus disease 2019 (COVID-19), and cancers, as well as patients admitted to emergency departments and intensive care units. These applications have consistently demonstrated promising performance across studies [
- ]. Researchers have also applied ML techniques to create tools supporting various clinical tasks involved in the care of patients with cancer, with most applications focusing on the prediction of cancer susceptibility, recurrence, treatment response, and survival [ , , ]. However, the performance of ML applications in supporting mortality predictions for patients with cancer has not yet been systematically examined and synthesized.

In addition, as the popularity of ML in clinical medicine has risen, so too has the realization that applying complex algorithms to big data sets does not in itself result in high-quality models [
, ]. For example, subtle temporal-regional nuances in data can cause models to learn relationships that are not repeated over time and space. This can lead to poor future performance and misleading predictions [ ]. Algorithms may also learn to replicate human biases in data and, as a result, could produce predictions that negatively affect disadvantaged groups [ , ]. Recent commentary has drawn attention to various issues in the transparency, performance, and reproducibility of ML tools [ - ]. A comparison of 511 scientific papers describing the development of ML algorithms found that, in terms of reproducibility, ML for health care compared poorly to other fields [ ]. Issues of algorithmic fairness and performance are especially pertinent when predicting patient mortality. If done correctly, these predictions could help patients and their families receive gold standard care at the end of life; if done incorrectly, there is a risk of causing unnecessary harm and distress at a deeply sensitive time.

Another aspect of mortality that affects algorithm performance is its rare occurrence in most populations. Several well-known issues arise when trying to predict events from data sets in which there are far fewer events than nonevents, a situation known as class imbalance. One such issue is the accuracy paradox—the case in which an ML algorithm presents with high accuracy but fails to identify occurrences of the rare outcome it was tasked to predict [
, ]. During the model training process, many algorithms seek to maximize their accuracy across the entire data set. In the case of a data set in which only 10% of patients experienced a rare outcome—as is often the case with data sets containing mortality—an algorithm could achieve an apparently excellent accuracy of 0.90 by simply predicting that every patient would live. The resulting algorithm would be clinically useless on account of its failure to identify patients who are at risk of dying. If handled incorrectly, the class imbalance problem can lead algorithms to prioritize the predictions of the majority class. For this reason, it is especially important to evaluate multiple performance metrics when assessing algorithms that predict rare events.
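To make the accuracy paradox concrete, the short Python sketch below (using scikit-learn on synthetic data, not data from any included study) shows how a model that simply predicts survival for every patient attains high accuracy while detecting no deaths.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Synthetic cohort of 1000 patients with a 10% mortality rate (the minority class).
rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.10, size=1000)

# A "majority class" model: predict that every patient survives.
y_pred = np.zeros_like(y_true)

print(f"Accuracy:    {accuracy_score(y_true, y_pred):.2f}")  # ~0.90, looks excellent
print(f"Sensitivity: {recall_score(y_true, y_pred):.2f}")    # 0.00, clinically useless
```

Reporting sensitivity (or the precision-recall curve) alongside accuracy exposes this failure immediately, which motivates the emphasis on multiple metrics throughout this review.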
Objective

The purpose of this systematic review is to critically evaluate the current evidence to (1) summarize ML-based model performance in predicting ≤1-year mortality for patients with cancer, (2) evaluate the practice and reporting of ML modeling, and (3) provide suggestions to guide future work in the area. In this study, we seek to evaluate models that identify patients with cancer who are near the end of their life and may benefit from end-of-life care, to facilitate the better provision of such care. As definitions of aggressive care at the end of life vary, ranging from initiation of chemotherapy or invasive procedures to admission to the emergency department or intensive care unit within windows of 14 days to 6 months [ , , ], we focused on ≤1-year mortality of patients with cancer to ensure that we include all ML models that have the potential to reduce the aggressiveness of care and support the better provision of palliative care for cancer populations.

Methods
Overview
We conducted this systematic review following the Joanna Briggs Institute guidelines for systematic reviews [
]. To facilitate reproducible reporting, we present our results following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement [ ]. This review was prospectively registered in PROSPERO (International Prospective Register of Systematic Reviews; PROSPERO ID: CRD42021246233). The protocol for this review has not been published.

Search Strategy
We searched the Ovid MEDLINE, Ovid Embase, Clarivate Analytics Web of Science, Elsevier Scopus, and IEEE Xplore databases from inception to October 2020. The following concepts were searched using subject headings and keywords as needed: cancer, tumor, oncology, machine learning, artificial intelligence, performance metrics, mortality, cancer death, survival rate, and prognosis. The terms were combined using AND/OR Boolean operators. A full list of search terms, along with the complete search strategy for each database used, is provided in
. In addition, we reviewed the reference lists of each included study for relevant studies.

Study Selection
A total of 2 team members screened the titles and abstracts of all articles identified by the search. A senior ML researcher (CSG) resolved discrepancies between the 2 reviewers. We then examined the full text of the remaining articles using the same approach but resolved disagreements via consensus. Studies were included if they (1) developed or validated ML-based models predicting ≤1-year mortality for patients with cancer, (2) made predictions using EHR data, (3) reported model performance, and (4) were original research published through a peer-reviewed process in English. We excluded studies if they (1) focused on risk factor investigation; (2) implemented existing models; (3) were not specific to patients with cancer; (4) used only image, genomic, clinical trial, or publicly available data; (5) predicted long-term (>1 year) mortality or survival probability; (6) created survival stratification using unsupervised ML approaches; and (7) were not peer-reviewed full papers. For this review, we defined short-term mortality as death occurring ≤1 year after cancer diagnosis or certain treatments.
Critical Appraisal
We evaluated the risk of bias (ROB) of each included study using the prediction model ROB assessment tool [
]. A total of 2 reviewers independently conducted the assessment for all the included studies and resolved conflicts by consensus.

Data Extraction and Synthesis
For data extraction, we developed a spreadsheet based on the items in the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) [
] through iterative discussions. A total of 4 reviewers independently used the spreadsheet to extract information from the included studies about sampling, data sources, predictor and outcome variables, modeling and evaluation approaches, model performance, and model interpretation, with each study extracted by 2 reviewers. Discrepancies were discussed among all reviewers to reach a consensus. The collected data items are available in [ ]. To summarize the evidence, we grouped the studies using TRIPOD's classification for prediction model studies ( ) and summarized the data narratively and descriptively by group. To estimate the performance of each type of ML algorithm, we averaged the area under the receiver operating characteristic curve (AUROC) across the included studies and estimated the SE for a 95% CI using the averaged AUROC and the pooled validation sample size. In addition, we conducted a sensitivity analysis to assess the impact of studies that were outliers on the basis of either their sample size or their risk of bias.
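To illustrate the synthesis step described above, the sketch below averages study-level AUROC values for one algorithm type and derives a 95% CI from the pooled validation sample. The Hanley-McNeil standard error formula and the input numbers are illustrative assumptions; the review does not specify the exact SE estimator used.

```python
import math

def pooled_auroc_ci(aurocs, deaths, survivors, z=1.96):
    """Average study-level AUROCs for one algorithm type and build a 95% CI
    from the pooled validation sample, using the Hanley-McNeil (1982) SE.
    This is one common choice of SE estimator, shown for illustration only."""
    a = sum(aurocs) / len(aurocs)            # unweighted mean AUROC across studies
    n1, n2 = sum(deaths), sum(survivors)     # pooled events / nonevents in validation sets
    q1, q2 = a / (2 - a), 2 * a * a / (1 + a)
    se = math.sqrt((a * (1 - a) + (n1 - 1) * (q1 - a * a)
                    + (n2 - 1) * (q2 - a * a)) / (n1 * n2))
    return a, (a - z * se, a + z * se)

# Hypothetical inputs: three validation sets evaluating the same algorithm type.
mean_auc, ci = pooled_auroc_ci(aurocs=[0.87, 0.89, 0.83],
                               deaths=[320, 1030, 1680],
                               survivors=[7638, 23552, 7434])
print(f"Pooled AUROC {mean_auc:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```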
Types of prediction model studies.

Study type and definition
Type 1a
Studies develop prediction model or models and evaluate model performance using the same data used for model development.
Type 1b
Studies develop prediction model or models and evaluate the model or models using the same data used for model development with resampling techniques (eg, bootstrapping and cross-validation) to avoid an optimistic performance estimate.
Type 2a
Studies randomly split data into two subsets: one for model development and another for estimating model performance.
Type 2b
Studies nonrandomly split data into two subsets: one for model development and another for estimating model performance. The splitting rule can be based on institution, location, or time.
Type 3
Studies develop and evaluate prediction model or models using 2 different data sets (eg, from different studies).
Type 4
Studies evaluate existing prediction models with new data sets not used in model development.
Note: The types of prediction model studies were summarized from Collins et al [
].
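The practical difference between these study types can be illustrated with a brief scikit-learn sketch on toy data; the column names, years, and split choices below are placeholders rather than details drawn from any included study.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Toy stand-in for an EHR extract; all column names and years are placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame({"age": rng.normal(65, 10, 2000),
                   "albumin": rng.normal(3.8, 0.6, 2000),
                   "dx_year": rng.integers(2008, 2018, 2000),
                   "death_1yr": rng.binomial(1, 0.15, 2000)})
X, y = df[["age", "albumin"]], df["death_1yr"]
model = LogisticRegression(max_iter=1000)

# Type 1b: evaluate on the development data itself, with resampling (10-fold CV).
print("CV AUROC:", cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean())

# Type 2a: random split into a development set and a holdout test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Type 2b: nonrandom split, eg, train on earlier diagnosis years and test on later years.
develop, temporal_test = df[df["dx_year"] < 2015], df[df["dx_year"] >= 2015]
```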
Results

Summary of Included Studies
Our search resulted in 970 unduplicated references, of which we excluded 771 (79.5%) articles during title and abstract screening for reasons such as no ML involvement, no use of EHR data, or no involvement of patients with cancer. After the full-text review, we included 1.5% (15/970) of the articles, involving a total of 110,058 patients with cancer (
). We have provided a detailed record of the selection process in [ , ].

We present a summary of the characteristics of the included articles in
[ - ]. Of the 15 included articles, 13 (87%) were model development and internal validation studies, and 2 (13%) were external validations of existing models. The median sample size was 783 (range 173-26,946), with a median of 21 candidate predictors considered (range 9-5390). The target populations of the 15 articles included 5 (33%) with all types of cancer, 3 (20%) with spinal metastatic disease, 2 (13%) with liver cancer, and 1 (7%) each with gastric cancer, colon and rectum cancer, stomach cancer, lung cancer, and bladder cancer. Many studies examined several algorithms. The most commonly used ML algorithms were artificial neural networks (8/15, 53%). Other algorithms included gradient-boosted trees (4/15, 27%), decision trees (4/15, 27%), regularized logistic regression (LR; 4/15, 27%), stochastic gradient boosting (2/15, 13%), naive Bayes classifier (1/15, 7%), Bayes point machine (1/15, 7%), and random forest (RF; 1/15, 7%). Of the 15 studies, 2 (13%) tested their models in their training data sets by resampling (type 1b), 9 (60%) used randomly split holdout internal validation data sets (type 2a), 2 (13%) used nonrandomly split holdout validation data sets (type 2b), and 2 (13%) validated existing models using external data sets (type 4). The most frequent candidate predictors were demographic (12/15, 80%), clinicopathologic (12/15, 80%), tumor entity (7/15, 47%), laboratory (7/15, 47%), comorbidity (5/15, 33%), and prior treatment information (5/15, 33%). The event of interest varied across the studies, with 47% (7/15) predicting 1-year mortality, 33% (5/15) predicting 180-day mortality, 13% (2/15) predicting 90-day mortality, and 7% (1/15) predicting 30-day mortality.

| Type of cancer and study | Country | Study type | Treatment | Sample size (training) | Sample size (testing) | Sample size (validating) | Algorithms | Input features (total number of features) | Outcome |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| All cancer | | | | | | | | | |
| Sena et al [ ] | Brazil | 1b | All | 543 | N/A^a | N/A | DT^b, ANN^c, and NB^d | Comorbidity and PRO^e for physical and mental status assessments (9) | 180-day death |
| Parikh et al [ ] | United States | 2a | All | 18,567 | 7958 | N/A | GBT^f and RF^g | Demographic, clinicopathologic, laboratory, comorbidity, and electrocardiogram data (599) | 180-day death |
| Manz et al [ ] | United States | 4 | All | N/A | N/A | 24,582 | GBT | Same as Parikh et al [ ] | 180-day death |
| Bertsimas et al [ ] | United States | 2a | All | 14,427 | 9556 | N/A | DT, regularized LR^h, and GBT | Demographic, clinicopathologic, gene mutations, prior treatment, comorbidity, use of health care resources, vital signs, and laboratory data (401) | 180-day death |
| Elfiky et al [ ] | United States | 2b | All | 17,832 | 9114 | N/A | GBT | Demographic, clinicopathologic, prescription, comorbidity, laboratory, vital sign, and use of health care resources data and physician notes (5390) | 180-day death |
| Non–small cell lung cancer | | | | | | | | | |
| Hanai et al [ ] | Japan | 2b | Curative resection | 125 | 48 | N/A | ANN | Demographic, clinicopathologic, and tumor entity data (17) | 1-year death |
| Gastric cancer | | | | | | | | | |
| Nilsaz-Dezfouli et al [ ] | Iran | 1b | Surgery | 452 | N/A | N/A | ANN | Demographic, clinicopathologic, tumor entity, and prior treatment data (20) | 1-year death |
| Colon and rectum cancer | | | | | | | | | |
| Arostegui et al [ ] | Spain | 2a | Curative or palliative surgery | 981 | 964 | N/A | DT and regularized LR | Demographic, clinicopathologic, tumor entity, comorbidity, ASA^i, prior treatment, laboratory, operational, postoperational complication, and use of health care resources data (32) | 1-year death |
| Stomach cancer | | | | | | | | | |
| Biglarian et al [ ] | Iran | 2a | Surgery | 300 | 136 | N/A | ANN | Demographic, clinicopathologic, and symptom data (NR^j) | 1-year death |
| Bladder cancer | | | | | | | | | |
| Klén et al [ ] | Finland | 2a | Radical cystectomy | 733 | 366 | N/A | Regularized LR | Demographic, clinicopathologic, ASA, comorbidity, laboratory, prior treatment, tomography, and operational data (NR) | 90-day death |
| Hepatocellular carcinoma | | | | | | | | | |
| Chiu et al [ ] | Taiwan | 2a | Liver resection | 347 | 87 | N/A | ANN | Demographic, clinicopathologic, tumor entity, comorbidity, ASA, laboratory, operational, and postoperational data (21) | 1-year death |
| Zhang et al [ ] | China | 2a | Liver transplant | 230 | 60 | N/A | ANN | Donor demographic data and recipient laboratory, clinicopathologic, and image data (14) | 1-year death |
| Spinal metastatic disease | | | | | | | | | |
| Karhade et al [ ] | United States | 2a | Surgery | 1432 | 358 | N/A | ANN, SVM^k, DT, and BPM^l | Demographic, clinicopathologic, tumor entity, ASA, laboratory, and operational data (23) | 30-day death |
| Karhade et al [ ] | United States | 2a | Surgery | 587 | 145 | N/A | SGB^m, RF, ANN, SVM, and regularized LR | Demographic, clinicopathologic, tumor entity, laboratory, operational, ECOG^n, ASIA^o, and prior treatment data (29) | 90-day death |
| Karhade et al [ ] | United States | 4 | Curative surgery | N/A | N/A | 176 | SGB | ECOG, demographic, clinicopathologic, tumor entity, laboratory, prior treatment, and ASIA data (23) | 1-year death |
^a N/A: not applicable.
^b DT: decision tree.
^c ANN: artificial neural network.
^d NB: naive Bayes.
^e PRO: patient-reported outcome.
^f GBT: gradient-boosted tree.
^g RF: random forest.
^h LR: logistic regression.
^i ASA: American Society of Anesthesiologists.
^j NR: not reported.
^k SVM: support vector machine.
^l BPM: Bayes point machine.
^m SGB: stochastic gradient boosting.
^n ECOG: Eastern Cooperative Oncology Group.
^o ASIA: American Spinal Injury Association.
ROB Evaluation
Of the 15 studies, 12 (80%) were deemed to have a high or unclear ROB. The analysis domain was the major source of bias (
). Of the 12 model development studies, 8 (67%) provided insufficient or no information on data preprocessing and model optimization (tuning) methods. Approximately 33% (5/15) of studies did not report how they addressed missing data, and 13% (2/15) potentially introduced selection bias by excluding patients with missing data. All studies clearly defined their study populations and data sources, although none justified their sample size. Predictors and outcomes of interest were also well-defined in all studies, except for 20% (3/15) of studies that did not specify their outcome measure definition and whether the definition was consistently used.

Model Performance
We summarize the performance of the best models from the type 2, 3, and 4 studies (12/15, 80%) in
. We excluded 1 type 2b study because the authors did not report performance results in a holdout validation set. Model performance across the studies ranged from acceptable to good, with AUROC values ranging from 0.72 to 0.92. Approximately 40% (6/15) of studies reported only AUROC values, leaving some uncertainty about model performance in correctly identifying patients at risk of short-term mortality. Other performance metrics were reported less often and were sometimes indicative of poor performance. Studies reported a median accuracy of 0.91 (range 0.86-0.96; 2/15, 13%), sensitivity of 0.85 (range 0.27-0.91; 4/15, 27%), specificity of 0.90 (range 0.50-0.99; 5/15, 33%), and positive predictive value (PPV) and negative predictive value of 0.52 (range 0.45-0.83; 4/15, 27%) and 0.92 (range 0.86-0.97; 2/15, 13%), respectively.

Among the ML algorithms examined, all performed similarly, with RF performing slightly better than the other algorithms (
). Approximately 33% (5/15) of studies compared their ML algorithms with statistical models [ , , , , ]. Differences in AUROC between the ML and statistical models ranged from 0.01 to 0.11, with one of the studies reporting a significant difference ( ).

| Type of cancer and study | Outcome | Training sample | Validation sample | Mortality rate (%) | Algorithm | AUROC^a | Accuracy | Sensitivity | Specificity | PPV^b | NPV^c | Calibration | Benchmark, model (Δ AUROC) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| All cancer | | | | | | | | | | | | | |
| Manz et al [ ] | 180-day death | N/A^d | 24,582 | 4.2 | GBT^e | 0.89 | —^f | 0.27 | 0.99 | 0.45 | 0.97 | Well-fit | — |
| Parikh et al [ ] | 180-day death | 18,567 | 7958 | 4.0 | RF^g | 0.87 | 0.96 | — | 0.99 | 0.51 | — | Well-fit at the low-risk group | LR^h (0.01) |
| Bertsimas et al [ ] | 180-day death | 14,427 | 9556 | 5.6 | GBT | 0.87 | 0.87 | 0.60 | — | 0.53 | — | — | LR (0.11) |
| Elfiky et al [ ] | 180-day death | 17,832 | 9114 | 18.4 | GBT | 0.83 | — | — | — | — | — | Well-fit | — |
| Gastrointestinal cancer | | | | | | | | | | | | | |
| Arostegui et al [ ] | 1-year death | 981 | 964 | 5.1 | DT^i | 0.84 | — | — | — | — | — | Well-fit | — |
| Biglarian et al [ ] | 1-year death | 300 | 136 | 37.5 | ANN^j | 0.92 | — | 0.80 | 0.85 | — | — | — | CPH^k (0.04)^l |
| Patients with bladder cancer | | | | | | | | | | | | | |
| Klén et al [ ] | 90-day death | 733 | 366 | 4.4 | Regularized LR | 0.72 | — | — | — | — | — | — | ACCI^m univariate model (0.05) |
| Patients with liver cancer | | | | | | | | | | | | | |
| Chiu et al [ ] | 1-year death | 347 | 87 | 17 | ANN | 0.88 | — | 0.89 | 0.50 | — | — | — | LR (0.08) |
| Zhang et al [ ] | 1-year death | 230 | 60 | 23.9 | ANN | 0.91 | — | 0.91 | 0.90 | 0.83 | 0.86 | — | — |
| Patients with spinal metastasis | | | | | | | | | | | | | |
| Karhade et al [ ] | 30-day death | 1432 | 358 | 8.5 | BPM^n | 0.78 | — | — | — | — | — | Well-fit | — |
| Karhade et al [ ] | 1-year death | 586 | 145 | 54.3 | SGB^o | 0.89 | — | — | — | — | — | Well-fit | — |
| Karhade et al [ ] | 1-year death | N/A | 176 | 56.2 | SGB | 0.77 | — | — | — | — | — | Fairly well-fit | — |
^a AUROC: area under the receiver operating characteristic curve.
^b PPV: positive predictive value.
^c NPV: negative predictive value.
^d N/A: not applicable.
^e GBT: gradient-boosted tree.
^f No data available.
^g RF: random forest.
^h LR: logistic regression.
^i DT: decision tree.
^j ANN: artificial neural network.
^k CPH: Cox proportional hazards.
^l Significant at the α level defined by the study.
^m ACCI: adjusted Charlson comorbidity index.
^n BPM: Bayes point machine.
^o SGB: stochastic gradient boosting.
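For studies that report only the AUROC, the additional discrimination metrics summarized above can be derived directly from predicted probabilities and a chosen decision threshold. The following sketch uses synthetic predictions and an arbitrary 0.5 threshold purely for illustration; neither is taken from any included study.

```python
import numpy as np
from sklearn.metrics import (average_precision_score, confusion_matrix,
                             f1_score, roc_auc_score)

# Synthetic predicted probabilities for a cohort with ~10% mortality (illustration only).
rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.10, 5000)
y_prob = np.clip(0.10 + 0.50 * y_true + rng.normal(0, 0.20, 5000), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)   # arbitrary decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUROC       {roc_auc_score(y_true, y_prob):.2f}")
print(f"AUPRC       {average_precision_score(y_true, y_prob):.2f}")
print(f"Sensitivity {tp / (tp + fn):.2f}   Specificity {tn / (tn + fp):.2f}")
print(f"PPV         {tp / (tp + fp):.2f}   NPV         {tn / (tn + fn):.2f}")
print(f"F1 score    {f1_score(y_true, y_pred):.2f}")
```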
Model Development and Evaluation Processes
Most articles (11/15, 73%) did not report how their training data were preprocessed (
). Authors of 27% (4/15) of articles reported their methods for preparing numeric variables, with 75% (3/4) using normalization, 25% (1/4) using standardization, and 25% (1/4) using discretization. Approximately 13% (2/15) of articles used one-hot encoding for their categorical variables. Various techniques were used to address missing data, including constant value imputation (3/15, 20%), multiple imputation (3/15, 20%), complete case analysis (2/15, 13%), probabilistic imputation (1/15, 7%), and the optimal impute algorithm (1/15, 7%).

Of the 13 model development studies, 9 (69%) reported their approaches for feature selection. These approaches, including model-based variable importance (3 studies), between-variable correlation, zero variance filtering, univariate Cox proportional hazards modeling, forward stepwise selection, recursive feature selection, and the parameter-increasing method, were used alone or in combination. Concerning hyperparameter selection, 33% (5/15) of studies reported their methods for determining hyperparameter values, with 60% (3/5) using grid search and 40% (2/5) using the default values of the modeling software. Finally, 47% (7/15) used various resampling approaches to ensure the generalizability of their models, with N-fold cross-validation as the primary strategy. Varying schemes were used, including 10-fold (3/15, 20%), 5-fold (2/15, 13%), 4-fold (1/15, 7%), 3-repeat 10-fold (1/15, 7%), and 5-repeat 5-fold (1/15, 7%) cross-validation; one study used bootstrapping. Approximately 27% (4/15) of studies did not report whether resampling was performed.
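For illustration, all of the steps whose reporting was frequently missing (imputation, scaling, categorical encoding, hyperparameter search, and cross-validation) can be specified explicitly in a single scikit-learn pipeline. The sketch below is generic; the column names and the parameter grid are placeholders, not a reconstruction of any included model.

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["age", "albumin", "creatinine"]          # placeholder predictors
categorical_cols = ["sex", "cancer_type", "asa_class"]   # placeholder predictors

# Data preprocessing: imputation, scaling, and one-hot encoding, all made explicit.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

pipeline = Pipeline([("preprocess", preprocess),
                     ("model", GradientBoostingClassifier(random_state=0))])

# Model optimization: grid search over an illustrative hyperparameter grid,
# with 10-fold cross-validation and AUROC as the selection metric.
search = GridSearchCV(pipeline,
                      param_grid={"model__n_estimators": [100, 300],
                                  "model__max_depth": [2, 3]},
                      scoring="roc_auc", cv=10)
# search.fit(X_train, y_train)  # X_train/y_train: development data (not shown)
```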
Of the 15 studies, 12 (80%) used variable importance plots to interpret their models, 3 (20%) included decision tree rules, and 2 (13%) included coefficients to explain how their models generated predictions. Other model interpretation approaches, including local interpretable model-agnostic explanations and partial dependence plots, were used in 7% (1/15) of studies.
| Type and study | Numeric variable preprocessing | Categorical variable preprocessing | Missing data handling | Feature selection | Hyperparameter value selection | Generalizability consideration | Interpretation |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Type 1b | | | | | | | |
| Sena et al [ ] | Normalization | N/A^a | NR^b | None | Software default | 10-fold CV^c | VI^d |
| Nilsaz-Dezfouli et al [ ] | NR | NR | NR | VI | Grid search | 5×5-fold CV | VI |
| Type 2a | | | | | | | |
| Parikh et al [ ] | NR | NR | Constant value imputation | Zero variance and between-variable correlation | Grid search | 5-fold CV | VI and coefficient |
| Klén et al [ ] | NR | NR | Complete cases only | LASSO^e LR^f | NR | NR | VI |
| Karhade et al [ ] | NR | NR | missForest multiple imputation | RF^g | NR | 3×10-fold CV | VI, PDP^h, and LIME^i |
| Karhade et al [ ] | NR | NR | Multiple imputation | Recursive feature selection | NR | 10-fold CV | NR |
| Arostegui et al [ ] | Discretization | One-hot encoding | Constant value imputation | RF variable importance | Software default | Bootstrapping | VI and decision tree rules |
| Bertsimas et al [ ] | NR | NR | Optimal impute algorithm | None | NR | NR | VI and decision tree rules |
| Chiu et al [ ] | NR | NR | Complete cases only | Univariate Cox proportional hazard model | NR | NR | VI |
| Zhang et al [ ] | Normalization | One-hot encoding | NR | Forward stepwise selection algorithm | NR | 10-fold CV | VI |
| Biglarian et al [ ] | NR | NR | NR | None | NR | NR | NR |
| Type 2b | | | | | | | |
| Elfiky et al [ ] | NR | NR | Probabilistic imputation | None | Grid search | 4-fold CV | VI |
| Hanai et al [ ] | Standardization | NR | NR | Between-variable correlation and PIM^j | NR | 5-fold CV | VI |
| Type 4 | | | | | | | |
| Manz et al [ ] | NR | NR | Constant value imputation | N/A | N/A | N/A | VI and coefficient |
| Karhade et al [ ] | NR | NR | missForest multiple imputation | N/A | N/A | N/A | NR |
^a N/A: not applicable.
^b NR: not reported.
^c CV: cross-validation.
^d VI: variable importance.
^e LASSO: least absolute shrinkage and selection operator.
^f LR: logistic regression.
^g RF: random forest.
^h PDP: partial dependence plot.
^i LIME: local interpretable model-agnostic explanation.
^j PIM: parameter-increasing method.
Solutions for Class Imbalance
All included studies reported mortality rates indicating some degree of class imbalance in their samples (
). The median mortality rate was 20.0% (range 4%-56.2%), with 2.8 deaths per candidate predictor in the training samples (range 0.5-12.3). One type 1 study discussed the potential disadvantage of this issue and used a downsampling approach to handle the imbalanced data. However, no information was provided on how the downsampling was conducted or on its effect on model performance in an unseen data set.
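The included study did not describe its downsampling procedure. For readers unfamiliar with the idea, the sketch below shows one common implementation, random undersampling of the majority class applied to the training data only; it is a generic illustration, not the procedure used in that study.

```python
import numpy as np

def downsample_majority(X, y, ratio=1.0, seed=0):
    """Randomly drop majority-class (survivor) rows from the training data until
    there are `ratio` majority cases per minority (death) case. This is one common
    implementation; the included study did not report the procedure it used."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    keep = rng.choice(majority, size=int(ratio * len(minority)), replace=False)
    idx = np.concatenate([minority, keep])
    rng.shuffle(idx)
    return X[idx], y[idx]

# Note: only the training data are resampled; the holdout/validation set is left
# untouched so that reported performance reflects the true class distribution.
```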
Sensitivity Analysis

Owing to the small number of included studies, we conducted a sensitivity analysis by including 1 study per research group to avoid studies from a single group having a disproportionate effect on our evaluation of model performance and modeling practice. We observed similar issues concerning model development and evaluation practice after removing the studies by Manz et al [
] and Karhade et al [ , ]. For model performance, all algorithms still demonstrated good performance, with a median AUROC of 0.88 (range 0.81-0.89) ( [ , , ]). We detected changes in AUROC for all algorithms except RF and regularized LR (changes ranging from −0.008 to 0.065). The stochastic gradient boosting and support vector machine algorithms had the greatest changes in AUROC (ΔAUROC=0.06 and 0.065, respectively). However, the performance of these models in the sensitivity analysis may not be reliable, as both algorithms were examined in the same study using a small sample (n=145).

Discussion
Principal Findings
Mortality prediction is a sensitive topic that, if done correctly, could assist with the provision of appropriate end-of-life care for patients with cancer. ML-based models have been developed to support such predictions; however, the current evidence has not yet been systematically examined. To fill this gap, we performed a systematic review of 15 studies to summarize the quality of the evidence and the performance of ML-based models predicting short-term mortality to identify patients with cancer who may benefit from palliative care. Our findings suggest that the algorithms have promising overall discriminatory performance with respect to AUROC values, consistent with previous studies summarizing the performance of ML-based models supporting mortality predictions for other populations [
- ]. However, the results must be interpreted with caution because of the high ROB across the studies, as well as some evidence of selective reporting of important performance metrics such as sensitivity and PPV, echoing previous reports of poor adherence to TRIPOD reporting items in ML studies [ ]. We identified several common issues in the methods used to develop and evaluate the algorithms that could lead to biased models and misleading performance estimates. These issues included the use of a single performance metric, incomplete reporting of or inappropriate data preprocessing and modeling, and small sample size. Further research is needed to establish guidelines for ML modeling, evaluation, and reporting to enhance the quality of the evidence in this area.

We found that the AUROC was predominantly used as the primary metric for model selection, whereas other performance metrics were discussed far less. However, the AUROC provides limited information for determining whether a model is clinically beneficial, as it weighs sensitivity and specificity equally [
, ]. For instance, Manz et al [ ] reported a model predicting 180-day mortality for patients with cancer with an AUROC of 0.89, showing the superior performance of the model [ ]. However, their model demonstrated a low sensitivity of 0.27, indicating poor performance in identifying individuals at high risk of 180-day death. In practice, whether to stress sensitivity or specificity depends on the model's purpose. In the case of rare event prediction, we believe that sensitivity will usually be prioritized. Therefore, we strongly suggest that future studies report multiple discrimination metrics, including sensitivity, specificity, PPV, negative predictive value, F1 score, and the area under the precision–recall curve, to allow for a comprehensive evaluation [ - ].

We found no clear difference in performance between general and cancer-specific ML models for short-term mortality predictions (AUROC 0.87 for general models vs 0.86 for cancer-specific models). This finding aligns with a study reporting no performance benefit of disease-specific ML models over general ML models for hospital readmission predictions [
]. However, among the 15 included studies, 10 (67%) examined ML performance in short-term mortality prediction for only a few types of cancer, leaving ML performance in most cancer types unexplored and compromising the comparison. In fact, a few disease-specific models examined in this review demonstrated exceptional performance and have the potential to provide disease-specific information to better guide clinical practice [ , ]. As such, we recommend that further research test ML models in a wider range of oncology-specific patient cohorts for short-term mortality prediction to enable a full understanding of whether the advantages of disease-specific ML models outweigh their limitations, such as higher development and implementation costs.

Only 33% (5/15) of the included studies compared their model with a traditional statistical model, such as univariate or multivariate LR [
, , , , ]. Of the 15 studies, 1 (7%) reported that the ML models were statistically more accurate, although all comparison studies reported a superior AUROC for their ML models relative to the statistical predictive models. This finding supports previous work reporting that the performance benefit of ML over conventional modeling approaches remains unclear at the current stage [ ]. Thus, although we argue that the capacity of ML algorithms to handle nonlinear, high-dimensional data could benefit clinical practice beyond predictive performance, for example by identifying additional risk factors that could be targeted to improve patient outcomes, we encourage researchers to benchmark their ML models against conventional approaches to demonstrate the performance benefit of ML.

Our review suggests that sample size considerations are missing from ML studies in this field, which is consistent with a previous review [
]. In fact, none of the included studies justified the appropriateness of their sample size given the number of candidate predictors used in model development. Simulation studies have suggested that most ML modeling approaches require >200 outcome events per candidate predictor to reach stable performance and avoid optimistic models [ ]; under this criterion, a model considering 20 candidate predictors would, for example, require more than 4000 outcome events. Unfortunately, none of the included studies met this criterion. Thus, we recommend that future studies justify the appropriateness of their sample size and, if a small sample is unavoidable, use feature selection and dimensionality reduction techniques before modeling to reduce the number of candidate predictors.

Most studies used imbalanced data sets without additional procedures, such as over- or downsampling, to address the issue. The effects of the class-imbalanced data sets are unclear because sensitivity was often unreported and varied widely when it was reported. One study used a downsampling technique to balance its data set [
]. However, the authors did not report their model performance in a holdout validation data set; thus, the effectiveness of this approach is unknown. Moreover, the effectiveness of other approaches, such as the synthetic minority oversampling technique [ ], remains unexamined in this context. Further research is needed to examine whether these approaches can further improve the performance of ML models in predicting cancer mortality.

Most ML models predicting short-term cancer mortality were reported without intuitive interpretations of how their predictions are generated. It is well documented that acceptance of ML by the wider medical community is hindered by the limited interpretability of ML-based models [
]. Despite the widespread use of variable importance analysis in the included studies to reveal the factors most essential to the models, it remains unknown how the models used those factors to generate their predictions [ ]. As the field progresses, global and local model interpretation approaches have been developed to explain ML models intuitively and visually at the data set and instance levels [ ]. Including these analyses to provide an intuitive model explanation may not only gain medical professionals' trust but also provide information to guide individualized care plans and future investigations [ ]. Therefore, we highly recommend that future studies unbox their models using such explanation analyses in addition to reporting model performance.
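Such explanation analyses are readily available in standard libraries. The sketch below fits a toy model and then applies permutation importance and a partial dependence display; the data, predictor names, and model are placeholders chosen only to keep the example self-contained.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

# Toy stand-in for a fitted model and an evaluation set (all names are placeholders).
rng = np.random.default_rng(0)
X = pd.DataFrame({"age": rng.normal(65, 10, 500),
                  "albumin": rng.normal(3.8, 0.6, 500),
                  "ecog": rng.integers(0, 4, 500)})
y = (rng.random(500) < 0.10 + 0.05 * X["ecog"]).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation 1: permutation importance (here computed on the toy data itself).
result = permutation_importance(model, X, y, scoring="roc_auc",
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name:10s} {score:.3f}")

# Global explanation 2: partial dependence of predicted risk on one predictor.
PartialDependenceDisplay.from_estimator(model, X, features=["albumin"])
```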
Limitations

This review has several limitations. First, we did not quantitatively synthesize model performance because of the clinical and methodological heterogeneity of the included studies. We believe that a meta-analysis of model performance would provide clearer evidence but should be conducted only with a sufficient number of homogeneous studies [
]. Second, the ROB of the studies may be inappropriately estimated because of the use of the prediction model ROB assessment tool checklist, which was developed for appraising predictive modeling studies using multivariable analysis. Some items may not apply, or additional items may be needed, because of the differences in terminology, theoretical foundations, and procedures between ML-based and regression-based studies. Finally, the results of this review may be affected by reporting bias, as we did not consider studies published outside of scientific journals or in non-English languages. Furthermore, our results could be compromised by the small number of included studies and the inclusion of studies by the same group (eg, 3 studies from Karhade et al [ , , ]). However, we observed similar issues with model development and performance in our sensitivity analysis, suggesting that our evaluation likely reflects the current evidence in the literature. Despite these limitations, this review provides an overview of ML-based model performance in predicting short-term cancer mortality and leads to recommendations concerning model development and reporting.

Conclusions
In conclusion, we found signs of encouraging performance but also highlighted several issues concerning the way algorithms were trained, evaluated, and reported in the current literature. The overall ROB was high, and there was substantial uncertainty regarding the development and performance of the models in the real world because of incomplete reporting. Although some models are potentially clinically beneficial, we must conclude that none of the included studies produced an ML model that we considered suitable for clinical practice to support palliative care initiation and provision. We encourage further efforts to develop safe and effective ML models using modern standards of development and reporting.
Acknowledgments
The authors would like to acknowledge the support of the Division of Internal Medicine Immuno‐Oncology Toxicity Award Program of the University of Texas MD Anderson Cancer Center. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agency.
Conflicts of Interest
None declared.
Search strategy for all reference databases used.
DOCX File , 24 KB
Data collection tool.
DOCX File , 26 KB
Detailed record for the study selection process.
XLSX File (Microsoft Excel File), 1276 KB
Comparison of area under the receiver operating characteristic curve for each machine learning algorithm between the full and sensitivity analyses.
DOCX File , 31 KB

References
- Earle CC, Neville BA, Landrum MB, Ayanian JZ, Block SD, Weeks JC. Trends in the aggressiveness of cancer care near the end of life. J Clin Oncol 2004;22(2):315-321. [CrossRef] [Medline]
- Tönnies J, Hartmann M, Jäger D, Bleyel C, Becker N, Friederich HC, et al. Aggressiveness of care at the end-of-life in cancer patients and its association with psychosocial functioning in bereaved caregivers. Front Oncol 2021;11:673147 [FREE Full text] [CrossRef] [Medline]
- Ullgren H, Fransson P, Olofsson A, Segersvärd R, Sharp L. Health care utilization at end of life among patients with lung or pancreatic cancer. Comparison between two Swedish cohorts. PLoS One 2021;16(7):e0254673 [FREE Full text] [CrossRef] [Medline]
- Bylicki O, Didier M, Riviere F, Margery J, Grassin F, Chouaid C. Lung cancer and end-of-life care: a systematic review and thematic synthesis of aggressive inpatient care. BMJ Support Palliat Care 2019;9(4):413-424 [FREE Full text] [CrossRef] [Medline]
- Jang TK, Kim DY, Lee SW, Park JY, Suh DS, Kim JH, et al. Trends in treatment during the last stages of life in end-stage gynecologic cancer patients who received active palliative chemotherapy: a comparative analysis of 10-year data in a single institution. BMC Palliat Care 2018;17(1):99 [FREE Full text] [CrossRef] [Medline]
- Wright AA, Keating NL, Ayanian JZ, Chrischilles EA, Kahn KL, Ritchie CS, et al. Family perspectives on aggressive cancer care near the end of life. JAMA 2016;315(3):284-292 [FREE Full text] [CrossRef] [Medline]
- Palliative and end-of-life care 2015-2016. National Quality Forum. 2016. URL: https://www.qualityforum.org/publications/2016/12/palliative_and_end-of-life_care_2015-2016.aspx [accessed 2022-02-28]
- Avati A, Jung K, Harman S, Downing L, Ng A, Shah NH. Improving palliative care with deep learning. BMC Med Inform Decis Mak 2018;18(Suppl 4):122 [FREE Full text] [CrossRef] [Medline]
- Pirl WF, Lerner J, Traeger L, Greer JA, El-Jawahri A, Temel JS. Oncologists' dispositional affect and likelihood of end-of-life discussions. J Clin Oncol 2019;34(26_suppl):9 [FREE Full text] [CrossRef]
- Kale MS, Ornstein KA, Smith CB, Kelley AS. End-of-life discussions with older adults. J Am Geriatr Soc 2016;64(10):1962-1967 [FREE Full text] [CrossRef] [Medline]
- Obermeyer Z, Emanuel EJ. Predicting the future - big data, machine learning, and clinical medicine. N Engl J Med 2016;375(13):1216-1219 [FREE Full text] [CrossRef] [Medline]
- Sidey-Gibbons JA, Sidey-Gibbons CJ. Machine learning in medicine: a practical introduction. BMC Med Res Methodol 2019;19(1):64 [FREE Full text] [CrossRef] [Medline]
- Cruz JA, Wishart DS. Applications of machine learning in cancer prediction and prognosis. Cancer Inform 2007;2:59-77 [FREE Full text] [Medline]
- Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI. Machine learning applications in cancer prognosis and prediction. Comput Struct Biotechnol J 2014;13:8-17 [FREE Full text] [CrossRef] [Medline]
- Manz C, Parikh RB, Evans CN, Chivers C, Regli SB, Changolkar S, et al. Effect of integrating machine learning mortality estimates with behavioral nudges to increase serious illness conversions among patients with cancer: a stepped-wedge cluster randomized trial. J Clin Oncol 2020;38(15_suppl):12002 [FREE Full text] [CrossRef]
- Shillan D, Sterne JA, Champneys A, Gibbison B. Use of machine learning to analyse routinely collected intensive care unit data: a systematic review. Crit Care 2019;23(1):284 [FREE Full text] [CrossRef] [Medline]
- Rau CS, Kuo PJ, Chien PC, Huang CY, Hsieh HY, Hsieh CH. Mortality prediction in patients with isolated moderate and severe traumatic brain injury using machine learning models. PLoS One 2018;13(11):e0207192 [FREE Full text] [CrossRef] [Medline]
- Bottino F, Tagliente E, Pasquini L, Di Napoli A, Lucignani M, Figà-Talamanca L, et al. COVID mortality prediction with machine learning methods: a systematic review and critical appraisal. J Pers Med 2021;11(9):893 [FREE Full text] [CrossRef] [Medline]
- Nardini C. Machine learning in oncology: a review. Ecancermedicalscience 2020;14:1065 [FREE Full text] [CrossRef] [Medline]
- Ramesh S, Chokkara S, Shen T, Major A, Volchenboum SL, Mayampurath A, et al. Applications of artificial intelligence in pediatric oncology: a systematic review. JCO Clin Cancer Inform 2021;5:1208-1219 [FREE Full text] [CrossRef] [Medline]
- Wang W, Kiik M, Peek N, Curcin V, Marshall IJ, Rudd AG, et al. A systematic review of machine learning models for predicting outcomes of stroke with structured data. PLoS One 2020;15(6):e0234722 [FREE Full text] [CrossRef] [Medline]
- Lazer D, Kennedy R, King G, Vespignani A. Big data. The parable of Google flu: traps in big data analysis. Science 2014;343(6176):1203-1205. [CrossRef] [Medline]
- Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366(6464):447-453. [CrossRef] [Medline]
- Pfob A, Mehrara BJ, Nelson JA, Wilkins EG, Pusic AL, Sidey-Gibbons C. Towards patient-centered decision-making in breast cancer surgery: machine learning to predict individual patient-reported outcomes at 1-year follow-up. Ann Surg (forthcoming) 2021. [CrossRef] [Medline]
- Roberts M, Driggs D, Thorpe M, Gilbey J, Yeung M, Ursprung S, AIX-COVNET. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat Mach Intell 2021;3(3):199-217 [FREE Full text] [CrossRef]
- Stupple A, Singerman D, Celi LA. The reproducibility crisis in the age of digital medicine. NPJ Digit Med 2019;2:2 [FREE Full text] [CrossRef] [Medline]
- He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019;25(1):30-36 [FREE Full text] [CrossRef] [Medline]
- McDermott MB, Wang S, Marinsek N, Ranganath R, Foschini L, Ghassemi M. Reproducibility in machine learning for health research: still a ways to go. Sci Transl Med 2021;13(586):eabb1655. [CrossRef] [Medline]
- Uddin MF. Addressing accuracy paradox using enhanched weighted performance metric in machine learning. In: Sixth HCT Information Technology Trends. 2019 Presented at: ITT '19; November 20-21, 2019; Ras Al Khaimah, UAE p. 319-324 URL: https://doi.org/10.1109/itt48889.2019.9075071 [CrossRef]
- Valverde-Albacete FJ, Peláez-Moreno C. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox. PLoS One 2014;9(1):e84217 [FREE Full text] [CrossRef] [Medline]
- Aromataris E, Munn Z. JBI Manual for Evidence Synthesis. Adelaide, Australia: Joanna Briggs Institute; 2020.
- Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71 [FREE Full text] [CrossRef] [Medline]
- Moons KG, Wolff RF, Riley RD, Whiting PF, Westwood M, Collins GS, et al. PROBAST: a tool to assess risk of bias and applicability of prediction model studies: explanation and elaboration. Ann Intern Med 2019;170(1):W1-33 [FREE Full text] [CrossRef] [Medline]
- Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med 2015;162(1):55-63 [FREE Full text] [CrossRef] [Medline]
- Oh SE, Seo SW, Choi MG, Sohn TS, Bae JM, Kim S. Prediction of overall survival and novel classification of patients with gastric cancer using the survival recurrent network. Ann Surg Oncol 2018;25(5):1153-1159. [CrossRef] [Medline]
- Karhade AV, Ahmed A, Pennington Z, Chara A, Schilling A, Thio QC, et al. External validation of the SORG 90-day and 1-year machine learning algorithms for survival in spinal metastatic disease. Spine J 2020;20(1):14-21. [CrossRef] [Medline]
- Manz CR, Chen J, Liu M, Chivers C, Regli SH, Braun J, et al. Validation of a machine learning algorithm to predict 180-day mortality for outpatients with cancer. JAMA Oncol 2020;6(11):1723-1730 [FREE Full text] [CrossRef] [Medline]
- Sena GR, Lima TP, Mello MJ, Thuler LC, Lima JT. Developing machine learning algorithms for the prediction of early death in elderly cancer patients: usability study. JMIR Cancer 2019;5(2):e12163 [FREE Full text] [CrossRef] [Medline]
- Parikh RB, Manz C, Chivers C, Regli SH, Braun J, Draugelis ME, et al. Machine learning approaches to predict 6-month mortality among patients with cancer. JAMA Netw Open 2019;2(10):e1915997 [FREE Full text] [CrossRef] [Medline]
- Zhang M, Yin F, Chen B, Li B, Li YP, Yan LN, et al. Mortality risk after liver transplantation in hepatocellular carcinoma recipients: a nonlinear predictive model. Surgery 2012;151(6):889-897. [CrossRef] [Medline]
- Karhade AV, Thio QC, Ogink PT, Shah AA, Bono CM, Oh KS, et al. Development of machine learning algorithms for prediction of 30-day mortality after surgery for spinal metastasis. Neurosurgery 2019;85(1):E83-E91. [CrossRef] [Medline]
- Karhade AV, Thio QC, Ogink PT, Bono CM, Ferrone ML, Oh KS, et al. Predicting 90-day and 1-year mortality in spinal metastatic disease: development and internal validation. Neurosurgery 2019;85(4):E671-E681. [CrossRef] [Medline]
- Elfiky AA, Pany MJ, Parikh RB, Obermeyer Z. Development and application of a machine learning approach to assess short-term mortality risk among patients with cancer starting chemotherapy. JAMA Netw Open 2018;1(3):e180926 [FREE Full text] [CrossRef] [Medline]
- Hanai T, Yatabe Y, Nakayama Y, Takahashi T, Honda H, Mitsudomi T, et al. Prognostic models in patients with non-small-cell lung cancer using artificial neural networks in comparison with logistic regression. Cancer Sci 2003;94(5):473-477 [FREE Full text] [CrossRef] [Medline]
- Nilsaz-Dezfouli H, Abu-Bakar MR, Arasan J, Adam MB, Pourhoseingholi MA. Improving gastric cancer outcome prediction using single time-point artificial neural network models. Cancer Inform 2017;16:1176935116686062 [FREE Full text] [CrossRef] [Medline]
- Arostegui I, Gonzalez N, Fernández-de-Larrea N, Lázaro-Aramburu S, Baré M, Redondo M, REDISSEC CARESS-CCR Group. Combining statistical techniques to predict postsurgical risk of 1-year mortality for patients with colon cancer. Clin Epidemiol 2018;10:235-251 [FREE Full text] [CrossRef] [Medline]
- Biglarian A, Hajizadeh E, Kazemnejad A, Zali M. Application of artificial neural network in predicting the survival rate of gastric cancer patients. Iran J Public Health 2011;40(2):80-86 [FREE Full text] [Medline]
- Klén R, Salminen AP, Mahmoudian M, Syvänen KT, Elo LL, Boström PJ. Prediction of complication related death after radical cystectomy for bladder cancer with machine learning methodology. Scand J Urol 2019;53(5):325-331. [CrossRef] [Medline]
- Chiu HC, Ho TW, Lee KT, Chen HY, Ho WH. Mortality predicted accuracy for hepatocellular carcinoma patients with hepatic resection using artificial neural network. ScientificWorldJournal 2013;2013:201976 [FREE Full text] [CrossRef] [Medline]
- Bertsimas D, Kung J, Trichakis N, Wang Y, Hirose R, Vagefi PA. Development and validation of an optimized prediction of mortality for candidates awaiting liver transplantation. Am J Transplant 2019;19(4):1109-1118 [FREE Full text] [CrossRef] [Medline]
- Shi HY, Lee KT, Wang JJ, Sun DP, Lee HH, Chiu CC. Artificial neural network model for predicting 5-year mortality after surgery for hepatocellular carcinoma: a nationwide study. J Gastrointest Surg 2012;16(11):2126-2131. [CrossRef] [Medline]
- Dhiman P, Ma J, Navarro CA, Speich B, Bullock G, Damen JA, et al. Reporting of prognostic clinical prediction models based on machine learning methods in oncology needs to be improved. J Clin Epidemiol 2021;138:60-72 [FREE Full text] [CrossRef] [Medline]
- El Naqa I, Ruan D, Valdes G, Dekker A, McNutt T, Ge Y, et al. Machine learning and modeling: data, validation, communication challenges. Med Phys 2018;45(10):e834-e840 [FREE Full text] [CrossRef] [Medline]
- Johnston SS, Fortin S, Kalsekar I, Reps J, Coplan P. Improving visual communication of discriminative accuracy for predictive models: the probability threshold plot. JAMIA Open 2021;4(1):ooab017 [FREE Full text] [CrossRef] [Medline]
- Halligan S, Altman DG, Mallett S. Disadvantages of using the area under the receiver operating characteristic curve to assess imaging tests: a discussion and proposal for an alternative approach. Eur Radiol 2015;25(4):932-939 [FREE Full text] [CrossRef] [Medline]
- Sutter T, Roth JA, Chin-Cheong K, Hug BL, Vogt JE. A comparison of general and disease-specific machine learning models for the prediction of unplanned hospital readmissions. J Am Med Inform Assoc 2021;28(4):868-873 [FREE Full text] [CrossRef] [Medline]
- Christodoulou E, Ma J, Collins GS, Steyerberg EW, Verbakel JY, Van Calster B. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. J Clin Epidemiol 2019;110:12-22. [CrossRef] [Medline]
- Ben-Israel D, Jacobs WB, Casha S, Lang S, Ryu WH, de Lotbiniere-Bassett M, et al. The impact of machine learning on patient care: a systematic review. Artif Intell Med 2020;103:101785. [CrossRef] [Medline]
- van der Ploeg T, Austin PC, Steyerberg EW. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints. BMC Med Res Methodol 2014;14:137 [FREE Full text] [CrossRef] [Medline]
- Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 2002;16(1):321-357. [CrossRef]
- Molnar C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Morrisville, NC: Lulu Publishing; 2019.
- Li R, Shinde A, Liu A, Glaser S, Lyou Y, Yuh B, et al. Machine learning-based interpretation and visualization of nonlinear interactions in prostate cancer survival. JCO Clin Cancer Inform 2020;4:637-646 [FREE Full text] [CrossRef] [Medline]
- Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. Cochrane Handbook for Systematic Reviews of Interventions. 2nd Edition. Chichester, UK: John Wiley & Sons; 2019.
Abbreviations
AUROC: area under the receiver operating characteristic curve |
EHR: electronic health record |
LR: logistic regression |
ML: machine learning |
PPV: positive predictive value |
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses |
PROSPERO: International Prospective Register of Systematic Reviews |
RF: random forest |
ROB: risk of bias |
TRIPOD: the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis |
Edited by C Lovis; submitted 26.08.21; peer-reviewed by L Zhou, JA Benítez-Andrades; comments to author 02.01.22; revised version received 23.01.22; accepted 31.01.22; published 14.03.22
Copyright©Sheng-Chieh Lu, Cai Xu, Chandler H Nguyen, Yimin Geng, André Pfob, Chris Sidey-Gibbons. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 14.03.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.