Published in Vol 9, No 8 (2021): August

Patient-Level Cancer Prediction Models From a Nationwide Patient Cohort: Model Development and Validation

Original Paper

1Department of Mathematics, Pohang University of Science and Technology, Pohang-si, Republic of Korea

2Office of eHealth Research and Businesses, Seoul National University Bundang Hospital, Seongnam-si, Republic of Korea

3AMSquare Corporation, Pohang-si, Republic of Korea

*these authors contributed equally

Corresponding Author:

Hyung Ju Hwang, PhD

Department of Mathematics

Pohang University of Science and Technology

77 Cheongam-ro


Pohang-si, 37673

Republic of Korea

Phone: 82 054 279 2056

Fax: 82 054 279 2799


Background: Nationwide population-based cohorts provide a new opportunity to build automated risk prediction models at the patient level, and claim data are one of the more useful resources to this end. To avoid unnecessary diagnostic intervention after cancer screening tests, patient-level prediction models should be developed.

Objective: We aimed to develop cancer prediction models using nationwide claim databases with machine learning algorithms, which are explainable and easily applicable in real-world environments.

Methods: As source data, we used the Korean National Health Insurance System database. Every Korean aged ≥40 years undergoes a national health checkup every 2 years. We gathered all variables from the database, including demographic information, basic laboratory values, anthropometric values, and previous medical history. We applied conventional logistic regression methods, light gradient boosting methods, neural networks, survival analysis, and a one-class embedding classifier method based on deep learning–based anomaly detection to effectively analyze high-dimensional data. Performance was measured with the area under the curve and the area under the precision recall curve. We validated our models externally with a health checkup database from a tertiary hospital.

Results: The one-class embedding classifier model received the highest area under the curve scores with values of 0.868, 0.849, 0.798, 0.746, 0.800, 0.749, and 0.790 for liver, lung, colorectal, pancreatic, gastric, breast, and cervical cancers, respectively. For area under precision recall curve, the light gradient boosting models had the highest score with values of 0.383, 0.401, 0.387, 0.300, 0.385, 0.357, and 0.296 for liver, lung, colorectal, pancreatic, gastric, breast, and cervical cancers, respectively.

Conclusions: Our results show that it is possible to easily develop applicable cancer prediction models with nationwide claim data using machine learning. The 7 models showed acceptable performances and explainability, and thus can be distributed easily in real-world environments.

JMIR Med Inform 2021;9(8):e29807



Cancer is a major cause of death, accounting for nearly 10 million deaths worldwide in 2020 [1]. It is a preventable disease requiring major lifestyle modifications [2], for which screening is important because it can help health care professionals with early detection and treatment of several types of cancer before they become aggravated [3]. In the early stages, cancer is normally indolent and symptomless. Thus, nationwide cancer screening programs for the general population have been adopted in many countries [4-8]. A national cancer control program (NCCP) framework, a public health program designed to mitigate the number of cancer cases and deaths and improve quality of life of patients, was proposed by the World Health Organization [6,9]. In South Korea, the NCCP was designed in 1996 and implemented in 1999 to provide free screening services for low-income Medical Aid patients. Beginning in 2000, the NCCP expanded its target population to include all National Health Insurance (NHI) recipients. Since that time, the survival rate of cancer patients has continued to improve. According to cancer registration statistics in 2013, the relative survival rate of cancer patients has increased to 70.3% [10]. For the 7 major cancers, namely, stomach, colorectal, breast, lung, cervical, pancreatic, and liver cancer, every NHI beneficiary receives cancer screening tests mainly based on his or her age and gender. For instance, everyone ≥40 years old is examined by upper gastrointestinal series or gastrointestinal endoscopy every 2 years to screen for stomach cancer. However, concerns have been raised about this one-size-fits-all cancer screening program because every cancer screening procedure has its own risk of false-positive cases. For instance, false-positive mammograms for screening breast cancer have resulted in many unnecessary invasive breast excisional biopsies, which reduce the quality of life in women [11,12].
Thus, personalized cancer screening protocols based on patient’s individual risks have been in need since the NCCP was introduced [13,14]. The National Health Insurance System (NHIS) has collected health checkup data since 2003 under a structured data format and made it available for researchers [15]. There are two types of NHIS cohort data: a 1-million-person cohort sampled randomly from all NHI beneficiaries reflecting general characteristics of the entire South Korean population and a 500-thousand-person cohort sampled from those who received national health checkup services. All data include every diagnosis code and medications of each patient in all hospitals and clinics. For beneficiaries of national health checkup services, data include basic anthropometric measurements, laboratory values, past medical history, and family history. Despite the limited number of variables for the development of machine learning algorithms compared to electronic health records (EHRs) in hospitals, this type of data has the substantial advantages of a well-refined structured format and large sample size [16]. The data structure of the NHIS cohort and the monthly claim data from every EHR in hospitals are the same; therefore, the developed patient-level prediction models can be implemented in any EHR system in South Korea. In this study, we aimed to develop practical patient-level prediction models of 7 major cancers with acceptable performances and explainability, which can be distributed easily in real-world environments.

Data Description

We used the NHIS database to develop our cancer prediction models. The NHIS, a mandatory social insurance system, has collected health screening data at the national population level since the mid-1970s [15]. Because the system is centralized, Korean health screening data are centralized as well, while health care providers are paid on a per-service basis [17]. The NHIS database consists of 2 different data sets: a health checkup cohort and a national sample cohort [18]. We used the health checkup cohort in the learning process, which included training and internal validation, and the national sample cohort for external validation.

The NHIS provides a free health checkup program to all NHI members every 2 years. The health checkup cohort contains a total of 514,866 patients' health checkup records randomly extracted from health insurance members who have undergone a health checkup program. The national sample cohort contains about 1 million patient records corresponding to about 2.2% of the Korean population in 2002. This data set was collected by considering demographics, such as population, age, and geographic factors. Both data sets include social and economic eligibility variables, health resource utilization status, description, treatment details, disease type, prescription details, and clinic status. The NHIS data set statistics are presented in Table 1.

Table 1. Statistics of the National Health Insurance Service data sets (2002-2013).
Description | Health checkup cohort, n | National sample cohort, n
Diagnostic codes (full code name) | 17,385 | 19,626
Diagnostic codes (first 3 digits) | 2160 | 2319
Annual patient visits, mean | 15.6 | 8.9
Diagnostic codes/visit, mean | 2.4 | 2.5
Drug/prescription, mean | 4.4 | 4.4

Study Population Definition

It is mandatory that all cancer patients in South Korea be enrolled into a national cancer management program in the hospital where the cancer is diagnosed so that cancer patients only pay 5% of the total medical cost [19]. This means that almost all cancer patients in South Korea can be identified by diagnosis codes registered in the NHIS database [20].

We used the Korean Classification of Disease version 7, which is compatible with the International Classification of Diseases (ICD)-9, and defined the following 7 major cancers [21]: liver cancer (malignant neoplasm of the liver and intrahepatic bile ducts), C22; lung cancer (malignant neoplasm of the bronchus and lung), C34; colorectal cancer (malignant neoplasm of the colon, rectosigmoid junction, and rectum), C18, C19, and C20; pancreatic cancer (malignant neoplasm of the pancreas), C25; stomach cancer (malignant neoplasm of the stomach), C16; breast cancer (malignant neoplasm of the breast), C50; and cervical cancer (malignant neoplasm of the cervix uteri), C53.
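As a minimal illustration (not the authors' code), the cancer definitions above reduce to a prefix lookup on claim diagnosis codes; the function name and structure here are assumptions for illustration only.

```python
# Map the paper's 7 cancer definitions (KCD-7/ICD "C" codes) to labels.
CANCER_CODES = {
    "liver": {"C22"},
    "lung": {"C34"},
    "colorectal": {"C18", "C19", "C20"},
    "pancreatic": {"C25"},
    "stomach": {"C16"},
    "breast": {"C50"},
    "cervical": {"C53"},
}

def cancer_label(diagnosis_code):
    """Return the cancer type for a claim diagnosis code, or None if the
    code does not match any of the 7 study cancers (3-character prefix match)."""
    prefix = diagnosis_code.strip().upper()[:3]
    for cancer, prefixes in CANCER_CODES.items():
        if prefix in prefixes:
            return cancer
    return None

print(cancer_label("C19.0"))  # colorectal (rectosigmoid junction)
print(cancer_label("I10"))    # None (not a study cancer)
```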

The prevalence of each cancer is presented in Table 2.

Table 2. The number of cancer-free patients and the number of cancer patients diagnosed for each cancer.
Patient type | Liver | Lung | Colorectal | Pancreatic | Stomach | Breast | Cervical
Free, n | 234,659 | 233,931 | 233,203 | 235,633 | 232,493 | 91,982 | 92,736
Diagnosed, n | 1587 | 2335 | 2845 | 551 | 3679 | 1029 | 306

Input Features and Algorithms

First, we used basic features consisting of simple demographic information, including age and gender, health examination, and survey results (18 features, level 1). Second, we added 11 more features obtained from a questionnaire, including the patient's medical history and family medical history (29 features, level 2). Third, we included 10 specific disease diagnostic records per cancer that appeared significant through univariate analysis (39 features, level 3). The 10 specific codes for each cancer are provided in Multimedia Appendix 1.

To predict future cancers, we focused on cancer incidence within the next 5 years based on the time of screening. We first trained our predictive model with 4 common machine learning models: logistic regression (LR), random forest (RF), Light Gradient Boosting Machine (LGBM; a tree-based gradient boosting model), and multilayer perceptron (MLP). Further, we built a one-class embedding classifier (OCEC), which is a deep anomaly detection–based model (Figure 1). This method assumes that the data have one large class and several types of small anomalies not included in that class. This is an appropriate assumption because, while most people have normal screening records, few have cancer. To build our OCEC structure, we modified deep one-class classification, the first deep learning–based anomaly detection model [22], and added a small classifier on the latent space to predict future cancer. The hyperparameters used for training models are shown in Multimedia Appendix 2.
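As a sketch of the supervised baselines, the code below trains three of the 4 common models on synthetic data with roughly the cohort's class imbalance, using scikit-learn. The paper's LGBM comes from the separate lightgbm package, and the real hyperparameters are in Multimedia Appendix 2; the feature count (18, matching level 1) and every setting here are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 18))            # stand-in for the 18 level-1 features
y = (rng.random(1000) < 0.02).astype(int)  # ~2% cancer incidence, as in the cohorts

# 80/20 split mirroring the paper's training / internal-validation division.
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
}
# Each model outputs a probability-like score of developing cancer within 5 years.
scores = {name: m.fit(X_tr, y_tr).predict_proba(X_va)[:, 1]
          for name, m in models.items()}
```

With lightgbm installed, `lightgbm.LGBMClassifier()` drops into the same dictionary, since it exposes the same fit/predict_proba interface.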

Figure 1. Concept of one-class embedding classifier.

Model Evaluation Strategy

We divided the entire health checkup cohort, placing 80% into a training set and 20% into an internal validation set. The model was trained only on the training set; the internal validation set was not used in the learning process. After training, the model outputs a prediction score for the probability of developing cancer in the 5 years after the input year.

A cancer prediction problem is heavily imbalanced because the proportion of cancer-diagnosed patients is very small; in our data, it was <2% for all 7 cancers. Thus, we used the area under the receiver operating characteristic curve (AUROC) and the area under the precision recall curve (AUPRC) to evaluate our models. The AUROC takes values between 0 and 1 and is widely used for imbalanced problems, while the AUPRC summarizes the precision recall curve as the average precision across recall levels. The baseline for AUROC is always 0.5, meaning a random classifier would produce an AUROC of 0.5. With AUPRC, however, the baseline equals the fraction of positive cancer cases (number of positive examples/total number of examples). The baseline AUPRC for each cancer in both the internal and external validation sets is shown in Table 3.

Table 3. The baseline area under the precision recall curve for the internal and external validation sets.
Validation set | Liver | Lung | Colorectal | Pancreatic | Stomach | Breast | Cervical
Internal validation | 4.45×10–3 | 6.03×10–3 | 7.72×10–3 | 1.50×10–3 | 1.04×10–2 | 7.65×10–3 | 2.39×10–3
External validation | 2.96×10–3 | 3.86×10–3 | 5.52×10–3 | 1.01×10–3 | 6.65×10–3 | 7.97×10–3 | 2.22×10–3
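The difference between the two baselines can be checked numerically: a classifier that scores at random yields an AUROC near 0.5 but an AUPRC near the positive fraction. A minimal check with scikit-learn on synthetic labels (illustrative, not the study data):

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(1)
y = (rng.random(10_000) < 0.01).astype(int)  # ~1% positives, like the cancer cohorts
random_scores = rng.random(10_000)           # a classifier carrying no information

auroc = roc_auc_score(y, random_scores)
auprc = average_precision_score(y, random_scores)
baseline = y.mean()  # AUPRC baseline = positive examples / total examples

print(f"AUROC {auroc:.3f} (baseline 0.500)")
print(f"AUPRC {auprc:.4f} (baseline {baseline:.4f})")
```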

We evaluated the above metrics for both internal and external validation sets and compared the results. Additionally, for the external data set, we used the survival analysis method. We plotted Kaplan-Meier cumulative density curves to see the actual effectiveness of the predictive score. The study flow chart for learning and verification of the overall process is shown in Figure 2.

The NHIS institutional review board approved all data requests for research purposes (NHIS-2017-2-326). Because this public database is fully anonymized, institutional approval of Seoul National University Bundang Hospital (SNUBH) was waived by the institutional review board (X-2009-634-902).

Figure 2. Flow chart of the overall process. AUROC: area under the receiver operating characteristic curve.

Performance of Cancer Prediction Models

Table 4 shows the internal validation results for each cancer across the 5 models. Overall, the LGBM and deep learning models outperformed LR and RF on both AUROC and AUPRC. LR, the most widely used classic model, showed low AUPRC scores, while RF had a low AUROC.

Notably, more than half of the OCEC AUROC scores were the highest among the 5 models. OCEC and MLP are both deep learning structured models, but OCEC uses dense dimension reduction and outperformed MLP on both AUROC and AUPRC. This shows that an anomaly-based one-class classification model can be a suitable deep learning structure for rare disease prediction.

When looking at the internal validation results of each cancer, liver and lung cancers showed the best results (AUROC>0.8), followed by stomach, pancreatic, and colorectal cancers (0.8>AUROC>0.7). Cervical and breast cancers (both female cancers) showed the lowest results (0.7>AUROC>0.6). The same findings also appeared in the external validation (Table 5).

According to feature level, the results tended to improve as feature level increased from level 1 to 3, but this was not significant. However, in some cases, the opposite tendency was observed.

The findings for the external validation score were similar to those of the internal score. Interestingly, the external validation scores (Table 5) were higher than the internal ones overall.

Table 4. Internal validation performance of outcome prediction across models.
Cancer type | Feature level | LGBMa | LRb | RFc | MLPd | OCECe
(each cell: AUROCf/AUPRCg)

Liver | Level 1 | 0.858/0.359 | 0.836/0.045 | 0.748/0.359 | 0.858/0.296 | 0.857/0.313
Liver | Level 2 | 0.868/0.363 | 0.841/0.048 | 0.770/0.342 | 0.856/0.297 | 0.860/0.301
Liver | Level 3 | 0.871/0.383 | 0.852/0.080 | 0.788/0.361 | 0.860/0.315 | 0.868/0.334
Lung | Level 1 | 0.845/0.396 | 0.823/0.106 | 0.735/0.366 | 0.845/0.360 | 0.849/0.382
Lung | Level 2 | 0.845/0.395 | 0.822/0.110 | 0.750/0.366 | 0.832/0.338 | 0.841/0.338
Lung | Level 3 | 0.845/0.401 | 0.829/0.130 | 0.754/0.367 | 0.841/0.345 | 0.843/0.343
Colorectal | Level 1 | 0.790/0.385 | 0.764/0.055 | 0.707/0.366 | 0.794/0.347 | 0.795/0.371
Colorectal | Level 2 | 0.792/0.387 | 0.767/0.063 | 0.701/0.363 | 0.790/0.321 | 0.798/0.342
Colorectal | Level 3 | 0.794/0.385 | 0.769/0.075 | 0.704/0.360 | 0.791/0.322 | 0.796/0.342
Pancreatic | Level 1 | 0.723/0.300 | 0.724/0.017 | 0.676/0.316 | 0.744/0.234 | 0.746/0.259
Pancreatic | Level 2 | 0.720/0.281 | 0.727/0.018 | 0.669/0.309 | 0.725/0.240 | 0.745/0.240
Pancreatic | Level 3 | 0.723/0.271 | 0.730/0.018 | 0.682/0.311 | 0.730/0.225 | 0.743/0.231
Stomach | Level 1 | 0.787/0.385 | 0.768/0.086 | 0.713/0.353 | 0.793/0.348 | 0.798/0.367
Stomach | Level 2 | 0.790/0.382 | 0.770/0.092 | 0.704/0.351 | 0.796/0.345 | 0.800/0.345
Stomach | Level 3 | 0.791/0.383 | 0.772/0.108 | 0.715/0.351 | 0.787/0.329 | 0.795/0.329
Breast | Level 1 | 0.684/0.344 | 0.689/0.077 | 0.666/0.343 | 0.705/0.325 | 0.713/0.332
Breast | Level 2 | 0.696/0.345 | 0.696/0.083 | 0.681/0.346 | 0.706/0.324 | 0.711/0.327
Breast | Level 3 | 0.722/0.357 | 0.733/0.129 | 0.689/0.353 | 0.734/0.339 | 0.749/0.345
Cervical | Level 1 | 0.647/0.268 | 0.667/0.013 | 0.656/0.273 | 0.671/0.263 | 0.690/0.265
Cervical | Level 2 | 0.672/0.271 | 0.669/0.012 | 0.632/0.274 | 0.660/0.266 | 0.670/0.266
Cervical | Level 3 | 0.653/0.296 | 0.612/0.027 | 0.679/0.301 | 0.638/0.275 | 0.645/0.279

aLGBM: Light Gradient Boosting Model.

bLR: logistic regression.

cRF: random forest.

dMLP: multilayer perceptron.

eOCEC: one-class embedding classifier.

fAUROC: area under the receiver operating characteristic curve.

gAUPRC: area under precision recall curve.

Table 5. External performance of outcome prediction across models.
Cancer type | Feature level | LGBMa | LRb | RFc | MLPd | OCECe
(each cell: AUROCf/AUPRCg)

Liver | Level 1 | 0.910/0.485 | 0.893/0.065 | 0.815/0.502 | 0.911/0.433 | 0.912/0.442
Liver | Level 2 | 0.909/0.485 | 0.895/0.067 | 0.826/0.488 | 0.900/0.391 | 0.911/0.433
Liver | Level 3 | 0.915/0.514 | 0.907/0.120 | 0.838/0.527 | 0.910/0.463 | 0.919/0.471
Lung | Level 1 | 0.896/0.465 | 0.875/0.097 | 0.789/0.468 | 0.898/0.431 | 0.897/0.450
Lung | Level 2 | 0.895/0.463 | 0.875/0.104 | 0.788/0.465 | 0.886/0.296 | 0.894/0.401
Lung | Level 3 | 0.897/0.464 | 0.879/0.118 | 0.794/0.471 | 0.887/0.402 | 0.894/0.408
Colorectal | Level 1 | 0.872/0.455 | 0.858/0.070 | 0.776/0.482 | 0.883/0.426 | 0.887/0.449
Colorectal | Level 2 | 0.874/0.453 | 0.858/0.076 | 0.780/0.481 | 0.874/0.394 | 0.887/0.423
Colorectal | Level 3 | 0.877/0.455 | 0.859/0.085 | 0.776/0.473 | 0.882/0.393 | 0.884/0.415
Pancreatic | Level 1 | 0.891/0.420 | 0.884/0.029 | 0.753/0.456 | 0.898/0.360 | 0.904/0.336
Pancreatic | Level 2 | 0.888/0.405 | 0.884/0.030 | 0.747/0.450 | 0.883/0.335 | 0.902/0.337
Pancreatic | Level 3 | 0.885/0.407 | 0.886/0.039 | 0.759/0.450 | 0.883/0.323 | 0.897/0.336
Stomach | Level 1 | 0.889/0.481 | 0.863/0.088 | 0.795/0.478 | 0.891/0.457 | 0.894/0.440
Stomach | Level 2 | 0.891/0.480 | 0.864/0.095 | 0.793/0.479 | 0.887/0.422 | 0.893/0.436
Stomach | Level 3 | 0.889/0.478 | 0.864/0.109 | 0.792/0.473 | 0.885/0.401 | 0.890/0.413
Breast | Level 1 | 0.763/0.485 | 0.704/0.108 | 0.750/0.492 | 0.686/0.406 | 0.753/0.421
Breast | Level 2 | 0.771/0.488 | 0.716/0.106 | 0.745/0.492 | 0.678/0.396 | 0.697/0.410
Breast | Level 3 | 0.780/0.497 | 0.759/0.143 | 0.757/0.491 | 0.730/0.411 | 0.745/0.429
Cervical | Level 1 | 0.729/0.364 | 0.742/0.021 | 0.722/0.375 | 0.671/0.293 | 0.735/0.336
Cervical | Level 2 | 0.721/0.370 | 0.744/0.018 | 0.715/0.377 | 0.710/0.338 | 0.732/0.334
Cervical | Level 3 | 0.749/0.386 | 0.760/0.058 | 0.731/0.400 | 0.744/0.349 | 0.744/0.354

aLGBM: Light Gradient Boosting Model.

bLR: logistic regression.

cRF: random forest.

dMLP: multilayer perceptron.

eOCEC: one-class embedding classifier.

fAUROC: area under the receiver operating characteristic curve.

gAUPRC: area under precision recall curve.

Survival Analysis

To unveil the actual cancer incidence according to the predicted value, we used a survival analysis method. We analyzed the prediction scores of the LGBM model, one of the best performing of the aforementioned models. The prediction score indicates the probability of developing cancer within 5 years from the screening date; therefore, the closer the prediction score is to 1, the likelier it is that cancer will actually occur. We analyzed 5 patient groups defined by prediction score: group 1 (score ≥0.95), group 2 (score ≥0.90), group 3 (score ≥0.75), group 4 (score ≥0.50), and the total patient group. We drew Kaplan-Meier cumulative density curves for each group and compared them. In Figure 3, the x-axis represents time from the screening date, and the y-axis the rate of cancer incidence within the group. All these analyses were performed with external validation data. As the proportion of cancer patients is <1% for all cancers, the curve for the total group hugs the x-axis, while the curve of a group with a higher score threshold sits at a higher cumulative density value. These trends were observed across all cancers and show the reliability of our models. Significantly, >80% of patients in group 1 actually developed cancer within 5 years.
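The grouping and the cumulative density estimate (1 minus the Kaplan-Meier survival function) can be sketched in a few lines of pure Python; the patients below are invented for illustration, and real analyses would typically use a survival library such as lifelines.

```python
def km_cumulative_incidence(times, events, horizon):
    """Kaplan-Meier cumulative incidence at `horizon`.
    times: follow-up time per patient; events: 1 if cancer was diagnosed at
    that time, 0 if censored. Returns 1 - S(horizon)."""
    at_risk = len(times)
    surv = 1.0
    for t, e in sorted(zip(times, events)):
        if t > horizon:
            break
        if e:
            surv *= (at_risk - 1) / at_risk  # event: multiply in survival factor
        at_risk -= 1                         # event or censoring leaves the risk set
    return 1.0 - surv

patients = [  # (prediction score, years of follow-up, cancer diagnosed?)
    (0.97, 1.2, 1), (0.96, 4.8, 1), (0.95, 5.0, 0),
    (0.40, 5.0, 0), (0.10, 5.0, 0), (0.05, 3.0, 0),
]
# Group 1: prediction score >= 0.95, as in the study's stratification.
times, events = zip(*[(t, e) for s, t, e in patients if s >= 0.95])
print(round(km_cumulative_incidence(times, events, horizon=5.0), 3))  # → 0.667
```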

Figure 3. Kaplan-Meier cumulative density curves.

Model Explainability

With the LGBM and the Shapley Additive Explanations (SHAP) method [23], we can explain how the model produces its cancer prediction scores: we can evaluate which features are most important for predicting future cancer and whether each feature has a positive or a negative effect.
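The paper pairs LGBM with the shap package's tree explainer; as an illustration of what a SHAP value means, the toy below instead computes exact Shapley values by enumerating feature coalitions, which is feasible only for a handful of features. The scoring function, weights, and baseline are invented for illustration, not taken from the study.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, with absent features
    replaced by `baseline` values (a simple single-reference approximation)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy risk score over (age, smoking, BMI) with purely illustrative weights.
f = lambda v: 0.02 * v[0] + 0.5 * v[1] + 0.01 * v[2]
phi = shapley_values(f, x=[65, 1, 28], baseline=[50, 0, 24])
print([round(p, 3) for p in phi])  # → [0.3, 0.5, 0.04]
```

For a linear model each value reduces to w_i·(x_i − baseline_i), and the values always sum to f(x) − f(baseline), which is what makes per-feature attributions like those in Table 6 additive and interpretable.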

Table 6 shows the top 5 features for predicting cancer incidence for each type of cancer. Overall, age was the most important variable, as was gender, except in women's cancers. In addition, drinking frequency, alcohol consumption, and total cholesterol levels were all relevant factors.

In particular, aspartate aminotransferase and gamma-glutamyl transferase levels are important for liver cancer. Smoking frequency is an important variable in lung cancer but not in other cancers. Similarly, drinking is the third most important feature for stomach cancer. In breast and pancreatic cancers, blood glucose levels were a more important variable than they were for other cancers. For further details on SHAP values including correlations between each variable and cancer prediction, see Multimedia Appendix 3.

Table 6. Top 5 features by Shapley Additive Explanations.
Liver | Lung | Colorectal | Pancreatic | Stomach | Breast | Cervical
GTPa | Smoking | Sex | Hemoglobin | Sex | BMI | Fasting glucose
ASTb | Sex | BMI | Total cholesterol | BMI | Total cholesterol | BMI
Total cholesterol | BMI | Total cholesterol | BPc (high) | Drinking habit | Fasting glucose | Conjunctivitis
BMI | GTP | Fasting glucose | BMI | Hemoglobin | BP (high) | Total cholesterol

aGTP: gamma-glutamyl transferase (γ-GTP).

bAST: aspartate aminotransferase.

cBP: blood pressure.

In this study, we used nationwide population-based health care data to construct a machine learning model to predict the future incidence of 7 common types of cancer: liver, stomach, colorectal, lung, pancreatic, breast, and cervical cancer.

Among the 5 distinct models, the LGBM and OCEC, which is our original structure, performed best. Both models had a higher AUROC and AUPRC than did the other models. Interestingly, OCEC scored best in terms of AUROC score and outperformed the normal deep learning method (MLP). Our dense dimension reduction method with one-class anomaly insights was the best model structure.

All models performed well on the external validation set, demonstrating good generalization. In fact, the external validation results were even better than those of the internal validation. We believe this result stems from the different sampling methods used for the training and validation cohorts: the training data set consisted only of those with health checkup information, whereas the validation data set was sampled based on patients' demographic information. Even so, the national sample cohort has a distribution similar to that of the health checkup cohort, and it contains a sufficient number of data samples, which together produced good external validation results.

We drew Kaplan-Meier cumulative density curves for the LGBM model, which is a traditional way of determining whether a marker (here, the prediction score) is suitable for predicting cancer occurrence. More than 80% of the people with a prediction score ≥0.95 actually developed cancer within 5 years from the screening date. This is a significant result, showing that our model can be a powerful tool for identifying high-risk groups, who could then take precautions before cancer develops. In female cancers, such as breast and cervical cancer, the predictive power was lower than in other cancers, probably because both the size of the total female data sample and the number of cancer patients were relatively small. On the other hand, the predictive power for liver and lung cancer was very high. Our data set included liver-related features such as glutamic oxaloacetic transaminase and glutamate pyruvate transaminase, and we believe that smoking- and drinking-related features also helped predict these cancers. Accordingly, we can conclude that securing high-quality features and a large amount of data can improve predictive power.

There have been previous attempts to develop cancer prediction models with various input features. Japanese researchers developed a prediction model for the 10-year risk of hepatocellular carcinoma using data from 17,654 Japanese aged 40 to 69 years who participated in regular health checkups [24]. They obtained a higher AUROC (0.933) than did our models (0.912 with the level 1 feature set). However, they did not report AUPRC, which is important in real-world settings, and they used viral markers of hepatitis B and C, which are not commonly checked in the normal population. Compared to that model, our model used general input features that are easily obtainable and achieved a comparable AUROC. A Korean research group developed a risk prediction model for colorectal cancer using Cox proportional hazard regression with a population of 846,559 men and 479,449 women who participated in health examinations by the National Health Insurance Corporation, obtaining C statistics between 0.69 and 0.78 [25]. They used a similar data set collected over a different timespan from ours and obtained a similar performance to our model (0.730 vs 0.780). This suggests that classifier performance depends more on the characteristics of the training data set than on its time window. In another study, a multivariable lung cancer risk prediction model including low-dose computed tomography screening results from 22,229 participants obtained an AUROC of 0.761, which is lower than that of our model (0.898 in the MLP model) [26]. Importantly, our model showed higher performance, with an AUROC of 0.875, even in a simple linear model (logistic regression with level 1 input features).

In terms of real-world implementation, this study has several implications. Thus far, many studies using machine learning have been conducted on EHR time sequence data. One study aimed to predict heart failure from EHR data [27], and others focused on diabetes development [28-30] or hypertension [31,32]. Furthermore, a few studies have used nationwide claim health checkup data to create a cancer prediction model [33-36]. To solve the overdiagnosis problem of cancer screening programs resulting in unnecessary intervention, accurate, easy-to-implement, patient-level models should be developed. Applying the developed algorithms in previous studies to hospital sites requires considerable effort because the data structure of the developed model differs from that of hospitals. However, our models have the same data structure as the national health care claim data generated on a monthly basis, which means that our models can be directly applied to EHR and makes this study meaningful in terms of its easy applicability. In addition, since we applied an explainable model to LGBM, every doctor can access the modifiable risk factors from the predicted results.

Our research has several limitations. First, this study used only South Korean nationwide claim data. Depending on the country, the performance of the developed algorithms can differ. The value of NHIS data is well-known, and the data have been used in previous epidemiologic studies. Furthermore, we validated the developed algorithms using another database. Future additional external model validations using claim data from other countries can provide robustness to the models. Second, comparative effectiveness research is needed to prove the usefulness of the developed models. Conventional screening models can be compared to new patient-level prediction models in terms of cost and the number of false-positives avoided by the new models.


This research was supported by the SNUBH Research Fund (grant #14-2017-0018), the National Research Foundation of Korea grant funded by the Korea government (NRF-2017R1E1A1A03070105 and NRF-2019R1A5A1028324), and the Institute for Information & Communications Technology Promotion grant funded by the Korea government (Artificial Intelligence Graduate School Program [POSTECH]; #2019-0-01906).

Conflicts of Interest

None declared.

Multimedia Appendix 1

Ten disease codes used as features for each cancer.

DOCX File , 13 KB

Multimedia Appendix 2

Hyperparameters used for training models.

DOCX File , 13 KB

Multimedia Appendix 3

Shapley Additive Explanations (SHAP) summary plot for each cancer.

DOCX File , 378 KB

  1. Global Cancer Observatory. World Health Organization.   URL: [accessed 2021-04-13]
  2. Anand P, Kunnumakkara AB, Kunnumakara AB, Sundaram C, Harikumar KB, Tharakan ST, et al. Cancer is a preventable disease that requires major lifestyle changes. Pharm Res 2008 Sep;25(9):2097-2116 [FREE Full text] [CrossRef] [Medline]
  3. Centers for Disease Control and Prevention (CDC). Cancer screening - United States, 2010. MMWR Morb Mortal Wkly Rep 2012 Jan 27;61(3):41-45 [FREE Full text] [Medline]
  4. Fracheboud J, de Koning H, Boer R, Groenewoud J, Verbeek A, Broeders M, National Evaluation Team for Breast cancer screening in The Netherlands. Nationwide breast cancer screening programme fully implemented in The Netherlands. Breast 2001 Feb;10(1):6-11. [CrossRef] [Medline]
  5. de Koning H. Assessment of nationwide cancer-screening programmes. The Lancet 2000 Jan;355(9198):80-81. [CrossRef]
  6. Romero Y, Trapani D, Johnson S, Tittenbrun Z, Given L, Hohman K, et al. National cancer control plans: a global analysis. The Lancet Oncology 2018 Oct;19(10):e546-e555. [CrossRef]
  7. Suh Y, Lee J, Woo H, Shin D, Kong S, Lee H, et al. National cancer screening program for gastric cancer in Korea: Nationwide treatment benefit and cost. Cancer 2020 Jan 01;126(9):1929-1939 [FREE Full text] [CrossRef] [Medline]
  8. Geller AC, Greinert R, Sinclair C, Weinstock MA, Aitken J, Boniol M, et al. A nationwide population-based skin cancer screening in Germany: proceedings of the first meeting of the International Task Force on Skin Cancer Screening and Prevention (September 24 and 25, 2009). Cancer Epidemiol 2010 Jun;34(3):355-358. [CrossRef] [Medline]
  9. WHO | National Cancer Control Programmes (NCCP). Published online first: 3 February 2017.   URL: [accessed 2021-04-13]
  10. Kim Y, Jun JK, Choi KS, Lee HY, Park EC. Overview of the National Cancer screening programme and the cancer screening status in Korea. Asian Pac J Cancer Prev 2011;12(3):725-730 [FREE Full text] [Medline]
  11. Lewis S, Huang K, Nguyen T, Gandomkar Z, Norsuddin N, Thoms C. Characteristics of frequently recalled false positive cases in screening mammography. 2020 Presented at: The 15th International Workshop on Breast Imaging (IWBI2020); 24-27 May 2020; Leuven, Belgium. [CrossRef]
  12. Le MT, Mothersill CE, Seymour CB, McNeill FE. Is the false-positive rate in mammography in North America too high? Br J Radiol 2016 Sep;89(1065):20160045 [FREE Full text] [CrossRef] [Medline]
  13. Walker R, Enderling H. A new paradigm for personalized cancer screening. bioRxiv. 2018.   URL: [accessed 2021-04-13]
  14. Román M, Sala M, Domingo L, Posso M, Louro J, Castells X. Personalized breast cancer screening strategies: A systematic review and quality assessment. PLoS One 2019 Dec 16;14(12):e0226352 [FREE Full text] [CrossRef] [Medline]
  15. Seong SC, Kim Y, Park SK, Khang YH, Kim HC, Park JH, et al. Cohort profile: the National Health Insurance Service-National Health Screening Cohort (NHIS-HEALS) in Korea. BMJ Open 2017 Sep 24;7(9):e016640 [FREE Full text] [CrossRef] [Medline]
  16. Shen W, Zhou M, Yang F, Dong D, Yang C, Zang Y, et al. Learning from experts: developing transferable deep features for patient-level lung cancer prediction. 2016 Presented at: International Conference on Medical Image Computing and Computer-Assisted Intervention; 17-21 Oct 2016; Athens. [CrossRef]
  17. Kwon S. Thirty years of national health insurance in South Korea: lessons for achieving universal health care coverage. Health Policy Plan 2009 Jan 12;24(1):63-71. [CrossRef] [Medline]
  18. Lee YH, Han K, Ko SH, Ko KS, Lee KU, Taskforce Team of Diabetes Fact Sheet of the Korean Diabetes Association. Data analytic process of a nationwide population-based study using national health information database established by National Health Insurance Service. Diabetes Metab J 2016 Feb;40(1):79-82 [FREE Full text] [CrossRef] [Medline]
  19. Hong S, Won YJ, Park YR, Jung KW, Kong HJ, Lee ES, Community of Population-Based Regional Cancer Registries. Cancer statistics in Korea: incidence, mortality, survival, and prevalence in 2017. Cancer Res Treat 2020 Apr;52(2):335-350 [FREE Full text] [CrossRef] [Medline]
  20. NHI program. h-well NHIS.   URL: [accessed 2021-04-16]
  21. Statistics Korea news. Statistics Korea.   URL: [accessed 2021-04-16]
  22. Ruff L, Vandermeulen R, Goernitz N, Deecke L, Siddiqui S, Binder A, et al. Deep one-class classification. 2018 Presented at: The 35th International Conference on Machine Learning; 10-15 July 2018; Stockholm.
  23. Lundberg S, Lee SI. A unified approach to interpreting model predictions. arXiv.   URL: [accessed 2021-04-13]
  24. Michikawa T, Inoue M, Sawada N, Iwasaki M, Tanaka Y, Shimazu T, Japan Public Health Center-based Prospective Study Group. Development of a prediction model for 10-year risk of hepatocellular carcinoma in middle-aged Japanese: the Japan Public Health Center-based Prospective Study Cohort II. Prev Med 2012 Aug;55(2):137-143. [CrossRef] [Medline]
  25. Shin A, Joo J, Yang H, Bak J, Park Y, Kim J, et al. Risk prediction model for colorectal cancer: National Health Insurance Corporation study, Korea. PLoS One 2014 Feb 12;9(2):e88079 [FREE Full text] [CrossRef] [Medline]
  26. Tammemägi MC, Ten Haaf K, Toumazis I, Kong CY, Han SS, Jeon J, et al. Development and validation of a multivariable lung cancer risk prediction model that includes low-dose computed tomography screening results: a secondary analysis of data from the national lung screening trial. JAMA Netw Open 2019 Mar 01;2(3):e190204 [FREE Full text] [CrossRef] [Medline]
  27. Ng K, Steinhubl SR, deFilippi C, Dey S, Stewart W. Early detection of heart failure using electronic health records: practical implications for time before diagnosis, data diversity, data quantity, and data density. Circ Cardiovasc Qual Outcomes 2016 Nov;9(6):649-658 [FREE Full text] [CrossRef] [Medline]
  28. Lai H, Huang H, Keshavjee K, Guergachi A, Gao X. Predictive models for diabetes mellitus using machine learning techniques. BMC Endocr Disord 2019 Oct 15;19(1):101 [FREE Full text] [CrossRef] [Medline]
  29. Badholia A. Predictive modelling and analytics for diabetes using a machine learning approach. ITII 2021 Feb 28;9(1):215-223. [CrossRef]
  30. Daanouni O, Cherradi B, Tmiri A. Type 2 diabetes mellitus prediction model based on machine learning approach. 2019 Presented at: Fourth International Conference on Smart City Applications (SCA2019); 2-4 October 2019; Casablanca, Morocco. [CrossRef]
  31. Kanegae H, Suzuki K, Fukatani K, Ito T, Harada N, Kario K. Highly precise risk prediction model for new-onset hypertension using artificial intelligence techniques. J Clin Hypertens (Greenwich) 2020 Mar 09;22(3):445-450 [FREE Full text] [CrossRef] [Medline]
  32. Elshawi R, Al-Mallah MH, Sakr S. On the interpretability of machine learning-based model for predicting hypertension. BMC Med Inform Decis Mak 2019 Jul 29;19(1):146 [FREE Full text] [CrossRef] [Medline]
  33. Sihto H, Lundin J, Lundin M, Lehtimäki T, Ristimäki A, Holli K, et al. Breast cancer biological subtypes and protein expression predict for the preferential distant metastasis sites: a nationwide cohort study. Breast Cancer Res 2011 Sep 13;13(5):R87. [CrossRef] [Medline]
  34. Lee T, Wang C, Chen T, Kuo KN, Wu M, Lin J, Taiwan Gastrointestinal Disease Helicobacter Consortium. A tool to predict risk for gastric cancer in patients with peptic ulcer disease on the basis of a nationwide cohort. Clin Gastroenterol Hepatol 2015 Feb;13(2):287-293.e1. [CrossRef] [Medline]
  35. Zelic R, Garmo H, Zugna D, Stattin P, Richiardi L, Akre O, et al. Corrigendum re "Predicting prostate cancer death with different pretreatment risk stratification tools: a head-to-head comparison in a nationwide cohort study" [Eur Urol 2020;77:180-8]. Eur Urol 2020 Jul;78(1):e45-e47. [CrossRef] [Medline]
  36. Ali Khan U, Fallah M, Sundquist K, Sundquist J, Brenner H, Kharazmi E. Risk of colorectal cancer in patients with diabetes mellitus: A Swedish nationwide cohort study. PLoS Med 2020 Nov 13;17(11):e1003431 [FREE Full text] [CrossRef] [Medline]

AUPRC: area under the precision-recall curve
AUROC: area under the receiver operating characteristic curve
EHR: electronic health record
LGBM: Light Gradient Boosting Machine
LR: logistic regression
MLP: multilayer perceptron
NCCP: national cancer control program
NHI: National Health Insurance
NHIS: National Health Insurance System
OCEC: one-class embedding classifier
RF: random forest
SHAP: Shapley Additive Explanations
SNUBH: Seoul National University Bundang Hospital

Edited by G Eysenbach; submitted 21.04.21; peer-reviewed by X Cheng, N Hardikar; comments to author 12.05.21; revised version received 07.07.21; accepted 26.07.21; published 30.08.21


©Eunsaem Lee, Se Young Jung, Hyung Ju Hwang, Jaewoo Jung. Originally published in JMIR Medical Informatics, 30.08.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.