Journal Description

JMIR Medical Informatics (JMI, ISSN 2291-9694; Editor-in-Chief: Christian Lovis, MD, MPH, FACMI) is a PubMed/SCIE-indexed journal that focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, and eHealth infrastructures and implementation. In June 2020, the journal received an impact factor of 2.58.

Published by JMIR Publications, JMIR Medical Informatics focuses on applied, translational research, with a broad readership including clinicians, CIOs, engineers, and industry and health informatics professionals.

JMIR Medical Informatics adheres to rigorous quality standards, involving a rapid and thorough peer-review process, professional copyediting, professional production of PDF, XHTML, and XML proofs (ready for deposit in PubMed Central/PubMed).


Recent Articles:

  • Family History Information Extraction With Neural Attention and an Enhanced Relation-Side Scheme: Algorithm Development and Validation


    Background: Identifying and extracting family history information (FHI) from clinical reports is important for recognizing disease susceptibility. However, FHI is usually described in a narrative manner within patients’ electronic health records, requiring natural language processing technologies to automatically extract such information and provide more comprehensive patient-centered information to physicians. Objective: This study aimed to overcome the 2 main challenges observed in previous research on FHI extraction. One is the need to develop postprocessing rules to infer the member and side information of family mentions. The other is to efficiently utilize intrasentence and intersentence information to assist FHI extraction. Methods: We formulated the task as a sequential labeling problem and proposed an enhanced relation-side scheme that encodes the required family member properties, which not only eliminates the need for postprocessing rules but also mitigates the problem of insufficient training instances. Moreover, an attention-based neural network structure was proposed to exploit cross-sentence information to identify FHI and its attributes requiring cross-sentence inference. Results: The dataset released by the 2019 n2c2/OHNLP family history extraction task was used to evaluate the performance of the proposed methods. We started by comparing the performance of traditional neural sequence models under the ordinary and enhanced schemes. Next, we studied the effectiveness of the proposed attention-enhanced neural networks by comparing their performance with that of the traditional networks. With the enhanced scheme, the recall of the neural network improved, leading to an increase in F score of 0.024. The proposed neural attention mechanism enhanced both recall and precision and resulted in an improved F score of 0.807, which was ranked fourth in the shared task.
Conclusions: We presented an attention-based neural network along with an enhanced tag scheme that enables the neural network model to learn and interpret the implicit relationship and side information of the recognized family members across sentences without relying on heuristic rules.
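
    The label-scheme contrast at the heart of this abstract can be illustrated with a toy sequence-labeling example (a sketch, not the authors' code; the tag names and member/side categories are hypothetical): the ordinary scheme marks only that a token is a family-member mention, so member and side must be inferred later by postprocessing rules, while the enhanced scheme folds those properties into the tag itself.

```python
# Illustrative sketch (not the authors' code). Label names such as
# "B-FamilyMember" and categories like "Grandparent"/"Maternal" are
# hypothetical stand-ins for the schemes the abstract describes.

def ordinary_tags(tokens, mentions):
    """Plain BIO scheme: tags say only that a token is a family member."""
    tags = ["O"] * len(tokens)
    for start, end, _member, _side in mentions:
        tags[start] = "B-FamilyMember"
        for i in range(start + 1, end):
            tags[i] = "I-FamilyMember"
    return tags

def enhanced_tags(tokens, mentions):
    """Enhanced scheme: member and side are encoded in the tag itself."""
    tags = ["O"] * len(tokens)
    for start, end, member, side in mentions:
        tags[start] = f"B-{member}-{side}"
        for i in range(start + 1, end):
            tags[i] = f"I-{member}-{side}"
    return tags

tokens = "Her maternal grandmother had diabetes .".split()
# one mention spanning tokens 1-2 ("maternal grandmother")
mentions = [(1, 3, "Grandparent", "Maternal")]

print(ordinary_tags(tokens, mentions))
print(enhanced_tags(tokens, mentions))
```

    With the enhanced tags, a decoder can read the member and side directly off the predicted label, which is the property that removes the need for heuristic postprocessing.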

  • Exploring Fever of Unknown Origin Intelligent Diagnosis Based on Clinical Data: Model Development and Validation


    Background: Fever of unknown origin (FUO) is a group of diseases with heterogeneous, complex causes that are misdiagnosed or have delayed diagnoses. Previous studies have focused mainly on statistical analysis of such cases. Because treatments differ greatly across the categories of FUO, automatically assigning an FUO case to the correct category is worth studying. Objective: We aimed to fuse all of the medical data together to automatically predict the categories of the causes of FUO among patients using a machine learning method, which could help doctors diagnose FUO more accurately. Methods: We built the FUO intelligent diagnosis (FID) model to help clinicians predict the category of the cause and improve diagnostic precision. First, we classified FUO cases into four categories (infections, immune diseases, tumors, and others) according to the large numbers of different causes and treatment methods. Then, we cleaned the basic information data and clinical laboratory results and structured the electronic medical record (EMR) data using the bidirectional encoder representations from transformers (BERT) model. Next, we extracted features from the structured sample data and trained the FID model using LightGBM. Results: Experiments were based on data from 2299 desensitized cases from Peking Union Medical College Hospital. In extensive experiments, the precision of the FID model was 81.68% for top 1 classification diagnosis and 96.17% for top 2 classification diagnosis, both superior to the comparative method. Conclusions: The FID model showed excellent performance in FUO diagnosis and thus would be a potentially useful tool for clinicians to enhance the precision of FUO diagnosis and reduce the rate of misdiagnosis.

  • Machine Learning Electronic Health Record Identification of Patients with Rheumatoid Arthritis: Algorithm Pipeline Development and Validation Study


    Background: Financial codes are often used to extract diagnoses from electronic health records. This approach is prone to false positives. Alternatively, queries are constructed, but these are highly center and language specific. A tantalizing alternative is the automatic identification of patients by employing machine learning on format-free text entries. Objective: The aim of this study was to develop an easily implementable workflow that builds a machine learning algorithm capable of accurately identifying patients with rheumatoid arthritis from format-free text fields in electronic health records. Methods: Two electronic health record data sets were employed: Leiden (n=3000) and Erlangen (n=4771). Using a portion of the Leiden data (n=2000), we compared 6 different machine learning methods and a naïve word-matching algorithm using 10-fold cross-validation. Performance was compared using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC), and the F1 score was used as the primary criterion for selecting the best method to build a classifying algorithm. We selected the optimal threshold of positive predictive value for case identification based on the output of the best method in the training data. This validation workflow was subsequently applied to a portion of the Erlangen data (n=4293). For testing, the best-performing methods were applied to the remaining data (Leiden, n=1000; Erlangen, n=478) for an unbiased evaluation. Results: For the Leiden data set, the word-matching algorithm demonstrated mixed performance (AUROC 0.90; AUPRC 0.33; F1 score 0.55), and 4 methods significantly outperformed word-matching, with support vector machines performing best (AUROC 0.98; AUPRC 0.88; F1 score 0.83).
    Applying this support vector machine classifier to the test data resulted in similarly high performance (F1 score 0.81; positive predictive value [PPV] 0.94), and with this method, we could identify 2873 patients with rheumatoid arthritis out of the complete collection of 23,300 patients in the Leiden electronic health record system in less than 7 seconds. For the Erlangen data set, gradient boosting performed best in the training set (AUROC 0.94; AUPRC 0.85; F1 score 0.82) and, applied to the test data, once again yielded good results (F1 score 0.67; PPV 0.97). Conclusions: We demonstrate that machine learning methods can extract the records of patients with rheumatoid arthritis from electronic health record data with high precision, allowing research on very large populations at limited cost. Our approach is language and center independent and could be applied to any type of diagnosis. We have developed our pipeline into a universally applicable and easy-to-implement workflow to equip centers with their own high-performing algorithm. This allows the creation of observational studies of unprecedented size, covering different countries, at low cost from data already available in electronic health record systems.
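
    As a point of reference, the naïve word-matching baseline that the learned models are compared against can be sketched in a few lines (illustrative only; the term list is invented, not the study's):

```python
# Naïve word-matching baseline (a sketch, not the study's code): flag a
# record as a rheumatoid arthritis case if any term from a hand-picked
# list appears in its free-text field. The term list is hypothetical.

RA_TERMS = ("rheumatoid arthritis", " ra ")  # illustrative term list

def word_match(note: str) -> bool:
    # pad with spaces so the abbreviation " ra " can match at the edges
    text = " " + note.lower() + " "
    return any(term in text for term in RA_TERMS)

print(word_match("Follow-up for rheumatoid arthritis, stable on MTX"))  # True
print(word_match("Osteoarthritis of the left knee"))                    # False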

  • Identifying Ectopic Pregnancy in a Large Integrated Health Care Delivery System: Algorithm Validation


    Background: Surveillance of ectopic pregnancy (EP) using electronic databases is important. To our knowledge, no published study has assessed the validity of EP case ascertainment using electronic health records. Objective: We aimed to assess the validity of an enhanced version of a previously validated algorithm, which used a combination of encounters with EP-related diagnostic/procedure codes and methotrexate injections. Methods: Medical records of 500 women aged 15-44 years with membership at Kaiser Permanente Southern and Northern California between 2009 and 2018 and a potential EP were randomly selected for chart review, and true cases were identified. The enhanced algorithm included diagnostic/procedure codes from the International Classification of Diseases, Tenth Revision; incorporated telephone appointment visits; and excluded cases with only abdominal EP diagnosis codes. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and overall performance (Youden index and F-score) of the algorithm were evaluated and compared with those of the previously validated algorithm. Results: There were 334 true positive and 166 true negative EP cases with available records. True positive and true negative EP cases did not differ significantly according to maternal age, race/ethnicity, and smoking status. EP cases with only one encounter and non-tubal EPs were more likely to be misclassified. The sensitivity, specificity, PPV, and NPV of the enhanced algorithm for EP were 97.6%, 84.9%, 92.9%, and 94.6%, respectively. The Youden index and F-score were 82.5% and 95.2%, respectively. The sensitivity and NPV were lower for the previously published algorithm, at 94.3% and 88.1%, respectively. The sensitivity of surgical procedure codes from electronic chart abstraction to correctly identify surgical management was 91.9%.
The overall accuracy, defined as the percentage of EP cases with correct management (surgical, medical, and unclassified) identified by electronic chart abstraction, was 92.3%. Conclusions: The performance of the enhanced algorithm for EP case ascertainment in integrated health care databases is adequate to allow for use in future epidemiological studies. Use of this algorithm will likely result in better capture of true EP cases than the previously validated algorithm.
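
    The summary statistics reported above follow directly from sensitivity, specificity, and PPV; a quick worked check using the abstract's reported values:

```python
# Worked check of the summary metrics: Youden's index and the F-score are
# simple functions of sensitivity, specificity, and PPV.

def youden(sensitivity, specificity):
    # Youden's J = sensitivity + specificity - 1
    return sensitivity + specificity - 1

def f_score(ppv, sensitivity):
    # harmonic mean of precision (PPV) and recall (sensitivity)
    return 2 * ppv * sensitivity / (ppv + sensitivity)

# values reported for the enhanced algorithm
sens, spec, ppv = 0.976, 0.849, 0.929
print(round(youden(sens, spec), 3))   # 0.825 -> the reported 82.5%
print(round(f_score(ppv, sens), 3))   # 0.952 -> the reported 95.2%
```

    Both computed values match the abstract's 82.5% Youden index and 95.2% F-score, confirming the internal consistency of the reported figures.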

  • The 2019 n2c2/OHNLP Track on Clinical Semantic Textual Similarity: Overview


    Background: Semantic textual similarity is a common task in the general English domain to assess the degree to which the underlying semantics of 2 text segments are equivalent to each other. Clinical Semantic Textual Similarity (ClinicalSTS) is the semantic textual similarity task in the clinical domain that attempts to measure the degree of semantic equivalence between 2 snippets of clinical text. Due to the frequent use of templates in electronic health record systems, a large amount of redundant text exists in clinical notes, making ClinicalSTS crucial for the secondary use of clinical text in downstream clinical natural language processing applications, such as clinical text summarization, clinical semantics extraction, and clinical information retrieval. Objective: Our objective was to release ClinicalSTS data sets and to motivate natural language processing and biomedical informatics communities to tackle semantic text similarity tasks in the clinical domain. Methods: We organized the first BioCreative/OHNLP ClinicalSTS shared task in 2018 by making available a real-world ClinicalSTS data set. We continued the shared task in 2019 in collaboration with National NLP Clinical Challenges (n2c2) and the Open Health Natural Language Processing (OHNLP) consortium and organized the 2019 n2c2/OHNLP ClinicalSTS track. We released a larger ClinicalSTS data set comprising 1642 clinical sentence pairs, including 1068 pairs from the 2018 shared task and 1006 new pairs from 2 electronic health record systems, GE and Epic. We released 80% (1642/2054) of the data to participating teams to develop and fine-tune the semantic textual similarity systems and used the remaining 20% (412/2054) as blind testing to evaluate their systems. The workshop was held in conjunction with the American Medical Informatics Association 2019 Annual Symposium.
Results: Of the 78 international teams that signed on to the n2c2/OHNLP ClinicalSTS shared task, 33 produced a total of 87 valid system submissions. The top 3 systems were generated by IBM Research, the National Center for Biotechnology Information, and the University of Florida, with Pearson correlations of r=.9010, r=.8967, and r=.8864, respectively. Most top-performing systems used state-of-the-art neural language models, such as BERT and XLNet, and state-of-the-art training schemas in deep learning, such as pretraining and fine-tuning schema, and multitask learning. Overall, the participating systems performed better on the Epic sentence pairs than on the GE sentence pairs, despite a much larger portion of the training data being GE sentence pairs. Conclusions: The 2019 n2c2/OHNLP ClinicalSTS shared task focused on computing semantic similarity for clinical text sentences generated from clinical notes in the real world. It attracted a large number of international teams. The ClinicalSTS shared task could continue to serve as a venue for researchers in natural language processing and medical informatics communities to develop and improve semantic textual similarity techniques for clinical text.
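
    The ranking metric used throughout the track is the Pearson correlation between system-predicted similarity scores and the gold annotations on the 0-5 scale; a minimal sketch (the scores below are invented for illustration):

```python
# Pearson correlation between predicted and gold similarity scores
# (a sketch of the track's evaluation metric; data are invented).
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gold = [0.0, 1.5, 3.0, 4.5, 5.0]   # invented gold similarity annotations
pred = [0.5, 1.0, 3.2, 4.0, 4.8]   # invented system predictions
print(round(pearson(gold, pred), 4))  # ≈ 0.98
```

    A system that reproduces the gold ranking and spacing perfectly would score r=1.0; the winning submissions above reached r≈0.90 on the blind test pairs.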

  • Identification of Adverse Drug Event–Related Japanese Articles: Natural Language Processing Analysis


    Background: Medical articles covering adverse drug events (ADEs) are systematically reported by pharmaceutical companies for drug safety information purposes. Although policies governing reporting to regulatory bodies vary among countries and regions, all medical article reporting may be categorized as precision or recall based. Recall-based reporting, which is implemented in Japan, requires the reporting of any possible ADE. Therefore, recall-based reporting can introduce numerous false negatives or substantial amounts of noise, a problem that is difficult to address using limited manual labor. Objective: Our aim was to develop an automated system that could identify ADE-related medical articles, support recall-based reporting, and alleviate manual labor in Japanese pharmaceutical companies. Methods: Using medical articles as input, our system based on natural language processing applies document-level classification to extract articles containing ADEs (replacing manual labor in the first screening) and sentence-level classification to extract sentences within those articles that imply ADEs (thus supporting experts in the second screening). We used 509 Japanese medical articles annotated by a medical engineer to evaluate the performance of the proposed system. Results: Document-level classification yielded an F1 score of 0.903, and sentence-level classification yielded an F1 score of 0.413, both averaged over fivefold cross-validation. Conclusions: A simple automated system may alleviate the manual labor involved in screening drug safety–related medical articles in pharmaceutical companies. After improving the accuracy of the sentence-level classification by considering a wider context, we intend to apply this system toward real-world postmarketing surveillance.

  • Identification of Semantically Similar Sentences in Clinical Notes: Iterative Intermediate Training Using Multi-Task Learning


    Background: Although electronic health records (EHRs) have been widely adopted in health care, effective use of EHR data is often limited because of redundant information in clinical notes introduced by the use of templates and copy-paste during note generation. Thus, it is imperative to develop solutions that can condense information while retaining its value. A step in this direction is measuring the semantic similarity between clinical text snippets. To address this problem, we participated in the 2019 National NLP Clinical Challenges (n2c2)/Open Health Natural Language Processing Consortium (OHNLP) clinical semantic textual similarity (ClinicalSTS) shared task. Objective: This study aims to improve the performance and robustness of semantic textual similarity in the clinical domain by leveraging manually labeled data from related tasks and contextualized embeddings from pretrained transformer-based language models. Methods: The ClinicalSTS data set consists of 1642 pairs of deidentified clinical text snippets annotated on a continuous scale of 0-5, indicating degrees of semantic similarity. We developed an iterative intermediate training approach using multi-task learning (IIT-MTL), a multi-task training approach that employs iterative data set selection. We applied this process to bidirectional encoder representations from transformers on clinical text mining (ClinicalBERT), a pretrained domain-specific transformer-based language model, and fine-tuned the resulting model on the target ClinicalSTS task. We incrementally ensembled the output from applying IIT-MTL on ClinicalBERT with the output of other language models (bidirectional encoder representations from transformers for biomedical text mining [BioBERT], multi-task deep neural networks [MT-DNN], and robustly optimized BERT approach [RoBERTa]) and handcrafted features using regression-based learning algorithms.
On the basis of these experiments, we adopted the top-performing configurations as our official submissions. Results: Our system ranked first out of 87 submitted systems in the 2019 n2c2/OHNLP ClinicalSTS challenge, achieving state-of-the-art results with a Pearson correlation coefficient of 0.9010. This winning system was an ensembled model leveraging the output of IIT-MTL on ClinicalBERT with BioBERT, MT-DNN, and handcrafted medication features. Conclusions: This study demonstrates that IIT-MTL is an effective way to leverage annotated data from related tasks to improve performance on a target task with a limited data set. This contribution opens new avenues of exploration for optimized data set selection to generate more robust and universal contextual representations of text in the clinical domain.

  • Use of Social Media by Hospitals and Clinics in Japan: Descriptive Study


    Background: The use of social media by hospitals has become widespread in the United States and Western European countries. However, in Japan, the extent to which hospitals and clinics use social media is unknown. Furthermore, recent revisions to the Medical Care Act may subject social media content to regulation. Objective: The purpose of this study was to examine social media use in Japanese hospitals and clinics. We investigated the adoption of social media, analyzed social media content, and compared content with medical advertising guidelines. Methods: We randomly sampled 300 hospitals and 300 clinics from a list of medical institutions compiled by the Ministry of Health, Labour and Welfare. We performed web and social media (Facebook and Twitter) searches using the hospital and clinic names to determine whether they had social media accounts. We collected Facebook posts and Twitter tweets and categorized them based on their content (eg, health promotion, participation in academic meetings and publications, public relations or news announcements, and recruitment). We compared the collected content with medical advertising guidelines. Results: We found that 26.0% (78/300) of the hospitals and 7.7% (23/300) of the clinics used Facebook, Twitter, or both. Public relations or news announcements accounted for 53.99% (724/1341) of the Facebook posts by hospitals and 58.4% (122/209) of the Facebook posts by clinics. In hospitals, 16/1341 (1.19%) Facebook posts and 6/574 (1.0%) tweets, and in clinics, 8/209 (3.8%) Facebook posts and 15/330 (4.5%) tweets, could conflict with medical advertising guidelines. Conclusions: Fewer hospitals and clinics in Japan use social media compared with those in other countries. Social media were mainly used for public relations. Some content disseminated by medical institutions could conflict with medical advertising guidelines.
This study may serve as a reference for medical institutions to guide social media usage and may help improve medical website advertising in Japan.

  • Predicting Unplanned Readmissions Following a Hip or Knee Arthroplasty: Retrospective Observational Study


    Background: Total joint replacements are high-volume and high-cost procedures that should be monitored for cost and quality control. Models that can identify patients at high risk of readmission might help reduce costs by suggesting who should be enrolled in preventive care programs. Previous models for risk prediction have relied on structured data of patients rather than clinical notes in electronic health records (EHRs). The former approach requires manual feature extraction by domain experts, which may limit the applicability of these models. Objective: This study aims to develop and evaluate a machine learning model for predicting the risk of 30-day readmission following knee and hip arthroplasty procedures. The input data for these models come from raw EHRs. We empirically demonstrate that unstructured free-text notes contain a reasonably predictive signal for this task. Methods: We performed a retrospective analysis of data from 7174 patients at Partners Healthcare collected between 2006 and 2016. These data were split into train, validation, and test sets. These data sets were used to build, validate, and test models to predict unplanned readmission within 30 days of hospital discharge. The proposed models made predictions on the basis of clinical notes, obviating the need for performing manual feature extraction by domain and machine learning experts. The notes that served as model inputs were written by physicians, nurses, pathologists, and others who diagnose and treat patients and may have their own predictions, even if these are not recorded. Results: The proposed models output readmission risk scores (propensities) for each patient. The best models (as selected on a development set) yielded an area under the receiver operating characteristic curve of 0.846 (95% CI 0.8275-0.8711) for hip and 0.822 (95% CI 0.8094-0.8622) for knee surgery, indicating reasonable discriminative ability.
Conclusions: Machine learning models can predict which patients are at a high risk of readmission within 30 days following hip and knee arthroplasty procedures on the basis of notes in EHRs with reasonable discriminative power. Following further validation and empirical demonstration that the models realize predictive performance above that which clinical judgment may provide, such models may be used to build an automated decision support tool to help caretakers identify at-risk patients.
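
    The headline AUROC metric has a simple probabilistic reading: it is the chance that a randomly chosen readmitted patient is assigned a higher risk score than a randomly chosen non-readmitted one. A minimal sketch with invented scores:

```python
# AUROC via its rank interpretation (a sketch; scores are invented, not
# from the study): the probability that a positive case outscores a
# negative case, counting ties as half.

def auroc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

readmitted = [0.9, 0.8, 0.55]          # risk scores of readmitted patients
not_readmitted = [0.6, 0.4, 0.3, 0.2]  # risk scores of the rest
print(auroc(readmitted, not_readmitted))
```

    An AUROC of 0.846, as reported for hip surgery, means the model ranks a readmitted patient above a non-readmitted one about 85% of the time.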

  • A Human-Algorithm Integration System for Hip Fracture Detection on Plain Radiography: System Development and Validation Study


    Background: Hip fracture is the most common type of fracture in elderly individuals. Numerous deep learning (DL) algorithms for plain pelvic radiographs (PXRs) have been applied to improve the accuracy of hip fracture diagnosis. However, their efficacy is still undetermined. Objective: The objective of this study is to develop and validate a human-algorithm integration (HAI) system to improve the accuracy of hip fracture diagnosis in a real clinical environment. Methods: The HAI system with hip fracture detection ability was developed using a deep learning algorithm trained on trauma registry data and 3605 PXRs from August 2008 to December 2016. Thirty-four physicians were recruited to compare their diagnostic performance before and after HAI system assistance on an independent testing dataset. We analyzed the physicians’ accuracy, sensitivity, specificity, and agreement with the algorithm; we also performed subgroup analyses according to physician specialty and experience. Furthermore, we applied the HAI system in the emergency departments of different hospitals to validate its value in the real world. Results: With the support of the algorithm, which achieved 91% accuracy, the diagnostic performance of physicians was significantly improved in the independent testing dataset, as revealed by the sensitivity (physician alone, median 95%; HAI, median 99%; P<.001), specificity (physician alone, median 90%; HAI, median 95%; P<.001), accuracy (physician alone, median 90%; HAI, median 96%; P<.001), and human-algorithm agreement (physician alone κ, median 0.69 [IQR 0.63-0.74]; HAI κ, median 0.80 [IQR 0.76-0.82]; P<.001). With the help of the HAI system, the primary physicians showed significant improvement in their diagnostic performance to levels comparable to those of consulting physicians, and both the experienced and less-experienced physicians benefited from the HAI system.
After the HAI system had been applied in 3 departments for 5 months, 587 images were examined. The sensitivity, specificity, and accuracy of the HAI system for detecting hip fractures were 97%, 95.7%, and 96.08%, respectively. Conclusions: HAI currently impacts health care, and integrating this technology into emergency departments is feasible. The developed HAI system can enhance physicians’ hip fracture diagnostic performance.

  • Applying eHealth for Pandemic Management in Saudi Arabia in the Context of COVID-19: Survey Study and Framework Proposal


    Background: The increased frequency of epidemics such as Middle East respiratory syndrome, severe acute respiratory syndrome, Ebola virus, and Zika virus has created stress on health care management and operations as well as on relevant stakeholders. In addition, the recent COVID-19 outbreak has been creating challenges for various countries and their respective health care organizations in managing and controlling the pandemic. One of the most important observations during the recent outbreak is the lack of effective eHealth frameworks for managing and controlling pandemics. Objective: The aims of this study are to review the current National eHealth Strategy of Saudi Arabia and to propose an integrated eHealth framework that can be effective for managing health care operations and services during pandemics. Methods: A questionnaire-based survey was administered to health care professionals to review the current national eHealth framework of Saudi Arabia and identify the objectives, factors, and components that are key for managing and controlling pandemics. Purposive sampling was used to collect responses from diverse experts, including physicians, technical experts, nurses, administrative experts, and pharmacists. The survey was administered at five hospitals in Saudi Arabia by forwarding the survey link using a web-based portal. A total of 350 responses were received, which were filtered to exclude incomplete and ineligible samples, yielding a final sample of 316 participants. Results: Of the 316 participants, 187 (59.2%) found the current eHealth framework to be ineffective, and more than 50% of the total participants stated that the framework lacked some essential components and objectives.
Additional components and objectives focusing on using eHealth for managing information, creating awareness, increasing accessibility and reachability, promoting self-management and self-collaboration, promoting electronic services, and extensive stakeholder engagement were considered to be the most important factors by more than 80% of the total participants. Conclusions: Managing pandemics requires an effective and efficient eHealth framework that can be used to manage various health care services by integrating different eHealth components and collaborating with all stakeholders.

  • Deep Learning–Based Detection of Early Renal Function Impairment Using Retinal Fundus Images: Model Development and Validation


    Background: Retinal imaging has been applied for detecting eye diseases and cardiovascular risks using deep learning–based methods. Furthermore, retinal microvascular and structural changes were found in renal function impairments. However, a deep learning–based method using retinal images for detecting early renal function impairment has not yet been well studied. Objective: This study aimed to develop and evaluate a deep learning model for detecting early renal function impairment using retinal fundus images. Methods: This retrospective study enrolled patients who underwent renal function tests with color fundus images captured at any time between January 1, 2001, and August 31, 2019. A deep learning model was constructed to detect impaired renal function from the images. Early renal function impairment was defined as estimated glomerular filtration rate <90 mL/min/1.73 m2. Model performance was evaluated with respect to the receiver operating characteristic curve and area under the curve (AUC). Results: In total, 25,706 retinal fundus images were obtained from 6212 patients for the study period. The images were divided at an 8:1:1 ratio. The training, validation, and testing data sets respectively contained 20,787, 2189, and 2730 images from 4970, 621, and 621 patients. There were 10,686 and 15,020 images determined to indicate normal and impaired renal function, respectively. The AUC of the model was 0.81 in the overall population. In subgroups stratified by serum hemoglobin A1c (HbA1c) level, the AUCs were 0.81, 0.84, 0.85, and 0.87 for the HbA1c levels of ≤6.5%, >6.5%, >7.5%, and >10%, respectively. Conclusions: The deep learning model in this study enables the detection of early renal function impairment using retinal fundus images. The model was more accurate for patients with elevated serum HbA1c levels. Trial Registration:


Latest Submissions Open for Peer-Review:

  • Diagnostic Model of in-Hospital Mortality in Patients with Acute ST-Segment Elevation Myocardial Infarction: Algorithm Development and Validation

    Date Submitted: Nov 10, 2020

    Open Peer Review Period: Nov 10, 2020 - Jan 5, 2021


    Background: Coronary heart disease, including ST-segment elevation myocardial infarction (STEMI), remains a leading cause of death. Objective: The objective of our research was to develop and externally validate a diagnostic model of in-hospital mortality in patients with acute STEMI. Methods: We performed multiple logistic regression analysis on a cohort of hospitalized patients with acute STEMI. From January 2002 to December 2011, a total of 2,183 inpatients with acute STEMI were admitted and formed the development data set. The external validation data set comprised 7,485 patients with acute STEMI hospitalized from January 2012 to August 2019. We used logistic regression to analyze risk factors for in-hospital mortality in the development data set, developed a diagnostic model of in-hospital mortality, and constructed a nomogram. We evaluated the predictive performance of the model in the validation data set by examining measures of discrimination, calibration, and decision curve analysis (DCA). Results: In the development data set, 61 of the 2,183 participants (2.8%) experienced in-hospital mortality. The strongest predictors of in-hospital mortality were advanced age and high Killip classification. Logistic regression showed significant differences between the groups with and without in-hospital mortality for age (odds ratio [OR] 1.058, 95% CI 1.029-1.088; P<.001), Killip III (OR 8.249, 95% CI 3.502-19.433; P<.001), and Killip IV (OR 39.234, 95% CI 18.178-84.679; P<.001). The area under the receiver operating characteristic curve (AUC) was 0.9126 (SD 0.0166, 95% CI 0.88015-0.94504). We constructed a nomogram based on age and Killip classification. In the validation data set, in-hospital mortality occurred in 127 of the 7,485 participants (1.7%), and the AUC was 0.9305 (SD 0.0113, 95% CI 0.90827-0.95264).
    Conclusions: We developed and externally validated a diagnostic model of in-hospital mortality in patients with acute STEMI. The model's discrimination, calibration, and DCA were satisfactory. Clinical Trial: ChiCTR1900027129
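
    A short worked note on the reported odds ratios (using the submission's per-year age OR of 1.058; the rest is generic logistic-regression arithmetic, not the authors' code): each regression coefficient β maps to an odds ratio via OR = exp(β), and per-unit ORs compound multiplicatively over larger differences.

```python
# Relation between logistic regression coefficients and odds ratios
# (a worked sketch around the reported age OR of 1.058).
from math import exp, log

def odds_ratio(beta: float) -> float:
    # in logistic regression, coefficient beta corresponds to OR = exp(beta)
    return exp(beta)

beta_age = log(1.058)                   # back out beta from the reported per-year OR
print(round(beta_age, 3))               # ≈ 0.056
print(round(odds_ratio(beta_age), 3))   # recovers 1.058
print(round(1.058 ** 10, 2))            # ≈ 1.76: implied OR for a 10-year age difference
```

    This multiplicative compounding is also why a nomogram can assign points linearly in age while the implied odds grow exponentially.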