Journal Description

JMIR Medical Informatics (JMI, ISSN 2291-9694; Impact Factor: 2.58) (Editor-in-Chief: Christian Lovis, MD, MPH, FACMI) is a PubMed/SCIE-indexed journal that focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, and eHealth infrastructures and implementation. The journal received its current Impact Factor of 2.58 in June 2020.

Published by JMIR Publications, JMIR Medical Informatics has a focus on applied, translational research, with a broad readership including clinicians, CIOs, engineers, industry and health informatics professionals.

JMIR Medical Informatics adheres to rigorous quality standards, involving a rapid and thorough peer-review process, professional copyediting, professional production of PDF, XHTML, and XML proofs (ready for deposit in PubMed Central/PubMed).


Recent Articles:

  • Baby undergoing phototherapy. Source: Flickr; Copyright: Katia Strieck; License: Creative Commons Attribution + Noncommercial + NoDerivatives (CC-BY-NC-ND).

    Predictive Models for Neonatal Follow-Up Serum Bilirubin: Model Development and Validation


    Background: Hyperbilirubinemia affects many newborn infants and, if not treated appropriately, can lead to irreversible brain injury. Objective: This study aims to develop predictive models of follow-up total serum bilirubin measurement and to compare their accuracy with that of clinician predictions. Methods: Subjects were patients born between June 2015 and June 2019 at 4 hospitals in Massachusetts. The prediction target was a follow-up total serum bilirubin measurement obtained <72 hours after a previous measurement. Birth before versus after February 2019 was used to generate a training set (27,428 target measurements) and a held-out test set (3320 measurements), respectively. Multiple supervised learning models were trained. To further assess model performance, predictions on the held-out test set were also compared with corresponding predictions from clinicians. Results: The best predictive accuracy on the held-out test set was obtained with the multilayer perceptron (ie, neural network; mean absolute error [MAE] 1.05 mg/dL) and XGBoost (MAE 1.04 mg/dL) models. A limited number of predictors were sufficient for constructing models with the best performance and avoiding overfitting: current bilirubin measurement, last rate of rise, proportion of time under phototherapy, time to next measurement, gestational age at birth, current age, and fractional weight change from birth. Clinicians made a total of 210 prospective predictions. The neural network model accuracy on this subset of predictions had an MAE of 1.06 mg/dL compared with clinician predictions with an MAE of 1.38 mg/dL (P<.0001). In babies born at 35 weeks of gestation or later, this approach was also applied to predict the binary outcome of subsequently exceeding consensus guidelines for phototherapy initiation and achieved an area under the receiver operating characteristic curve of 0.94 (95% CI 0.91 to 0.97). 
Conclusions: This study developed predictive models for neonatal follow-up total serum bilirubin measurements that outperform clinicians. This may be the first report of models that predict specific bilirubin values, are not limited to near-term patients without risk factors, and take into account the effect of phototherapy. Trial Registration:
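
The head-to-head comparison above rests on mean absolute error (MAE), computed in the same units as the measurement. A minimal sketch of that metric, using invented bilirubin values rather than study data:

```python
def mean_absolute_error(predicted, observed):
    """Average absolute deviation, in the measurement's own units (mg/dL here)."""
    assert len(predicted) == len(observed)
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

# Illustrative follow-up bilirubin values (mg/dL); invented for the example.
observed = [8.0, 10.5, 6.2, 12.1]
model = [8.5, 10.0, 7.0, 11.5]      # hypothetical model predictions
clinician = [9.5, 12.0, 5.0, 13.5]  # hypothetical clinician predictions

model_mae = mean_absolute_error(model, observed)          # ~0.60 mg/dL
clinician_mae = mean_absolute_error(clinician, observed)  # ~1.40 mg/dL
```

A lower MAE means the predictions track the follow-up measurement more closely; the study reports the analogous comparison (1.06 vs 1.38 mg/dL) on 210 prospective predictions.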

  • Clinicians use artificial intelligence technology to predict 1-year risk of death for dialysis patients and high-risk factors. Source: Image created by the authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Prognostic Machine Learning Models for First-Year Mortality in Incident Hemodialysis Patients: Development and Validation Study


    Background: The first-year survival rate among patients undergoing hemodialysis remains poor. Current mortality risk scores for patients undergoing hemodialysis employ regression techniques and have limited applicability and robustness. Objective: We aimed to develop a machine learning model utilizing clinical factors to predict first-year mortality in patients undergoing hemodialysis that could assist physicians in classifying high-risk patients. Methods: Training and testing cohorts consisted of 5351 patients from a single center and 5828 patients from 97 renal centers undergoing hemodialysis (incident only). The outcome was all-cause mortality during the first year of dialysis. Extreme gradient boosting was used for algorithm training and validation. Two models were established based on the data obtained at dialysis initiation (model 1) and data 0-3 months after dialysis initiation (model 2), and 10-fold cross-validation was applied to each model. The area under the curve (AUC), sensitivity (recall), specificity, precision, balanced accuracy, and F1 score were used to assess the predictive ability of the models. Results: In the training and testing cohorts, 585 (10.93%) and 764 (13.11%) patients, respectively, died during the first-year follow-up. Of 42 candidate features, the 15 most important features were selected. The performance of model 1 (AUC 0.83, 95% CI 0.78-0.84) was similar to that of model 2 (AUC 0.85, 95% CI 0.81-0.86). Conclusions: We developed and validated 2 machine learning models to predict first-year mortality in patients undergoing hemodialysis. Both models could be used to stratify high-risk patients at the early stages of dialysis.
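
The abstract evaluates its models with sensitivity, specificity, precision, balanced accuracy, and F1 score, all of which derive from the same 2x2 confusion matrix. A small sketch with invented counts (not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: share of deaths correctly flagged
    specificity = tn / (tn + fp)   # share of survivors correctly cleared
    precision = tp / (tp + fp)     # share of flagged patients who actually died
    balanced_accuracy = (sensitivity + specificity) / 2
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "balanced_accuracy": balanced_accuracy,
            "f1": f1}

# Hypothetical counts for illustration only.
metrics = classification_metrics(tp=80, fp=10, tn=90, fn=20)
```

Balanced accuracy and F1 matter here because first-year mortality is an imbalanced outcome (roughly 11%-13% of patients), so raw accuracy alone would be misleading.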

  • Ambulance running through a street in Tokyo, Japan. Source: iStock; Copyright: 4X-image; License: Licensed by the authors.

    Institution-Specific Machine Learning Models for Prehospital Assessment to Predict Hospital Admission: Prediction Model Development Study


    Background: Although multiple prediction models have been developed to predict hospital admission to emergency departments (EDs) to address overcrowding and patient safety, only a few studies have examined prediction models for prehospital use. Development of institution-specific prediction models is feasible in this age of data science, provided that predictor-related information is readily collectable. Objective: We aimed to develop a hospital admission prediction model based on patient information that is commonly available during ambulance transport before hospitalization. Methods: Patients transported by ambulance to our ED from April 2018 through March 2019 were enrolled. Candidate predictors were age, sex, chief complaint, vital signs, and patient medical history, all of which were recorded by emergency medical teams during ambulance transport. Patients were divided into two cohorts for derivation (3601/5145, 70.0%) and validation (1544/5145, 30.0%). For statistical models, logistic regression, logistic lasso, random forest, and gradient boosting machine were used. Prediction models were developed in the derivation cohort. Model performance was assessed by area under the receiver operating characteristic curve (AUROC) and association measures in the validation cohort. Results: Of 5145 patients transported by ambulance, including deaths in the ED and hospital transfers, 2699 (52.5%) required hospital admission. Prediction performance was higher with the addition of predictive factors, attaining the best performance with an AUROC of 0.818 (95% CI 0.792-0.839) with a machine learning model and predictive factors of age, sex, chief complaint, and vital signs. Sensitivity and specificity of this model were 0.744 (95% CI 0.716-0.773) and 0.745 (95% CI 0.709-0.776), respectively. 
Conclusions: For patients transferred to EDs, we developed a well-performing hospital admission prediction model based on routinely collected prehospital information including chief complaints.
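
The AUROC reported above has a useful rank interpretation: the probability that a randomly chosen admitted patient scores higher than a randomly chosen non-admitted one. A hedged, brute-force sketch of that definition on toy data (not the study's scores):

```python
def auroc(scores, labels):
    """AUROC via its rank interpretation: the probability that a random
    positive (label 1) outranks a random negative (label 0), ties counted
    as half. O(P*N) pairwise form, fine for a sketch."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy admission-risk scores and hospital-admission labels; invented.
example_auc = auroc([0.9, 0.7, 0.8, 0.2, 0.4], [1, 0, 1, 0, 1])
```

An AUROC of 0.5 is chance-level ranking and 1.0 is perfect separation, which is why the reported 0.818 indicates good prehospital discrimination.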

  • Source: Adobe Stock; Copyright: artqu; License: Licensed by the authors.

    Data Object Exchange (DOEx) as a Method to Facilitate Intraorganizational Collaboration by Managed Data Sharing: Viewpoint


    Background: To help reduce expenses, shorten timelines, and improve the quality of final deliverables, the Veterans Health Administration (VA) and other health care systems promote sharing of expertise among informatics user groups. Traditional barriers to time-efficient sharing of expertise include difficulties in finding potential collaborators and availability of a mechanism to share expertise. Objective: We aim to describe how the VA shares expertise among its informatics groups by describing a custom-built tool, the Data Object Exchange (DOEx), along with statistics on its usage. Methods: A centrally managed web application was developed in the VA to share informatics expertise using database objects. Visitors to the site can view a catalog of objects published by other informatics user groups. Requests for subscription and publication made through the site are routed to database administrators, who then actualize the resource requests through modifications of database object permissions. Results: As of April 2019, the DOEx enabled the publication of 707 database objects to 1202 VA subscribers from 758 workgroups. Overall, over 10,000 requests are made each year regarding permissions on these shared database objects, involving diverse information. Common “flavors” of shared data include disease-specific study populations (eg, patients with asthma), common data definitions (eg, hemoglobin laboratory results), and results of complex analyses (eg, models of anticipated resource utilization). Shared database objects also enable construction of community-built data pipelines. Conclusions: To increase the efficiency of informatics user groups, a method was developed to facilitate intraorganizational collaboration by managed data sharing. The advantages of this system include (1) reduced duplication of work (thereby reducing expenses and shortening timelines) and (2) higher quality of work based on simplifying the adoption of specialized knowledge among groups.

  • Source: Freepik; Copyright: pressfoto; License: Licensed by JMIR.

    How High-Risk Comorbidities Co-Occur in Readmitted Patients With Hip Fracture: Big Data Visual Analytical Approach


    Background: When older adult patients with hip fracture (HFx) have unplanned hospital readmissions within 30 days of discharge, it doubles their 1-year mortality, resulting in substantial personal and financial burdens. Although such unplanned readmissions are predominantly caused by reasons not related to HFx surgery, few studies have focused on how pre-existing high-risk comorbidities co-occur within and across subgroups of patients with HFx. Objective: This study aims to use a combination of supervised and unsupervised visual analytical methods to (1) obtain an integrated understanding of comorbidity risk, comorbidity co-occurrence, and patient subgroups, and (2) enable a team of clinical and methodological stakeholders to infer the processes that precipitate unplanned hospital readmission, with the goal of designing targeted interventions. Methods: We extracted a training data set consisting of 16,886 patients (8443 readmitted patients with HFx and 8443 matched controls) and a replication data set consisting of 16,222 patients (8111 readmitted patients with HFx and 8111 matched controls) from the 2010 and 2009 Medicare database, respectively. The analyses consisted of a supervised combinatorial analysis to identify and replicate combinations of comorbidities that conferred significant risk for readmission, an unsupervised bipartite network analysis to identify and replicate how high-risk comorbidity combinations co-occur across readmitted patients with HFx, and an integrated visualization and analysis of comorbidity risk, comorbidity co-occurrence, and patient subgroups to enable clinician stakeholders to infer the processes that precipitate readmission in patient subgroups and to propose targeted interventions. 
Results: The analyses helped to identify (1) 11 comorbidity combinations that conferred significantly higher risk (ranging from P<.001 to P=.01) for a 30-day readmission, (2) 7 biclusters of patients and comorbidities with a significant bicluster modularity (P<.001; Medicare=0.440; random mean 0.383 [0.002]), indicating strong heterogeneity in the comorbidity profiles of readmitted patients, and (3) inter- and intracluster risk associations, which enabled clinician stakeholders to infer the processes involved in the exacerbation of specific combinations of comorbidities leading to readmission in patient subgroups. Conclusions: The integrated analysis of risk, co-occurrence, and patient subgroups enabled the inference of processes that precipitate readmission, leading to a comorbidity exacerbation risk model for readmission after HFx. These results have direct implications for (1) the management of comorbidities targeted at high-risk subgroups of patients with the goal of pre-emptively reducing their risk of readmission and (2) the development of more accurate risk prediction models that incorporate information about patient subgroups.
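
In a matched case-control design like this, one simple way to quantify how much a comorbidity combination "confers risk" is an odds ratio over the 2x2 exposure table. A hedged sketch with invented counts; the study's supervised combinatorial analysis is more involved than this:

```python
def odds_ratio(exposed_cases, cases, exposed_controls, controls):
    """Odds ratio of the outcome (30-day readmission) for carriers of a
    given comorbidity combination, from case-control counts."""
    a = exposed_cases                # readmitted, with the combination
    b = cases - exposed_cases        # readmitted, without it
    c = exposed_controls             # control, with the combination
    d = controls - exposed_controls  # control, without it
    return (a * d) / (b * c)

# Hypothetical counts: the combination is 3 times more common among cases.
example_or = odds_ratio(exposed_cases=30, cases=100,
                        exposed_controls=10, controls=100)
```

An odds ratio above 1 marks the combination as enriched among readmitted patients; significance testing and replication, as done in the study, guard against chance findings among many candidate combinations.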

  • Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Automated Cluster Detection of Health Care–Associated Infection Based on the Multisource Surveillance of Process Data in the Area Network: Retrospective...


    Background: The cluster detection of health care–associated infections (HAIs) is crucial for identifying HAI outbreaks in the early stages. Objective: We aimed to verify whether multisource surveillance based on the process data in an area network can be effective in detecting HAI clusters. Methods: We retrospectively analyzed the incidence of HAIs and 3 indicators of process data relative to infection, namely, antibiotic utilization rate in combination, inspection rate of bacterial specimens, and positive rate of bacterial specimens, from 4 independent high-risk units in a tertiary hospital in China. We utilized the Shewhart warning model to detect the peaks of the time-series data. Subsequently, we designed 5 surveillance strategies based on the process data for the HAI cluster detection: (1) antibiotic utilization rate in combination only, (2) inspection rate of bacterial specimens only, (3) positive rate of bacterial specimens only, (4) antibiotic utilization rate in combination + inspection rate of bacterial specimens + positive rate of bacterial specimens in parallel, and (5) antibiotic utilization rate in combination + inspection rate of bacterial specimens + positive rate of bacterial specimens in series. We used the receiver operating characteristic (ROC) curve and Youden index to evaluate the warning performance of these surveillance strategies for the detection of HAI clusters. Results: The ROC curves of the 5 surveillance strategies were located above the standard line, and the area under the curve of the ROC was larger in the parallel strategy than in the series strategy and the single-indicator strategies. 
The optimal Youden indexes were 0.48 (95% CI 0.29-0.67) at a threshold of 1.5 in the antibiotic utilization rate in combination–only strategy, 0.49 (95% CI 0.45-0.53) at a threshold of 0.5 in the inspection rate of bacterial specimens–only strategy, 0.50 (95% CI 0.28-0.71) at a threshold of 1.1 in the positive rate of bacterial specimens–only strategy, 0.63 (95% CI 0.49-0.77) at a threshold of 2.6 in the parallel strategy, and 0.32 (95% CI 0.00-0.65) at a threshold of 0.0 in the series strategy. The warning performance of the parallel strategy was greater than that of the single-indicator strategies when the threshold exceeded 1.5. Conclusions: The multisource surveillance of process data in the area network is an effective method for the early detection of HAI clusters. The combination of multisource data and the threshold of the warning model are 2 important factors that influence the performance of the model.
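
The Youden index used to compare these strategies is J = sensitivity + specificity − 1, and the "optimal" threshold is the operating point maximizing J. A minimal sketch; the operating points below are made up, not the study's:

```python
def youden_j(sensitivity, specificity):
    """Youden index: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

def best_operating_point(points):
    """points: (threshold, sensitivity, specificity) tuples; pick the max-J one."""
    return max(points, key=lambda t: youden_j(t[1], t[2]))

# Hypothetical ROC operating points from a warning-threshold sweep.
operating_points = [
    (0.5, 0.95, 0.40),
    (1.5, 0.85, 0.63),
    (2.6, 0.80, 0.83),
]
best = best_operating_point(operating_points)  # threshold 2.6 has the largest J
```

J ranges from 0 (no better than chance) to 1 (perfect), which is why the parallel strategy's J of 0.63 outperforms the single-indicator strategies' values near 0.5.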

  • Source: iStock; Copyright: Rido; License: Licensed by JMIR.

    Investigating the Acceptance of Video Consultation by Patients in Rural Primary Care: Empirical Comparison of Preusers and Actual Users


    Background: The ongoing digitalization in health care is enabling patients to receive treatment via telemedical technologies, such as video consultation (VC), which are increasingly being used by general practitioners. Rural areas in particular exhibit a rapidly aging population with an increase in associated health issues, while those regions are becoming less attractive to young physicians as places to work. Integrating telemedical approaches into patient care can help lessen the professional workload and counteract the trend toward spatial undersupply in many countries. As a result, an increasing number of patients are being confronted with digital treatment and new forms of care delivery. These novel ways of care engender interactions with patients and their private lives in unprecedented ways, calling for studies that incorporate patient needs, expectations, and behavior into the design and application of telemedical technology within the field of primary care. Objective: This study aims to unveil and compare the acceptance-promoting factors of patients without experience (preusers) and patients with experience (actual users) in using VC in a primary care setting and to provide implications for the design, theory, and use of VC. Methods: In total, 20 semistructured interviews were conducted with patients in 2 rural primary care practices to identify and analyze patient needs, perceptions, and experiences that facilitate the acceptance of VC technology and adoption behavior. Both preusers and actual users of VC were engaged, allowing for an empirical comparison. For data analysis, a procedure was followed based on open, axial, and selective coding. Results: The study delivers factors and respective subdimensions that foster the perceptions of patients toward VC in rural primary care. 
Factors cover attitudes and expectations toward the use of VC, the patient-physician relationship and its impact on technology assessment and use, patients’ rights and obligations that emerge with the introduction of VC in primary care, and the influence of social norms on the use of VC and vice versa. With regard to these factors, the results indicate differences between preusers and actual users of VC, which imply ways of designing and implementing VC concerning the respective user group. Actual users attach higher importance to the perceived benefits of VC and their responsibility to use it appropriately, which might be rooted in the technological intervention they experienced. On the contrary, preusers valued the opinions and expectations of their peers. Conclusions: The way the limitations and potential of VC are perceived varies across patients. When practicing VC in primary care, different aspects should be considered when dealing with preusers, such as maintaining a physical interaction with the physician or incorporating social cues. Once the digital intervention takes place, patients tend to value benefits such as flexibility and effectiveness over potential concerns.

  • Source: Pexels; Copyright: Burst; License: Licensed by JMIR.

    Building a Pharmacogenomics Knowledge Model Toward Precision Medicine: Case Study in Melanoma


    Background: Many drugs do not work the same way for everyone owing to differences in their genes. Pharmacogenomics (PGx) aims to understand how genetic variants influence drug efficacy and toxicity. It is often considered one of the most actionable areas of the personalized medicine paradigm. However, little prior work has included in-depth explorations and descriptions of drug usage, dosage adjustment, and so on. Objective: We present a pharmacogenomics knowledge model to discover the hidden relationships between PGx entities such as drugs, genes, and diseases, especially the details of precise medication. Methods: Open PGx data sources such as DrugBank and RxNorm were integrated in this study, as well as drug labels published by the US Food and Drug Administration. We annotated 190 drug labels manually for entities and relationships. Based on the annotation results, we trained 3 different natural language processing models to complete entity recognition. Finally, the pharmacogenomics knowledge model was described in detail. Results: In entity recognition tasks, the Bidirectional Encoder Representations from Transformers–conditional random field model achieved better performance, with a micro-F1 score of 85.12%. The pharmacogenomics knowledge model in our study included 5 semantic types: drug, gene, disease, precise medication (population, daily dose, dose form, frequency, etc), and adverse reaction. Meanwhile, 26 semantic relationships were defined in detail. Taking melanoma caused by a BRAF gene mutation into consideration, the pharmacogenomics knowledge model covered 7 related drugs, and 4846 triples were established in this case. All the corpora, relationship definitions, and triples were made publicly available. Conclusions: We highlighted the pharmacogenomics knowledge model as a scalable framework for clinicians and clinical pharmacists to adjust drug dosage according to patient-specific genetic variation, and for pharmaceutical researchers to develop new drugs. 
In the future, a series of other antitumor drugs and automatic relation extractions will be taken into consideration to further enhance our framework with more PGx linked data.
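
The triples in such a knowledge model can be pictured as plain subject-predicate-object tuples. A toy sketch around the melanoma/BRAF example; the relation labels here are illustrative, not the paper's 26-relationship schema:

```python
# Tiny in-memory triple store; entity and relation names are illustrative.
triples = {
    ("vemurafenib", "targets", "BRAF"),
    ("vemurafenib", "treats", "melanoma"),
    ("BRAF mutation", "causes", "melanoma"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern,
    sorted for deterministic output."""
    return sorted(t for t in triples
                  if (subject is None or t[0] == subject)
                  and (predicate is None or t[1] == predicate)
                  and (obj is None or t[2] == obj))
```

Pattern queries like `query(obj="melanoma")` are the basic operation such a model supports: given a disease or gene variant, retrieve every linked drug and relationship.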

  • Source: Pexels; Copyright: bongkarn; License: Licensed by JMIR.

    AutoScore: A Machine Learning–Based Automatic Clinical Score Generator and Its Application to Mortality Prediction Using Electronic Health Records


    Background: Risk scores can be useful in clinical risk stratification and accurate allocation of medical resources, helping health providers improve patient care. Point-based scores are more understandable and explainable than other complex models and are now widely used in clinical decision making. However, the development of the risk scoring model is nontrivial and has not yet been systematically presented, with few studies investigating methods of clinical score generation using electronic health records. Objective: This study aims to propose AutoScore, a machine learning–based automatic clinical score generator consisting of 6 modules for developing interpretable point-based scores. Future users can employ the AutoScore framework to create clinical scores effortlessly in various clinical applications. Methods: We proposed the AutoScore framework comprising 6 modules: variable ranking, variable transformation, score derivation, model selection, score fine-tuning, and model evaluation. To demonstrate the performance of AutoScore, we used data from the Beth Israel Deaconess Medical Center to build a scoring model for mortality prediction and then compared it with other baseline models using receiver operating characteristic analysis. A software package in R 3.5.3 (R Foundation) was also developed to demonstrate the implementation of AutoScore. Results: Implemented on a data set with 44,918 individual admission episodes of intensive care, the AutoScore-created scoring models performed comparably to other standard methods (ie, logistic regression, stepwise regression, least absolute shrinkage and selection operator, and random forest) in terms of predictive accuracy and model calibration but required fewer predictors and presented high interpretability and accessibility. 
The nine-variable, AutoScore-created, point-based scoring model achieved an area under the curve (AUC) of 0.780 (95% CI 0.764-0.798), whereas the model of logistic regression with 24 variables had an AUC of 0.778 (95% CI 0.760-0.795). Moreover, the AutoScore framework also drives the clinical research continuum and automation with its integration of all necessary modules. Conclusions: We developed an easy-to-use, machine learning–based automatic clinical score generator, AutoScore; systematically presented its structure; and demonstrated its superiority (predictive performance and interpretability) over other conventional methods using a benchmark database. AutoScore will emerge as a potential scoring tool in various medical applications.
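
Point-based scores like those AutoScore emits are typically obtained by rescaling model coefficients to small integers and summing the points a patient accrues. A rough sketch of that idea, not AutoScore's actual module pipeline; the variable names and coefficients are invented:

```python
def derive_points(betas):
    """Rescale regression coefficients to integer points by dividing by the
    smallest nonzero absolute coefficient and rounding."""
    base = min(abs(b) for b in betas.values() if b != 0)
    return {name: round(b / base) for name, b in betas.items()}

def total_score(points, patient_features):
    """Sum the points for the risk categories a patient falls into."""
    return sum(points[f] for f in patient_features)

# Hypothetical logistic-regression coefficients for three binary indicators.
betas = {"age>=65": 0.6, "sbp<100": 0.3, "resp_rate>=30": 0.9}
points = derive_points(betas)  # {"age>=65": 2, "sbp<100": 1, "resp_rate>=30": 3}
score = total_score(points, ["age>=65", "resp_rate>=30"])  # 5
```

Integer points trade a little predictive precision for bedside interpretability, which is the design choice the abstract highlights over black-box baselines.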

  • Source: Wikimedia Commons; Copyright: Genusfotografen and Wikimedia Sverige; License: Creative Commons Attribution + ShareAlike (CC-BY-SA).

    Feasibility of Asynchronous and Automated Telemedicine in Otolaryngology: Prospective Cross-Sectional Study


    Background: COVID-19 often causes respiratory symptoms, making otolaryngology offices among the most susceptible places for community transmission of the virus. Thus, telemedicine may benefit both patients and physicians. Objective: This study aims to explore the feasibility of telemedicine for the diagnosis of all otologic disease types. Methods: A total of 177 patients were prospectively enrolled, and each patient's clinical manifestations, together with otoendoscopic images, were recorded in the electronic medical records. Asynchronous diagnoses were made for each patient to assess Top-1 and Top-2 accuracy, and we selected 20 cases to conduct a survey among 4 different otolaryngologists to assess the accuracy, interrater agreement, and diagnostic speed. We also constructed an experimental automated diagnosis system and assessed its Top-1 accuracy and diagnostic speed. Results: Asynchronous diagnosis showed Top-1 and Top-2 accuracies of 77.40% and 86.44%, respectively. In the selected 20 cases, the Top-2 accuracy of the 4 otolaryngologists was on average 91.25% (SD 7.50%), with almost perfect agreement between them (Cohen kappa=0.91). The automated diagnostic model system showed 69.50% Top-1 accuracy. Otolaryngologists could diagnose an average of 1.55 (SD 0.48) patients per minute, while the machine learning model was capable of diagnosing an average of 667.90 (SD 8.3) patients per minute. Conclusions: Asynchronous telemedicine in otology is feasible owing to the reasonable Top-2 accuracy when assessed by experienced otolaryngologists. Moreover, enhanced diagnostic speed while sustaining accuracy shows the possibility of optimizing medical resources to provide expertise in areas short of physicians.
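
Interrater agreement here is summarized by Cohen kappa, which discounts the agreement expected by chance. A two-rater sketch (the study surveyed 4 otolaryngologists; the diagnosis labels below are invented):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the raters' marginal label frequencies.
    chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical otologic diagnoses from two raters on the same 4 cases.
kappa_example = cohen_kappa(["OM", "OME", "OM", "normal"],
                            ["OM", "OME", "OM", "normal"])  # identical -> 1.0
```

Kappa of 0 means agreement no better than chance and 1 means perfect agreement; values above roughly 0.8 are conventionally read as "almost perfect," consistent with the reported 0.91.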

  • Muscle quality map generation using an automated web-based toolkit. Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Assessment of Myosteatosis on Computed Tomography by Automatic Generation of a Muscle Quality Map Using a Web-Based Toolkit: Feasibility Study


    Background: Muscle quality is associated with fatty degeneration or infiltration of the muscle, which may be associated with decreased muscle function and increased disability. Objective: The aim of this study is to evaluate the feasibility of automated quantitative measurements of the skeletal muscle on computed tomography (CT) images to assess normal-attenuation muscle and myosteatosis. Methods: We developed a web-based toolkit to generate a muscle quality map by categorizing muscle components. First, automatic segmentation of the total abdominal muscle area (TAMA), visceral fat area, and subcutaneous fat area was performed using a predeveloped deep learning model on a single axial CT image at the L3 vertebral level. Second, the Hounsfield unit of each pixel in the TAMA was measured and categorized into 3 components: normal-attenuation muscle area (NAMA), low-attenuation muscle area (LAMA), and inter/intramuscular adipose tissue (IMAT) area. The myosteatosis area was derived by adding the LAMA and IMAT area. We tested the feasibility of the toolkit using randomly selected healthy participants, comprising 6 different age groups (20 to 79 years). With stratification by sex, these indices were compared between age groups using 1-way analysis of variance (ANOVA). Correlations between the myosteatosis area or muscle densities and fat areas were analyzed using Pearson correlation coefficient r. Results: A total of 240 healthy participants (135 men and 105 women) with 40 participants per age group were included in the study. In the 1-way ANOVA, the NAMA, LAMA, and IMAT were significantly different between the age groups in both male and female participants (P≤.004), whereas the TAMA showed a significant difference only in male participants (male, P<.001; female, P=.88). 
The myosteatosis area had a strong negative correlation with muscle densities (r=–0.833 to –0.894), a moderate positive correlation with visceral fat areas (r=0.607 to 0.669), and a weak positive correlation with the subcutaneous fat areas (r=0.305 to 0.441). Conclusions: The automated web-based toolkit is feasible and enables quantitative CT assessment of myosteatosis, which can be a potential quantitative biomarker for evaluating structural and functional changes brought on by aging in the skeletal muscle. Trial Registration:
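
The per-pixel categorization step can be sketched as a simple Hounsfield-unit lookup. The cutoffs below are commonly cited CT attenuation ranges assumed for illustration; the study's exact thresholds may differ:

```python
def categorize(hu):
    """Assign a muscle-map category from a pixel's Hounsfield unit (HU) value.
    Cutoffs are assumed, commonly cited CT ranges, not taken from the study."""
    if 30 <= hu <= 150:
        return "NAMA"   # normal-attenuation muscle area
    if -29 <= hu <= 29:
        return "LAMA"   # low-attenuation muscle area
    if -190 <= hu <= -30:
        return "IMAT"   # inter/intramuscular adipose tissue
    return "other"

def myosteatosis_fraction(pixels):
    """Myosteatosis area = LAMA + IMAT, here as a fraction of categorized
    muscle-map pixels."""
    cats = [categorize(hu) for hu in pixels]
    muscle = [c for c in cats if c != "other"]
    return sum(c in ("LAMA", "IMAT") for c in muscle) / len(muscle)
```

In the toolkit this categorization runs over every pixel of the segmented total abdominal muscle area at the L3 level, and the per-category pixel counts become the NAMA, LAMA, and IMAT areas compared across age groups.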

  • Source: Freepik; Copyright: jcomp; License: Licensed by JMIR.

    Clinical Decision Support Systems for Pressure Ulcer Management: Systematic Review


    Background: The clinical decision-making process in pressure ulcer management is complex, and its quality depends on both the nurse's experience and the availability of scientific knowledge. This process should follow evidence-based practices incorporating health information technologies to assist health care professionals, such as the use of clinical decision support systems. These systems, in addition to increasing the quality of care provided, can reduce errors and costs in health care. However, evidence for the widespread use of clinical decision support systems is still limited, indicating the need to identify and evaluate their effects on nursing clinical practice. Objective: The goal of the review was to identify the effects of nurses using clinical decision support systems on clinical decision making for pressure ulcer management. Methods: The systematic review was conducted in accordance with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) recommendations. The search was conducted in April 2019 on 5 electronic databases (MEDLINE, Scopus, Web of Science, Cochrane, and CINAHL) without publication date or study design restrictions. Articles that addressed the use of computerized clinical decision support systems in pressure ulcer care applied in clinical practice were included. The reference lists of eligible articles were searched manually. The Mixed Methods Appraisal Tool was used to assess the methodological quality of the studies. Results: The search strategy resulted in 998 articles, 16 of which were included. The year of publication ranged from 1995 to 2017, with 45% of studies conducted in the United States. Most addressed the use of clinical decision support systems by nurses for pressure ulcer prevention in inpatient units. 
    All studies described knowledge-based systems that assessed the effects on clinical decision making, clinical effects secondary to clinical decision support system use, or factors that influenced the use or intention to use clinical decision support systems by health professionals and the success of their implementation in nursing practice. Conclusions: The evidence in the available literature about the effects of clinical decision support systems (used by nurses) on decision making for pressure ulcer prevention and treatment is still insufficient. No significant effects were found on nurses' knowledge following the integration of clinical decision support systems into the workflow, with assessments made for a brief period of up to 6 months. Clinical effects, such as outcomes in the incidence and prevalence of pressure ulcers, remain limited in the studies, and most found clinically relevant but statistically nonsignificant reductions in pressure ulcers. It is necessary to carry out studies that prioritize better adoption of, and interaction of nurses with, clinical decision support systems, as well as studies with representative samples of health care professionals, randomized study designs, and assessment instruments appropriate to the professional and institutional profile. In addition, long-term follow-up is necessary to assess the effects of clinical decision support systems and demonstrate a more measurable and significant effect on clinical decision making. Trial Registration: PROSPERO International Prospective Register of Systematic Reviews CRD42019127663;


Latest Submissions Open for Peer-Review:

  • Prediction of Foodborne Diseases Pathogens: A Machine Learning Approach

    Date Submitted: Oct 11, 2020

    Open Peer Review Period: Oct 11, 2020 - Dec 6, 2020

    Background: Foodborne diseases have a high global incidence and place a heavy burden on public health and the economy. Foodborne pathogens, as the main cause of foodborne diseases, play an important role in their treatment and prevention. However, foodborne diseases caused by different pathogens lack specific clinical features, and the proportion of cases in which the pathogen is actually identified in clinical practice is low. Objective: We aimed to analyze data on foodborne disease cases, select appropriate features based on the analysis results, and use machine learning methods to classify foodborne disease pathogens, so as to predict the pathogens in cases that have not been tested. Methods: We extracted features such as space, time, and food exposure from the data on foodborne disease cases, analyzed the relationship between these features and the pathogens, and applied a variety of machine learning methods to classify the pathogens, comparing the results to obtain the prediction model with the highest accuracy. Results: Among the four models compared, the gradient boosted decision tree (GBDT) model obtained the highest accuracy, almost 69% in identifying four pathogens: Salmonella, norovirus, Escherichia coli, and Vibrio parahaemolyticus. By evaluating feature importance, we found that the time of illness, geographic longitude and latitude, and diarrhea frequency, among others, play important roles in classifying foodborne disease pathogens. Conclusions: Analysis of related data can reflect the distribution of foodborne disease features and the relationships among them. Classifying pathogens based on these analysis results and machine learning methods can provide beneficial support for the clinical auxiliary diagnosis and treatment of foodborne diseases.