Journal Description

JMIR Medical Informatics (JMI, ISSN 2291-9694) (Editor-in-chief: Christian Lovis, MD, MPH, FACMI) is a PubMed/SCIE-indexed, top-rated, tier A journal with an impact factor expected in 2019. It focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, and eHealth infrastructures and implementation, with an emphasis on applied, translational research and a broad readership including clinicians, CIOs, engineers, industry, and health informatics professionals.

Published by JMIR Publications, publisher of the Journal of Medical Internet Research (JMIR), the leading eHealth/mHealth journal (Impact Factor 2018: 4.671), JMIR Med Inform has a slightly different scope (emphasizing applications for clinicians and health professionals rather than consumers/citizens, who are the focus of JMIR), publishes even faster, and also accepts papers that are more technical or more formative than those published in the Journal of Medical Internet Research.

JMIR Medical Informatics is indexed in PubMed Central/PubMed and has also been accepted for SCIE, with an official Clarivate 2018 impact factor expected to be released in 2019 (see announcement).

JMIR Medical Informatics adheres to the same quality standards as JMIR, and all articles published here are also cross-listed in the Table of Contents of JMIR, the world's leading medical journal in health sciences / health services research and health informatics.


Recent Articles:

  • A free clinic for children with precocious puberty, held by doctors of the endocrinology department at Guangzhou Women and Children's Medical Center. Source: Image created by the Authors; Copyright: The Authors; License: Public Domain (CC0).

    Development of Prediction Models Using Machine Learning Algorithms for Girls with Suspected Central Precocious Puberty: Retrospective Study


    Background: Central precocious puberty (CPP) in girls seriously affects their physical and mental development in childhood. The method of diagnosis—gonadotropin-releasing hormone (GnRH)–stimulation test or GnRH analogue (GnRHa)–stimulation test—is expensive and makes patients uncomfortable due to the need for repeated blood sampling. Objective: We aimed to combine multiple CPP–related features and construct machine learning models to predict response to the GnRHa-stimulation test. Methods: In this retrospective study, we analyzed clinical and laboratory data of 1757 girls who underwent a GnRHa test in order to develop XGBoost and random forest classifiers for prediction of response to the GnRHa test. The local interpretable model-agnostic explanations (LIME) algorithm was used with the black-box classifiers to increase their interpretability. We measured sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of the models. Results: Both the XGBoost and random forest models achieved good performance in distinguishing between positive and negative responses, with the AUC ranging from 0.88 to 0.90, sensitivity ranging from 77.91% to 77.94%, and specificity ranging from 84.32% to 87.66%. Basal serum luteinizing hormone, follicle-stimulating hormone, and insulin-like growth factor-I levels were found to be the three most important factors. In the interpretable models of LIME, the abovementioned variables made high contributions to the prediction probability. Conclusions: The prediction models we developed can help diagnose CPP and may be used as a prescreening tool before the GnRHa-stimulation test.
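The evaluation pattern this abstract describes (a tree-ensemble classifier scored by AUC, sensitivity, and specificity) can be sketched as follows. This is not the authors' code: the data are synthetic, and the three feature columns only stand in for the basal LH, FSH, and IGF-I predictors named above.

```python
# Sketch: random forest classification with AUC/sensitivity/specificity
# reporting, in the style of the study's evaluation. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))  # stand-ins for basal LH, FSH, IGF-I
# synthetic "GnRHa test response", loosely driven by the first two columns
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, proba)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(auc, sensitivity, specificity)
```

In the study, a LIME explainer would additionally be run over the fitted classifier to attribute each prediction to individual features.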

  • Source: Air Force Medical Service (Malcolm Mayfield); Copyright: US Air Force; License: Public Domain (CC0).

    Detection of Bleeding Events in Electronic Health Record Notes Using Convolutional Neural Network Models Enhanced With Recurrent Neural Network Autoencoders:...


    Background: Bleeding events are common and critical and may cause significant morbidity and mortality. High incidences of bleeding events are associated with cardiovascular disease in patients on anticoagulant therapy. Prompt and accurate detection of bleeding events is essential to prevent serious consequences. As bleeding events are often described in clinical notes, automatic detection of bleeding events from electronic health record (EHR) notes may improve drug-safety surveillance and pharmacovigilance. Objective: We aimed to develop a natural language processing (NLP) system to automatically classify whether an EHR note sentence contains a bleeding event. Methods: We expert annotated 878 EHR notes (76,577 sentences and 562,630 word-tokens) to identify bleeding events at the sentence level. This annotated corpus was used to train and validate our NLP systems. We developed an innovative hybrid convolutional neural network (CNN) and long short-term memory (LSTM) autoencoder (HCLA) model that integrates a CNN architecture with a bidirectional LSTM (BiLSTM) autoencoder model to leverage large unlabeled EHR data. Results: HCLA achieved the best area under the receiver operating characteristic curve (0.957) and F1 score (0.938) to identify whether a sentence contains a bleeding event, thereby surpassing the strong baseline support vector machines and other CNN and autoencoder models. Conclusions: By incorporating a supervised CNN model and a pretrained unsupervised BiLSTM autoencoder, the HCLA achieved high performance in detecting bleeding events.
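The task framing here (classify each EHR sentence as containing a bleeding event or not, scored by F1) can be illustrated with a trivial keyword baseline. This stands in for, and is much weaker than, the paper's CNN + BiLSTM autoencoder; the sentences and term list are invented.

```python
# Sketch: sentence-level bleeding-event classification with a keyword
# baseline (a stand-in for the paper's HCLA model). Toy data only.
from sklearn.metrics import f1_score

sentences = [
    "Patient developed GI bleeding on warfarin.",
    "No acute distress noted today.",
    "Hemorrhage controlled after transfusion.",
    "Vital signs stable, continue current meds.",
]
labels = [1, 0, 1, 0]  # 1 = sentence describes a bleeding event

BLEED_TERMS = ("bleed", "hemorrhage", "hematoma")  # hypothetical lexicon

def predict(sentence: str) -> int:
    s = sentence.lower()
    return int(any(term in s for term in BLEED_TERMS))

preds = [predict(s) for s in sentences]
f1 = f1_score(labels, preds)
print(f1)  # 1.0 on this toy set
```

A neural model earns its keep on the hard cases a lexicon misses (negation, historical mentions, misspellings), which is where the reported F1 of 0.938 over 76,577 real sentences becomes meaningful.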

  • Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Use of Electronic Health Record Access and Audit Logs to Identify Physician Actions Following Noninterruptive Alert Opening: Descriptive Study


    Background: Electronic health record (EHR) access and audit logs record behaviors of providers as they navigate the EHR. These data can be used to better understand provider responses to EHR–based clinical decision support (CDS), shedding light on whether and why CDS is effective. Objective: This study aimed to determine the feasibility of using EHR access and audit logs to track primary care physicians’ (PCPs’) opening of and response to noninterruptive alerts delivered to EHR InBaskets. Methods: We conducted a descriptive study to assess the use of EHR log data to track provider behavior. We analyzed data recorded following opening of 799 noninterruptive alerts sent to 75 PCPs’ InBaskets through a prior randomized controlled trial. Three types of alerts highlighted new medication concerns for older patients’ posthospital discharge: information only (n=593), medication recommendations (n=37), and test recommendations (n=169). We sought log data to identify the person opening the alert and the timing and type of PCPs’ follow-up EHR actions (immediate vs by the end of the following day). We performed multivariate analyses examining associations between alert type, patient characteristics, provider characteristics, and contextual factors and likelihood of immediate or subsequent PCP action (general, medication-specific, or laboratory-specific actions). We describe challenges and strategies for log data use. Results: We successfully identified the required data in EHR access and audit logs. More than three-quarters of alerts (78.5%, 627/799) were opened by the PCP to whom they were directed, allowing us to assess immediate PCP action; of these, 208 alerts were followed by immediate action. Expanding on our analyses to include alerts opened by staff or covering physicians, we found that an additional 330 of the 799 alerts demonstrated PCP action by the end of the following day. The remaining 261 alerts showed no PCP action. 
Compared to information-only alerts, the odds ratio (OR) of immediate action was 4.03 (95% CI 1.67-9.72) for medication-recommendation and 2.14 (95% CI 1.38-3.32) for test-recommendation alerts. Compared to information-only alerts, ORs of medication-specific action by end of the following day were significantly greater for medication recommendations (5.59; 95% CI 2.42-12.94) and test recommendations (1.71; 95% CI 1.09-2.68). We found a similar pattern for OR of laboratory-specific action. We encountered 2 main challenges: (1) Capturing a historical snapshot of EHR status (number of InBasket messages at time of alert delivery) required incorporation of data generated many months prior with longitudinal follow-up. (2) Accurately interpreting data elements required iterative work by a physician/data manager team taking action within the EHR and then examining audit logs to identify corresponding documentation. Conclusions: EHR log data could inform future efforts and provide valuable information during development and refinement of CDS interventions. To address challenges, use of these data should be planned before implementing an EHR–based study.
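The odds ratios with 95% CIs reported above can be computed from a 2x2 table with the standard Wald interval on the log odds ratio. The counts below are invented (chosen so the OR lands near the reported 4.03); they are not the study's data.

```python
# Sketch: odds ratio and Wald 95% CI from a 2x2 table. Counts are
# hypothetical, not from the study.
import math

# rows: medication-recommendation alerts vs information-only alerts
# columns: immediate PCP action vs no immediate action
a, b = 20, 17    # medication-recommendation: action / no action
c, d = 120, 410  # information-only: action / no action

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 4.02 2.04 7.92
```

The study's multivariate ORs come from regression models that adjust for patient, provider, and contextual factors, so they will not exactly match a crude 2x2 calculation like this one.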

  • Source: Flickr; Copyright: Kanaka Rastamon; License: Creative Commons Attribution + Noncommercial (CC-BY-NC).

    Data Analysis and Visualization of Newspaper Articles on Thirdhand Smoke: A Topic Modeling Approach


    Background: Thirdhand smoke has been a growing topic for years in China. Thirdhand smoke (THS) consists of residual tobacco smoke pollutants that remain on surfaces and in dust. These pollutants are re-emitted as a gas or react with oxidants and other compounds in the environment to yield secondary pollutants. Objective: Collecting media reports on THS from major media outlets and analyzing this subject using topic modeling can facilitate a better understanding of the role that the media plays in communicating this health issue to the public. Methods: The data were retrieved from the Wiser and Factiva news databases. A preliminary investigation focused on articles dated between January 1, 2013, and December 31, 2017. Use of Latent Dirichlet Allocation yielded the top 10 topics about THS. The use of the modified LDAvis tool enabled an overall view of the topic model, which visualizes different topics as circles. Multidimensional scaling was used to represent the intertopic distances on a two-dimensional plane. Results: We found 745 articles dated between January 1, 2013, and December 31, 2017. The United States ranked first in terms of publications (152 articles on THS from 2013-2017). We found 279 news reports about THS from the Chinese media over the same period and 363 news reports from the United States. Given our analysis of the percentage of news related to THS in China, Topic 1 (Cancer) was the most popular among the topics and was mentioned in 31.9% of all news stories. Topic 2 (Control of quitting smoking) was related to roughly 15% of news items on THS. Conclusions: Data analysis and the visualization of news articles can generate useful information. Our study shows that topic modeling can offer insights into understanding news reports related to THS. This analysis of media trends indicated that related diseases, air and particulate matter (PM2.5), and control and restrictions are the major concerns of the Chinese media reporting on THS. 
The Chinese press still needs to consider fuller reports on THS based on scientific evidence and with less focus on sensational headlines. We recommend that additional studies be conducted related to sentiment analysis of news data to verify and measure the influence of THS-related topics.
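The topic-modeling step described above (Latent Dirichlet Allocation over a news corpus, then inspecting per-document topic distributions) can be sketched with scikit-learn. The four toy documents stand in for the 745 Wiser/Factiva articles, and two topics stand in for the study's ten.

```python
# Sketch: LDA topic modeling on a toy thirdhand-smoke corpus (not the
# study's news data; the study used 10 topics and LDAvis for visualization).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "thirdhand smoke residue settles on surfaces and dust",
    "smoking cessation programs help smokers quit",
    "cancer risk linked to residual tobacco pollutants",
    "quit smoking campaigns and tobacco control policy",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # rows: documents; columns: topic weights
print(doc_topics.shape)  # (4, 2)
```

The LDAvis-style view in the paper then plots each topic as a circle, with intertopic distances embedded in two dimensions by multidimensional scaling.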

  • Source: Wikimedia Commons; Copyright: Intel Free Press; License: Creative Commons Attribution + ShareAlike (CC-BY-SA).

    The Connected Intensive Care Unit Patient: Exploratory Analyses and Cohort Discovery From a Critical Care Telemedicine Database


    Background: Many intensive care units (ICUs) utilize telemedicine in response to an expanding critical care patient population, off-hours coverage, and intensivist shortages, particularly in rural facilities. Advances in digital health technologies, among other reasons, have led to the integration of active, well-networked critical care telemedicine (tele-ICU) systems across the United States, which in turn, provide the ability to generate large-scale remote monitoring data from critically ill patients. Objective: The objective of this study was to explore opportunities and challenges of utilizing multisite, multimodal data acquired through critical care telemedicine. Using a publicly available tele-ICU, or electronic ICU (eICU), database, we illustrated the quality and potential uses of remote monitoring data, including cohort discovery for secondary research. Methods: Exploratory analyses were performed on the eICU Collaborative Research Database that includes deidentified clinical data collected from adult patients admitted to ICUs between 2014 and 2015. Patient and ICU characteristics, top admission diagnoses, and predictions from clinical scoring systems were extracted and analyzed. Additionally, a case study on respiratory failure patients was conducted to demonstrate research prospects using tele-ICU data. Results: The eICU database spans more than 200 hospitals and over 139,000 ICU patients across the United States with wide-ranging clinical data and diagnoses. Although mixed medical-surgical ICU was the most common critical care setting, patients with cardiovascular conditions accounted for more than 20% of ICU stays, and those with neurological or respiratory illness accounted for nearly 15% of ICU unit stays. The case study on respiratory failure patients showed that cohort discovery using the eICU database can be highly specific, albeit potentially limiting in terms of data provenance and sparsity for certain types of clinical questions. 
Conclusions: Large-scale remote monitoring data sources, such as the eICU database, have a strong potential to advance the role of critical care telemedicine by serving as a testbed for secondary research as well as for developing and testing tools, including predictive and prescriptive analytical solutions and decision support systems. The resulting tools will also inform coordination of care for critically ill patients, intensivist coverage, and the overall process of critical care telemedicine.
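Cohort discovery of the kind described (e.g., the respiratory failure case study) typically reduces to filtering the eICU patient table on diagnosis and demographics. The column names below mirror the public eICU schema, but the rows are fabricated for illustration.

```python
# Sketch: cohort discovery on an eICU-style patient table. Column names
# follow the eICU schema; the rows are invented.
import pandas as pd

stays = pd.DataFrame({
    "patientunitstayid": [1, 2, 3, 4, 5],
    "apacheadmissiondx": [
        "Respiratory failure, acute",
        "Sepsis, pulmonary",
        "Respiratory failure, chronic",
        "CVA, cerebrovascular accident",
        "CHF, congestive heart failure",
    ],
    "age": [67, 54, 71, 80, 62],
})

# example cohort: respiratory failure admissions aged 60 or older
cohort = stays[
    stays["apacheadmissiondx"].str.contains("Respiratory failure")
    & (stays["age"] >= 60)
]
print(len(cohort))  # 2
```

As the abstract cautions, such string-based cohorts can be highly specific yet limited by data provenance and sparsity, so diagnosis filters usually need validation against other tables (e.g., structured diagnosis codes).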

  • Source: Pexels; License: Licensed by JMIR.

    Information Technology–Assisted Treatment Planning and Performance Assessment for Severe Thalassemia Care in Low- and Middle-Income Countries:...


    Background: Successful models of information and communication technology (ICT) applied to cost-effective delivery of quality care in low- and middle-income countries (LMIC) are an increasing necessity. Severe thalassemia is one of the most common life-threatening noncommunicable diseases of children globally. Objective: The aim was to study the impact of ICT on quality of care for severe thalassemia patients in LMIC. Methods: A total of 1110 patients with severe thalassemia from five centers in India were followed over a 1-year period. The impact of consistent use of a Web-based platform designed to assist comprehensive management of severe thalassemia (ThalCare) on key indicators of quality of care such as minimum (pretransfusion) hemoglobin, serum ferritin, liver size, and spleen size were assessed. Results: Overall improvements in initial hemoglobin, ferritin, and liver and spleen size were significant (P<.001 for each). For four centers, the improvement in mean pretransfusion hemoglobin level was statistically significant (P<.001). Four of five centers achieved reduction in mean ferritin levels, with two displaying a significant drop in ferritin (P=.004 and P<.001). One of the five centers did not record liver and spleen size on palpation, but of the remaining four centers, two witnessed a large drop in liver and spleen size (P<.01), one witnessed moderate drop (P=.05 for liver; P=.03 for spleen size), while the fourth witnessed a moderate increase in liver size (P=.08) and insignificant change in spleen size (P=.12). Conclusions: Implementation of computer-assisted treatment planning and performance assessment consistently and positively impacted indexes reflecting effective delivery of care to patients suffering from severe thalassemia in LMIC.
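The before/after comparisons reported above (e.g., improvement in mean pretransfusion hemoglobin with P<.001) are paired analyses over the same patients. A minimal sketch with a paired t-test, on invented hemoglobin values rather than the study's 1110-patient data:

```python
# Sketch: paired comparison of pretransfusion hemoglobin before vs after
# one year on the ThalCare platform. Values are invented (g/dL).
from scipy.stats import ttest_rel

hb_before = [7.1, 6.8, 7.5, 6.9, 7.2, 7.0, 6.7, 7.3]
hb_after  = [8.4, 8.1, 8.6, 8.0, 8.5, 8.2, 7.9, 8.7]

t_stat, p_value = ttest_rel(hb_after, hb_before)
print(t_stat > 0, p_value < .001)  # True True
```

The same paired design applies to the ferritin, liver size, and spleen size endpoints, with the direction of the expected change reversed (a drop rather than a rise).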

  • Emergency department. Source: US Department of Homeland Security; Copyright: FEMA (Robert Kaufmann); License: Public Domain (CC0).

    Predicting Appropriate Hospital Admission of Emergency Department Patients with Bronchiolitis: Secondary Analysis


    Background: In children below the age of 2 years, bronchiolitis is the most common reason for hospitalization. Each year in the United States, bronchiolitis causes 287,000 emergency department visits, 32%-40% of which result in hospitalization. Due to a lack of evidence and objective criteria for managing bronchiolitis, clinicians often make emergency department disposition decisions on hospitalization or discharge to home subjectively, leading to large practice variation. Our recent study provided the first operational definition of appropriate hospital admission for emergency department patients with bronchiolitis and showed that 6.08% of emergency department disposition decisions for bronchiolitis were inappropriate. An accurate model for predicting appropriate hospital admission can guide emergency department disposition decisions for bronchiolitis and improve outcomes, but has not been developed thus far. Objective: The objective of this study was to develop a reasonably accurate model for predicting appropriate hospital admission. Methods: Using Intermountain Healthcare data from 2011-2014, we developed the first machine learning classification model to predict appropriate hospital admission for emergency department patients with bronchiolitis. Results: Our model achieved an accuracy of 90.66% (3242/3576, 95% CI: 89.68-91.64), a sensitivity of 92.09% (1083/1176, 95% CI: 90.33-93.56), a specificity of 89.96% (2159/2400, 95% CI: 88.69-91.17), and an area under the receiver operating characteristic curve of 0.960 (95% CI: 0.954-0.966). We identified possible improvements to the model to guide future research on this topic. Conclusions: Our model has good accuracy for predicting appropriate hospital admission for emergency department patients with bronchiolitis. With further improvement, our model could serve as a foundation for building decision-support tools to guide disposition decisions for children with bronchiolitis presenting to emergency departments. 
International Registered Report Identifier (IRRID): RR2-10.2196/resprot.5155
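The accuracy figure above is reported as a proportion with its 95% CI (90.66%, 3242/3576). A normal-approximation (Wald) interval for a proportion recovers numbers close to the reported 89.68 to 91.64; the authors may have used a slightly different interval method.

```python
# Sketch: Wald 95% CI for a classification accuracy reported as a
# proportion, using the numerator/denominator from the abstract.
import math

correct, total = 3242, 3576
p = correct / total
se = math.sqrt(p * (1 - p) / total)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(round(100 * p, 2), round(100 * lo, 2), round(100 * hi, 2))
```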

  • Source: iStock by Getty Images; Copyright: Cecilie_Arcurs; License: Licensed by the authors.

    SNOMED CT Concept Hierarchies for Computable Clinical Phenotypes From Electronic Health Record Data: Comparison of Intensional Versus Extensional Value Sets


    Background: Defining clinical phenotypes from electronic health record (EHR)–derived data proves crucial for clinical decision support, population health endeavors, and translational research. EHR diagnoses now commonly draw from a finely grained clinical terminology—either native SNOMED CT or a vendor-supplied terminology mapped to SNOMED CT concepts as the standard for EHR interoperability. Accordingly, electronic clinical quality measures (eCQMs) increasingly define clinical phenotypes with SNOMED CT value sets. The work of creating and maintaining list-based value sets proves daunting, as does ensuring that their contents accurately represent the clinically intended condition. Objective: The goal of the research was to compare an intensional (concept hierarchy-based) versus extensional (list-based) value set approach to defining clinical phenotypes using SNOMED CT–encoded data from EHRs by evaluating value set conciseness, time to create, and completeness. Methods: Starting from published Centers for Medicare and Medicaid Services (CMS) high-priority eCQMs, we selected 10 clinical conditions referenced by those eCQMs. For each, the published SNOMED CT list-based (extensional) value set was downloaded from the Value Set Authority Center (VSAC). Ten corresponding SNOMED CT hierarchy-based intensional value sets for the same conditions were identified within our EHR. From each hierarchy-based intensional value set, an exactly equivalent full extensional value set was derived enumerating all included descendant SNOMED CT concepts. Comparisons were then made between (1) VSAC-downloaded list-based (extensional) value sets, (2) corresponding hierarchy-based intensional value sets for the same conditions, and (3) derived list-based (extensional) value sets exactly equivalent to the hierarchy-based intensional value sets. Value set conciseness was assessed by the number of SNOMED CT concepts needed for definition. 
Time to construct the value sets for local use was measured. Value set completeness was assessed by comparing contents of the downloaded extensional versus intensional value sets. Two measures of content completeness were made: for individual SNOMED CT concepts and for the mapped diagnosis clinical terms available for selection within the EHR by clinicians. Results: The 10 hierarchy-based intensional value sets proved far simpler and faster to construct than exactly equivalent derived extensional value set lists, requiring a median 3 versus 78 concepts to define and 5 versus 37 minutes to build. The hierarchy-based intensional value sets also proved more complete: in comparison, the 10 downloaded 2018 extensional value sets contained a median of just 35% of the intensional value sets’ SNOMED CT concepts and 65% of mapped EHR clinical terms. Conclusions: In the EHR era, defining conditions preferentially should employ SNOMED CT concept hierarchy-based (intensional) value sets rather than extensional lists. By doing so, clinical guideline and eCQM authors can more readily engage specialists in vetting condition subtypes to include and exclude, and streamline broad EHR implementation of condition-specific decision support promoting guideline adherence for patient benefit.
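The core operation contrasted above, deriving an extensional list from an intensional (hierarchy-based) definition, is a transitive closure over parent-child concept links. The mini ontology below is hypothetical, not real SNOMED CT content, but the expansion logic is the same.

```python
# Sketch: expanding an intensional value set (root concept + descendants)
# into its equivalent extensional list. Hypothetical mini ontology.
from collections import deque

children = {  # parent concept -> child concepts
    "Diabetes mellitus": ["Type 1 diabetes", "Type 2 diabetes"],
    "Type 2 diabetes": ["T2DM with renal complication"],
}

def expand(root: str) -> set:
    """Return the root plus every descendant concept (breadth-first)."""
    seen, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        for child in children.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

extensional = expand("Diabetes mellitus")
print(len(extensional))  # 4
```

The study's finding follows directly from this structure: the intensional definition is one root concept (plus any exclusions), while the derived extensional list must enumerate every descendant, a median of 78 concepts for the 10 conditions studied.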

  • Veterans Day 2014 Wreath Laying Ceremony for Sgt Leonard Matlovich & LGBT veterans. Source: Flickr; Copyright: Elvert Barnes; License: Creative Commons Attribution + ShareAlike (CC-BY-SA).

    Utilization of the Veterans Affairs’ Transgender E-consultation Program by Health Care Providers: Mixed-Methods Study


    Background: In 2015, the Department of Veterans Affairs (VA) nationally implemented a transgender e-consultation (e-consult) program with expert clinical guidance for providers. Objective: This mixed-methods project aimed to describe providers’ program experiences, reasons for nonuse of the program, and ways to improve the program use. Methods: From January to May 2017, 15 urban and rural VA providers who submitted at least one e-consult in the last year participated in semistructured interviews about their program experiences, which were analyzed using content analysis. From November to December 2017, 53 providers who encountered transgender patients but did not utilize the program participated in a brief online survey on the reasons for nonuse of the program and the facilitators encouraging use. Results: Qualitative analysis showed that providers learned of the program through email; colleagues; the electronic health record (EHR) system; and participation in the VA Lesbian, Gay, Bisexual, and Transgender committees or educational trainings. Providers used the program to establish care plans, hormone therapy recommendations, sexual and reproductive health education, surgical treatment education, patient-provider communication guidance, and second opinions. The facilitators of program use included understandable recommendations, ease of use through the EHR system, and status as the only transgender resource for rural providers. Barriers to use included time constraints, communication-related problems with the e-consult, impractical recommendations for underresourced sites, and misunderstanding of the e-consult purpose. Suggestions for improvement included addition of concise or sectioned responses, expansion of program awareness among providers or patients, designation of a follow-up contact person, and increase in provider education about transgender veterans and related care. 
Quantitative analysis showed that the common reasons for nonuse of the program were no knowledge of the program (54%), no need of the program (32%), and receipt of help from a colleague outside of e-consult (24%). Common suggestions to improve the program use in quantitative analyses included provision of more information about where to find e-consult in the chart, guidance on talking with patients about the program, and e-mail announcements to improve provider awareness of the program. Post hoc exploratory analyses showed no differences between urban and rural providers. Conclusions: The VA transgender e-consult program is useful for providers, but there are several barriers to implementing recommendations, some of which are especially challenging for rural providers. Addressing the identified barriers and enhancing the facilitators may improve program use and quality care for transgender veterans.

  • Source: Freepik; Copyright: Freepik; License: Licensed by JMIR.

    Improving Digital Hospital Transformation: Development of an Outcomes-Based Infrastructure Maturity Assessment Framework


    Background: Digital transformation in health care is being driven by the need to improve quality, reduce costs, and enhance the patient experience of health care delivery. It does this through both the direct intervention of technology to create new diagnostic and treatment opportunities and also through the improved use of information to create more engaging and efficient care processes. Objective: In a modern digital hospital, improved clinical and business processes are often driven through enhancing the information flows that support them. To understand an organization’s ability to transform their information flows requires a clear understanding of the capabilities of an organization’s information technology infrastructure. To date, hospital facilities have been challenged by the absence of uniform ways of describing this infrastructure that would enable them to benchmark where they are and create a vision of where they would like to be. While there is an industry assessment measure for electronic medical record (EMR) adoption using the Healthcare Information and Management Systems Society Analytics EMR Adoption Model, there is no equivalent for assessing the infrastructure and associated technology capabilities for digital hospitals. Our aim is to fill this gap, as hospital administrators and clinicians need to know how and why to invest in information infrastructure to support health information technology that benefits patient safety and care. Methods: Based on an operational framework for the Capability Maturity Model, devised specifically for health care, we applied information use characteristics to define eight information systems maturity levels and associated technology infrastructure capabilities. These levels are mapped to user experiences to create a linkage between technology infrastructure and experience outcomes. 
Subsequently, specific technology capabilities are deconstructed to identify the technology features required to meet each maturity level. Results: The resulting assessment framework clearly defines 164 individual capabilities across the five technology domains and eight maturity levels for hospital infrastructure. These level-dependent capabilities characterize the ability of the hospital’s information infrastructure to support the business of digital hospitals including clinical and administrative requirements. Further, it allows the addition of a scoring calculation for each capability, domain, and the overall infrastructure, and it identifies critical requirements to meet each of the maturity levels. Conclusions: This new Infrastructure Maturity Assessment framework will allow digital hospitals to assess the maturity of their infrastructure in terms of their digital transformation aligning to business outcomes and supporting the desired level of clinical and operational competency. It provides the ability to establish an international benchmark of hospital infrastructure performance, while identifying weaknesses in current infrastructure capability. Further, it provides a business case justification through increased functionality and a roadmap for subsequent digital transformation while moving from one maturity level to the next. As such, this framework will encourage and guide information-driven, digital transformation in health care.
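The scoring idea described (per-capability checks rolled up into domain scores and an overall maturity level) can be sketched generically. The domains, levels, and capabilities below are invented stand-ins, not the framework's actual 164 items across five domains and eight levels.

```python
# Sketch: rolling up capability checks into domain scores and an overall
# maturity level, in the spirit of a Capability Maturity Model assessment.
# Domains/levels/capabilities are hypothetical.
capabilities = {
    # (domain, maturity_level): list of met/not-met capability flags
    ("Network", 1): [True, True],
    ("Network", 2): [True, False],
    ("Security", 1): [True, True],
    ("Security", 2): [True, True],
}

def domain_score(domain: str) -> float:
    """Fraction of a domain's capabilities that are met, across all levels."""
    met = [m for (d, _), flags in capabilities.items() if d == domain
           for m in flags]
    return sum(met) / len(met)

def overall_level(max_level: int = 2) -> int:
    """Highest level whose capabilities are all met, with no gaps below."""
    level = 0
    for lvl in range(1, max_level + 1):
        flags = [m for (_, l), fs in capabilities.items() if l == lvl
                 for m in fs]
        if flags and all(flags):
            level = lvl
        else:
            break
    return level

print(domain_score("Network"), overall_level())  # 0.75 1
```

A roll-up like this is what lets the framework both benchmark an overall maturity level and pinpoint the specific missing capabilities blocking the next level.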

  • Source: Unsplash; Copyright: rawpixel; License: Licensed by JMIR.

    A New Insight Into Missing Data in Intensive Care Unit Patient Profiles: Observational Study


    Background: The data missing from patient profiles in intensive care units (ICUs) are substantial and unavoidable. However, this incompleteness is not always random or because of imperfections in the data collection process. Objective: This study aimed to investigate the potential hidden information in data missing from electronic health records (EHRs) in an ICU and examine whether the presence or missingness of a variable itself can convey information about the patient health status. Methods: Daily retrieval of laboratory test (LT) measurements from the Medical Information Mart for Intensive Care III database was set as our reference for defining complete patient profiles. Missingness indicators were introduced as a way of representing presence or absence of the LTs in a patient profile. Thereafter, various feature selection methods (filter and embedded feature selection methods) were used to examine the predictive power of missingness indicators. Finally, a set of well-known prediction models (logistic regression [LR], decision tree, and random forest) were used to evaluate whether the absence status itself of a variable recording can provide predictive power. We also examined the utility of missingness indicators in improving predictive performance when used with observed laboratory measurements as model input. The outcome of interest was in-hospital mortality and mortality at 30 days after ICU discharge. Results: Regardless of mortality type or ICU day, more than 40% of the predictors selected by feature selection methods were missingness indicators. Notably, employing missingness indicators as the only predictors achieved reasonable mortality prediction on all days and for all mortality types (for instance, in 30-day mortality prediction with LR, we achieved area under the curve of the receiver operating characteristic [AUROC] of 0.6836±0.012). 
Including indicators with observed measurements in the prediction models also improved the AUROC; the maximum improvement was 0.0426. Indicators also improved the AUROC for Simplified Acute Physiology Score II model—a well-known ICU severity of illness score—confirming the additive information of the indicators (AUROC of 0.8045±0.0109 for 30-day mortality prediction for LR). Conclusions: Our study demonstrated that the presence or absence of LT measurements is informative and can be considered a potential predictor of in-hospital and 30-day mortality. The comparative analysis of prediction models also showed statistically significant prediction improvement when indicators were included. Moreover, missing data might reflect the opinions of examining clinicians. Therefore, the absence of measurements can be informative in ICUs and has predictive power beyond the measured data themselves. This initial case study shows promise for more in-depth analysis of missing data and its informativeness in ICUs. Future studies are needed to generalize these results.
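The missingness-indicator idea above, encoding whether each lab was measured as a binary feature and predicting mortality from the indicators alone, can be sketched as follows. The data are synthetic stand-ins for MIMIC-III labs, constructed so that sicker patients have more labs ordered.

```python
# Sketch: missingness indicators as predictors. Synthetic data in which
# lab ordering (presence of a value) is tied to patient acuity.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
labs = pd.DataFrame({
    "lactate": rng.normal(2.0, 0.5, n),
    "troponin": rng.normal(0.01, 0.005, n),
})
sick = rng.random(n) < 0.4
# less-sick patients mostly do not get these labs drawn
labs.loc[~sick & (rng.random(n) < 0.8), "lactate"] = np.nan
labs.loc[~sick & (rng.random(n) < 0.9), "troponin"] = np.nan
y = (sick & (rng.random(n) < 0.6)).astype(int)  # synthetic mortality

indicators = labs.notna().astype(int)  # 1 = lab was measured
model = LogisticRegression().fit(indicators, y)
auc = roc_auc_score(y, model.predict_proba(indicators)[:, 1])
print(auc > 0.5)  # True
```

In the study, such indicators were used both alone and alongside the observed lab values, where they added up to 0.0426 AUROC over the measurements themselves.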


    Using Blockchain Technology to Manage Clinical Trials Data: A Proof-of-Concept Study


    Background: Blockchain technology is emerging as an innovative tool in data and software security. Objective: This study aims to explore the role of blockchain in supporting clinical trials data management and to develop a proof-of-concept implementation of a patient-facing and researcher-facing system. Methods: Blockchain-based Smart Contracts were built using the Ethereum platform. Results: We describe BlockTrial, a system that uses a Web-based interface to allow users to run trial-related Smart Contracts on an Ethereum network. Functions allow patients to grant researchers access to their data and allow researchers to submit queries for data that are stored off chain. As a type of distributed ledger, the system generates a durable and transparent log of these and other transactions. BlockTrial could be used to increase the trustworthiness of data collected during clinical research, with benefits to researchers, regulators, and drug companies alike. In addition, the system could empower patients to become more active and fully informed partners in research. Conclusions: Blockchain technology presents an opportunity to address some of the common threats to the integrity of data collected in clinical trials and to ensure that the analysis of these data complies with prespecified plans. Further technical work is needed to add functions, and policies must be developed to determine the optimal models for participation in the system by its various stakeholders.
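The "durable and transparent log of transactions" that BlockTrial derives from the blockchain can be illustrated with a toy hash-chained ledger in Python. This is a simplified stand-in for the Ethereum Smart Contracts described above, not their actual implementation; all names and actions are hypothetical:

```python
# Toy hash-chained ledger: each entry embeds the hash of the previous
# entry, so any later alteration of a recorded transaction breaks the
# chain and is detectable. A sketch of the tamper-evidence property a
# blockchain provides, not the BlockTrial contract code.
import hashlib
import json

class TrialLedger:
    def __init__(self):
        self.entries = []

    def _hash(self, entry):
        payload = json.dumps(entry, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def record(self, action, actor, detail):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"action": action, "actor": actor, "detail": detail, "prev": prev}
        self.entries.append({**body, "hash": self._hash(body)})

    def verify(self):
        """True iff no recorded transaction has been altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("action", "actor", "detail", "prev")}
            if e["prev"] != prev or e["hash"] != self._hash(body):
                return False
            prev = e["hash"]
        return True

ledger = TrialLedger()
ledger.record("grant_access", "patient_042", "researcher_A")   # patient grants access
ledger.record("query_data", "researcher_A", "off_chain_labs")  # researcher queries
```

On a real Ethereum network this integrity guarantee comes from the distributed consensus protocol rather than a single in-process chain.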


Latest Submissions Open for Peer-Review:

  • Weight loss trajectories after obesity/bariatric surgery: a mathematical model satisfactorily classifies 93% of patients.

    Date Submitted: Feb 9, 2019

    Open Peer Review Period: Feb 12, 2019 - Apr 9, 2019

    Background: Obesity surgery has proven its effectiveness in weight loss. However, after a loss phase of about 12-18 months, between 20% and 40% of patients regain weight. Objective: Prediction of weight evolution is therefore useful for early detection of weight regain. Methods: This was a monocentric retrospective study with calculation of the weight trajectories of patients who had undergone gastric bypass surgery. Data on 795 patients over an interval of 2 years allowed modelling of weight trajectories by hierarchical cluster analysis (HCA) minimizing the intragroup distance according to Ward's criterion. Clinical judgement was used to finalise the identification of clinically relevant representative trajectories. The model was validated on a group of 381 patients for whom the observed weight at 18 months was compared with the predicted weight, and the weights were transformed according to Reinhold's classification of results. Results: Two successive HCAs produced 14 representative trajectories, distributed among 4 clinically relevant families: 6 of the 14 trajectories decreased systematically over time or decreased and then stagnated; 4 of the 14 decreased, then increased, then decreased again; 2 of the 14 decreased and then increased; and 2 of the 14 stagnated at first and then began a decline. Comparing the observed weight with that estimated by the model made it possible to correctly classify 97.6% of persons with excess weight loss (EWL) >50% and more than 58% of persons with EWL between 25% and 50%. For persons with EWL >50%, weight data over the first 6 months were adequate to correctly predict the observed result. Conclusions: This model allowed correct classification of persons with EWL >50%. Other studies are needed to validate it in other populations, with other types of surgery and other medical-surgical teams.
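The Ward-criterion agglomerative clustering this abstract describes can be sketched in pure Python. The toy trajectories, the fixed target cluster count, and the greedy pairwise search are illustrative simplifications of the study's two-stage HCA:

```python
# Greedy agglomerative clustering of weight trajectories using Ward's
# minimum-variance merge criterion:
#   delta_SSE(A, B) = |A||B| / (|A|+|B|) * ||centroid_A - centroid_B||^2
# Trajectories are short lists of weights over time (hypothetical data).

def ward_cluster(trajectories, n_clusters):
    clusters = [{"members": [i], "centroid": list(t)}
                for i, t in enumerate(trajectories)]

    def merge_cost(a, b):
        na, nb = len(a["members"]), len(b["members"])
        d2 = sum((x - y) ** 2 for x, y in zip(a["centroid"], b["centroid"]))
        return na * nb / (na + nb) * d2

    while len(clusters) > n_clusters:
        # Find the cheapest pair to merge under Ward's criterion.
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: merge_cost(clusters[ij[0]], clusters[ij[1]]))
        a, b = clusters[i], clusters[j]
        na, nb = len(a["members"]), len(b["members"])
        merged = {"members": a["members"] + b["members"],
                  "centroid": [(na * x + nb * y) / (na + nb)
                               for x, y in zip(a["centroid"], b["centroid"])]}
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return [sorted(c["members"]) for c in clusters]

# Two steadily decreasing and two regain-shaped trajectories (kg over time):
trajs = [[120, 100, 90], [118, 99, 91], [115, 95, 105], [117, 96, 107]]
groups = ward_cluster(trajs, 2)
```

In practice the study let clinical judgement, not a preset cluster count, decide which representative trajectories were retained.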

  • CirrODS: a web-based clinical decision and workflow support tool for evidence-based management of patients with cirrhosis

    Date Submitted: Feb 7, 2019

    Open Peer Review Period: Feb 11, 2019 - Apr 8, 2019

    Background: There are gaps in delivering evidence-based care for patients with chronic liver disease and cirrhosis. Objective: Our objective was to use interactive user-centered design methods to develop the CirrODS (Cirrhosis Order set and clinical Decision Support) tool in order to improve clinical decision-making and workflow. Methods: Two workgroups were convened with clinicians, user-experience designers, human-factors and health-services researchers, and information technologists to create user-interface designs. CirrODS prototypes underwent several rounds of formative design. Physicians (n=20) at three hospitals were provided with clinical scenarios of patients with cirrhosis, and the admission orders made with and without the CirrODS tool were compared. The physicians rated their experience using CirrODS and provided comments, which we coded into categories and themes. We assessed the safety, usability, and quality of CirrODS using qualitative and quantitative methods. Results: We created an interactive CirrODS prototype that displays an alert when existing electronic data indicate a patient is at risk for cirrhosis. The tool consists of two primary frames, presenting relevant patient data and allowing recommended evidence-based tests and treatments to be ordered and categorized. Physicians viewed the tool positively and suggested that it would be most useful at the time of admission. When using the tool, the clinicians placed fewer orders than they placed when not using the tool, but more of the orders placed were considered to be “high priority” when the tool was used than when it was not used. The physicians’ ratings of CirrODS indicated above average usability. Conclusions: We developed a novel web-based combined clinical decision-making and workflow support tool to alert and assist clinicians caring for patients with cirrhosis. Further studies are underway to assess the impact on quality of care for patients with cirrhosis in actual practice.

  • Identification of Knee Osteoarthritis Based on Bayesian Network: A Pilot Study

    Date Submitted: Jan 30, 2019

    Open Peer Review Period: Feb 4, 2019 - Apr 1, 2019

    Background: Early identification of knee osteoarthritis (OA) can improve treatment outcomes and reduce medical costs. However, there are major limitations amongst existing classification or prediction models, including opaque data processing and complicated dataset attributes, which hinder their application in clinical practice. Objective: To develop a Bayesian network (BN)-based classification model to classify people with knee OA. The proposed model can be treated as a cheap and portable prescreening tool that provides decision support for health professionals. Methods: A classification model was developed to classify knee OA based on a BN. The model's structure starts from a three-level BN and is then refined by the Bayesian Search (BS) learning algorithm; its parameters are determined by the Expectation-Maximization (EM) algorithm. A total of 157 instances were adopted as the dataset, which includes backgrounds (5 attributes, the basic characteristics of subjects), the target disease (knee OA), and predictors (13 attributes, the scores of physical fitness tests). The performance of the model was evaluated by classification accuracy, area under the curve (AUC), specificity, and sensitivity, and was also compared with other well-known classification models. A test was also performed to explore whether physical fitness tests could improve the performance of the proposed model. Results: The proposed model's results are higher than, or equal to, the mean scores of the other classification models: 0.754 for accuracy, 0.78 for AUC, 0.78 for specificity, and 0.73 for sensitivity. The proposed model also shows significant improvement over the traditional BN model: a 6.35% increase in accuracy (from 0.709 to 0.754), a 4.00% increase in AUC (from 0.75 to 0.78), a 6.85% increase in specificity (from 0.73 to 0.78), and a 5.80% increase in sensitivity (from 0.69 to 0.73). Furthermore, the test results show that the performance of the proposed model is largely enhanced by the physical fitness tests on three evaluation indexes: a 10.56% increase in accuracy (from 0.682 to 0.754), a 16.42% increase in AUC (from 0.67 to 0.78), and a 30.00% increase in specificity (from 0.60 to 0.78). Conclusions: The proposed model presents a promising method to classify people with knee OA when compared to other classification models and the traditional BN model. It could be implemented in clinical practice as a prescreening tool for knee OA, which could not only improve the quality of healthcare for elderly people but also reduce overall medical costs.
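A Bayesian-network classifier of the kind described above can be illustrated with its simplest special case: a discrete naive Bayes model, i.e. a BN whose only arcs run from the class node to each predictor. The fitness-test features and training rows below are hypothetical; the study's actual structure is learned via Bayesian Search and its parameters via EM:

```python
# Simplified BN classifier: naive Bayes over discretized predictors.
# Hypothetical data; a sketch of the classification principle, not the
# study's three-level BN.
import math
from collections import defaultdict

class NaiveBayesBN:
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        # cpt[(feature_index, class)][value] -> count
        self.cpt = defaultdict(lambda: defaultdict(int))
        for row, label in zip(X, y):
            for i, v in enumerate(row):
                self.cpt[(i, label)][v] += 1
        return self

    def predict(self, row):
        def log_post(c):
            score = math.log(self.prior[c])
            for i, v in enumerate(row):
                counts = self.cpt[(i, c)]
                # Laplace smoothing, assuming binary-valued features
                score += math.log((counts[v] + 1) / (sum(counts.values()) + 2))
            return score
        return max(self.classes, key=log_post)

# Hypothetical discretized fitness-test scores ("low"/"high") vs knee OA:
X = [["low", "low"], ["low", "high"], ["high", "high"], ["high", "low"]]
y = ["OA", "OA", "healthy", "healthy"]
model = NaiveBayesBN().fit(X, y)
```

A learned BN generalizes this by allowing arcs between predictors, which is what the Bayesian Search step discovers from data.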

  • Core Data Elements in Acute Myeloid Leukemia

    Date Submitted: Jan 30, 2019

    Open Peer Review Period: Feb 4, 2019 - Apr 1, 2019

    Background: For cancer domains such as Acute Myeloid Leukemia (AML), a large set of data elements is obtained from different institutions with heterogeneous data definitions within one patient course. The lack of clinical data harmonization impedes cross-institutional electronic data exchange and future meta-analyses. Objective: To identify and harmonize a semantic core of common data elements (CDEs) in clinical routine and research documentation, based on a systematic metadata analysis of existing documentation models. Methods: Lists of relevant data items were collected and reviewed by hematologists from two university hospitals with regard to routine documentation and several case report forms of clinical trials for AML. In addition, existing registries and international recommendations were included. Data items were coded to medical concepts via the Unified Medical Language System (UMLS) and then systematically analyzed for concept overlaps and identification of the most frequent concepts. The most frequent concepts were then implemented as data elements in the standardized Operational Data Model format of the Clinical Data Interchange Standards Consortium. Results: A total of 3265 medical concepts were identified, of which 1414 were unique. Among these 1414 unique concepts, the 50 most frequent cover 27.0% of all concept occurrences within the collected AML documentation, and the top 100 concepts represent 39.5%. The implementation of the common data elements is available on a European research infrastructure and can be downloaded in different formats for reuse in different electronic data capture systems. Conclusions: Information management is a complex process for research-intense disease entities such as AML, which are associated with a large set of laboratory-based diagnostics and different treatment options. Our systematic UMLS-based analysis revealed the existence of a core data set, and an exemplary reusable implementation for harmonized data capture is available on an established metadata repository.
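The frequency analysis behind the "top 50 cover 27.0%" result reduces to counting coded concept occurrences and measuring top-k coverage, which can be sketched as follows (the concept identifiers and occurrence list are illustrative, not the study's UMLS coding):

```python
# Count coded concept occurrences across documentation sources and
# report what share of all occurrences the k most frequent cover.
from collections import Counter

def top_k_coverage(concept_occurrences, k):
    """Fraction of all occurrences covered by the k most frequent concepts."""
    counts = Counter(concept_occurrences)
    covered = sum(n for _, n in counts.most_common(k))
    return covered / len(concept_occurrences)

# Illustrative occurrence list (UMLS-style CUIs, hypothetical mix):
occurrences = ["C0023418", "C0023418", "C0006826",
               "C0023418", "C0006826", "C0018270"]
coverage = top_k_coverage(occurrences, 1)  # share covered by the single top concept
```

Applied to the 3265 collected occurrences, the same computation with k=50 and k=100 would yield the 27.0% and 39.5% figures reported above.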

  • Impact on Readmission Reduction Among Heart Failure Patients Using Digital Health Monitoring: Feasibility and Adoption in a Real World Setting

    Date Submitted: Jan 31, 2019

    Open Peer Review Period: Feb 1, 2019 - Mar 29, 2019

    Background: Congestive heart failure (CHF) is a condition that affects approximately 6.5 million people in the U.S., with a mortality rate of around 30%. With the incidence rate expected to rise by 46% to exceed 8 million cases by 2030, projections estimate that total CHF costs will increase to nearly $70 billion. Recently, the advent of remote monitoring technology has significantly broadened the scope of the physician's reach in chronic disease management. Objective: The goals of this project were to assess the feasibility of digital health monitoring in a real-world hospital setting, ascertain patient adoption, and evaluate the impact on the 30-day readmission rate as the primary outcome. Methods: A digital medicine software platform developed by Rx.Health, called RxUniverse, was used to prescribe the HealthPROMISE and iHealth mobile apps to patients' personal smartphones. Patients updated and recorded their CHF-related symptoms and quality-of-life measures daily in HealthPROMISE. Vital sign data, including blood pressure and weight, were collected through an ambulatory remote monitoring system that integrated the iHealth app and complementary consumer-grade Bluetooth-connected smart devices (blood pressure cuff and digital scale). Physicians were notified of abnormal patient blood pressure and weight change readings, and further action was left to the physician's discretion. We used statistical analyses to determine risk factors associated with 30-day all-cause readmission. Results: Overall, the HeartHealth project included 60 patients admitted to Mount Sinai Hospital for CHF. There were six 30-day hospital readmissions (a 10% 30-day readmission rate), compared with national readmission rates of around 25%. Single marital status (p=0.064) and history of percutaneous coronary intervention (p=0.075) were associated with readmission. Readmitted patients were also less likely to have been previously prescribed angiotensin converting enzyme inhibitors or angiotensin II receptor blockers (p=0.019). Notably, readmitted patients used the blood pressure and weight monitors less than non-readmitted patients, and patients aged under 70 used the monitors more frequently on average than those over 70, although these trends did not reach statistical significance. The percentage of patients using the monitors at least once dropped steadily from 83% in the first week after discharge to 46% in the fourth week. Additionally, 88% of patients used a monitor at least 4 times and 62% at least 10 times, with some patients using the monitors multiple times per day. Conclusions: Given the increasing burden of CHF, there is a need for an effective and sustainable remote monitoring system for CHF patients following hospital discharge. We identified clinical and social factors, as well as remote monitoring usage trends, that point to patient populations that could benefit most from the integration of daily remote monitoring. In addition, we demonstrated that interventions driven by real-time vitals data may greatly aid in reducing hospital readmissions and costs while improving patient outcomes. Future studies should seek to measure population-wide impact by expanding digital health remote monitoring enterprise-wide.
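The abnormal-reading notification described in the Methods amounts to a threshold rule over each day's vitals. A minimal sketch, with illustrative thresholds that are assumptions rather than the RxUniverse/iHealth logic (rapid weight gain is a common CHF decompensation sign):

```python
# Hypothetical alert rule for one day's remote reading: flag blood
# pressure outside a safe band and day-over-day weight gain beyond a
# limit. All thresholds are illustrative assumptions.

def check_reading(systolic, diastolic, weight_kg, prev_weight_kg,
                  bp_limits=(90, 160, 60, 100), max_daily_gain_kg=1.5):
    """Return a list of alert strings to forward to the physician."""
    lo_sys, hi_sys, lo_dia, hi_dia = bp_limits
    alerts = []
    if not lo_sys <= systolic <= hi_sys:
        alerts.append("systolic BP out of range")
    if not lo_dia <= diastolic <= hi_dia:
        alerts.append("diastolic BP out of range")
    if weight_kg - prev_weight_kg > max_daily_gain_kg:
        alerts.append("rapid weight gain")
    return alerts

# Elevated systolic pressure plus a 2 kg overnight gain triggers two alerts:
alerts = check_reading(systolic=170, diastolic=80,
                       weight_kg=84.0, prev_weight_kg=82.0)
```

As the abstract notes, the platform only notifies; the clinical response to an alert remains at the physician's discretion.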

  • The use of artificially-intelligent self-diagnosing digital platforms by the general public: A scoping review

    Date Submitted: Jan 20, 2019

    Open Peer Review Period: Jan 23, 2019 - Mar 20, 2019

    Background: Artificially-intelligent self-diagnosing digital platforms are becoming widely available to, and used by, the general population. Little is known about the body of knowledge surrounding this technology. Objective: The objectives of this scoping review were: 1) to systematically map the extent and nature of the literature and topic areas pertaining to digital platforms that use computerized algorithms to provide a list of potential diagnoses, and 2) to identify key knowledge gaps. Methods: The following databases were searched: ACM, IEEE, Google Scholar, Open Grey, and ProQuest Dissertations and Theses. The search strategy was developed and refined with the assistance of a librarian and consisted of three main concepts: 1) self-diagnosis; 2) digital platforms; and 3) the public or patients. The search generated 2,536 articles, of which 217 were duplicates. Following the Tricco et al. 2018 checklist, two researchers independently screened the titles and abstracts (n=2,316) and full texts (n=104). A total of 19 articles were included for review, and data were retrieved with a data-charting form that was pre-tested by the research team. Results: Included studies were mainly conducted in the US (n=10) or the UK (n=4). Topic areas included: accuracy or correspondence with a doctor's diagnosis (n=6), commentaries (n=2), regulation (n=3), sociological (n=2), user experience (n=2), theoretical (n=1), privacy and security (n=1), ethical (n=1), and design (n=1). Individuals who do not have access to health care, or who perceive themselves to have a stigmatizing condition, are more likely to use this technology. The accuracy of this technology in providing a correct first diagnosis varied substantially based on the disease examined and the platform used. Factors influencing accuracy include the design of the online platform and the sociodemographic profile of the user. Regulation of this technology is lacking in most parts of the world; however, regulations are currently under development. Conclusions: There are prominent research gaps in the literature surrounding the use of self-diagnosing AI digital platforms. Given the variety of digital platforms and the types of diseases they cover, measuring accuracy is cumbersome. More research is needed to inform regulations and to consider user experience.