JMIR Medical Informatics

Clinical informatics, decision support for health professionals, electronic health records, and eHealth infrastructures.

Editor-in-Chief:

Christian Lovis, MD, MPH, FACMI, Division of Medical Information Sciences, University Hospitals of Geneva (HUG), University of Geneva (UNIGE), Switzerland


Impact Factor: 3.1 | CiteScore: 7.9

JMIR Medical Informatics (JMI; ISSN 2291-9694; Journal Impact Factor™ 3.1, Journal Citation Reports™ from Clarivate, 2023; Editor-in-Chief: Christian Lovis, MD, MPH, FACMI) is an open-access, PubMed/SCIE-indexed journal that focuses on the challenges and impacts of clinical informatics, the digitalization of care processes, and clinical and health data pipelines from acquisition to reuse, including semantics, natural language processing, natural interactions, meaningful analytics and decision support, electronic health records, infrastructures, implementation, and evaluation (see Focus and Scope).

JMIR Medical Informatics adheres to rigorous quality standards, involving a rapid and thorough peer-review process, professional copyediting, and professional production of PDF, XHTML, and XML proofs. The journal is indexed in PubMed, PubMed Central, DOAJ, Scopus, and SCIE (Clarivate).

With a CiteScore of 7.9, JMIR Medical Informatics ranks in the 78th percentile (#30 of 138) in Health Informatics and the 77th percentile (#14 of 59) in Health Information Management, placing it in Q1 of both fields, according to Scopus data.

Recent Articles

Natural Language Processing

Large language models (LLMs) have achieved great progress in natural language processing tasks and have demonstrated potential for clinical applications. Despite these capabilities, LLMs in the medical domain are prone to generating hallucinations (responses that are not fully reliable). Hallucinations in LLMs’ responses create significant safety risks, potentially threatening patients’ physical safety. To detect and prevent this safety risk, it is essential to evaluate LLMs in the medical domain and to build a systematic evaluation framework.

Viewpoints on and Experiences with Digital Technologies in Health

Integrating machine learning (ML) models into clinical practice presents the challenge of maintaining their efficacy over time. While the existing literature offers valuable strategies for detecting declining model performance, there is a need to document the broader challenges and solutions associated with the real-world development and integration of model monitoring solutions. This work details the development and use of a platform for monitoring the performance of a production-level ML model operating at Mayo Clinic. In this paper, we aimed to provide a series of considerations and guidelines necessary for integrating such a platform into a team’s technical infrastructure and workflow. We have documented our experiences with this integration process, discussed the broader challenges encountered with real-world implementation and maintenance, and included the source code for the platform. Our monitoring platform was built as an R Shiny application, developed and implemented over the course of 6 months. The platform has been used and maintained for 2 years and is still in use as of July 2023. The considerations necessary for the implementation of the monitoring platform center around 4 pillars: feasibility (what resources can be used for platform development?); design (through what statistics or models will the model be monitored, and how will these results be efficiently displayed to the end user?); implementation (how will this platform be built, and where will it exist within the IT ecosystem?); and policy (based on monitoring feedback, when and what actions will be taken to fix problems, and how will these problems be communicated to clinical staff?). While much of the literature surrounding ML performance monitoring emphasizes methodological approaches for capturing changes in performance, a battery of other challenges and considerations must be addressed for successful real-world implementation.
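
The article's platform is an R Shiny application; as a hedged, language-neutral illustration of the "design" pillar above, the Python sketch below computes the kind of rolling performance statistic such a platform might display, using hypothetical column names (y_true, y_score, scored_at) rather than the authors' actual data model.

```python
# Minimal sketch (not the authors' code): a monthly performance check of the
# kind a monitoring dashboard might show. Assumes a scoring log with
# hypothetical columns: y_true (observed outcome), y_score (model probability),
# and scored_at (timestamp of the prediction).
import pandas as pd
from sklearn.metrics import roc_auc_score

def monthly_auc(log: pd.DataFrame, alert_threshold: float = 0.75) -> pd.DataFrame:
    """Compute AUC per calendar month and flag months below a chosen threshold."""
    log = log.assign(month=pd.to_datetime(log["scored_at"]).dt.to_period("M"))
    rows = []
    for month, grp in log.groupby("month"):
        if grp["y_true"].nunique() < 2:
            continue  # AUC is undefined when only one outcome class is present
        auc = roc_auc_score(grp["y_true"], grp["y_score"])
        rows.append({"month": str(month), "n": len(grp), "auc": auc,
                     "alert": auc < alert_threshold})
    return pd.DataFrame(rows)
```

A dashboard (Shiny, in the authors' case) would then surface such tables or plots, and the policy pillar would govern what happens when an alert is raised.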

Viewpoints on and Experiences with Digital Technologies in Health

The pursuit of groundbreaking health care innovations has led to the convergence of artificial intelligence (AI) and traditional Chinese medicine (TCM), marking a new frontier that combines the advantages of ancient healing practices with cutting-edge advances in modern technology. TCM, a holistic medical system with >2000 years of empirical support, uses unique diagnostic methods such as inspection, auscultation and olfaction, inquiry, and palpation. AI is the simulation of human intelligence processes by machines, especially computer systems. TCM is experience oriented, holistic, and subjective, and its combination with AI promises benefits in diagnostic accuracy, treatment efficacy, and prognostic accuracy. The role of AI in TCM is highlighted by its use in diagnostics, with machine learning enhancing the precision of treatment through complex pattern recognition; this is exemplified by the greater accuracy of TCM syndrome differentiation when tongue images are analyzed by AI. However, integrating AI into TCM also presents multifaceted challenges, such as data quality and ethical issues; a unified strategy, such as the use of standardized data sets, is therefore required to improve AI's understanding and application of TCM principles. The evolution of TCM through the integration of AI is a key factor in opening new horizons in health care. As research continues to evolve, it is imperative that technologists and TCM practitioners collaborate to drive innovative solutions that push the boundaries of medical science and honor the profound legacy of TCM. We can chart a future course in which AI-augmented TCM practices contribute to more systematic, effective, and accessible health care systems for all individuals.

Secondary Use of Clinical Data for Research and Surveillance

The traditional clinical trial data collection process requires a clinical research coordinator (CRC), authorized by the investigators, to read from the hospital electronic medical record. Using electronic source data opens a new path to extract subjects' data from the electronic health record (EHR) and transfer them directly to an electronic data capture (EDC) system, a method often referred to as eSource. eSource technology in the clinical trial data flow can improve data quality without compromising timeliness; at the same time, more efficient data collection reduces clinical trial costs.
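
As a hedged illustration of the eSource concept (not the specific system studied in the article), the Python sketch below pulls laboratory observations for one subject from a FHIR-enabled EHR and flattens them into records that an EDC system could import; the server URL, subject ID, and field mapping are placeholders.

```python
# Illustrative sketch only: extract laboratory Observations from a FHIR server
# and flatten them into EDC-style records. The endpoint and subject ID are
# placeholders; a real deployment would add authentication, paging, and
# study-specific mapping rules.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # placeholder FHIR endpoint
SUBJECT = "Patient/123"                     # placeholder subject reference

def fetch_lab_records(base: str, subject: str) -> list[dict]:
    bundle = requests.get(
        f"{base}/Observation",
        params={"subject": subject, "category": "laboratory"},
        timeout=30,
    ).json()
    records = []
    for entry in bundle.get("entry", []):
        obs = entry["resource"]
        records.append({
            "subject": subject,
            "test": obs.get("code", {}).get("text"),
            "value": obs.get("valueQuantity", {}).get("value"),
            "unit": obs.get("valueQuantity", {}).get("unit"),
            "datetime": obs.get("effectiveDateTime"),
        })
    return records

if __name__ == "__main__":
    for record in fetch_lab_records(FHIR_BASE, SUBJECT):
        print(record)  # in practice, pushed to the EDC system's import interface
```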

Ontologies, Classifications, and Coding

Chronic obstructive pulmonary disease (COPD) is a chronic condition that is among the main causes of morbidity and mortality worldwide, representing a burden on health care systems. The scientific literature highlights that nutrition is pivotal in the respiratory inflammatory processes connected to COPD, including exacerbations. Patients with COPD have an increased risk of developing nutrition-related comorbidities, such as diabetes, cardiovascular diseases, and malnutrition. Moreover, these patients often manifest sarcopenia and cachexia. Therefore, adequate nutritional assessment and therapy are essential to help individuals with COPD manage the progression of the disease. However, the role of nutrition in pulmonary rehabilitation (PR) programs is often underestimated due to a lack of resources and dedicated services, mostly because pneumologists may lack specialized training in this discipline.

Secondary Use of Clinical Data for Research and Surveillance

Self-administered web-based questionnaires are widely used to collect health data from patients and clinical research participants. REDCap (Research Electronic Data Capture; Vanderbilt University) is a secure web application, used globally, for building and managing electronic data capture instruments. However, stakeholder needs and preferences regarding electronic data collection via REDCap have rarely been studied.
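
For readers unfamiliar with how data leave REDCap programmatically, the hedged Python sketch below exports project records through REDCap's standard API; the instance URL and token are placeholders, and the available fields depend on how the project and its instruments are configured.

```python
# Illustrative sketch: export records from a REDCap project via its API.
# The URL and token are placeholders; real projects should store the token
# securely (e.g., in an environment variable) rather than in source code.
import requests

REDCAP_URL = "https://redcap.example.org/api/"  # placeholder REDCap instance
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"        # per-project API token

def export_records() -> list[dict]:
    response = requests.post(
        REDCAP_URL,
        data={
            "token": API_TOKEN,
            "content": "record",
            "format": "json",
            "type": "flat",  # one row per record
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    records = export_records()
    print(f"Exported {len(records)} records")
```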

Implementation Report

Biomedical data warehouses (BDWs) have become essential tools for facilitating the reuse of health data for both research and decisional applications. Beyond technical issues, the implementation of BDWs requires strong institutional data governance and operational knowledge of the European and national legal frameworks governing research data access and use.

Ontologies, Classifications, and Coding

Dermoscopy is a growing field that uses microscopy to allow dermatologists and primary care physicians to identify skin lesions. For a given skin lesion, a wide variety of differential diagnoses exist, which may be challenging for inexperienced users to name and understand.

Natural Language Processing

Vaccines serve as a crucial public health tool, although vaccine hesitancy continues to pose a significant threat to full vaccine uptake and, consequently, community health. Understanding and tracking vaccine hesitancy is essential for effective public health interventions; however, traditional survey methods present various limitations.

Adoption and Change Management of eHealth Systems

Chronic disease information systems in hospitals and communities play a significant role in disease prevention, control, and monitoring. However, for various reasons, these platforms are generally isolated: patient health information and medical resources are not effectively integrated, and "Internet Plus" medical technology is not implemented throughout the patient consultation process.

Decision Support for Health Professionals

Diagnostic errors pose significant health risks and contribute to patient mortality. With the growing accessibility of electronic health records, machine learning models offer a promising avenue for enhancing diagnosis quality. Current research has primarily focused on a limited set of diseases with ample training data, neglecting diagnostic scenarios with limited data availability.

Decision Support for Health Professionals

Synthetic patient data (SPD) generation for survival analysis in oncology trials holds significant potential for accelerating clinical development. Various machine learning methods, including classification and regression trees (CART), random forests (RF), Bayesian networks (BN), and the conditional tabular generative adversarial network (CTGAN), have been employed for this purpose, but how well they reflect actual patient survival data remains under investigation.
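
As a minimal sketch of one of the methods named above, the snippet below fits CTGAN (via the open-source ctgan Python package) to a small toy survival table and samples synthetic rows; the column names and settings are hypothetical and do not reproduce the article's actual modeling setup.

```python
# Minimal sketch of synthetic tabular data generation with CTGAN.
# The toy table and column names are hypothetical; real oncology trial data
# would require careful preprocessing and a survival-specific evaluation of
# how well the synthetic rows reflect observed time-to-event distributions.
import pandas as pd
from ctgan import CTGAN

real = pd.DataFrame({
    "age": [63, 71, 58, 66, 74, 60, 69, 55, 72, 61],
    "stage": ["II", "III", "III", "IV", "II", "IV", "III", "II", "IV", "III"],
    "time_months": [24.0, 11.5, 18.2, 6.3, 30.1, 9.8, 14.6, 27.4, 5.9, 16.0],
    "event": [0, 1, 1, 1, 0, 1, 1, 0, 1, 1],  # 1 = event observed, 0 = censored
})

model = CTGAN(epochs=100)
model.fit(real, discrete_columns=["stage", "event"])
synthetic = model.sample(1000)
print(synthetic.head())
```

One could then compare, for example, Kaplan-Meier curves of the real and synthetic data to judge how faithfully survival is reflected, which is the kind of question the article investigates.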


Preprints Open for Peer-Review
