JMIR Publications

JMIR Medical Informatics

Clinical informatics, decision support for health professionals, electronic health records, and ehealth infrastructures.


Journal Description

JMIR Medical Informatics (JMI, ISSN 2291-9694) focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, and ehealth infrastructures and implementation. It emphasizes applied, translational research, with a broad readership including clinicians, CIOs, engineers, industry, and health informatics professionals.

Published by JMIR Publications, publisher of the Journal of Medical Internet Research (JMIR), the leading eHealth/mHealth journal (Impact Factor 2015: 4.532), JMIR Med Inform has a different scope (emphasizing applications for clinicians and health professionals rather than consumers/citizens, who are the focus of JMIR), publishes even faster, and also accepts papers that are more technical or more formative than those published in the Journal of Medical Internet Research.

JMIR Medical Informatics features a rapid and thorough peer-review process, professional copyediting, and professional production of PDF, XHTML, and XML proofs (ready for deposit in PubMed Central/PubMed). The site is optimized for mobile and iPad use.

JMIR Medical Informatics adheres to the same quality standards as JMIR, and all articles published here are also cross-listed in the Table of Contents of JMIR, the world's leading journal in health sciences/health services research and health informatics.


Recent Articles:

  • Image Source: the ward, copyright allenran 917, licensed under Creative Commons Attribution (CC-BY 2.0)

    Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data


    Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) the autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. The ARIMA model relied on the past 3 months of discharges, whereas nearest neighbor forecasting used the median of similar past discharges to estimate the next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forest model achieving a 22.7% improvement in mean absolute error for all days in the year 2014. Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data to predict next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments.
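The four error measures named above are simple to compute. A minimal pure-Python sketch (illustrative variable names, not the authors' code):

```python
import math

def forecast_metrics(actual, predicted):
    """Forecast-quality measures: MFE, MAE, sMAPE (%), and RMSE."""
    errors = [p - a for a, p in zip(actual, predicted)]
    n = len(errors)
    mfe = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    # Skip pairs where both values are zero to avoid division by zero.
    smape = 100.0 / n * sum(
        abs(p - a) / ((abs(a) + abs(p)) / 2)
        for a, p in zip(actual, predicted) if abs(a) + abs(p) > 0
    )
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return {"MFE": mfe, "MAE": mae, "sMAPE": smape, "RMSE": rmse}
```

Mean forecast error keeps the sign (revealing bias toward over- or under-forecasting), whereas MAE and RMSE measure magnitude only.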

  • Image created by and copyright: the authors.

    Evaluation of an Expert System for the Generation of Speech and Language Therapy Plans


    Background: Speech and language pathologists (SLPs) deal with a wide spectrum of disorders, arising from many different conditions, that affect voice, speech, language, and swallowing capabilities in different ways. Therefore, the outcomes of Speech and Language Therapy (SLT) are highly dependent on the accurate, consistent, and complete design of personalized therapy plans. However, SLPs often have very limited time to work with their patients and to browse the large (and growing) catalogue of activities and specific exercises that can be put into therapy plans. As a consequence, many plans are suboptimal and fail to address the specific needs of each patient. Objective: We aimed to evaluate an expert system that automatically generates plans for speech and language therapy, containing semiannual activities in the 5 areas of hearing, oral structure and function, linguistic formulation, expressive language and articulation, and receptive language. The goal was to assess whether the expert system speeds up the SLPs’ work and leads to more accurate, consistent, and complete therapy plans for their patients. Methods: We examined the evaluation results of the SPELTA expert system in supporting the decision making of 4 SLPs treating children in 3 special education institutions in Ecuador. The expert system was first trained with data from 117 cases, including medical data; diagnoses for voice, speech, language, and swallowing capabilities; and therapy plans created manually by the SLPs. It was then used to automatically generate new therapy plans for 13 new patients. The SLPs were finally asked to evaluate the accuracy, consistency, and completeness of those plans. A 4-fold cross-validation experiment was also run on the original corpus of 117 cases in order to assess the significance of the results. Results: The evaluation showed that 87% of the outputs provided by the SPELTA expert system were considered valid therapy plans for the different areas. 
The SLPs rated the overall accuracy, consistency, and completeness of the proposed activities with 4.65, 4.6, and 4.6 points (out of a maximum of 5), respectively. The ratings for the subplans generated for the areas of hearing, oral structure and function, and linguistic formulation were nearly perfect, whereas the subplans for expressive language and articulation and for receptive language failed to deal properly with some of the subject cases. Overall, the SLPs indicated that over 90% of the subplans generated automatically were “better than” or “as good as” what the SLPs would have created manually if given the average time they can devote to the task. The cross-validation experiment yielded very similar results. Conclusions: The results show that the SPELTA expert system provides valuable input for SLPs to design proper therapy plans for their patients, in a shorter time and considering a larger set of activities than proceeding manually. The algorithms worked well even in the presence of a sparse corpus, and the evidence suggests that the system will become more reliable as it is trained with more subjects.
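A k-fold cross-validation like the 4-fold experiment described above partitions the 117 cases into 4 near-equal folds, each serving once as the held-out test set. A minimal sketch (illustrative, not the SPELTA implementation):

```python
def k_fold_splits(n_cases, k=4):
    """Partition case indices into k near-equal folds; each (train, test)
    pair holds out one fold as the test set."""
    indices = list(range(n_cases))
    folds = [indices[i::k] for i in range(k)]  # round-robin assignment
    return [(sorted(set(indices) - set(fold)), fold) for fold in folds]
```

With 117 cases this yields folds of 30, 29, 29, and 29 cases, so every case is tested exactly once.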

  • Source: CC0 Public Domain.

    Data Safe Havens and Trust: Toward a Common Understanding of Trusted Research Platforms for Governing Secure and Ethical Health Research


    In parallel with the advances in big data-driven clinical research, the data safe haven concept has evolved over the last decade. It has led to the development of a framework to support the secure handling of health care information used for clinical research, balancing compliance with legal and regulatory controls and ethical requirements while engaging the public as a partner in its governance. We describe the evolution of 4 separately developed clinical research platforms into services across the UK-wide Farr Institute and their common deployment features in practice. The Farr Institute is a case study from which we propose a common definition of data safe havens as trusted platforms for clinical academic research. We use this common definition to discuss the challenges and dilemmas faced by the clinical academic research community, to help promote a consistent understanding of them and how they might best be handled in practice. We conclude by questioning whether the common definition represents a safe and trustworthy model for conducting clinical research that can stand the test of time and ongoing technical advances while paying heed to evolving public and professional concerns.

  • UVON method diagram, created and uploaded by the author. Licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).

    Evaluating Health Information Systems Using Ontologies


    Background: There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. Objectives: The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems—whether similar or heterogeneous—by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. Methods: On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. 
The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. Results: The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. Conclusions: The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems.
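The tree-style unification described above can be illustrated with a small sketch: attribute paths (general → specific) from several systems are merged into one tree, and evaluation aspects are then read off at a chosen degree of specificity. This is an assumption-laden toy with made-up attribute names, not the UVON implementation:

```python
def build_ontology(systems):
    """Merge quality-attribute paths (general -> specific) from several
    systems into one tree of nested dicts."""
    tree = {}
    for paths in systems:
        for path in paths:
            node = tree
            for label in path:
                node = node.setdefault(label, {})
    return tree

def aspects_at_depth(tree, depth):
    """Collect the attribute labels found at a given specificity level."""
    if depth == 1:
        return sorted(tree)
    out = []
    for child in tree.values():
        out.extend(aspects_at_depth(child, depth - 1))
    return sorted(out)
```

Reading aspects at depth 1 gives a few broad, comparable aspects across heterogeneous systems; deeper levels give more specific ones, matching the "required degree of specificity" knob mentioned above.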

  • Radiologist is checking the picture archiving and communication system at Mubarak Hospital. Source and copyright: the authors.

    Users’ Perspectives on a Picture Archiving and Communication System (PACS): An In-Depth Study in a Teaching Hospital in Kuwait


    Background: A picture archiving and communication system (PACS) is a well-known imaging informatics application in health care organizations, specifically designed for the radiology department. Because evaluation is a basic requirement, health care providers have shown willingness to evaluate PACS in hospitals to ascertain the critical successes and failures of the technology. Objective: This study aimed at evaluating the success of a PACS in a regional teaching hospital of Kuwait, from users’ perspectives, using information systems success criteria. Methods: An in-depth study was conducted using quantitative and qualitative methods. This mixed-methods study was based on: (1) questionnaires distributed to all radiologists and technologists, and (2) interviews conducted with PACS administrators. Results: In all, 60 questionnaires were received from the respondents: 39 radiologists (75% response rate) and 21 technologists (62% response rate). Almost three-quarters (74%, 44 of 59) of the respondents rated PACS positively and as user friendly. The findings revealed that demographic data, including computer experience, were insignificant factors, having no influence on the users’ responses. The findings were further substantiated by the administrators’ interview responses, which supported the benefits of PACS and indicated the need for a unified policy aimed at streamlining and improving the departmental workflow. Conclusions: The PACS had a positive and productive impact on the radiologists’ and technologists’ work performance. They were endeavoring to resolve current problems while keeping abreast of advances in PACS technology, including teleradiology and mobile image viewers, whose usage is steadily increasing in the Kuwaiti health system.

  • Image Source: Makeshift Desk, copyright Jo Guldi, licensed under Creative Commons Attribution (CC-BY 2.0)

    Adoption Factors of the Electronic Health Record: A Systematic Review


    Background: The Health Information Technology for Economic and Clinical Health (HITECH) Act was a significant piece of legislation in the United States that served as a catalyst for the adoption of health information technology. Following implementation of the HITECH Act, health information technology (HIT) saw broad adoption of electronic health records (EHRs), despite the skepticism many providers exhibited about the transition to an electronic system. A thorough review of EHR adoption facilitators and barriers provides ongoing support for continued EHR implementation across various health care structures, possibly leading to a reduction in associated economic expenditures. Objective: The purpose of this review is to compile a current and comprehensive list of facilitators and barriers to the adoption of the EHR in the United States. Methods: The authors searched the Cumulative Index to Nursing and Allied Health Literature (CINAHL) and MEDLINE (January 1, 2012–September 1, 2015; core clinical/academic journals; MEDLINE full text) and evaluated only articles germane to the research objective. Team members selected a final list of articles through consensus meetings (n=31). Multiple research team members thoroughly read each article to confirm applicability and study conclusions, thereby increasing validity. Results: Group members identified common facilitators and barriers associated with the EHR adoption process. In total, 25 adoption facilitators were identified in the literature, occurring 109 times; the most common were efficiency, hospital size, quality, access to data, perceived value, and ability to transfer information. A total of 23 barriers to adoption were identified in the literature, appearing 95 times; the most common were cost, time demands, perceived uselessness, data transition, facility location, and implementation issues. 
Conclusions: The 25 facilitators and 23 barriers to the adoption of the EHR continue to reveal a preoccupation with cost, despite incentives in the HITECH Act. Limited financial backing and outdated technology were also common barriers frequently mentioned during data review. Future public policy should include incentives commensurate with those in the HITECH Act to maintain strong adoption rates.

  • Facilitating Secure Sharing of Personal Health Data in the Cloud


    Background: Internet-based applications are providing new ways of promoting health and reducing the cost of care. Although data can be kept encrypted in servers, the user does not have the ability to decide whom the data are shared with. Technically this is linked to the problem of who owns the data encryption keys required to decrypt the data. Currently, cloud service providers, rather than users, have full rights to the key. In practical terms this makes the users lose full control over their data. Trust and uptake of these applications can be increased by allowing patients to feel in control of their data, generally stored in cloud-based services. Objective: This paper addresses this security challenge by providing the user a way of controlling encryption keys independently of the cloud service provider. We provide a secure and usable system that enables a patient to share health information with doctors and specialists. Methods: We contribute a secure protocol for patients to share their data with doctors and others on the cloud while keeping complete ownership. We developed a simple, stereotypical health application and carried out security tests, performance tests, and usability tests with both students and doctors (N=15). Results: We developed the health application as an app for Android mobile phones. We carried out the usability tests on potential participants and medical professionals. Of 20 participants, 14 (70%) either agreed or strongly agreed that they felt safer using our system. Using mixed methods, we show that participants agreed that privacy and security of health data are important and that our system addresses these issues. Conclusions: We presented a security protocol that enables patients to securely share their eHealth data with doctors and nurses and developed a secure and usable system that enables patients to share mental health information with doctors.
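The key-ownership idea described above is commonly realized as envelope encryption: the patient encrypts each record with a fresh data key and wraps that key separately for each authorized doctor, so the cloud provider never holds a usable key. The sketch below shows only the protocol shape; the SHA-256 keystream cipher and names like `dr_smith` are toy stand-ins, not the paper's actual cryptography:

```python
import hashlib
import os

def _keystream(key, nonce, length):
    # SHA-256 in counter mode as a toy stream cipher (illustration only,
    # NOT production cryptography).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key, blob):
    nonce, body = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body))))

def share_record(record, doctor_keys):
    # The patient encrypts the record with a fresh data key, then wraps
    # that key separately for each authorized doctor.
    data_key = os.urandom(32)
    return {
        "record": encrypt(data_key, record),
        "wrapped": {doc: encrypt(k, data_key) for doc, k in doctor_keys.items()},
    }

def open_record(envelope, doctor, doctor_key):
    data_key = decrypt(doctor_key, envelope["wrapped"][doctor])
    return decrypt(data_key, envelope["record"])
```

Because only wrapped copies of the data key are stored alongside the ciphertext, revoking a doctor's access means dropping that doctor's wrapped key without re-encrypting for anyone else.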

  • Source: CC0 licensed, Public Domain; modified by authors.

    A Legal Framework to Support Development and Assessment of Digital Health Services


    Background: Digital health services empower people to track, manage, and improve their own health and quality of life while delivering more personalized and precise health care at lower cost and with higher efficiency and availability. Essential to the use of digital health services is that the treatment of any personal data be compatible with the Patient Data Act, the Personal Data Act, and other applicable privacy laws. Objective: The aim of this study was to develop a framework for legal challenges to support designers in the development and assessment of digital health services. Methods: Purposive sampling, together with snowball recruitment, was used to identify stakeholders and information sources for organizing, extending, and prioritizing the different concepts, actors, and regulations in relation to digital health and health-promoting digital systems. The data were collected through structured interviewing and iteration, and 3 different cases were used for face validation of the framework. Results: A framework for assessing the legal challenges in developing digital health services (Legal Challenges in Digital Health [LCDH] Framework) was created; it consists of 6 key questions to be used to evaluate a digital health service against current legislation. Conclusions: Structured discussion about legal challenges in relation to health-promoting digital services can be enabled by a constructive framework to investigate, assess, and verify the digital service according to current legislation. The LCDH Framework developed in this study proposes such a framework and can be used in prospective evaluation of the relationship of a potential health-promoting digital service with the existing laws and regulations.

  • Author working with colleagues. Source and copyright: the author TM.

    Putting Meaning into Meaningful Use: A Roadmap to Successful Integration of Evidence at the Point of Care


    Pressures to contain health care costs, personalize patient care, use big data, and enhance health care quality have highlighted the need for integration of evidence at the point of care. The application of evidence-based medicine (EBM) holds great promise in the era of electronic health records (EHRs) and health technology. The most successful integration of evidence into EHRs has been through complex clinical decision support (CDS) tools that trigger at a critical point of the clinical visit and include patient-specific recommendations. The objective of this viewpoint paper is to investigate why incorporating complex CDS tools into the EHR is equally complex and continues to challenge health services researchers and implementation scientists. Poor adoption and sustainability of EBM guidelines and CDS tools at the point of care have persisted, and studies continue to document low rates of use. The barriers cited by physicians include efficiency, perception of usefulness, information content, user interface, and over-triggering. Building on traditional EHR implementation frameworks, we review key strategies for successful CDS: (1) the quality of the evidence, (2) the potential to reduce unnecessary care, (3) ease of integrating evidence at the point of care, (4) the evidence’s consistency with clinician perceptions and preferences, (5) incorporating bundled sets or automated documentation, and (6) shared decision-making tools. As EHRs become commonplace and insurers demand higher-quality, evidence-based care, better methods for integrating evidence into everyday care are warranted. We have outlined basic criteria that should be considered before attempting to integrate evidence-based decision support tools into the EHR.

  • Source: CC0 1.0 Public Domain.

    Impact of Implementing a Wiki to Develop Structured Electronic Order Sets on Physicians' Intention to Use Wiki-Based Order Sets


    Background: Wikis have the potential to promote best practices in health systems by sharing order sets with a broad community of stakeholders. However, little is known about the impact of using a wiki on clinicians’ intention to use wiki-based order sets. Objective: The aims of this study were: (1) to describe the use of a wiki to create structured order sets for a single emergency department; (2) to evaluate whether the use of this wiki changed emergency physicians’ future intention to use wiki-based order sets; and (3) to understand the impact of using the wiki on the behavioral determinants for using wiki-based order sets. Methods: This was a pre/post-intervention mixed-methods study conducted in one hospital in Lévis, Quebec. The intervention consisted of receiving access to a wiki for 6 months, with encouragement from the department head to use it to create electronic order sets designed for a computerized physician order entry system. Before and after the intervention, we asked participants to complete a previously validated questionnaire based on the Theory of Planned Behavior. Our primary outcome was the intention to use wiki-based order sets in clinical practice. We also assessed participants’ attitude, perceived behavioral control, and subjective norm regarding the use of wiki-based order sets. Paired pre- and post-intervention Likert scores were compared using Wilcoxon signed-rank tests. The post-intervention questionnaire also included open-ended questions about participants’ comments on the wiki, which were classified into themes using an existing taxonomy. Results: Twenty-eight emergency physicians were enrolled in the study (response rate: 100%). Physicians’ mean intention to use a wiki-based order set was 5.42 (SD 1.04) on a 7-point Likert scale before the intervention and increased to 5.81 (SD 1.25) after the intervention (P=.03). Participants’ attitude towards using a wiki-based order set also increased from 5.07 (SD 0.90) to 5.57 (SD 0.88) (P=.003). 
Perceived behavioral control and subjective norm did not change. Easier information sharing was the most frequently raised positive impact. In order of frequency, the three most important facilitators reported were: ease of use, support from colleagues, and promotion by the department head. Although participants did not mention any perceived negative impacts, they raised the following barriers, in order of frequency: poor organization of information, slow computers, and difficult wiki access. Conclusions: Emergency physicians’ intention and attitude to use wiki-based order sets increased after having access to, and being motivated to use, a wiki for 6 months. Future studies need to explore whether this increased intention will translate into sustained actual use and improved patient care. Certain barriers need to be addressed before implementing a wiki for use on a larger scale.
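The Wilcoxon signed-rank comparison used above ranks the absolute pre/post differences (dropping zeros and averaging tied ranks) and sums the ranks of positive and negative differences. A minimal sketch of the statistic (illustrative only; the P value would come from the usual normal approximation or exact tables):

```python
def wilcoxon_signed_rank(pre, post):
    """Return (W+, W-) for paired samples; zero differences are dropped
    and tied absolute differences receive their average rank."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return w_plus, w_minus
```

The smaller of W+ and W- is the test statistic; a large imbalance between the two indicates a systematic pre-to-post shift, as seen in the intention scores above.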

  • A Clinician and EHR. Source and copyright: the author.

    Electronic Health Record-Related Safety Concerns: A Cross-Sectional Survey of Electronic Health Record Users


    Background: The rapid expansion in the use of electronic health records (EHR) has increased the number of medical errors originating in health information systems (HIS). The sociotechnical approach helps in understanding risks in the development, implementation, and use of EHR and health information technology (HIT) while accounting for complex interactions of technology within the health care system. Objective: This study addresses two important questions: (1) which of the common EHR error types are associated with perceived high- and extreme-risk severity ratings among EHR users, and (2) which variables are associated with high- and extreme-risk severity ratings? Methods: This study was a quantitative, non-experimental, descriptive study of EHR users. We conducted a cross-sectional web-based questionnaire study at the largest hospital district in Finland. The reliability of the summative scales was tested with Cronbach’s alpha. Logistic regression served to assess the association of the independent variables with each of the eight risk factors examined. Results: A total of 2864 eligible respondents provided the final data. Almost half of the respondents reported a high level of risk related to the error type “extended EHR unavailability”. The lowest overall risk level was associated with “selecting incorrectly from a list of items”. In multivariate analyses, profession and clinical unit proved to be the strongest predictors of high perceived risk. Physicians perceived risk levels to be the highest (P<.001 in six of eight error types), while emergency departments, operating rooms, and procedure units were associated with higher perceived risk levels (P<.001 in four of eight error types). Previous participation in eLearning courses on EHR use was associated with lower risk for some of the risk factors. 
Conclusions: Based on a large number of Finnish EHR users in hospitals, this study indicates that HIT safety hazards should be taken very seriously, particularly in operating rooms, procedure units, emergency departments, and intensive care units/critical care units. Health care organizations should use proactive and systematic assessments of EHR risks before harmful events occur. An EHR training program should be compulsory for all EHR users in order to address EHR safety concerns resulting from the failure to use HIT appropriately.
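Cronbach's alpha, used above to test the reliability of the summative scales, compares the sum of per-item variances with the variance of the total score. A minimal sketch (population variances, made-up data; not the study's analysis code):

```python
def cronbach_alpha(items):
    """items: one list of respondent scores per scale item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly correlated items give alpha = 1; values around 0.7 or higher are conventionally read as acceptable internal consistency.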

  • Using a Phoropter. Source: CC 2.0 Licensed, Attribution: National Eye Institute.

    Creation of an Accurate Algorithm to Detect Snellen Best Documented Visual Acuity from Ophthalmology Electronic Health Record Notes


    Background: Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (eg, Snellen vs Jäger units vs font print size), and may be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step in making electronic ophthalmology clinical notes useful for research. Objective: Currently, there is limited methodology for capturing BDVA efficiently and accurately from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for the right and left eyes from defined fields within electronic ophthalmology clinical notes. Methods: We designed an algorithm to detect BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. A total of 5668 unique responses were identified, and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Results: Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed 99% accuracy in detecting BDVA from these records, with 1% observed error. Conclusions: Our algorithm successfully captures best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. To the best of our knowledge, our work represents the first attempt at accurately capturing visual acuity from large numbers of electronic ophthalmology notes. This algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes. 
All codes used for this study are currently available, and will be made available online at
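The mapping step described above, collapsing many free-text responses to a single structured Snellen value per eye, can be sketched roughly as follows; the regex and the "larger fraction is better" rule are simplifying assumptions for illustration, not the published algorithm:

```python
import re

def best_documented_va(responses):
    """Pick the best Snellen acuity (largest numerator/denominator ratio)
    from a list of free-text visual acuity responses for one eye."""
    best = None
    for response in responses:
        m = re.match(r"\s*(\d+)\s*/\s*(\d+)", response)
        if not m:
            continue  # non-Snellen entries (eg, "CF at 3 feet") are skipped here
        ratio = int(m.group(1)) / int(m.group(2))
        if best is None or ratio > best[0]:
            best = (ratio, f"{m.group(1)}/{m.group(2)}")
    return best[1] if best else None
```

A production version would also normalize modifiers such as "20/40+2" and map non-Snellen notations (counting fingers, hand motion) onto the structured Snellen list.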


Latest Submissions Open for Peer-Review:

  • Qualitative improvement methods through analysis of inquiry contents for cancer registration

    Date Submitted: Jul 17, 2016

    Open Peer Review Period: Jul 17, 2016 - Sep 11, 2016

    Background: In South Korea, the national cancer database was constructed after the initiation of the national cancer registration project in 1980, and the annual national cancer registration report has been published every year since 2005. Consequently, data management must begin as early as the data collection stage in order to ensure quality. Objective: To determine the suitability of cancer registries’ inquiry tools through analysis of inquiries to the Korea Central Cancer Registry (KCCR), and to identify needs for improving the quality of cancer registration. Methods: The results of 721 inquiries to the KCCR from 2000 to 2014 were analyzed by inquiry year, question type, and medical institution characteristics. Using Stata version 14.1, descriptive analysis was performed to identify general participant characteristics, and chi-square analysis was applied to investigate significant differences in distribution characteristics by factors affecting the quality of cancer registration data. Results: The number of inquiries increased in 2005–2009, a period that saw various changes, including the addition of cancer registration items such as brain tumors and updates to the guidelines. Of the inquirers, 65.3% worked at hospitals in metropolitan cities, and 60.89% of the hospitals had 601–1000 beds. Tertiary hospitals had the highest number of inquiries (64.91%), and the most frequent question types were histological codes (353, 48.96%), primary sites (92, 12.76%), and reportability (76, 10.54%). Conclusions: A cancer registration inquiry system is an effective resource when registrars are unsure about codes during cancer registration, or when confronting cancer cases for which prior clinical knowledge or information in the cancer registration guidelines is insufficient.
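The chi-square analysis mentioned above compares observed counts in a contingency table (eg, question type by institution type) against the counts expected under independence. A minimal sketch of the statistic (illustrative; degrees of freedom and P values omitted):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table given as a
    list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat
```

A table whose rows are proportional yields a statistic of 0; larger values indicate a stronger departure from independence, to be compared against the chi-square distribution with (rows-1)(cols-1) degrees of freedom.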