JMIR Publications

JMIR Medical Informatics

Clinical informatics, decision support for health professionals, electronic health records, and ehealth infrastructures.



(June 2015) Thomson Reuters, producer of the Journal Citation Reports and Web of Science and other database products, is creating a new edition of Web of Science (Emerging Sources Citation Index, ESCI); and we are proud to report that JMIR journals have been selected for the content expansion. 

The new Thomson Reuters Web of Science edition, ESCI, which launches later in 2015, will include influential journals covering a variety of disciplines. "The journals selected have been identified as important to key opinion leaders, funders, and evaluators worldwide," says a Thomson Reuters communication about the database. "We are proud that the Thomson Reuters team recognizes the influence of the JMIR journals," commented Gunther Eysenbach, publisher at JMIR Publications.

Several JMIR journals are confirmed to be part of the initial ESCI release, with more JMIR journals to be added.

JMIR Publications is working on getting its newer journals such as JMIR Mental Health into the collection as well. JMIR Publications is now publishing over a dozen journals with topics covering innovation in health and technology.


Journal Description

JMIR Medical Informatics (JMI, ISSN 2291-9694) focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, and ehealth infrastructures and implementation. The journal emphasizes applied, translational research, with a broad readership including clinicians, CIOs, engineers, industry, and health informatics professionals.

Published by JMIR Publications, publisher of the Journal of Medical Internet Research (JMIR), the leading eHealth/mHealth journal (Impact Factor 2013: 4.7), JMIR Med Inform has a different scope (emphasizing applications for clinicians and health professionals rather than consumers/citizens, who are the focus of JMIR), publishes even faster, and also allows papers that are more technical or more formative than those published in the Journal of Medical Internet Research.

JMIR Medical Informatics features a rapid and thorough peer-review process, professional copyediting, and professional production of PDF, XHTML, and XML proofs (ready for deposit in PubMed Central/PubMed). The site is optimized for mobile and iPad use.

JMIR Medical Informatics adheres to the same quality standards as JMIR, and all articles published here are also cross-listed in the Table of Contents of JMIR, the world's leading medical journal in health sciences/health services research and health informatics (http://www.jmir.org/issue/current).


Recent Articles:

  • Image: https://pixabay.com/en/surgery-instruments-surgeons-688380/ (License: CC0 Public Domain; free for commercial use, no attribution required).

    Optimizing Patient Preparation and Surgical Experience Using eHealth Technology

    Abstract:

    With population growth and aging, it is expected that the demand for surgical services will increase. However, increased complexity of procedures, time pressures on staff, and the demand for a patient-centered approach continue to challenge a system characterized by finite health care resources. Suboptimal care is reported in each phase of surgical care, from the time of consent to discharge and long-term follow-up. Novel strategies are thus needed to address these challenges to produce effective and sustainable improvements in surgical care across the care pathway. eHealth programs represent a potential strategy for improving the quality of care delivered across various phases of care, thereby improving patient outcomes. This discussion paper describes (1) the key functions of eHealth programs, including information gathering, transfer, and exchange; (2) examples of eHealth programs in overcoming challenges to optimal surgical care across the care pathway; and (3) the potential challenges and future directions for implementing eHealth programs in this setting. eHealth programs are a promising alternative for collecting patient-reported outcome data, providing access to credible health information and strategies to enable patients to take an active role in their own health care, and promoting efficient communication between patients and health care providers. However, additional rigorous intervention studies examining the potential role of eHealth programs in augmenting patients' preparation for and recovery from surgery, and the subsequent impact on patient outcomes and processes of care, are needed to advance the field. Furthermore, evidence for the benefits of eHealth programs in supporting carers, and strategies to maximize engagement from end users, are needed.

  • (cc) Ji et al. CC-BY-SA 2.0, please cite as http://medinform.jmir.org/article/viewFile/3982/1/65448.

    Using MEDLINE Elemental Similarity to Assist in the Article Screening Process for Systematic Reviews

    Abstract:

    Background: Systematic reviews and their implementation in practice provide high-quality evidence for clinical practice but are both time and labor intensive due to the large number of articles. Automatic text classification has proven to be instrumental in identifying relevant articles for systematic reviews. Existing approaches use machine learning model training to generate classification algorithms for the article screening process but have limitations. Objective: We applied a network approach to assist in the article screening process for systematic reviews using predetermined article relationships (similarity). The article similarity metric is calculated using the MEDLINE elements title (TI), abstract (AB), medical subject heading (MH), author (AU), and publication type (PT). We used an article network to illustrate the concept of article relationships. Using this concept, each article can be modeled as a node in the network and the relationship between 2 articles as an edge connecting them. The purpose of our study was to use article relationships to facilitate an interactive article recommendation process. Methods: We used 15 completed systematic reviews produced by the Drug Effectiveness Review Project and demonstrated the use of article networks to assist article recommendation. We evaluated the predictive performance of MEDLINE elements and compared our approach with existing machine learning model training approaches. Performance was measured by work saved over sampling at 95% recall (WSS95) and the F-measure (F1). We also used repeated analysis of variance and Hommel's multiple comparison adjustment to demonstrate statistical evidence. Results: We found that although there is no significant difference across elements (except AU), TI and AB have better predictive capability in general. Composited elements bring performance improvement in both F1 and WSS95. With our approach, a simple combination of TI+AB+PT could achieve a WSS95 performance of 37%, which is competitive with traditional machine learning model training approaches (23%-41% WSS95). Conclusions: We demonstrated a new approach to assist in labor intensive systematic reviews. The predictive ability of different elements (both single and composited) was explored. Without using model training approaches, we established a generalizable method that can achieve competitive performance.
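
    The paper's exact similarity metric is not reproduced in this abstract; a minimal sketch of the node-and-edge idea, using Jaccard overlap of TI+AB tokens over fabricated records purely for illustration:

    ```python
    # Nodes are articles; edges connect articles whose MEDLINE-element tokens
    # (here TI+AB words) are sufficiently similar. Records are hypothetical.

    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity of two token sets."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    articles = {
        "pmid1": "statin therapy and ldl cholesterol reduction",
        "pmid2": "ldl cholesterol lowering with statin treatment",
        "pmid3": "screening questionnaire for depression in adults",
    }
    tokens = {pmid: set(text.split()) for pmid, text in articles.items()}

    # Build an edge list: connect articles whose similarity exceeds a threshold.
    THRESHOLD = 0.2
    edges, pmids = [], list(tokens)
    for i, p in enumerate(pmids):
        for q in pmids[i + 1:]:
            sim = jaccard(tokens[p], tokens[q])
            if sim >= THRESHOLD:
                edges.append((p, q, sim))

    # Recommendation step: suggest unscreened neighbors of included articles.
    included = {"pmid1"}
    recommended = ({q for p, q, _ in edges if p in included}
                   | {p for p, q, _ in edges if q in included})
    print(edges)
    print(recommended - included)
    ```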

  • Spelling checker for health care content. URL: http://www.freedigitalphotos.net/images/Other_health_and_bea_g278-Health_Definition_Magnifier_p88290.html.

    Context-Sensitive Spelling Correction of Consumer-Generated Content on Health Care

    Abstract:

    Background: Consumer-generated content, such as postings on social media websites, can serve as an ideal source of information for studying health care from a consumer’s perspective. However, consumer-generated content on health care topics often contains spelling errors, which, if not corrected, will be obstacles for downstream computer-based text analysis. Objective: In this study, we proposed a framework with a spelling correction system designed for consumer-generated content and a novel ontology-based evaluation system which was used to efficiently assess the correction quality. Additionally, we emphasized the importance of context sensitivity in the correction process, and demonstrated why correction methods designed for electronic medical records (EMRs) failed to perform well with consumer-generated content. Methods: First, we developed our spelling correction system based on Google Spell Checker. The system processed postings acquired from MedHelp, a biomedical bulletin board system (BBS), and saved misspelled words (eg, sertaline) and corresponding corrected words (eg, sertraline) into two separate sets. Second, to reduce the number of words needing manual examination in the evaluation process, we respectively matched the words in the two sets with terms in two biomedical ontologies: RxNorm and Systematized Nomenclature of Medicine -- Clinical Terms (SNOMED CT). The ratio of words which could be matched and appropriately corrected was used to evaluate the correction system’s overall performance. Third, we categorized the misspelled words according to the types of spelling errors. Finally, we calculated the ratio of abbreviations in the postings, which remarkably differed between EMRs and consumer-generated content and could largely influence the overall performance of spelling checkers. Results: An uncorrected word and the corresponding corrected word were together called a spelling pair, and the two words in the pair were its members. In our study, there were 271 spelling pairs detected, among which 58 (21.4%) pairs had one or two members matched in the selected ontologies. The ratio of appropriate correction in the 271 overall spelling errors was 85.2% (231/271). The ratio of that in the 58 spelling pairs was 86% (50/58), close to the overall ratio. We also found that linguistic errors accounted for 31.4% (85/271) of all errors detected, and only 0.98% (210/21,358) of words in the postings were abbreviations, which was much lower than the ratio in the EMRs (33.6%). Conclusions: We conclude that our system can accurately correct spelling errors in consumer-generated content. Context sensitivity is indispensable in the correction process. Additionally, it can be confirmed that consumer-generated content differs from EMRs in that consumers seldom use abbreviations. Also, the evaluation method, taking advantage of biomedical ontology, can effectively estimate the accuracy of the correction system and reduce manual examination time.
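
    A toy illustration of the ontology-based evaluation step described above: members of each (misspelled, corrected) spelling pair are matched against ontology term sets, and the share of appropriate corrections among matched pairs is computed. The term list and pairs below are invented stand-ins, not actual RxNorm/SNOMED CT data.

    ```python
    spelling_pairs = [
        ("sertaline", "sertraline"),   # drug name; matches the toy ontology
        ("diabetis", "diabetes"),      # disorder; matches
        ("tomorow", "tomorrow"),       # everyday word; no ontology match
    ]

    # Toy stand-in for RxNorm / SNOMED CT term sets.
    ontology_terms = {"sertraline", "diabetes", "metformin"}

    matched = [(w, c) for w, c in spelling_pairs
               if w in ontology_terms or c in ontology_terms]

    # A pair counts as appropriately corrected if the corrected member is a term.
    appropriate = [(w, c) for w, c in matched if c in ontology_terms]

    print(f"matched pairs: {len(matched)}/{len(spelling_pairs)}")
    print(f"appropriate correction ratio: {len(appropriate)}/{len(matched)}")
    ```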

  • This is a royalty free image by pandpstock001 (http://www.freedigitalphotos.net/images/blank-medical-record-in-yellow-folder-photo-p259900).

    Building Data-Driven Pathways From Routinely Collected Hospital Data: A Case Study on Prostate Cancer

    Abstract:

    Background: Routinely collected data in hospitals is complex, typically heterogeneous, and scattered across multiple Hospital Information Systems (HIS). This big data, created as a byproduct of health care activities, has the potential to provide a better understanding of diseases, unearth hidden patterns, and improve services and cost. The extent and uses of such data rely on its quality, which is not consistently checked, nor fully understood. Nevertheless, using routine data for the construction of data-driven clinical pathways, describing processes and trends, is a key topic receiving increasing attention in the literature. Traditional algorithms do not cope well with unstructured processes or data, and do not produce clinically meaningful visualizations. Supporting systems that provide additional information, context, and quality assurance inspection are needed. Objective: The objective of the study is to explore how routine hospital data can be used to develop data-driven pathways that describe the journeys that patients take through care, and their potential uses in biomedical research; it proposes a framework for the construction, quality assessment, and visualization of patient pathways for clinical studies and decision support using a case study on prostate cancer. Methods: Data pertaining to prostate cancer patients were extracted from a large UK hospital from eight different HIS, validated, and complemented with information from the local cancer registry. Data-driven pathways were built for each of the 1904 patients and an expert knowledge base, containing rules on the prostate cancer biomarker, was used to assess the completeness and utility of the pathways for a specific clinical study. Software components were built to provide meaningful visualizations for the constructed pathways. Results: The proposed framework and pathway formalism enable the summarization, visualization, and querying of complex patient-centric clinical information, as well as the computation of quality indicators and dimensions. A novel graphical representation of the pathways allows the synthesis of such information. Conclusions: Clinical pathways built from routinely collected hospital data can unearth information about patients and diseases that may otherwise be unavailable or overlooked in hospitals. Data-driven clinical pathways allow for heterogeneous data (ie, semistructured and unstructured data) to be collated over a unified data model and for data quality dimensions to be assessed. This work has enabled further research on prostate cancer and its biomarkers, and on the development and application of methods to mine, compare, analyze, and visualize pathways constructed from routine data. This is an important development for the reuse of big data in hospitals.
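
    A minimal sketch of the pathway-construction idea: events scattered across hospital systems are merged per patient and ordered by time, and a knowledge-base rule checks whether the pathway is complete enough for a study. The field names, event records, and PSA-style rule are illustrative assumptions, not the authors' data model.

    ```python
    from collections import defaultdict
    from datetime import date

    events = [  # (patient_id, timestamp, source_system, event)
        ("p1", date(2014, 1, 5), "labs", "PSA test"),
        ("p1", date(2014, 2, 1), "pathology", "biopsy"),
        ("p1", date(2014, 3, 10), "theatre", "prostatectomy"),
        ("p2", date(2014, 4, 2), "pathology", "biopsy"),
    ]

    # Merge events from all systems into one time-ordered pathway per patient.
    pathways = defaultdict(list)
    for pid, ts, system, event in sorted(events, key=lambda e: (e[0], e[1])):
        pathways[pid].append((ts, system, event))

    def complete_for_study(path):
        """Toy rule: a biomarker (PSA) test must precede the first biopsy."""
        names = [e for _, _, e in path]
        return "PSA test" in names and names.index("PSA test") < names.index("biopsy")

    for pid, path in pathways.items():
        print(pid, complete_for_study(path))  # p1 True, p2 False
    ```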

  • Untitled.

    Analysis of PubMed User Sessions Using a Full-Day PubMed Query Log: A Comparison of Experienced and Nonexperienced PubMed Users

    Abstract:

    Background: PubMed is the largest biomedical bibliographic information source on the Internet. PubMed has been considered one of the most important and reliable sources of up-to-date health care evidence. Previous studies examined the effects of domain expertise/knowledge on search performance using PubMed. However, very little is known about PubMed users’ knowledge of information retrieval (IR) functions and their usage in query formulation. Objective: The purpose of this study was to shed light on how experienced/nonexperienced PubMed users perform their search queries by analyzing a full-day query log. Our hypotheses were that (1) experienced PubMed users who use system functions quickly retrieve relevant documents and (2) nonexperienced PubMed users who do not use them have longer search sessions than experienced users. Methods: To test these hypotheses, we analyzed PubMed query log data containing nearly 3 million queries. User sessions were divided into two categories: experienced and nonexperienced. We compared experienced and nonexperienced users by number of sessions, and experienced and nonexperienced user sessions by session length, with a focus on how fast they completed their sessions. Results: We measured how successfully users retrieved relevant documents, represented as the decrease rates of experienced and nonexperienced users from a session length of 1 to 2, 3, 4, and 5. The decrease rate (from a session length of 1 to 2) of the experienced users was significantly larger than that of the nonexperienced groups. Conclusions: Experienced PubMed users retrieve relevant documents more quickly than nonexperienced PubMed users in terms of session length.
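
    A small sketch of the session-length comparison: count sessions per length and compute the decrease rate from length 1 to length 2 for each group. The counts below are invented for illustration; the study analyzed nearly 3 million queries.

    ```python
    session_lengths = {  # session length -> number of sessions (toy counts)
        "experienced":    {1: 900, 2: 300, 3: 150, 4: 90, 5: 60},
        "nonexperienced": {1: 800, 2: 500, 3: 300, 4: 200, 5: 150},
    }

    def decrease_rate(counts, a=1, b=2):
        """Relative drop in the number of sessions from length a to length b."""
        return (counts[a] - counts[b]) / counts[a]

    for group, counts in session_lengths.items():
        print(group, round(decrease_rate(counts), 2))
    # A larger drop suggests that users complete their searches in fewer queries.
    ```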

  • Cover Picture, Copyright 2014 Lumiata, Inc.

    A Web-Based Tool for Patient Triage in Emergency Department Settings: Validation Using the Emergency Severity Index

    Abstract:

    Background: We evaluated the concordance between triage scores generated by a novel Internet clinical decision support tool, Clinical GPS (cGPS) (Lumiata Inc, San Mateo, CA), and the Emergency Severity Index (ESI), a well-established and clinically validated patient severity scale in use today. Although the ESI and cGPS use different underlying algorithms to calculate patient severity, both utilize a five-point integer scale with level 1 representing the highest severity. Objective: The objective of this study was to compare cGPS results with an established gold standard in emergency triage. Methods: We conducted a blinded trial comparing triage scores from the ESI: A Triage Tool for Emergency Department Care, Version 4, Implementation Handbook to those generated by cGPS from the text of 73 sample case vignettes. A weighted, quadratic kappa statistic was used to assess agreement between cGPS derived severity scores and those published in the ESI handbook for all 73 cases. Weighted kappa concordance was defined a priori as almost perfect (kappa > 0.8), substantial (0.6 < kappa < 0.8), moderate (0.4 < kappa < 0.6), fair (0.2 < kappa < 0.4), or slight (kappa < 0.2). Results: Of the 73 case vignettes, the cGPS severity score matched the ESI handbook score in 95% of cases (69/73); in addition, the weighted, quadratic kappa statistic showed almost perfect agreement (kappa = 0.93, 95% CI 0.854-0.996). In the subanalysis of 41 case vignettes assigned ESI scores of level 1 or 2, the cGPS and ESI severity scores matched in 95% of cases (39/41 cases). Conclusions: These results indicate that the cGPS is a reliable indicator of triage severity, based on its comparison to a standardized index, the ESI. Future studies are needed to determine whether the cGPS can accurately assess the triage of patients in real clinical environments.
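
    The concordance statistic named here, the quadratic-weighted kappa, can be computed with scikit-learn; a minimal sketch on fabricated five-level severity scores (the study's 73 vignette scores are not reproduced):

    ```python
    from sklearn.metrics import cohen_kappa_score

    esi  = [1, 2, 2, 3, 4, 5, 2, 3, 1, 4]   # toy handbook severity levels
    cgps = [1, 2, 3, 3, 4, 5, 2, 3, 2, 4]   # toy tool-assigned levels

    kappa = cohen_kappa_score(esi, cgps, weights="quadratic")
    print(f"quadratic-weighted kappa: {kappa:.2f}")
    # Quadratic weights penalize disagreements more as the levels lie further
    # apart, which suits ordinal scales such as triage severity.
    ```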

  • An image from authors.

    Comprehensive Evaluation of Electronic Medical Record System Use and User Satisfaction at Five Low-Resource Setting Hospitals in Ethiopia

    Abstract:

    Background: Electronic medical record (EMR) systems are increasingly being implemented in hospitals of developing countries to improve patient care and clinical service. However, only limited evaluation studies are available concerning the level of adoption and determinant factors of success in those settings. Objective: The objective of this study was to assess the usage pattern, user satisfaction level, and determinants of health professionals’ satisfaction towards a comprehensive EMR system implemented in Ethiopia where parallel documentation using the EMR and the paper-based medical records is in practice. Methods: A quantitative, cross-sectional study design was used to assess the usage pattern, user satisfaction level, and determinant factors of an EMR system implemented in Ethiopia based on the DeLone and McLean model of information system success. Descriptive statistical methods were applied to analyze the data and a binary logistic regression model was used to identify determinant factors. Results: Health professionals (N=422) from five hospitals were approached and 406 responded to the survey (96.2% response rate). Out of the respondents, 76.1% (309/406) started to use the system immediately after implementation and user training, but only 31.7% (98/309) of the professionals reported using the EMR during the study (after 3 years of implementation). Of the 12 core EMR functions, 3 were never used by most respondents, and they were also unaware of 4 of the core EMR functions. It was found that 61.4% (190/309) of the health professionals reported overall dissatisfaction with the EMR (median=4, interquartile range (IQR)=1) on a 5-level Likert scale. Physicians were more dissatisfied (median=5, IQR=1) when compared to nurses (median=4, IQR=1) and the health management information system (HMIS) staff (median=2, IQR=1). Of all the participants, 64.4% (199/309) believed that the EMR had no positive impact on the quality of care. The participants indicated agreement with the system and information quality (median=2, IQR=0.5) but strongly disagreed with the service quality (median=5, IQR=1). The logistic regression showed a strong correlation between system use and dissatisfaction (OR 7.99, 95% CI 5.62-9.10) and service quality and satisfaction (OR 8.23, 95% CI 3.23-17.01). Conclusions: Health professionals’ use of the EMR is low and they are generally dissatisfied with the service of the implemented system. The results of this study show that this dissatisfaction is caused mainly and strongly by the poor service quality, the current practice of double documentation (EMR and paper-based), and partial departmental use of the system in the hospitals. Thus, future interventions to improve the current use or future deployment projects should focus on improving the service quality, such as power infrastructure, user support, training, and more computers in the wards. After service quality improvement, other departments (especially inter-dependent departments) should be motivated and supported to use the EMR to avoid the dependency deadlock.
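
    A brief, hedged sketch of how the reported odds ratios arise: in a binary logistic regression, exponentiating a predictor's coefficient yields its odds ratio. The data below are random stand-ins, not the survey responses.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    service_quality = rng.integers(0, 2, 200)  # toy binary predictor (good=1)
    # Toy outcome constructed so that good service quality raises satisfaction.
    satisfied = (service_quality + rng.integers(0, 2, 200) >= 1).astype(int)

    model = LogisticRegression().fit(service_quality.reshape(-1, 1), satisfied)
    odds_ratio = np.exp(model.coef_[0][0])  # OR = exp(beta)
    print(f"OR for service quality: {odds_ratio:.2f}")
    ```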

  • A screenshot of ECG diagnosis using the telesurveillance system. The ECG waveform and the corresponding classification suggestions are revealed on the screen. The suggested heartbeat classification is marked with a blue dot. Health professionals can make decisions using this information in clinical practice.

    A Telesurveillance System With Automatic Electrocardiogram Interpretation Based on Support Vector Machine and Rule-Based Processing

    Abstract:

    Background: Telehealth care is a global trend affecting clinical practice around the world. To mitigate the workload of health professionals and provide ubiquitous health care, a comprehensive surveillance system with value-added services based on information technologies must be established. Objective: We conducted this study to describe our proposed telesurveillance system designed for monitoring and classifying electrocardiogram (ECG) signals and to evaluate the performance of ECG classification. Methods: We established a telesurveillance system with an automatic ECG interpretation mechanism. The system included: (1) automatic ECG signal transmission via telecommunication, (2) ECG signal processing, including noise elimination, peak estimation, and feature extraction, (3) automatic ECG interpretation based on the support vector machine (SVM) classifier and rule-based processing, and (4) display of ECG signals and their analyzed results. We analyzed 213,420 ECG signals that were diagnosed by cardiologists as the gold standard to verify the classification performance. Results: In the clinical ECG database from the Telehealth Center of the National Taiwan University Hospital (NTUH), the experimental results showed that the ECG classifier yielded a specificity value of 96.66% for normal rhythm detection, a sensitivity value of 98.50% for disease recognition, and an accuracy value of 81.17% for noise detection. For the detection performance of specific diseases, the recognition model mainly generated sensitivity values of 92.70% for atrial fibrillation, 89.10% for pacemaker rhythm, 88.60% for atrial premature contraction, 72.98% for T-wave inversion, 62.21% for atrial flutter, and 62.57% for first-degree atrioventricular block. Conclusions: Through connected telehealth care devices, the telesurveillance system, and the automatic ECG interpretation system, this mechanism was intentionally designed for continuous decision-making support and is reliable enough to reduce the need for face-to-face diagnosis. With this value-added service, the system could widely assist physicians and other health professionals with decision making in clinical practice. The system will be particularly helpful for patients who suffer from cardiac disease but find it inconvenient to visit the hospital frequently.
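
    A minimal sketch of the two-stage idea named in the abstract: an SVM assigns a rhythm label from extracted beat features, then a simple rule refines the output. The features, labels, and rule below are invented for illustration and are not the authors' pipeline.

    ```python
    from sklearn.svm import SVC

    # Toy feature vectors: [heart_rate, RR_interval_variability, QRS_width]
    X_train = [[70, 0.04, 0.09], [150, 0.30, 0.08],
               [65, 0.05, 0.10], [160, 0.28, 0.09]]
    y_train = ["normal", "atrial_fibrillation",
               "normal", "atrial_fibrillation"]

    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

    def classify(beat):
        label = clf.predict([beat])[0]
        hr = beat[0]
        # Rule-based post-processing: flag implausible SVM output for review.
        if label == "normal" and hr > 140:
            label = "needs_review"
        return label

    print(classify([155, 0.29, 0.08]))
    ```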

  • Single-word text cloud created in TagCrowd from all free text comments.

    Web-Based Textual Analysis of Free-Text Patient Experience Comments From a Survey in Primary Care

    Abstract:

    Background: Open-ended questions eliciting free-text comments have been widely adopted in surveys of patient experience. Analysis of free-text comments can provide deeper or new insight, identify areas for action, and initiate further investigation. Also, they may be a promising way to progress from documentation of patient experience to achieving quality improvement. The usual methods of analyzing free-text comments are known to be time and resource intensive. To efficiently deal with a large amount of free text, new methods of rapidly summarizing and characterizing the text are being explored. Objective: The aim of this study was to investigate the feasibility of using freely available Web-based text processing tools (text clouds, distinctive word extraction, key words in context) for extracting useful information from large amounts of free-text commentary about patient experience, as an alternative to more resource intensive analytic methods. Methods: We collected free-text responses to a broad, open-ended question on patients’ experience of primary care in a cross-sectional postal survey of patients recently consulting doctors in 25 English general practices. We encoded the responses to text files which were then uploaded to three Web-based textual processing tools. The tools we used were two text cloud creators: TagCrowd for unigrams, and Many Eyes for bigrams; and Voyant Tools, a Web-based reading tool that can extract distinctive words and perform Keyword in Context (KWIC) analysis. The association of patients’ experience scores with the occurrence of certain words was tested with logistic regression analysis. KWIC analysis was also performed to gain insight into the use of a significant word. Results: In total, 3426 free-text responses were received from 7721 patients (comment rate: 44.4%). The five most frequent words in the patients’ comments were “doctor”, “appointment”, “surgery”, “practice”, and “time”. The three most frequent two-word combinations were “reception staff”, “excellent service”, and “two weeks”. The regression analysis showed that the occurrence of the word “excellent” in the comments was significantly associated with a better patient experience (OR 1.96, 95% CI 1.63-2.34), while “rude” was significantly associated with a worse experience (OR 0.53, 95% CI 0.46-0.60). The KWIC results revealed that 49 of the 78 (63%) occurrences of the word “rude” in the comments were related to receptionists and 17 (22%) were related to doctors. Conclusions: Web-based text processing tools can extract useful information from free-text comments and the output may serve as a springboard for further investigation. Text clouds, distinctive word extraction, and KWIC analysis show promise in quick evaluation of unstructured patient feedback. The results are easily understandable, but may require further probing such as KWIC analysis to establish the context. Future research should explore whether more sophisticated methods of textual analysis (eg, sentiment analysis, natural language processing) could add additional levels of understanding.
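
    A small sketch of a keyword-in-context (KWIC) pass: each occurrence of a target word is shown with a window of surrounding words, the same kind of view used above to see whether "rude" referred to receptionists or doctors. The comments are invented examples.

    ```python
    comments = [
        "the receptionist was rude on the phone",
        "doctor was rude and dismissive",
        "excellent service from the whole practice",
    ]

    def kwic(texts, keyword, window=3):
        """Yield each keyword occurrence with `window` words of context."""
        for text in texts:
            words = text.split()
            for i, w in enumerate(words):
                if w == keyword:
                    left = " ".join(words[max(0, i - window):i])
                    right = " ".join(words[i + 1:i + 1 + window])
                    yield f"{left} [{keyword}] {right}"

    for line in kwic(comments, "rude"):
        print(line)
    ```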

  • A processing pipeline that transforms verbal clinical handover information into electronic structured records automatically. H Suominen, 2014, Creative Commons – Attribution Alone.

    Benchmarking Clinical Speech Recognition and Information Extraction: New Data, Methods, and Evaluations

    Abstract:

    Background: Over a tenth of preventable adverse events in health care are caused by failures in information flow. These failures are tangible in clinical handover; regardless of good verbal handover, from two-thirds to all of this information is lost after 3-5 shifts if notes are taken by hand, or not at all. Speech recognition and information extraction provide a way to fill out a handover form for clinical proofing and sign-off. Objective: The objective of the study was to provide a recorded spoken handover, annotated verbatim transcriptions, and evaluations to support research in spoken and written natural language processing for filling out a clinical handover form. This dataset is based on synthetic patient profiles, thereby avoiding ethical and legal restrictions, while maintaining efficacy for research in speech-to-text conversion and information extraction, based on realistic clinical scenarios. We also introduce a Web app to demonstrate the system design and workflow. Methods: We experiment with Dragon Medical 11.0 for speech recognition and CRF++ for information extraction. To compute features for information extraction, we also apply CoreNLP, MetaMap, and Ontoserver. Our evaluation uses cross-validation techniques to measure processing correctness. Results: The data provided were a simulation of nursing handover, as recorded using a mobile device, built from simulated patient records and handover scripts, spoken by an Australian registered nurse. Speech recognition recognized 5276 of 7277 words in our 100 test documents correctly. We considered 50 mutually exclusive categories in information extraction and achieved the F1 (ie, the harmonic mean of Precision and Recall) of 0.86 in the category for irrelevant text and the macro-averaged F1 of 0.70 over the remaining 35 nonempty categories of the form in our 101 test documents. Conclusions: The significance of this study hinges on opening our data, together with the related performance benchmarks and some processing software, to the research and development community for studying clinical documentation and language processing. The data are used in the CLEFeHealth 2015 evaluation laboratory for a shared task on speech recognition.
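
    A short sketch of the reported metrics, not the CRF++ pipeline itself: per-category F1 (the harmonic mean of precision and recall) and its macro-average over categories. The count table is a toy stand-in for the handover form's category-level results.

    ```python
    def f1(tp, fp, fn):
        """F1 from true positives, false positives, and false negatives."""
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    # category -> (true positives, false positives, false negatives)
    counts = {
        "irrelevant_text": (860, 140, 120),
        "medication":      (40, 10, 15),
        "allergy":         (12, 6, 8),
    }

    scores = {cat: f1(*c) for cat, c in counts.items()}
    macro_f1 = sum(scores.values()) / len(scores)  # unweighted mean over categories
    print(scores)
    print(f"macro-averaged F1: {macro_f1:.2f}")
    ```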

  • Image of pregnant woman, drawn by friend.  Gifted to our project for use, no attribution required.

    Balancing the Interests of Patient Data Protection and Medication Safety Monitoring in a Public-Private Partnership

    Abstract:

    Obtaining data without the intervention of a health care provider represents an opportunity to expand understanding of the safety of medications used in difficult-to-study situations, like the first trimester of pregnancy when women may not present for medical care. While it is widely agreed that personal data, and in particular medical data, needs to be protected from unauthorized use, data protection requirements for population-based studies vary substantially by country. For public-private partnerships, the complexities are enhanced. The objective of this viewpoint paper is to illustrate the challenges related to data protection based on our experiences when performing relatively straightforward direct-to-patient noninterventional research via the Internet or telephone in four European countries. Pregnant women were invited to participate via the Internet or using an automated telephone response system in Denmark, the Netherlands, Poland, and the United Kingdom. Information was sought on medications, other factors that may cause birth defects, and pregnancy outcome. Issues relating to legal controllership of data were most problematic; assuring compliance with data protection requirements took about two years. There were also inconsistencies in the willingness to accept nonwritten informed consent. Nonetheless, enrollment and data collection have been completed, and analysis is in progress. Using direct reporting from consumers to study the safety of medicinal products allows researchers to address a myriad of research questions relating to everyday clinical practice, including treatment heterogeneity in population subgroups not traditionally included in clinical trials, like pregnant women, children, and the elderly. Nonetheless, there are a variety of administrative barriers relating to data protection and informed consent, particularly within the structure of a public-private partnership.

  • This image was created by the authors using deidentified patient data. It represents a classifier picking out text.

    Prioritization of Free-Text Clinical Documents: A Novel Use of a Bayesian Classifier

    Abstract:

    Background: The amount of incoming data into physicians’ offices is increasing, thereby making it difficult to process information efficiently and accurately to maximize positive patient outcomes. Current manual processes of screening for individual terms within long free-text documents are tedious and error-prone. This paper explores the use of statistical methods and computer systems to assist clinical data management. Objective: The objective of this study was to verify and validate the use of a naive Bayesian classifier as a means of properly prioritizing important clinical data, specifically that of free-text radiology reports. Methods: One hundred reports were first used to train the algorithm based on physicians’ categorization of clinical reports as high-priority or low-priority. Then, the algorithm was used to evaluate 354 reports. Additional processing steps such as section extraction, text preprocessing, and negation detection were performed. Results: The algorithm evaluated the 354 reports with discrimination between high-priority and low-priority reports, resulting in a bimodal probability distribution. In all scenarios tested, the false negative rates were below 1.1% and the recall rates ranged from 95.65% to 98.91%. In the case of 50% prior probability and 80% threshold probability, the accuracy of this Bayesian classifier was 93.50%, with a positive predictive value (precision) of 80.54%. It also showed a sensitivity (recall) of 98.91% and an F-measure of 88.78%. Conclusions: The results showed that the algorithm could be trained to detect abnormal radiology results by accurately screening clinical reports. Such a technique can potentially be used to enable automatic flagging of critical results. In addition to accuracy, the algorithm was able to minimize false negatives, which is important for clinical applications. We conclude that a Bayesian statistical classifier, by flagging reports with abnormal findings, can assist a physician in reviewing radiology reports more efficiently. This higher level of prioritization allows physicians to address important radiologic findings in a more timely manner and may also aid in minimizing errors of omission.
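
    A minimal sketch of the prioritization idea: a naive Bayes text classifier estimates the probability that a free-text report is high priority, and a threshold turns that probability into a flag. The training texts are invented examples, not the study's reports.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_texts = [
        "large pleural effusion with mediastinal shift",   # high priority
        "acute intracranial hemorrhage identified",        # high priority
        "no acute cardiopulmonary abnormality",             # low priority
        "unremarkable study, no significant findings",      # low priority
    ]
    train_labels = [1, 1, 0, 0]  # 1 = high priority

    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(train_texts), train_labels)

    THRESHOLD = 0.8  # mirrors the 80% threshold probability scenario above
    report = ["new large hemorrhage with shift"]
    p_high = clf.predict_proba(vec.transform(report))[0, 1]
    print(f"P(high priority) = {p_high:.2f} -> "
          f"{'FLAG' if p_high >= THRESHOLD else 'routine'}")
    ```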


Latest Submissions Open for Peer-Review:

  • Design of Tele-Medical Controlled ECG Patch Monitor to assist patients during a heart attack

    Date Submitted: Aug 26, 2015

    Open Peer Review Period: Aug 26, 2015 - Oct 21, 2015

    Background: This paper deals with real-time ubiquitous health care monitoring devices. Objective: The goal is not only to render a medical service in hospitals and medical offices, but also to provide a reliable service during normal daily life. Methods: To this end, a monitoring system for patients suffering from heart disease was designed. The system, called the "ECG Patch Monitor", enables cardiac data collection and analysis in real-world environments such as at work or at home. It is an adhesive device attached directly to the skin. The ECG Patch Monitor aims to prevent heart attack or any other form of heart failure. It comprises an algorithm to detect cardiac abnormalities. Once an anomaly is detected, a SIM card embedded in the device alerts the health care center, which remotely assists the device through a platform. The ECG Patch Monitor can act as a defibrillator in case of cardiac arrest by delivering an electric shock. The patient can also respond to a call made by the cardiologist (to verify his or her health condition) via a loudspeaker installed in the device. In addition, the device can perform automatic drug injection, locate the patient through the Global Positioning System (GPS), and allow the doctor to send an ambulance if necessary. Results: A patent application for the design of the ECG Patch Monitor has been submitted to secure legal protection. Conclusions: The system can precisely detect cardiac anomalies using a statistics-based algorithm named HBOS (histogram-based outlier score), which gives the best results compared with other unsupervised anomaly detection algorithms and achieves high performance with an error rate close to zero.
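
    An illustrative sketch of histogram-based outlier scoring (HBOS), the kind of unsupervised statistic the abstract names for flagging abnormal beats. The feature values and bin count are invented; this is not the authors' implementation.

    ```python
    import numpy as np

    def hbos_scores(X, bins=10):
        """Sum of negative log bin densities per feature; higher = more anomalous."""
        X = np.asarray(X, dtype=float)
        scores = np.zeros(len(X))
        for j in range(X.shape[1]):
            hist, edges = np.histogram(X[:, j], bins=bins, density=True)
            idx = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, bins - 1)
            dens = np.maximum(hist[idx], 1e-12)  # avoid log(0) for empty bins
            scores += -np.log(dens)
        return scores

    # Toy beat features: [RR interval (s), QRS width (s)]
    beats = [[0.80, 0.08], [0.82, 0.09], [0.79, 0.08],
             [0.81, 0.09], [0.35, 0.16]]
    print(hbos_scores(beats).round(2))  # the last, aberrant beat scores highest
    ```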

  • Computerized automated quantification of subcutaneous and visceral adipose tissue from CT scan

    Date Submitted: Jul 10, 2015

    Open Peer Review Period: Jul 11, 2015 - Sep 5, 2015

    Background: Computed tomography (CT) is often viewed as one of the most accurate methods for measuring visceral adipose tissue (VAT). However, measuring VAT and subcutaneous adipose tissue (SAT) from CT is a time-consuming and tedious process. Thus, evaluating or studying patients’ obesity in clinical trials is cumbersome and limiting. Objective: To resolve these problems, we propose an image-processing-based automated method for measuring adipose tissue in the entire abdominal region. Methods: Our proposed method detects SAT and VAT using a separation mask based on the muscles of the human body. The separation mask is the region that minimizes the unnecessary space between the closed path and the muscle area. We also created a correction mask based on bones and corrected the error in VAT. Results: To validate the method, we measured the volume of total adipose tissue (TAT), SAT, and VAT for 100 CT datasets using the automatic method and compared the results with manual measurements obtained by two experts. The Dice similarity coefficient (DSC) between the first expert’s manual measurement and the automatic one was 0.99, 0.98, and 0.97 for TAT, SAT, and VAT, respectively; between the second expert’s measurement and the automatic one it was 0.98, 0.98, and 0.97. Moreover, the intraclass correlation coefficients (ICC) between the automatic method and the two experts’ manual measurements indicate high reliability, with ICCs of .99 for all measured items (P<.001). Conclusions: These results confirm the accuracy and reliability of the proposed method. The method is expected to be convenient and useful in the clinical evaluation and study of obesity in patients for whom SAT and VAT must be measured.
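
    A short sketch of the evaluation metric used above: the Dice similarity coefficient between an automatic segmentation mask and a manual one. The tiny binary masks are fabricated for illustration.

    ```python
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """DSC = 2|A intersect B| / (|A| + |B|) for binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto   = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])  # toy automatic mask
    manual = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])  # toy expert mask
    print(f"DSC = {dice(auto, manual):.2f}")  # 2*3/(4+3) = 0.86
    ```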
