Published in Vol 7, No 3 (2019): Jul-Sep

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/14185.
A Real-Time Automated Patient Screening System for Clinical Trials Eligibility in an Emergency Department: Design and Evaluation

Original Paper

Yizhao Ni; Monica Bermudez; Stephanie Kennebeck; Stacey Liddy-Hicks; Judith Dexheimer

Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States

Corresponding Author:

Yizhao Ni, PhD

Cincinnati Children's Hospital Medical Center

3333 Burnet Ave

Cincinnati, OH, 45229

United States

Phone: 1 5138034269

Email: yizhao.ni@cchmc.org


Background: One critical hurdle for clinical trial recruitment is the lack of an efficient method for identifying subjects who meet the eligibility criteria. Given the large volume of data documented in electronic health records (EHRs), it is labor-intensive for the staff to screen relevant information, particularly within the time frame needed. To facilitate subject identification, we developed a natural language processing (NLP) and machine learning–based system, Automated Clinical Trial Eligibility Screener (ACTES), which analyzes structured data and unstructured narratives automatically to determine patients’ suitability for clinical trial enrollment. In this study, we integrated the ACTES into clinical practice to support real-time patient screening.

Objective: This study aimed to evaluate ACTES’s impact on the institutional workflow, prospectively and comprehensively. We hypothesized that compared with the manual screening process, using EHR-based automated screening would improve efficiency of patient identification, streamline patient recruitment workflow, and increase enrollment in clinical trials.

Methods: The ACTES was fully integrated into the clinical research coordinators’ (CRC) workflow in the pediatric emergency department (ED) at Cincinnati Children’s Hospital Medical Center. The system continuously analyzed EHR information for current ED patients and recommended potential candidates for clinical trials. Relevant patient eligibility information was presented in real time on a dashboard available to CRCs to facilitate their recruitment. To assess the system’s effectiveness, we performed a multidimensional, prospective evaluation for a 12-month period, including a time-and-motion study, quantitative assessments of enrollment, and postevaluation usability surveys collected from the CRCs.

Results: Compared with manual screening, the use of ACTES reduced the patient screening time by 34% (P<.001). The saved time was redirected to other activities such as study-related administrative tasks (P=.03) and work-related conversations (P=.006) that streamlined teamwork among the CRCs. The quantitative assessments showed that automated screening improved the numbers of subjects screened, approached, and enrolled by 14.7%, 11.1%, and 11.1%, respectively, suggesting the potential of ACTES in streamlining recruitment workflow. Finally, the ACTES achieved a system usability scale score of 80.0 in the postevaluation surveys, suggesting that it was a good computerized solution.

Conclusions: By leveraging NLP and machine learning technologies, the ACTES demonstrated good capacity for improving efficiency of patient identification. The quantitative assessments demonstrated the potential of ACTES in streamlining recruitment workflow and improving patient enrollment. The postevaluation surveys suggested that the system was a good computerized solution with satisfactory usability.

JMIR Med Inform 2019;7(3):e14185

doi:10.2196/14185


Introduction

Background

Clinical trials are experiments in biomedical research involving human subjects. These trials advance medical science and are a valuable step toward providing new treatments. According to ClinicalTrials.gov, there are 34,240 clinical trials actively recruiting subjects in the United States [1]. However, challenges with patient recruitment for clinical trials are recognized as major barriers to the timely and efficacious conduct of translational research [2-8]. In current practice, clinical trial staff (eg, clinical research coordinators; CRCs) manually screen patients for eligibility before approaching them for enrollment. The process includes reviewing the patients’ electronic health records (EHRs) for demographics and clinical conditions, collating and matching the information to trial requirements, and identifying eligible candidates based on the requirements [3]. One critical hurdle is the lack of an efficient method for detecting subjects who meet eligibility criteria [2,5,8]. Given the large volume of data documented in EHRs, it is labor-intensive for the staff to screen relevant information, particularly within the time frame needed. For patients presenting during clinical visits, screening would ideally take place early enough in the visits such that the eligible candidates could be approached for enrollment without prolonging their stay. The workflow not only poses a significant financial burden for an institution undertaking clinical research, but also hinders the successful completion of clinical studies if eligible candidates cannot be approached [9].

In recent years, automated patient screening for clinical trials has become an active area of research and development, and several informatics-based approaches have been proposed. These approaches either (1) manually design rule-based triggers for a clinical trial (eg, International Classification of Diseases-9 codes) to identify patient cohorts [10-14] or (2) automatically match patterns (eg, symptoms and diseases) between clinical trial descriptions and EHR information to identify potential trial-patient matches [15-22]. Rule-based triggers are widely used in current practice in the form of trial-specific best practice advisories, but their accuracy remains an issue [23]. Automated matching methods rely on advanced technologies such as natural language processing (NLP) to improve the accuracy of subject identification [15-22]. However, these applications are usually experimental and their performance in clinical practice remains unclear [24]. Few studies explicitly report patient screening efficiency in prospective settings. Consequently, even though manual screening is inefficient, it remains standard practice in conducting clinical trial research.

In our recent work, we developed an NLP- and machine learning–based system, the Automated Clinical Trial Eligibility Screener (ACTES), to automate subject identification for clinical trials [18,19,25]. The system extracted patient demographics and clinical assessments (eg, diagnostic tests) from structured EHR data. It also identified patients’ clinical conditions and treatments (eg, symptoms, diseases, and surgery history) from unstructured clinical narratives using NLP and machine learning technologies. Leveraging information retrieval algorithms, the system matched the extracted content with the eligibility criteria to determine patients’ suitability for clinical trials. The ACTES thus addresses a task that is cognitively challenging for humans because of the large volume of data that must be reviewed in a short time. In a gold standard–based retrospective evaluation of 13 pediatric trials, the system achieved statistically significant improvements in screening efficiency and suggested a potential reduction in staff workload [18]. It was further validated on a set of 55 pediatric oncology trials, where a similar reduction in screening effort was observed [19]. To test its generalizability on external data sources, the ACTES was submitted to the 2018 National NLP Clinical Challenges (Track 1), which aimed to automate identification of adult patients for 13 clinical trial criteria (eg, myocardial infarction and advanced cardiovascular disease) [26]. The ACTES achieved an overall performance of 90.3% (micro F-measure), placing it in a statistical tie with the top 5 of 101 systems [27]. Although the system achieved promising results in patient identification, the imperfection of NLP technologies in understanding language semantics (eg, word sense disambiguation) and syntax (eg, assertion detection) caused multiple types of false positive recommendations [18,19]. Additional study is therefore required to investigate the impact of these errors on system integration and end-user satisfaction.

To this end, we integrated the ACTES into the institutional workflow to support real-time patient screening. To evaluate its effectiveness on patient recruitment, we implemented a multidimensional evaluation, including a time-and-motion study, quantitative assessments of enrollment, and postevaluation usability surveys. A time-and-motion study is a continuous, observational study in which an observer watches the subject (eg, a CRC) performing a task and uses a timekeeping device to record the time taken to accomplish the task [28]. The methodology has been used to evaluate the efficiency of clinical activities to reduce redundant work and improve workflow [29-31]. Results of time-and-motion analyses can also identify positive and negative effects of new technologies during their workflow integration [32-34]. The postevaluation surveys were implemented with the system usability scale (SUS), a standardized questionnaire measuring users’ perceived usability of computerized solutions [35]. The SUS is a widely used and validated survey instrument and has been applied to assess the usability of patient-oriented computerized programs in prior clinical studies [36-38].

Objective

This study sought to evaluate the ACTES’s impact on the institutional workflow, prospectively and comprehensively. We hypothesized that compared with the manual screening process, using EHR-based automated screening would improve efficiency of patient identification, streamline patient recruitment workflow, and increase enrollment in clinical trials. Specific aims of this study were (1) to evaluate the effects of ACTES on improving patient screening via an observational, randomized time-and-motion study, (2) to assess the system’s impact on patient recruitment using quantitative assessments of enrollment, and (3) to identify the system’s advantages and limitations with postevaluation usability surveys. This study is among the first to investigate real-time integration of NLP- and machine learning–based patient screening into clinical practice. Our long-term objective is to develop an automated system that will contribute to a more efficient and scalable paradigm in clinical trial enrollment across health care institutions with an EHR in place.


Methods

Setting and Participants

The pediatric emergency department (ED) at Cincinnati Children’s Hospital Medical Center (CCHMC) is an urban, level 1 trauma center with more than 70,000 patient visits annually. The department is an appropriate place for many clinical studies because of the variety and complexity of presenting complaints and varied patient demographics [39]. The ED staffs 8 full-time CRCs (including a CRC manager) to recruit subjects for clinical studies from 8 am to midnight, 6 days a week, and from 8 am to 5 pm on Sundays. Owing to the unplanned nature of ED visits, CRCs have to manually screen and enroll patients during each visit, without an opportunity to preplan or sort. The average length of stay in the CCHMC ED is 3.4 hours. Given the fluctuating patient volumes in this busy clinical environment, although ample potential research subjects are present, there is little time for the CRCs to repetitively review EHRs, locate clinical staff to answer questions about patients’ conditions or treatments, and approach eligible candidates for enrollment. For these reasons, this study focused on the integration of the ACTES into the ED. The EHR in use during the study period was Epic (Epic Systems).

Ethics approval for this study was provided by the CCHMC institutional review board (study ID: 2013-4241). After system integration, we performed a prospective study between October 1, 2017, and September 30, 2018, which involved a total of 46,612 patient visits during CRC staffing time. A total of 7 CRCs consented to and participated in the study by using the ACTES during their workday and providing feedback. As the CRC manager supervised the staff and had little involvement in patient screening, he was excluded from the study.

Clinical Trials

During the study period, there were 6 clinical trials actively recruiting patients in the CCHMC ED. The trials required review of either structured data (eg, demographics, vital signs, medications, and procedure orders) or patients’ clinical conditions from unstructured narrative notes (eg, chief complaints, signs, and symptoms) or both for enrollment. The clinical trials covered a variety of diseases, including respiratory tract infection, traumatic brain injury, and serious bacterial infections. The summary of these clinical trials and their core eligibility criteria are presented in Multimedia Appendix 1.

Patient Recruitment With Automated Screening

We leveraged a human factors engineering framework to design the recruitment workflow with automated patient screening [40]. The process involved an iterative design of system modules with the CRC team using a series of group meetings. Figure 1 diagrams an overview of the patient recruitment workflow, where the ACTES modules are highlighted in blue. Details of the module functionalities can be found in our earlier publications [18,19,27].

Patient information was recorded routinely in the EHR as structured entries (eg, vital signs) and unstructured clinical notes (eg, signs and symptoms) as per standard clinical workflow. We did not modify either the content or the structure of how the clinical entries were created. The ACTES ran continuously on a secured, Health Insurance Portability and Accountability Act–compliant server to extract structured and unstructured entries from the EHR for current ED patients (process 1).

Given the EHR information, the system first excluded patients whose structured entries did not meet trial inclusion requirements. The structured entries included age, sex, race, language, legal guardian presence, vital signs, acuity, medication, and procedure orders (Multimedia Appendix 1). The complete sets of codes (eg, Current Procedural Terminology codes) for medication and procedure orders were provided by the clinical trial investigators. For the remaining patients, the system identified relevant information (eg, symptoms) from unstructured clinical narratives using NLP technologies. Details of the NLP process have been specified in our earlier studies [18,19]. To summarize, the clinical narratives were first tokenized and lemmatized, and duplicate sentences and punctuation were removed. The system then identified relevant phrases (eg, symptom-related keywords) from the text and extracted their medical concepts from clinical terminologies, including concept unique identifiers (CUIs) from the Unified Medical Language System, Systematized Nomenclature of Medicine—Clinical Terms codes, and a standardized nomenclature for clinical drugs [41-43]. Assertion (negation, temporal, and experiencer) detection was applied to convert the extracted terms to the corresponding format; for example, the phrase to rule out pneumonia was converted to NEG_C0032285. The same process was applied to identify phrases and medical concepts from unstructured trial requirements. Finally, information retrieval algorithms matched the extracted patient terms against the trial terms and ranked patient candidates by degree of matching (process 2). The ranked list of patients, along with their demographics and clinical information, was displayed on a Web-based dashboard available to the CRCs (process 3). The information was refreshed at 10-min increments to accommodate real-time updates.

Given the recommended patients as potential subjects for a clinical trial, the CRCs performed additional EHR screening to confirm the candidates’ eligibility before enrollment (process 4). If an eligible candidate was identified, the CRC would document the patient’s eligibility and approach him or her for enrollment before discharge (processes 5 and 6). If a patient was deemed not eligible, the CRC would briefly document the reason. The CRC documentation was fed to the active learner in real time (process 7). This module used active learning technologies to analyze the documentation and patient EHRs to find pertinent information associated with eligibility [18]. For instance, the active learner automatically extracted the informative term skull fractures (CUI: C0037304) from the EHR of a patient eligible for the clinical trial M-TBI (Multimedia Appendix 1) to supplement the definition of head injury in the inclusion criteria. This information was leveraged to adjust the trial criteria, which were used to match future candidates during patient identification (process 8).
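To make the matching step concrete, the sketch below illustrates, in highly simplified form, how asserted concepts extracted from text can be compared against trial criteria to rank candidates. The lexicon, negation cues, and scoring function are hypothetical stand-ins; the production ACTES uses full UMLS-based concept extraction, trained assertion detection, and information retrieval scoring as described above.

```python
# Simplified sketch of concept extraction with assertion handling and
# trial-patient matching. The lexicon, cues, and score are hypothetical;
# ACTES itself uses UMLS-based NLP and information retrieval algorithms.

def to_asserted_concepts(phrases):
    """Map phrases to (possibly negated) concept unique identifiers (CUIs).

    Stands in for concept extraction + assertion detection: the phrase
    'rule out pneumonia' becomes 'NEG_C0032285'.
    """
    lexicon = {"pneumonia": "C0032285", "head injury": "C0018674"}  # toy lexicon
    negation_cues = ("rule out", "no ", "denies")                   # toy cue list
    concepts = set()
    for phrase in phrases:
        for term, cui in lexicon.items():
            if term in phrase:
                negated = any(cue in phrase for cue in negation_cues)
                concepts.add(("NEG_" if negated else "") + cui)
    return concepts

def match_score(patient_concepts, trial_concepts):
    """Fraction of the trial's criterion concepts found in the patient record."""
    if not trial_concepts:
        return 0.0
    return len(patient_concepts & trial_concepts) / len(trial_concepts)

# Hypothetical example: one inclusion concept, two patients.
trial = to_asserted_concepts(["head injury"])
patients = {
    "patient_a": to_asserted_concepts(["blunt head injury after fall"]),
    "patient_b": to_asserted_concepts(["rule out pneumonia"]),
}
ranking = sorted(patients, key=lambda p: match_score(patients[p], trial),
                 reverse=True)
print(ranking)  # ['patient_a', 'patient_b']
```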

Figure 1. The overview of patient recruitment workflow with automated patient screening. API: Application Programming Interface; ACTES: Automated Clinical Trial Eligibility Screener; CRC: Clinical Research Coordinator; EHR: Electronic Health Record.

Prospective Evaluations

To assess the system’s impact on the CRC workflow, we performed a multidimensional, prospective evaluation that included a time-and-motion study, quantitative assessments of enrollment, and postevaluation usability surveys collected from the CRCs.

The Time-and-Motion Study

To evaluate the system effects on improving patient screening efficiency, we performed an observation-based, randomized time-and-motion study in the ED. One observer tracked how a CRC allocated his or her time during a 120-min observation session at 30-second increments. In each session, the observer shadowed the CRC to observe the patient recruitment workflow, recording the 1 or 2 major activities the CRC was engaged in during each 30-second period. At the end of the session, the observer calculated the percentage of time the CRC spent on each activity.

The list of activities performed by the CRCs was developed in our earlier study [9]. The major activities included patient screening, patient contact, performing procedures, waiting, and other activities, each of which had multiple subcategories. A research assistant independent of the CRC team was hired as the observer to avoid potential biases in activity documentation. The observer shadowed the CRCs step by step without conversation to mitigate the Hawthorne effect [44].

The study included 96 observation sessions distributed evenly among the CRCs and staff shifts within four 1-month periods. Each 1-month period comprised 24 observation sessions; in 12 of them, sampled stratified by CRC and staff shift, the ACTES was used to facilitate patient screening. The 4 time periods covered the fall (October 2017), winter (February 2018), spring (April 2018), and summer (August-September 2018) to mitigate seasonal effects on patient recruitment. We compared the percentage of time spent on CRC activities (eg, patient screening) with and without using the ACTES. The statistical significance of the difference in time spent per activity was assessed using an unpaired t test [45].
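As a sketch of the underlying arithmetic, the following shows how per-activity percentages can be derived from the 30-second interval logs of a session and how two groups of sessions can be compared with an unpaired t test. All values are fabricated for illustration; this is not the study's analysis code.

```python
# Sketch of the time-and-motion arithmetic: per-activity percentages from
# 30-second interval logs, then an unpaired t test across observation
# sessions. All numbers are fabricated for illustration.
from collections import Counter
from scipy import stats

def activity_percentages(intervals):
    """Percentage of a session spent on each recorded activity.

    `intervals` lists the activity recorded in each 30-second period of a
    120-min session (240 entries in a full session).
    """
    counts = Counter(intervals)
    return {act: 100.0 * n / len(intervals) for act, n in counts.items()}

# Hypothetical per-session percentages of time on electronic screening.
with_actes = [24.1, 27.5, 23.9, 26.8, 25.2]
without_actes = [39.0, 37.2, 40.1, 38.3, 37.9]

t_stat, p_value = stats.ttest_ind(with_actes, without_actes)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```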

Quantitative Assessments of Enrollment

In the ED, potentially eligible candidates could be missed if the CRCs were busy screening and enrolling other subjects. We hypothesized that by improving the efficiency of patient identification, the ACTES would subsequently improve patient recruitment. To this end, we calculated 3 enrollment statistics: (1) patients screened, defined as the number of patients for whom the CRCs reviewed a significant portion of the EHR (eg, demographics, chief complaints, and procedure orders); (2) patients approached, defined as the number of patients physically approached by the CRCs for enrollment; and (3) patients enrolled, defined as the number of patients enrolled in a trial. The statistics were aggregated on a weekly basis and compared with historical controls documented in the CRC study database, which was routinely used to record screening and enrollment information. For each clinical trial, the enrollment statistics when using the ACTES were compared with those of the same time period in the previous year, when the ACTES was not in place. The results were assessed individually and in aggregate; unpaired t tests were used to evaluate the statistical significance of the difference in enrollment performance.
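As a minimal sketch of the weekly aggregation step, the snippet below rolls hypothetical screening-log records up into per-trial, per-week counts of the three statistics. The record layout and field names are illustrative assumptions, not the actual schema of the CRC study database.

```python
# Sketch of aggregating screening-log records into weekly enrollment
# statistics. The records and field layout are hypothetical stand-ins
# for the CRC study database.
from collections import defaultdict
from datetime import date

# (trial, event date, outcome) rows, where outcome is one of
# "screened", "approached", or "enrolled".
log = [
    ("Torsion", date(2017, 10, 2), "screened"),
    ("Torsion", date(2017, 10, 2), "approached"),
    ("Torsion", date(2017, 10, 9), "enrolled"),
]

weekly_counts = defaultdict(int)
for trial, day, outcome in log:
    iso_week = day.isocalendar()[1]  # aggregate on a weekly basis
    weekly_counts[(trial, iso_week, outcome)] += 1

for key in sorted(weekly_counts):
    print(key, weekly_counts[key])
```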

Postevaluation Usability Surveys

Usability is the effectiveness, efficiency, and satisfaction with which users can perform a specific set of tasks in a particular environment [46]. It is one of the most important factors that impact users’ adoption and meaningful use of health information technologies [47]. As our ultimate goal is to disseminate the ACTES across health care institutions, we evaluated the system usability periodically in the study to inform its future refinement.

After each 1-month time-and-motion evaluation, the CRCs were asked to complete a postevaluation usability survey comprising the SUS and a set of open-ended questions. The templated usability survey is presented in Multimedia Appendix 2. The SUS comprises 10 statements rated on a 5-point agreement scale from strongly disagree to strongly agree [35]. On the basis of earlier research, a score of 68 is considered average, with higher scores reflecting greater than average usability across comparable applications. The SUS results were analyzed quantitatively to assess the usability of the ACTES over time. The open-ended questions were analyzed qualitatively to identify advantages and limitations of the ACTES and to refine the system design and user interface.
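For reference, SUS scoring follows Brooke's standard formula: odd-numbered (positively worded) items contribute (rating − 1), even-numbered (negatively worded) items contribute (5 − rating), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch, with fabricated ratings:

```python
# Standard SUS scoring (Brooke): odd-numbered items contribute (rating - 1),
# even-numbered items (5 - rating); the sum is scaled by 2.5 to 0-100.
# The example ratings are fabricated for illustration.

def sus_score(ratings):
    """Compute the SUS score from the 10 item ratings (each 1-5)."""
    assert len(ratings) == 10
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```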


Results

Time-and-Motion Study

Table 1 presents the percentage of time spent on CRC activities averaged over all observation sessions. The CRCs spent 38.5% of their time on electronic screening without the ACTES; this share was reduced, statistically significantly, to 25.6% when the ACTES was in place (P<.001). Figure 2 illustrates a regression analysis of electronic screening time across the study days. Without the ACTES, screening time increased in the winter and decreased in the spring and summer. With the system, screening time decreased gradually, with a mild increase in the winter season.
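A minimal sketch of the kind of trend analysis behind Figure 2 is a least-squares linear fit of screening-time percentage against study day; the observations below are fabricated, and the study's actual regression may differ in form.

```python
# Sketch of the Figure 2 trend analysis: a least-squares linear fit of the
# percentage of time on electronic screening against study day.
# The observations below are fabricated for illustration.
import numpy as np

study_day = np.array([5, 40, 75, 110, 145, 180, 215, 250])
screening_pct = np.array([30.1, 28.7, 29.5, 26.2, 25.8, 24.0, 23.1, 22.4])

slope, intercept = np.polyfit(study_day, screening_pct, deg=1)
print(f"trend: {slope:.3f} percentage points per day, intercept {intercept:.1f}%")
```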

Table 1. Percentage of time spent on clinical research coordinator activities with and without using automated patient screening.

| Category and clinical research coordinator activity | With ACTESa, % | Without ACTES, % |
| Patient screening | | |
|   Electronic screening (browsing electronic health record or ACTES) | 25.6b | 38.5 |
|   In-person screening (with physician, nurse, and patient) | 1.5 | 2.1 |
|   Logging patient eligibility in study databases | 5.2 | 6.6 |
|   Nonelectronic screening (reviewing log sheet) | 0.2b | 0.4 |
| Patient contact | | |
|   Introducing study | 0.5 | 0.4 |
|   Consent procedures | 0.9 | 0.4 |
|   Unclassified patient contact | 0.0 | 0.3 |
| Performing study procedures | | |
|   Clinical research coordinator performing study procedures and collecting data (eg, interviews, sample collection) | 5.9 | 5.3 |
| Waiting | | |
|   Waiting for clinical procedures to be completed | 0.6 | 0.5 |
|   Waiting for sample collection to be completed | 1.5b | 0.5 |
|   Other unspecified waiting | 1.2 | 0.8 |
| Other activities | | |
|   Study-related admin tasks (eg, reviewing study packet, preparing supplies) | 15.8b | 10.9 |
|   Work-related conversations | 10.5b | 6.6 |
|   Miscellaneous work-related admin tasks | 4.7 | 4.6 |
|   Emails/Web browsing | 11.1 | 8.8 |
|   Walking | 7.1 | 6.3 |
|   Personal time (nonwork-related activities) | 7.6 | 6.9 |

aACTES: Automated Clinical Trial Eligibility Screener.

bThe difference between the with-ACTES and without-ACTES percentages for this activity is statistically significant at the .05 level.

Figure 2. The percentage of time on electronic screening across the study days. ACTES: Automated Clinical Trial Eligibility Screener.
Table 2. The average numbers of subjects screened, approached, and enrolled per week with and without automated patient screening.

| Trial abbreviation | With automated screening: screened / approached / enrolled | Without automated screening: screened / approached / enrolled |
| Biosignature | 29.4a / 2.0 / 1.2 | 25.3 / 2.0 / 1.4 |
| CARPE-DIEM | 62.6a / 6.9 / 4.2 | 54.5 / 8.2 / 5.2 |
| ED-STARS | 17.5 / 8.8 / 6.7 | 17.2 / 7.8 / 5.8 |
| HealthyFamily | 52.4a / 39.0a / 4.3 | 44.1 / 33.8 / 4.1 |
| M-TBI | 10.1 / 0.9 / 0.8 | 12.3b / 1.3 / 0.5 |
| Torsion | 4.0a / 1.1 / 2.4a | 2.2 / 0.9 / 1.5 |
| Average | 29.6 / 10.1 / 3.0 | 25.8 / 9.1 / 2.7 |

aThe enrollment statistic with automated screening is significantly higher than that without automation (P<.05).

bThe enrollment statistic with automated screening is significantly lower than that without automation (P<.05).

In addition to electronic screening, the overall patient screening time by CRCs was reduced from 47.6% without ACTES to 32.5% with ACTES (P<.001). The saved time was redirected to work-related activities, including waiting for sample collection (P=.03), study-related administrative tasks (P=.03), and work-related conversations (P=.006).

Quantitative Assessments of Enrollment

Table 2 shows the average numbers of subjects screened, approached, and enrolled per week with and without the automated patient screening. Compared with historical controls, using the ACTES resulted in more screened patients averaged over all trials (P=.08). The improvements were statistically significant for the majority of clinical trials. The use of ACTES also improved the numbers of approached and enrolled patients, although the difference was statistically significant for only a couple of clinical trials (HealthyFamily and Torsion).

Postevaluation Usability Surveys

Table 3 presents the SUS scores averaged over the CRCs after each time-and-motion evaluation. The total SUS score was 67.9 when the ACTES was first in place, suggesting it to be an acceptable computerized application [35]. By the end of the study period, the score had improved to 80.0, which represents a good computerized solution. The ratings of individual SUS statements reflected different aspects of the system’s usability and the CRCs’ satisfaction in using the application.

Table 3. The average scores of system usability scale given by the clinical research coordinator participants.

| Statement (5-point scale, mean [SD])a | Fallb | Winterc | Springd | Summere |
| 1. I would like to use this system frequently. | 2.4 (1.1) | 3.2 (1.1) | 3.7 (0.9) | 3.2 (0.6) |
| 2. I found the system unnecessarily complex. | 2.1 (1.0) | 1.8 (1.4) | 1.5 (0.5) | 1.4 (0.7) |
| 3. I thought the system was easy to use. | 4.6 (0.5) | 4.5 (0.5) | 4.7 (0.5) | 4.7 (0.5) |
| 4. I would need the support of a technician to use this system. | 1.7 (1.1) | 1.1 (0.4) | 1.2 (0.4) | 1.1 (0.4) |
| 5. The various functions in the system were well integrated. | 3.3 (0.6) | 3.3 (1.1) | 3.8 (0.4) | 3.7 (0.7) |
| 6. I thought there was too much inconsistency in this system. | 3.1 (1.1) | 3.6 (1.0) | 3.3 (0.7) | 2.1 (1.0) |
| 7. Most people would learn to use this system very quickly. | 4.5 (0.8) | 4.5 (0.5) | 4.5 (0.5) | 4.4 (0.8) |
| 8. I found the system very cumbersome to use. | 3.3 (1.3) | 2.3 (0.9) | 2.0 (1.0) | 1.9 (0.9) |
| 9. I felt very confident using the system. | 4.0 (1.3) | 4.2 (0.5) | 4.7 (0.5) | 4.0 (1.2) |
| 10. I needed to learn a lot of things before I could use this system. | 1.3 (0.4) | 1.6 (0.5) | 2.2 (1.3) | 1.4 (0.4) |

a1 indicates strongly disagree and 5 strongly agree.

bOverall score of system usability scale (SUS): 67.9.

cOverall score of SUS: 72.5.

dOverall score of SUS: 78.0.

eOverall score of SUS: 80.0.


Discussion

Principal Findings

Compared with traditional manual screening, using the ACTES significantly reduced the screening time by 34% (Table 1). The saved time was redirected to other activities, such as administrative tasks and work-related conversations, that streamlined teamwork among the CRCs. The regression analysis of screening time illustrated the known seasonal effects on patient recruitment. Owing to an increase in patient volume during viral respiratory seasons (fall and winter), screening time increased without the ACTES, as expected from prior time trends in the ED. In comparison, screening time decreased gradually with the ACTES, reflecting the CRCs’ learning curve in adopting new technologies. Projecting the regression results to future data, we estimated a 50% reduction in screening effort once the CRCs fully adopt our system. These promising observations suggest continued benefits from automated patient screening. In addition, the ACTES will enable the development of a continuous, 24-hour screening service, which could facilitate recruitment of subjects during nonstaffed periods (approximately one-third of patient visits).

Compared with historical controls, the enrollment statistics with the ACTES further confirmed its effectiveness in improving CRC screening efficiency (Table 2). We observed that automated screening was more useful for clinical trials with multiple conditions (eg, HealthyFamily) and vague eligibility descriptions (eg, CARPE-DIEM and Torsion). In a busy clinical environment, it was difficult for the CRCs to memorize a variety of clinical conditions and match them to a large volume of patients. Use of the ACTES could ease eligibility memorization and improve screening efficiency, particularly in these complex studies. However, the system could be less helpful for trials that included only demographic criteria (eg, ED-STARS) and for those that required chart review of EHR information not available to the system (eg, imaging results required by M-TBI). In addition to improving screening efficiency, the system also showed potential to streamline the recruitment workflow by improving patient approach and enrollment.

The postevaluation surveys demonstrated the usability of the ACTES on several fronts. The system was easy to use (Statement 3 in Table 3), easy to learn (Statement 7), and its functions were well integrated (Statement 5). All CRCs felt confident in using the system (Statement 9). In particular, the CRCs’ satisfaction in using the system improved over time, once they adapted to this new technology (Statement 1).

Areas of Improvement

Systematic error analyses were performed on retrospective data in our previous research to identify limitations of the ACTES [18,19]. In this study, we focused on identifying areas of improvement based on the CRC feedback. The SUS responses suggested that there was inconsistency in system recommendations (Statement 6 in Table 3). This is because of false positive recommendations made by the NLP components (eg, missed negation detection), which had been identified as a limitation in our retrospective studies. To alleviate this problem, we developed additional regular expressions for assertion detection and used bag-of-phrases matching technologies to balance sensitivity and specificity [27]. Advanced NLP algorithms will be explored in future iterations to improve the detection of semantic and temporal relations within the context.
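As an illustration of the kind of regular expressions used for assertion detection, the sketch below flags a concept as negated when a cue appears in a window of text before the matched term. The cue list and window size are illustrative assumptions only, not the rules deployed in the ACTES.

```python
# A minimal sketch of regex-based negation detection for assertion handling;
# the cue list and 40-character look-behind window are illustrative only.
import re

NEG_CUES = re.compile(r"\b(no|denies|without|rule(?:d)? out|negative for)\b",
                      re.IGNORECASE)

def assert_concept(sentence, term, cui):
    """Return the CUI, prefixed with NEG_ when a negation cue precedes the term."""
    m = re.search(re.escape(term), sentence, re.IGNORECASE)
    if not m:
        return None
    window = sentence[max(0, m.start() - 40):m.start()]  # text before the term
    return ("NEG_" if NEG_CUES.search(window) else "") + cui

print(assert_concept("Plan: rule out pneumonia.", "pneumonia", "C0032285"))
# -> NEG_C0032285
```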

In addition, the system was rated slightly cumbersome to use (Statement 8) when it was first in place. By analyzing feedback in the postevaluation surveys, we found that this was because the dashboard lacked certain functionalities (eg, a function for hiding clinical trials not actively enrolling on a given day). Additional functions were implemented thereafter to meet the CRCs’ needs.

Finally, the majority of CRCs indicated an information delay in patient recommendations (Question 3 in the open-ended questionnaire; Multimedia Appendix 2). We hypothesize that this is because of lag in documentation by health care providers early in a patient visit, such that progress notes were not delivered to the ACTES in a timely fashion. For instance, a patient might be recommended for HealthyFamily hours after he or she had been triaged for asthma (an inclusion criterion). This could occur if the physician filed the patient’s progress note after the patient was admitted, in which case the CRCs were never able to approach that candidate for enrollment. As shown in the literature, delayed documentation is a frequent finding in high-acuity, busy clinical environments [48,49]. As the ACTES relies on data entered by EHR users, any strategy that facilitates timely clinical documentation will improve the system’s usability as well. Coordinating the clinical workflow to accelerate information delivery, both for patient care and for our system, warrants further investigation.

Although the ACTES significantly improved patient screening efficiency, the problems described above occasionally delayed the CRCs’ decision making and negatively affected their satisfaction. Consequently, the users’ attitudes toward using the system remained only slightly better than neutral (Statement 1 in Table 3). To improve the CRCs’ willingness to use the system, the suggested areas of improvement have been adopted to inform our next development phase.

Limitations of the Study

One limitation of the study is that it included only 6 clinical trials running in a single clinical department. Although the included trials covered a variety of diseases, they generally did not contain complex logic (eg, criteria involving analysis of laboratory results). To assess the system’s generalizability, we plan to integrate it into other units in our institution (eg, the oncology department) that conduct more complicated clinical studies. In addition, although the study demonstrated the benefits gained from automated patient screening, it did not assess the cost of system implementation because of the intermittent development cycle. In the future, we will perform appropriate cost-benefit analyses when implementing the system in other clinical units and health care institutions. Because of the limited study length, the statistical power of the quantitative assessments was not sufficiently high. To address this limitation, we will continue collecting enrollment statistics from the ED to gain the power needed to detect significant differences. Finally, project planning and communication are in progress to evaluate the ACTES on a more diversified patient population (eg, adult patients), in multiple institutions, and with clinical data in different formats (eg, data from different vendor EHRs).

Conclusions

We designed and integrated an NLP- and machine learning–based system, the ACTES, into the ED and prospectively studied its impact on patient recruitment. In an observation-based, randomized time-and-motion study, the system demonstrated good capacity for improving the efficiency of patient identification. The quantitative assessments demonstrated the potential of the ACTES to streamline the recruitment workflow and improve patient enrollment. The postevaluation surveys suggested that the system was a good computerized solution with satisfactory usability. The promising results from our multidimensional evaluation confirmed the effectiveness of automated patient screening in prospective clinical settings. As such, we hypothesize that the ACTES, when rolled out for dissemination, will provide significant benefits to nationwide research networks and health care institutions in executing clinical research by harnessing EHR data in real time.

Acknowledgments

Particular thanks go to William Stone, Anthony Coleman, Wayne Geers, and Vincent Evans for developing the real-time EHR data extraction programs. The authors also thank Olga Semenova for her support in providing the enrollment statistics.

This work was supported by the National Institutes of Health (grant numbers: 1R01LM012230, 1U01HG008666, and 5U18DP006134), and the Agency for Healthcare Research and Quality (grant number 1R21HS024983). YN was also supported by internal funds from CCHMC.

Authors' Contributions

YN conceptualized the study, coordinated the data extraction, developed the automated patient screening system, analyzed the results, created the tables and figures, and wrote the manuscript. MB conducted the time-and-motion study, consulted on data quality and cleaning, and contributed to the manuscript. SK provided specialist guidance on the study design, provided suggestions in system development and result analysis, and contributed to the manuscript. SLH provided specialist guidance on the study design, supervised the CRC team, coordinated the data extraction and result analysis, and contributed to the manuscript. JWD conceptualized the study, provided specialist guidance on data extraction, system development and result analysis, and contributed to the manuscript. All authors read and approved the final manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

The clinical trial descriptions and their core eligibility criteria.

PDF File (Adobe PDF File), 77KB

Multimedia Appendix 2

The templated postevaluation usability survey.

PDF File (Adobe PDF File), 84KB

  1. ClinicalTrials.gov.   URL: https://clinicaltrials.gov/ [accessed 2019-06-04]
  2. Cofield SS, Conwit R, Barsan W, Quinn J. Recruitment and retention of patients into emergency medicine clinical trials. Acad Emerg Med 2010 Oct;17(10):1104-1112 [FREE Full text] [CrossRef] [Medline]
  3. Embi PJ, Payne PR. Clinical research informatics: challenges, opportunities and definition for an emerging domain. J Am Med Inform Assoc 2009;16(3):316-327 [FREE Full text] [CrossRef] [Medline]
  4. Fletcher B, Gheorghe A, Moore D, Wilson S, Damery S. Improving the recruitment activity of clinicians in randomised controlled trials: a systematic review. BMJ Open 2012;2(1):e000496 [FREE Full text] [CrossRef] [Medline]
  5. Mitchell AP, Hirsch BR, Abernethy AP. Lack of timely accrual information in oncology clinical trials: a cross-sectional analysis. Trials 2014 Mar 25;15:92 [FREE Full text] [CrossRef] [Medline]
  6. Penberthy LT, Dahman BA, Petkov VI, DeShazo JP. Effort required in eligibility screening for clinical trials. J Oncol Pract 2012 Nov;8(6):365-370 [FREE Full text] [CrossRef] [Medline]
  7. Treweek S, Lockhart P, Pitkethly M, Cook JA, Kjeldstrøm M, Johansen M, et al. Methods to improve recruitment to randomised controlled trials: Cochrane systematic review and meta-analysis. BMJ Open 2013;3(2):pii: e002360 [FREE Full text] [CrossRef] [Medline]
  8. Winters ZE, Griffin C, Horne R, Bidad N, McCulloch P. Barriers to accrue to clinical trials and possible solutions. Br J Cancer 2014 Aug 12;111(4):637-639 [FREE Full text] [CrossRef] [Medline]
  9. Dexheimer JW, Tang H, Kachelmeyer A, Hounchell M, Kennebeck S, Solti I, et al. A time-and-motion study of clinical trial eligibility screening in a pediatric emergency department. Pediatr Emerg Care 2018 Oct 2. [CrossRef] [Medline]
  10. Beauharnais CC, Larkin ME, Zai AH, Boykin EC, Luttrell J, Wexler DJ. Efficacy and cost-effectiveness of an automated screening algorithm in an inpatient clinical trial. Clin Trials 2012 Apr;9(2):198-203 [FREE Full text] [CrossRef] [Medline]
  11. Butte AJ, Weinstein DA, Kohane IS. Enrolling patients into clinical trials faster using RealTime Recruiting. Proc AMIA Symp 2000:111-115 [FREE Full text] [Medline]
  12. Embi PJ, Jain A, Clark J, Bizjack S, Hornung R, Harris CM. Effect of a clinical trial alert system on physician participation in trial recruitment. Arch Intern Med 2005 Oct 24;165(19):2272-2277 [FREE Full text] [CrossRef] [Medline]
  13. Embi PJ, Jain A, Clark J, Harris CM. Development of an electronic health record-based clinical trial alert system to enhance recruitment at the point of care. AMIA Annu Symp Proc 2005:231-235 [FREE Full text] [Medline]
  14. Eubank MH, Hyman DM, Kanakamedala AD, Gardos SM, Wills JM, Stetson PD. Automated eligibility screening and monitoring for genotype-driven precision oncology trials. J Am Med Inform Assoc 2016 Dec;23(4):777-781 [FREE Full text] [CrossRef] [Medline]
  15. Heinemann S, Thüring S, Wedeken S, Schäfer T, Scheidt-Nave C, Ketterer M, et al. A clinical trial alert tool to recruit large patient samples and assess selection bias in general practice research. BMC Med Res Methodol 2011 Feb 15;11:16 [FREE Full text] [CrossRef] [Medline]
  16. IBM. 2016. IBM Watson Oncology Clinical Trial Matching   URL: http://www.ibm.com/smarterplanet/us/en/ibmwatson/clinical-trial-matching.html [accessed 2019-06-05]
  17. Ni Y, Beck AF, Taylor R, Dyas J, Solti I, Grupp-Phelan J, et al. Will they participate? Predicting patients' response to clinical trial invitations in a pediatric emergency department. J Am Med Inform Assoc 2016 Dec;23(4):671-680 [FREE Full text] [CrossRef] [Medline]
  18. Ni Y, Kennebeck S, Dexheimer JW, McAneney CM, Tang H, Lingren T, et al. Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department. J Am Med Inform Assoc 2015 Jan;22(1):166-178 [FREE Full text] [CrossRef] [Medline]
  19. Ni Y, Wright J, Perentesis J, Lingren T, Deleger L, Kaiser M, et al. Increasing the efficiency of trial-patient matching: automated clinical trial eligibility pre-screening for pediatric oncology patients. BMC Med Inform Decis Mak 2015 Apr 14;15:28 [FREE Full text] [CrossRef] [Medline]
  20. Petkov VI, Penberthy LT, Dahman BA, Poklepovic A, Gillam CW, McDermott JH. Automated determination of metastases in unstructured radiology reports for eligibility screening in oncology clinical trials. Exp Biol Med (Maywood) 2013 Dec;238(12):1370-1378 [FREE Full text] [CrossRef] [Medline]
  21. Pressler TR, Yen P, Ding J, Liu J, Embi PJ, Payne PR. Computational challenges and human factors influencing the design and use of clinical research participant eligibility pre-screening tools. BMC Med Inform Decis Mak 2012 May 30;12:47 [FREE Full text] [CrossRef] [Medline]
  22. Treweek S, Pearson E, Smith N, Neville R, Sargeant P, Boswell B, et al. Desktop software to identify patients eligible for recruitment into a clinical trial: using SARMA to recruit to the ROAD feasibility trial. Inform Prim Care 2010;18(1):51-58 [FREE Full text] [CrossRef] [Medline]
  23. Embi PJ, Leonard AC. Evaluating alert fatigue over time to EHR-based clinical trial alerts: findings from a randomized controlled study. J Am Med Inform Assoc 2012 Jun;19(e1):e145-e148 [FREE Full text] [CrossRef] [Medline]
  24. Altman R, Brennan PF. S56: Featured Presentation - Informatics Year in Review. In: Proceedings of the Annual Symposium. 2015 Presented at: AMIA'15; November 14-18, 2015; San Francisco, CA.
  25. Cincinnati Children's Hospital Medical Center. 2015. Automated Clinical Trial Screening Eligibility Software Algorithm   URL: http://innovation.cincinnatichildrens.org/technologies/2015-0210 [accessed 2019-05-19]
  26. DBMI Portal. 2018. n2c2 2018 — Track 1: Cohort Selection for Clinical Trials   URL: https://portal.dbmi.hms.harvard.edu/projects/n2c2-t1/ [accessed 2019-06-05]
  27. Ni Y. Automated Clinical Trial Eligibility Screener. In: Proceedings of the N2C2/OHNLP Shared-Task and Workshop. 2018 Presented at: N2C2'18; November 2, 2018; San Francisco, CA.
  28. Wood LA. A time and motion study. J Coll Gen Pract 1962 Aug;5:379-381 [FREE Full text] [CrossRef] [Medline]
  29. Ampt A, Westbrook JI. Measuring nurses' time in medication related tasks prior to the implementation of an electronic medication management system. Stud Health Technol Inform 2007;130:157-167. [Medline]
  30. Gilbreth FB, Gilbreth LM. Motion Study for the Handicapped. London: Routledge & Sons; 1920.
  31. Hendrich A, Chow MP, Skierczynski BA, Lu Z. A 36-hospital time and motion study: how do medical-surgical nurses spend their time? Perm J 2008;12(3):25-34 [FREE Full text] [CrossRef] [Medline]
  32. Thorpe-Jamison PT, Culley CM, Perera S, Handler SM. Evaluating the impact of computer-generated rounding reports on physician workflow in the nursing home: a feasibility time-motion study. J Am Med Dir Assoc 2013 May;14(5):358-362. [CrossRef] [Medline]
  33. Westbrook JI, Li L, Georgiou A, Paoloni R, Cullen J. Impact of an electronic medication management system on hospital doctors' and nurses' work: a controlled pre-post, time and motion study. J Am Med Inform Assoc 2013;20(6):1150-1158 [FREE Full text] [CrossRef] [Medline]
  34. Yen K, Shane EL, Pawar SS, Schwendel ND, Zimmanck RJ, Gorelick MH. Time motion study in a pediatric emergency department before and after computer physician order entry. Ann Emerg Med 2009 Apr;53(4):462-8.e1. [CrossRef] [Medline]
  35. Brooke J. SUS: a retrospective. J Usability Stud 2013;8(2):29-40 [FREE Full text]
  36. Lewis JR. The system usability scale: past, present, and future. Int J Hum-Comput Int 2018 Mar 30;34(7):577-590. [CrossRef]
  37. Fritz F, Balhorn S, Riek M, Breil B, Dugas M. Qualitative and quantitative evaluation of EHR-integrated mobile patient questionnaires regarding usability and cost-efficiency. Int J Med Inform 2012 May;81(5):303-313. [CrossRef] [Medline]
  38. Meldrum D, Glennon A, Herdman S, Murray D, McConn-Walsh R. Virtual reality rehabilitation of balance: assessment of the usability of the Nintendo Wii(®) Fit Plus. Disabil Rehabil Assist Technol 2012 May;7(3):205-210. [CrossRef] [Medline]
  39. Taylor RG, Hounchell M, Ho M, Grupp-Phelan J. Factors associated with participation in research conducted in a pediatric emergency department. Pediatr Emerg Care 2015 May;31(5):348-352. [CrossRef] [Medline]
  40. Wickens CD, Lee JD, Liu Y, Gordon-Becker S. An Introduction to Human Factors Engineering. Second Edition. Upper Saddle River, New Jersey: Pearson Education; 2004.
  41. Bodenreider O. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res 2004 Jan 1;32(Database issue):D267-D270 [FREE Full text] [CrossRef] [Medline]
  42. de Silva TS, MacDonald D, Paterson G, Sikdar KC, Cochrane B. Systematized nomenclature of medicine clinical terms (SNOMED CT) to represent computed tomography procedures. Comput Methods Programs Biomed 2011 Mar;101(3):324-329. [CrossRef] [Medline]
  43. Nelson SJ, Zeng K, Kilbourne J, Powell T, Moore R. Normalized names for clinical drugs: RxNorm at 6 years. J Am Med Inform Assoc 2011;18(4):441-448 [FREE Full text] [CrossRef] [Medline]
  44. Roethlisberger FJ, Dickson WJ, Wright A, Pforzheimer CH, Western Electric Company. Management and the Worker: An Account of a Research Program Conducted by the Western Electric Company, Hawthorne Works, Chicago. Cambridge: Harvard University Press; 1939.
  45. McDonald JH. Handbook of Biological Statistics. Third Edition. Baltimore: Sparky House Publishing; 2014.
  46. Schoeffel R. The concept of product usability. ISO Bull 2003;34(3):6-7 [FREE Full text]
  47. Yen PY, Bakken S. Review of health information technology usability study methodologies. J Am Med Inform Assoc 2012;19(3):413-422 [FREE Full text] [CrossRef] [Medline]
  48. Sockolow PS, Liao C, Chittams JL, Bowles KH. Evaluating the impact of electronic health records on nurse clinical process at two community health sites. NI 2012 (2012) 2012;2012:381 [FREE Full text] [Medline]
  49. Gephart S, Carrington JM, Finley B. A systematic review of nurses' experiences with unintended consequences when using the electronic health record. Nurs Adm Q 2015;39(4):345-356. [CrossRef] [Medline]


ACTES: Automated Clinical Trial Eligibility Screener
CCHMC: Cincinnati Children’s Hospital Medical Center
CRC: clinical research coordinator
ED: emergency department
EHR: electronic health record
NLP: natural language processing
SUS: system usability scale


Edited by G Eysenbach; submitted 28.03.19; peer-reviewed by E Borycki, M Torii, S McRoy; comments to author 18.05.19; revised version received 07.06.19; accepted 12.06.19; published 24.07.19

Copyright

©Yizhao Ni, Monica Bermudez, Stephanie Kennebeck, Stacey Liddy-Hicks, Judith Dexheimer. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 24.07.2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.