Published on 16.07.2021 in Vol 9, No 7 (2021): July

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/24022.
A Tool for Evaluating Medication Alerting Systems: Development and Initial Assessment


Original Paper

1Black Dog Institute, Randwick, NSW, Australia

2The University of Sydney, Faculty of Medicine and Health, School of Medical Sciences, Biomedical Informatics and Digital Health, Sydney, Australia

3Univ Lille, CHU Lille, ULR 2694, METRICS: Évaluation des Technologies de santé des Pratiques médicales, Lille, France

4INSERM, CHU Lille, CIC-IT/Evalab 1403, Centre d’Investigation Clinique, Lille, France

5University of New South Wales, Randwick, Australia

6Sydney Local Health District, Sydney, Australia

7Royal Adelaide Hospital, Adelaide, Australia

8Hunter New England Local Health District, Newcastle, Australia

9Macquarie University Hospital, Sydney, Australia

Corresponding Author:

Wu Yi Zheng, PhD

Black Dog Institute

Hospital Rd

Prince of Wales Hospital

Randwick, NSW, 2031

Australia

Phone: 61 422510718

Email: wuyi.zheng@unsw.edu.au


Background: It is well known that recommendations from electronic medication alerts are seldom accepted or acted on by users. Key factors affecting the effectiveness of medication alerts include system usability and alert design. Thus, human factors principles that apply knowledge of human capabilities and limitations are increasingly used in the design of health technology to improve the usability of systems.

Objective: This study aims to evaluate a newly developed evidence-based self-assessment tool that allows the valid and reliable evaluation of computerized medication alerting systems. This tool was developed to be used by hospital staff with detailed knowledge of their hospital’s computerized provider order entry system and alerts to identify and address potential system deficiencies. In this initial assessment, we aim to determine whether the items in the tool can measure compliance of medication alerting systems with human factors principles of design, the tool can be consistently used by multiple users to assess the same system, and the items are easy to understand and perceived to be useful for assessing medication alerting systems.

Methods: The Tool for Evaluating Medication Alerting Systems (TEMAS) was developed based on human factors design principles and consisted of 66 items. In total, 18 staff members recruited across 6 hospitals used the TEMAS to assess their medication alerting systems. Data collected from participant assessments were used to evaluate the validity, reliability, and usability of the TEMAS. Validity was assessed by comparing the results of the TEMAS with those of prior in-house evaluations. Reliability was measured using Krippendorff α to determine agreement among assessors. A 7-item survey was used to determine usability.

Results: The participants reported mostly negative (n=8) and neutral (n=7) perceptions of alerts in their medication alerting system. However, the validity of the TEMAS could not be directly tested, as participants were unaware of any results from prior in-house evaluations. The reliability of the TEMAS, as measured by Krippendorff α, was low to moderate (range 0.26-0.46); however, participant feedback suggests that individuals’ knowledge of the system varied according to their professional background. In terms of usability, 61% (11/18) of participants reported that the TEMAS items were generally easy to understand; however, participants suggested the revision of 22 items to improve clarity.

Conclusions: This initial assessment of the TEMAS allowed the identification of its components that required modification to improve usability and usefulness. It also revealed that for the TEMAS to be effective in facilitating a comprehensive assessment of a medication alerting system, it should be completed by a multidisciplinary team of hospital staff from both clinical and technical backgrounds to maximize their knowledge of systems.

JMIR Med Inform 2021;9(7):e24022

doi:10.2196/24022




Background

Human factors is the scientific discipline that applies knowledge of human capabilities and limitations to improve the usability of systems while reducing the potential for errors [1,2]. For decades, human factors research has been integral to continuous improvement and innovation in industries outside health care, such as the aviation and automotive industries, with human performance limitations and human-system interactions taken into account when designing new technology [3-5]. For example, the failure to apply good human factors principles when designing aircraft and in-vehicle displays has been shown to lead to confusion and errors [3,6].

In recent years, the incorporation of human factors principles into the design of health care technology has received increasing attention. Numerous studies have aimed to assess and improve clinical decision support in the form of electronic medication alerts [7-10], as it is well known that most recommendations from these alerts are not accepted or acted on by prescribers [11-14]. Excessive display of clinically irrelevant alerts can lead to alert fatigue, whereby important safety-critical information is ignored by clinicians (eg, doctors, pharmacists, and nurses) [14]. Studies have also investigated the factors influencing alert acceptance and found that the following key factors affect the effectiveness of medication alerts: the usability of medication alerting systems, the display of alerts, the textual information included in alerts, and the prioritization of alerts [15-21]. Furthermore, compared with poorly designed alerts, well-designed alerts based on human factors principles resulted in faster work, fewer prescribing errors, lower workload, and improved usability for prescribers [22,23].

However, what constitutes a well-designed medication safety alert, and how compliance with human factors principles can be assessed and improved, remains unclear. The Instrument for Evaluating Human Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA) was developed to evaluate the compliance of drug-drug interaction alerts with human factors principles of design [10]. Comprising 26 items with binary scoring (ie, a score of 1 assigned to a yes response and 0 to a no response), the I-MeDeSA assesses the compliance of electronic medication alerts with nine human factors principles of design: alarm philosophy, placement, visibility, prioritization, color, learnability and confusability, text-based information, proximity of task components being displayed, and corrective actions [9]. Although the I-MeDeSA was initially validated in the United States [10] and used in subsequent studies [7-9,24], several flaws have been identified, including ambiguous item wording; arbitrary allocation of scores to human factors principles; and the need for more concrete definitions, a clearer rationale for each item, and more explicit examples [7,8,24]. In our attempt to use the I-MeDeSA to evaluate computerized alerts in Australian systems, we found many of the items to be irrelevant to Australian configurations [7], namely, items that assumed systems implemented more than one level of alert severity and multiple alert types. Thus, we set out to develop an evidence-based self-assessment tool that allows the valid and reliable evaluation of computerized medication alerting systems in terms of their compliance with human factors principles. Our goal was to develop a tool that could be used by hospital staff with detailed knowledge of the hospital's computerized provider order entry (CPOE) system and alerts (eg, a CPOE pharmacist who assisted in the building and configuration of the system) to identify and address deficient areas. The tool could also be used to facilitate the selection of the most user-friendly and functional medication alerting system during procurement. With the increasing adoption of digital health technology, a standardized tool that uses human factors principles to assess clinical decision support alerts, a crucial component of CPOE systems, would maximize alert acceptance and effectiveness and thereby broaden the potential safety benefits of medication-related alerts.

Objectives

In this paper, we report the development of the Tool for Evaluating Medication Alerting Systems (TEMAS) and our initial attempts to assess its validity, reliability, and usability. In particular, we set out to determine whether (1) the items measure the compliance of medication alerting systems with human factors principles of design, (2) the tool can be consistently used by multiple users to assess the same system, and (3) the items are easy to understand and perceived to be useful for assessing medication alerting systems.


Methods

Development of the TEMAS

The pioneering work by Marcilly et al [25] identified 168 usability flaws related to general usability principles and medication-related alerting functions. A detailed description of each principle and its derivation can be found in a systematic qualitative review [25]. In summary, flaws specific to medication-related alerting functions were grouped into six categories: low signal-to-noise ratio (eg, alerts are irrelevant or redundant), problems with alert content (eg, information required to make a decision is missing), nontransparency of alert functions (eg, no information on the alert severity scale), timing and display issues (eg, an alert is not displayed at the right moment to support decision-making), alert distribution issues (eg, an alert is not displayed to the right clinician), and problems with alert features (eg, no feature for reconsidering an alert later) [25]. These usability flaws were then matched with 58 design principles identified in the literature, plus two additional principles [26]. A usability flaw was matched with a design principle if it was in direct violation of that principle [26].

The TEMAS was developed by transforming each design principle into a checklist item, using usability flaws identified by Marcilly et al [25] to corroborate the accuracy of each item. Multimedia Appendix 1 includes some example design principles and their corresponding items in the TEMAS. Following this mapping process, the TEMAS consists of 66 items (Table 1), which fall into six meta-principles: (1) signal-to-noise ratio, (2) ability to support collaborative work, (3) ability to fit clinicians’ workflow and mental model, (4) display of relevant data within the alert, (5) transparency of system rules to the user, and (6) the inclusion of actionable tools within the alert. Each TEMAS item has two response options (ie, yes and no), with space provided for free-text comments. Before distributing the TEMAS to study participants, members of the research team, including experts in human factors, medication safety, digital health, and assessment tool development, checked and provided feedback on TEMAS items; however, pilot testing was not conducted with end users.

Table 1. Meta-principles assessed by the Tool for Evaluating Medication Alerting Systems (n=66).
Meta-principle | Items, n (%) | Example question
Optimize the signal-to-noise ratio | 17 (26) | Does the alerting system use an evidence-based drug knowledge base to trigger alerts?
Support collaborative work | 6 (9) | Does the alerting system trigger alerts to the appropriate team member (eg, medication administration alerts are triggered for nurses)?
Fit the clinicians’ workflow and mental model | 16 (24) | Does the alerting system display alerts instantly (ie, no lag time)?
Display relevant data within the alert | 10 (15) | Does the alert include information on the cause of the unsafe event (eg, medication name and dose)?
Ensure the system rules are transparent to the user | 6 (9) | Does the alerting system inform users about the customization options available (eg, turning some alerts off)?
Include actionable tools within the alert | 11 (17) | Does the alert provide a function for the user to modify an order?
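
The TEMAS itself is distributed as a checklist document (Multimedia Appendix 3). Purely as an illustration of the structure described above (66 yes/no items grouped under six meta-principles, with space for free-text comments), a hypothetical in-code representation might look as follows; the class and identifier names are ours, not the authors'.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class MetaPrinciple(Enum):
    """The six TEMAS meta-principles (Table 1)."""
    SIGNAL_TO_NOISE = "Optimize the signal-to-noise ratio"
    COLLABORATIVE_WORK = "Support collaborative work"
    WORKFLOW_FIT = "Fit the clinicians' workflow and mental model"
    RELEVANT_DATA = "Display relevant data within the alert"
    TRANSPARENCY = "Ensure the system rules are transparent to the user"
    ACTIONABLE_TOOLS = "Include actionable tools within the alert"

@dataclass
class TemasItem:
    item_id: str                    # eg, "A4": section letter plus item number
    principle: MetaPrinciple
    question: str
    response: Optional[str] = None  # "yes" or "no" in the original tool
    comment: str = ""               # space for free-text comments

# Example item taken from Table 1 (the identifier "A1" is hypothetical)
item = TemasItem(
    item_id="A1",
    principle=MetaPrinciple.SIGNAL_TO_NOISE,
    question=("Does the alerting system use an evidence-based drug "
              "knowledge base to trigger alerts?"),
)
item.response = "yes"
```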

Participants and Study Sites

To identify potential participants for the initial evaluation of the TEMAS, a member of the research team at each study site nominated staff members at their hospital with relevant knowledge of their CPOE system and alerts (eg, a CPOE pharmacist responsible for maintaining the system). The study intended to recruit at least two participants from each site. Nominated staff members were contacted by email, and those who expressed an interest in taking part in the study were sent a participant information sheet and consent form. After submitting a signed participant information sheet and consent form, participants received a TEMAS pack. This pack included a copy of the TEMAS and a 7-item survey. Participants were asked to return completed TEMAS packs to the researchers via email or mail.

The study sites are presented in Table 2. In total, 18 participants across the 6 sites used the TEMAS to assess the medication alerting system at their hospital. Participants included pharmacists (n=11), clinical pharmacologists (n=2), nurses (n=2), doctors (n=2), and a business analyst. Participants were part of the CPOE system implementation team at their hospital or were responsible for maintaining or updating the system. On average, participants had 5.1 (SD 2.9) years of experience using their CPOE system, and as shown in Table 2, Cerner Powerchart and DXC Technology’s MedChart were the most frequently assessed systems.

Table 2. Study sites and number of participants (n=18).
Study site | Participants, n (%) | CPOEa system in use
John Hunter Hospital (NSWb) | 2 (11) | DXC MedChart
St Vincent’s Hospital, Sydney (NSW) | 2 (11) | DXC MedChart
Macquarie University Hospital (NSW) | 2 (11) | TrakCare
Concord Repatriation General Hospital (NSW) | 5 (28) | Cerner Powerchart
Royal North Shore Hospital (NSW) | 4 (22) | Cerner Powerchart
Queen Elizabeth Hospital (South Australia) | 3 (17) | Sunrise EMRc

aCPOE: computerized provider order entry.

bNSW: New South Wales.

cEMR: electronic medical record.

Study Design and Data Analysis

The evaluation consisted of assessing three components: the validity, reliability, and usability of the TEMAS. Participants were asked to independently use the TEMAS to evaluate the medication alerting system in use at their hospital and then complete a 7-item survey.

To assess validity, the survey included a free-text item on the perceived effectiveness of the alerts in the CPOE system and asked participants to provide supporting information or evidence with their response (eg, information on alert override rates, any formal or informal feedback received from users, and results from any in-house user surveys). Responses to this item were categorized according to their positive or negative valence. Supporting information provided by participants was compared with the TEMAS results to check whether the shortcomings of the alerting system identified by the TEMAS were consistent with those identified by in-house evaluations carried out by the hospitals.

To assess reliability, we compared the responses of participants working at the same hospital. Krippendorff α was calculated to determine interrater reliability.
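
The paper does not state which software was used to compute Krippendorff α, and the 95% CIs reported in Table 3 would typically be obtained by bootstrapping. As an illustration only, a minimal sketch of the nominal-data form of the coefficient is shown below, treating each TEMAS item as a unit and each participant at a site as a rater; the function name and data layout are our own assumptions.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal-data Krippendorff alpha.

    `units` has one entry per TEMAS item; each entry lists the valid
    (non-missing) responses given by the raters at one site."""
    pairable = [u for u in units if len(u) >= 2]  # items with >=2 valid responses
    coincidences = Counter()
    for unit in pairable:
        m = len(unit)
        for a, b in permutations(unit, 2):        # ordered response pairs per item
            coincidences[(a, b)] += 1.0 / (m - 1)
    n = sum(coincidences.values())                # total pairable responses
    marginals = Counter()
    for (a, _), w in coincidences.items():
        marginals[a] += w
    d_o = sum(w for (a, b), w in coincidences.items() if a != b) / n
    d_e = sum(marginals[a] * marginals[b]
              for a, b in permutations(marginals, 2)) / (n * (n - 1))
    if d_e == 0:                                  # no variation at all
        return 1.0
    return 1.0 - d_o / d_e

# Example: 5 items rated yes/no by 3 raters at one site; one rater skipped item 3
responses = [
    ["yes", "yes", "no"],
    ["yes", "yes", "yes"],
    ["no", "no"],
    ["yes", "no", "no"],
    ["no", "no", "no"],
]
print(round(krippendorff_alpha_nominal(responses), 2))
```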

To assess usability, participants were given the opportunity to provide feedback on each TEMAS item to indicate whether the item was difficult to understand or not useful (Figure 1). In addition, participants completed a usability survey (Multimedia Appendix 2), which collected basic demographic information, ratings of ease of use on a 5-point Likert scale (eg, item 1: I thought the TEMAS was easy to use), and free-text comments on the tool. The Likert-scale items were adapted from the System Usability Scale [27].

Figure 1. Feedback options for each Tool for Evaluating Medication Alerting Systems item to assess usability.

Ethical Clearance

This study was approved by the Hunter New England Human Research Ethics Committee (reference no: HREC/18/HNE/237). In addition, research governance approval was obtained from each study site.


Results

Validity of the TEMAS

Participants gave mixed responses regarding the perceived effectiveness of the alerts in their CPOE system. Of the 17 responses to this item (1 participant did not respond), eight were negative, seven were neutral, and two were positive (Textbox 1).

Textbox 1. Selected comments of participants on the perceived effectiveness of alerts.

Positive

  • “I believe they’re reasonably effective, as they target the conditions that are ‘no-nos’” [Participant #1]
  • “The alerts are coming from MIMS [Monthly Index of Medical Specialties] Australia and I believe their documentation is thorough.” [Participant #8]

Neutral

  • “Somewhat effective. Pharmacists review quite a number of alerts via verification of medications, whilst there is a theoretical risk, there may not be many actual incidents.” [Participant #3]

Negative

  • “Not very effective as prescribers have alert fatigue.” [Participant #2]
  • “Poor; time consuming; click fatigue; alert fatigue; irrelevant alerts (e.g. non-current meds).” [Participant #6]
  • “Too many alerts, hard to take out after we put in.” [Participant #7]

However, no participant provided evidence to support their personal assessment of the alerts in their hospital’s system; that is, participants were unaware of whether their hospital collected meaningful data on the effectiveness of medication alerts in its CPOE system:

Most of the effect i.e. override rates etc. we don’t know
[Participant #9]
Has much room for improvement based on the evaluation factors in TEMAS however have no figures or paper to back it up
[Participant #4]

Reliability

Table 3 presents the Krippendorff α values, which reflect interrater reliability among participants at each study site. To account for missing data, α values were also calculated for items with valid responses only (ie, a response of yes or no).

Table 3. Interrater reliability among participants from each study site (n=6).
Site | All responses, Krippendorff α (95% CI) | Valid responses, Krippendorff α (95% CI)
1 | .30 (0.06-0.53) | .32 (0.07-0.53)
2 | .46 (0.25-0.67) | .49 (0.27-0.68)
3 | .39 (0.32-0.45) | .47 (0.39-0.55)
4 | .26 (0.17-0.35) | .32 (0.21-0.42)
5 | .40 (0.28-0.51) | .49 (0.37-0.62)
6 | .38 (0.16-0.60) | .38 (0.16-0.60)

At the individual TEMAS item level, more than 10 items at each of 3 study sites did not receive a valid response from all participants working at those sites. Participants commented that they did not have the relevant knowledge to answer some items:

Not sure whether a doctor is able to make changes to the order and I’m not aware what their interface looks like.
[Participant #4]
Unsure - medical officer questions.
[Participant #5]

Usability

Approximately 39% (7/18) of participants thought that the TEMAS was easy to use, and 61% (11/18) reported that the items were easy to understand. Approximately 41% (7/17) of participants found it to be a useful tool for identifying areas for improvement in their medication alerting system (Table 4).

Table 4. Usability of the Tool for Evaluating Medication Alerting Systems.
Survey item | Strongly agreea or agreeb, n (%) | Neutralc, n (%) | Strongly disagreed or disagreee, n (%) | Average score | Range (lower limit-upper limit)
I thought the TEMASf was easy to use. (n=18) | 7 (39) | 8 (44) | 3 (17) | 3.2 | 4 (1-5)
I thought the items in the TEMAS were easy to understand. (n=18) | 11 (61) | 4 (22) | 3 (17) | 3.5 | 3 (2-5)
I thought the TEMAS was useful in helping me to identify areas for improvement in my alerting system. (n=17)g | 7 (41) | 8 (47) | 2 (12) | 3.3 | 4 (1-5)

aStrongly agree was rated 5.

bAgree was rated 4.

cNeutral was rated 3.

dDisagree was rated 2.

eStrongly disagree was rated 1.

fTEMAS: Tool for Evaluating Medication Alerting Systems.

gOne participant did not provide a response to this question.
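
The raw survey responses are not published, so the ratings in the snippet below are hypothetical, chosen only to reproduce the marginal counts reported for the "easy to use" item. It shows how the Table 4 summary statistics follow from the footnote rating scheme (strongly agree=5 through strongly disagree=1).

```python
def summarize_likert(ratings):
    """Summarize 5-point Likert responses as in Table 4
    (5=strongly agree ... 1=strongly disagree)."""
    n = len(ratings)
    agree = sum(1 for r in ratings if r >= 4)      # strongly agree or agree
    neutral = sum(1 for r in ratings if r == 3)
    disagree = sum(1 for r in ratings if r <= 2)   # strongly disagree or disagree
    return {
        "agree, n (%)": f"{agree} ({100 * agree / n:.0f})",
        "neutral, n (%)": f"{neutral} ({100 * neutral / n:.0f})",
        "disagree, n (%)": f"{disagree} ({100 * disagree / n:.0f})",
        "mean": round(sum(ratings) / n, 1),
        "range": f"{max(ratings) - min(ratings)} ({min(ratings)}-{max(ratings)})",
    }

# Hypothetical ratings for "I thought the TEMAS was easy to use" (n=18);
# consistent with the reported 7 agree / 8 neutral / 3 disagree, mean 3.2, range 4 (1-5)
ratings = [5, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1]
print(summarize_likert(ratings))
```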

Of the 66 TEMAS items, 33 (50%) were reported by at least one participant as difficult to understand because of item wording; however, only 15% (10/66) of items confused multiple participants (Table 5). Reasons participants gave for why items were difficult to understand included a lack of clarity in the meaning of the item and an inability to provide a yes or no response (Table 5). Furthermore, 20% (13/66) of items were reported as not useful by participants; however, only the item on whether the alerting system provided explanations of the classification of alert severity was deemed not useful by multiple participants (n=2).

In their responses to the free-text questions in the usability survey, participants provided additional comments on how the design of the TEMAS could be improved and on other possible users of the tool:

All of the questions are yes/no, either it is or it isn’t - whereas in some cases it might be partially implemented. The questions are also worded such that a “no” answer to any question is a negative, and something should be done about it.
[Participant #1]
Think the target audience is unclear. Only some of these items can be optimised at a hospital level. Most of the issues are hard-coded and would need to be addressed by the vendor.
[Participant #9]
Needs to be amended as it’s unclear who i.e. IT people or clinical staff the TEMAS is aimed at. These groups require very different language
[Participant #14]
Table 5. Items in the Tool for Evaluating Medication Alerting Systems reported to be difficult to understand by multiple participants and example participant responses (n=18).
TEMASa itemb | Participants, n (%)c | Example participant response
A4. Does the alerting system overcome missing data and reconcile multiple entries to trigger relevant alerts (eg, does the alerting system avoid using dated or unreliable data?) | 6 (33) | “Extremely broad question” [Participant #10]
A9. Does the alerting system refrain from triggering an alert if a corrective action has already been taken? | 5 (28) | “Not sure what ‘corrective action’ means” [Participant #9]
E6. Does the alerting system inform users of the unsafe events that are checked? | 4 (22) | “What is an unsafe event, and where would this be defined?” [Participant #11]
A3. Does the alerting system use multiple sources (eg, patient record, laboratory result repository, and pharmacy) to trigger alerts? | 3 (17) | “I am not sure what this is asking” [Participant #7]
A13. Does the alerting system group multiple recommendations for patients with comorbidities? | 3 (17) | “Don’t think our system has this capability” [Participant #11]
A12. Does the alerting system prioritize alerts according to severity? | 2 (11) | “This depends on what you mean by ‘prioritise’” [Participant #11]
D1. Does the alert include information on the cause of the unsafe event (eg, medication name and dose)? | 2 (11) | “Unsure how to answer” [Participant #12]
D5. Does the alert include relevant patient information and provide a link for users to obtain further patient information? | 2 (11) | “Example of patient info? Lab results?” [Participant #13]
F1. Does the alert provide a function for the user to modify an order? | 2 (11) | “Only doctors can modify orders. Difficult for other professions to answer” [Participant #12]
F10. Does the alerting system allow users to remove alerts that are irrelevant or outdated? | 2 (11) | “Difficult to classify as Y or N” [Participant #12]

aTEMAS: Tool for Evaluating Medication Alerting Systems.

bThe letter and number preceding each item indicates section and item number, respectively.

cThe values do not sum to 100% as they are not mutually exclusive.


Discussion

Principal Findings

In this study, we developed a self-assessment tool for medication alerting systems and aimed to evaluate the validity, reliability, and usability of the TEMAS; however, this proved difficult. The validity of the TEMAS could not be directly tested, as participants in the study were not aware of any in-house system evaluations carried out by the hospitals. As a result, participants reported that there was a lack of evaluation data to support their subjective assessment of the system. The reliability of the TEMAS, as measured by Krippendorff α, was low to moderate; however, feedback from users indicated that their knowledge of systems was highly variable. In terms of usability, according to the responses to a survey item, the majority of participants agreed that TEMAS items were easy to understand, although participants identified a number of items that needed improvement.

Several methods are used by hospitals to monitor and evaluate alert effectiveness, including the establishment of review committees consisting of pharmacists and doctors [28-30], the development of visual analytic dashboards [13], and the collection of end user feedback [31]. A key finding from this study was that no participating hospital had a systematic program in place to gather data on the effectiveness of medication alerts in its CPOE system. Although participants’ views of the alerts in their systems were mostly negative, there was a lack of evaluation data to support these subjective assessments; thus, the validity of the TEMAS could not be directly assessed. Nevertheless, on examining the TEMAS items considered not useful by study participants, only one item was deemed not useful by multiple users (n=2), suggesting that the content of the TEMAS was relevant for assessing medication alerting systems. Future evaluations of the TEMAS should apply a different participant screening process whereby only hospitals with in-house data on the effectiveness of alerts in their CPOE system are included.

Less than half of the participants indicated that the TEMAS was easy to use (7/18, 39%) and useful in identifying areas in the system for improvement (7/17, 41%), with more participants selecting neutral for these survey questions. This likely reflects that some TEMAS items needed improvement, which prevented respondents from fully endorsing the usability of the TEMAS. In response to the feedback received on individual TEMAS items, 33% (22/66) of items were modified to improve clarity and reduce ambiguities. To avoid confusion and misunderstanding due to the use of unsuitable terms (eg, corrective action, item A9; Table 6) and poor item wording (eg, item D1, Table 6), edits were made to the original TEMAS, taking into account participant comments on why they were unable to provide a response (eg, “I am not sure what this is asking” [Participant #5]). We also included examples to provide further clarification of the meaning of each item (Table 6). In response to feedback on difficulties in selecting a yes or no response for some items (eg, only some alerts provide clinically appropriate recommendations and suggest alternatives), the revised version of the TEMAS (Multimedia Appendix 3) included partial as an additional response option for each item. In addition, a note has been included to advise users that, depending on the local context, a response of no or partial to TEMAS items does not automatically indicate a weakness in the system.

Table 6. Examples of the revised Tool for Evaluating Medication Alerting Systems items.
Original itema | Revised item | Example to clarify the meaning of the item
A9. Does the alerting system refrain from triggering an alert if a corrective action has already been taken? | Does the alerting system refrain from triggering more alerts if the alert recommendation has already been followed? | The system refrains from triggering an alert if drug monitoring actions are already in place.
D1. Does the alert include information on the cause of the unsafe event (eg, medication name and dose)? | Does the alert include information on why the alert was triggered? | Medication names, dosages, and severity of interactions are included in drug-drug interaction alerts.
E6. Does the alerting system inform users of the unsafe events that are checked? | Does the alerting system inform users of the types of orders that will trigger alerts? | Clicking on a “more information” link in the help page informs the user that both order sentences and free-text orders can trigger alerts.

aThe letter and number preceding each item indicates section and item number, respectively.
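
The revised TEMAS adds partial as a response option and notes that a no or partial response does not automatically indicate a system weakness. A sketch of how a review team might tally such responses by meta-principle for discussion in their local context is shown below; the function and data layout are hypothetical, as the authors do not define a numeric scoring scheme.

```python
from collections import defaultdict

REVISED_RESPONSES = {"yes", "partial", "no"}  # response options in the revised TEMAS

def flag_for_review(assessment):
    """Group "no"/"partial" responses by meta-principle so a multidisciplinary
    team can review them in local context (a "no" is not automatically a
    system weakness)."""
    flagged = defaultdict(list)
    for item_id, (principle, response) in assessment.items():
        if response in {"no", "partial"}:
            flagged[principle].append((item_id, response))
    return dict(flagged)

# Hypothetical team responses: {item_id: (meta-principle, response)}
assessment = {
    "A4": ("Optimize the signal-to-noise ratio", "partial"),
    "A9": ("Optimize the signal-to-noise ratio", "no"),
    "D1": ("Display relevant data within the alert", "yes"),
    "F1": ("Include actionable tools within the alert", "no"),
}
print(flag_for_review(assessment))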

The reliability of the TEMAS was shown to be poor, likely reflecting the different levels of system knowledge possessed by the participants. Recruiting participants with equivalent, in-depth knowledge of their hospital’s medication alerting system proved difficult. Usually, one staff member possessed extensive knowledge of the hospital’s system (eg, a CPOE pharmacist), whereas other staff members within the same organization had more specialized knowledge of the hospital’s system (eg, a medical officer or clinical pharmacist). It may be that reliability was affected by differences in clinical practice settings, where staff members from different specialties use different functions of the system and have different views and understanding of the system based on their everyday use. Responses received from users suggest that the TEMAS may be more appropriately used by a team instead of an individual. For example, a participant in a pharmacist role was unsure of items related to prescribing medications, thus deferring these items to medical officers. There was also a suggestion to include system vendors in the evaluation process as “most of the issues are hard-coded and would need to be addressed by the vendor” [Participant #9]. Thus, evaluations carried out by a team consisting of representatives of system users from all clinical backgrounds would allow a more comprehensive evaluation of the alerting system. During this process, different parts of the TEMAS could initially be assigned to different team members based on their role and relevant expertise in the hospital (eg, prescribers are assigned to the fit the clinician’s workflow and mental model section).

The TEMAS is not dissimilar to a heuristic evaluation, a usability inspection method in which experts assess the usability of a design or product [32]. In a heuristic analysis, a number of usability experts typically conduct independent assessments of a product or interface and note usability violations, which are then amalgamated into a master list of usability problems. With this approach, the identification of usability violations is highly dependent on the expertise of the raters, with human factors or usability expertise associated with a higher number of violations being detected. This contrasts with the TEMAS, where we suggest that users work as a team, not independently, to complete their alert assessment, because the items cover a range of system aspects that are unlikely to be known to a single individual. We also suggest that the completion of the TEMAS should not be limited to usability experts but rather extend to a multidisciplinary team of hospital end users (eg, pharmacists, doctors, nurses, and information technology professionals), each contributing their unique knowledge to the evaluation of a medication alerting system.

Limitations

Our initial evaluation of the TEMAS had several limitations. First, we experienced difficulties in recruiting participants with in-depth knowledge of their hospital’s medication alerting system. Knowledge of some participants was role specific, limiting their capacity to complete the TEMAS, which impacted the interrater reliability. Future assessments of the TEMAS could use a team of system experts with varying expertise from different professional backgrounds. Second, we did not recruit a site with in-house evaluation data of their medication alerting system, thus limiting our ability to assess the validity of the TEMAS. As a result, findings derived from using the TEMAS to assess the strengths and weaknesses of medication alerting systems should be interpreted with caution and within the context of the organization. Furthermore, the TEMAS is designed to assess medication alerting systems in inpatient care and is likely to require some modification if it is to be used in other settings, such as pharmacy or outpatient settings. Finally, the TEMAS was not piloted with prospective end users before distribution to study sites; however, research team members with expertise in human factors, medication safety, and digital health checked and provided feedback on TEMAS items.

Conclusions

On the basis of usability flaws matched to human factors design principles, the TEMAS was developed for hospitals to self-assess medication alerts in their CPOE system, with the goal of improving the effectiveness of these alerts. This initial evaluation allowed the identification of components of the TEMAS that required modification to improve usability and usefulness, leading to changes to items and the addition of examples and a response option. We found that, to be effective in facilitating a comprehensive evaluation, the TEMAS should be completed by a multidisciplinary team of hospital staff from both clinical and technical backgrounds. This study was integral to the evolution of the TEMAS and established a revised version ready for use. As a next step, the updated TEMAS will be trialed by teams of users to assess their medication alerting systems, and the assessment results of the TEMAS will be compared with those of the I-MeDeSA.

Acknowledgments

The authors would like to thank Professor Sarah Hilmer for her assistance with participant recruitment and Honorary Associate Professor Peter Hibbert for his advice on the development of this assessment tool. The researchers working on this project were funded by the National Health and Medical Research Council Partnership grant 1134824.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Design principles and corresponding Tool for Evaluating Medication Alerting Systems items.

DOCX File, 23 KB

Multimedia Appendix 2

Usability survey.

DOCX File, 18 KB

Multimedia Appendix 3

The Tool for Evaluating Medication Alerting Systems (revised).

DOCX File, 41 KB

References

  1. Baysari M, Clay-Williams R, Loveday T. A human factors resource for health professionals and health services staff. In: The Human Factors and Ergonomics Society of Australia, The Australian Institute of Health Innovation. Australia: Macquarie University, The University of Sydney and the NSW Clinical Excellence Commission; 2019:1-70.
  2. Phansalkar S, Edworthy J, Hellier E, Seger DL, Schedlbauer A, Avery AJ, et al. A review of human factors principles for the design and implementation of medication safety alerts in clinical information systems. J Am Med Inform Assoc 2010 Sep 01;17(5):493-501 [FREE Full text] [CrossRef] [Medline]
  3. Akamatsu M, Green P, Bengler K. Automotive technology and human factors research: past, present, and future. Int J Veh Technol 2013 Sep 04;2013:1-27. [CrossRef]
  4. Safety Behaviours: Human Factors for Pilots, 2nd Edition. Australia: Civil Aviation Safety Authority Australia; 2019.
  5. Salas E, Maurino D, Curtis M. Chapter 1 - Human factors in aviation: an overview. In: Salas E, Maurino D, editors. Human Factors in Aviation (Second Edition). San Diego, CA: Academic Press; 2010:3-19.
  6. Palmer R. Applying human factors principles in aviation displays: a transition from analog to digital cockpit displays in the CP140 Aurora Aircraft. University of Tennessee, Knoxville. 2007.   URL: https://trace.tennessee.edu/cgi/viewcontent.cgi?article=1217&context=utk_gradthes [accessed 2021-06-25]
  7. Baysari MT, Lowenstein D, Zheng WY, Day RO. Reliability, ease of use and usefulness of I-MeDeSA for evaluating drug-drug interaction alerts in an Australian context. BMC Med Inform Decis Mak 2018 Oct 05;18(1):83 [FREE Full text] [CrossRef] [Medline]
  8. Lowenstein D, Zheng WY, Burke R, Kenny E, Sandhu A, Makeham M, et al. Do user preferences align with human factors assessment scores of drug-drug interaction alerts? Health Informatics J 2020 Mar 11;26(1):563-575 [FREE Full text] [CrossRef] [Medline]
  9. Phansalkar S, Zachariah M, Seidling HM, Mendes C, Volk L, Bates DW. Evaluation of medication alerts in electronic health records for compliance with human factors principles. J Am Med Inform Assoc 2014 Oct 01;21(e2):332-340 [FREE Full text] [CrossRef] [Medline]
  10. Zachariah M, Phansalkar S, Seidling HM, Neri PM, Cresswell KM, Duke J, et al. Development and preliminary evidence for the validity of an instrument assessing implementation of human-factors principles in medication-related decision-support systems--I-MeDeSA. J Am Med Inform Assoc 2011 Dec 01;18 Suppl 1(Supplement 1):62-72 [FREE Full text] [CrossRef] [Medline]
  11. Edrees H, Amato M, Wong A, Seger D, Bates D. High-priority drug-drug interaction clinical decision support overrides in a newly implemented commercial computerized provider order-entry system: override appropriateness and adverse drug events. J Am Med Inform Assoc 2020 Jun 01;27(6):893-900 [FREE Full text] [CrossRef] [Medline]
  12. Ash JS, Sittig DF, Campbell EM, Guappone KP, Dykstra RH. Some unintended consequences of clinical decision support systems. AMIA Annu Symp Proc 2007 Oct 11:26-30 [FREE Full text] [Medline]
  13. Simpao AF, Ahumada LM, Desai BR, Bonafide CP, Gálvez JA, Rehman MA, et al. Optimization of drug-drug interaction alert rules in a pediatric hospital's electronic health record system using a visual analytics dashboard. J Am Med Inform Assoc 2015 Mar 15;22(2):361-369. [CrossRef] [Medline]
  14. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006 Mar 01;13(2):138-147. [CrossRef]
  15. Abramson EL, Patel V, Pfoh ER, Kaushal R. How physician perspectives on e-prescribing evolve over time. A case study following the transition between EHRs in an outpatient clinic. Appl Clin Inform 2016 Oct 26;7(4):994-1006 [FREE Full text] [CrossRef] [Medline]
  16. Ash JS. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 2003 Nov 21;11(2):104-112. [CrossRef]
  17. Brown CL, Mulcaster HL, Triffitt KL, Sittig DF, Ash JS, Reygate K, et al. A systematic review of the types and causes of prescribing errors generated from using computerized provider order entry systems in primary and secondary care. J Am Med Inform Assoc 2017 Mar 01;24(2):432-440 [FREE Full text] [CrossRef] [Medline]
  18. Khajouei R, Jaspers MW. The impact of CPOE medication systems' design aspects on usability, workflow and medication orders: a systematic review. Methods Inf Med 2010;49(1):3-19. [CrossRef] [Medline]
  19. Kuperman GJ, Bobb A, Payne TH, Avery AJ, Gandhi TK, Burns G, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007 Jan 01;14(1):29-40. [CrossRef]
  20. Seidling HM, Phansalkar S, Seger DL, Paterno MD, Shaykevich S, Haefeli WE, et al. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. J Am Med Inform Assoc 2011;18(4):479-484 [FREE Full text] [CrossRef] [Medline]
  21. McCoy AB, Waitman LR, Lewis JB, Wright JA, Choma DP, Miller RA, et al. A framework for evaluating the appropriateness of clinical decision support alerts and responses. J Am Med Inform Assoc 2012 May 01;19(3):346-352 [FREE Full text] [CrossRef] [Medline]
  22. Russ AL, Chen S, Melton BL, Johnson EG, Spina JR, Weiner M, et al. A novel design for drug-drug interaction alerts improves prescribing efficiency. Joint Comm J Qual Patient Saf 2015 Sep;41(9):396-405. [CrossRef]
  23. Russ A, Zillich A, Melton B, Russell SA, Chen S, Spina JR, et al. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation. J Am Med Inform Assoc 2014 Oct;21(e2):287-296 [FREE Full text] [CrossRef] [Medline]
  24. Cho I, Lee J, Han H, Phansalkar S, Bates D. Evaluation of a Korean version of a tool for assessing the incorporation of human factors into a medication-related decision support system: the I-MeDeSA. Appl Clin Inform 2017 Dec 21;05(02):571-588. [CrossRef]
  25. Marcilly R, Ammenwerth E, Vasseur F, Roehrer E, Beuscart-Zéphir MC. Usability flaws of medication-related alerting functions: a systematic qualitative review. J Biomed Inform 2015 Jun;55:260-271 [FREE Full text] [CrossRef] [Medline]
  26. Marcilly R, Ammenwerth E, Roehrer E, Niès J, Beuscart-Zéphir MC. Evidence-based usability design principles for medication alerting systems. BMC Med Inform Decis Mak 2018 Jul 24;18(1):69 [FREE Full text] [CrossRef] [Medline]
  27. Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int J Hum-Comput Int 2008 Jul 30;24(6):574-594. [CrossRef]
  28. Bhakta S, Colavecchia A, Haines L, Varkey D, Garey K. A systematic approach to optimize electronic health record medication alerts in a health system. Am J Health Syst Pharm 2019 Apr 08;76(8):530-536. [CrossRef] [Medline]
  29. Kawamanto K, Flynn M, Kukhareva P, ElHalta D, Hess R, Gregory T, et al. A pragmatic guide to establishing clinical decision support governance and addressing decision support fatigue: a case study. AMIA Annu Symp Proc 2018;2018:624-633 [FREE Full text] [Medline]
  30. Zenziper Y, Kurnik D, Markovits N, Ziv A, Shamiss A, Halkin H, et al. Implementation of a clinical decision support system for computerized drug prescription entries in a large tertiary care hospital. Isr Med Assoc J 2014 May;16(5):289-294 [FREE Full text] [Medline]
  31. van Camp PJ, Kirkendall ES, Hagedorn PA, Minich T, Kouril M, Spooner SA, et al. Feedback at the Point of Care to Decrease Medication Alert Rates in an Electronic Health Record. Pediatr Emerg Care 2020 Jul;36(7):e417-e422. [CrossRef] [Medline]
  32. Lau F, Kuziemsky C. Handbook of eHealth Evaluation: An Evidence-Based Approach. Victoria, BC: University of Victoria; 2017.


Abbreviations

CPOE: computerized provider order entry
I-MeDeSA: Instrument for Evaluating Human Factors Principles in Medication-Related Decision Support Alerts
TEMAS: Tool for Evaluating Medication Alerting Systems


Edited by G Eysenbach; submitted 01.09.20; peer-reviewed by A Russ, K Huat, R Ologeanu-Taddei, J Bagby; comments to author 08.10.20; revised version received 04.11.20; accepted 03.06.21; published 16.07.21

Copyright

©Wu Yi Zheng, Bethany Van Dort, Romaric Marcilly, Richard Day, Rosemary Burke, Sepehr Shakib, Young Ku, Hannah Reid-Anderson, Melissa Baysari. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 16.07.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.