This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.
Clinical decision support systems (CDSSs) are an integral component of today’s health information technologies. They assist with interpretation, diagnosis, and treatment. A CDSS can be embedded throughout the patient safety continuum providing reminders, recommendations, and alerts to health care providers. Although CDSSs have been shown to reduce medical errors and improve patient outcomes, they have fallen short of their full potential. User acceptance has been identified as one of the potential reasons for this shortfall.
The purpose of this paper was to conduct a critical review and task analysis of CDSS research and to develop a new framework for CDSS design in order to achieve user acceptance.
A critical review of CDSS papers was conducted with a focus on user acceptance. To gain a greater understanding of the problems associated with CDSS acceptance, we conducted a task analysis to identify and describe the goals, user input, system output, knowledge requirements, and constraints from two different perspectives: the machine (ie, the CDSS engine) and the user (ie, the physician).
Favorability of CDSSs was based on user acceptance of clinical guidelines, reminders, alerts, and diagnostic suggestions. We propose two models: (1) the user acceptance and system adaptation design model, which includes optimizing CDSS design based on user needs/expectations, and (2) the input-process-output-engage model, which reveals to users the processes that govern CDSS outputs.
This research demonstrates that incorporating the proposed models will improve user acceptance and thereby support the beneficial effects of CDSS adoption. Ultimately, if users do not accept a technology, this not only threatens the use of the technology but can also threaten the health and well-being of patients.
The Agency for Healthcare Research and Quality [
Wendt et al [
The theory behind user acceptance and its impact on the adoption of technology has been thoroughly described. The purpose of this paper was to conduct a review of the literature in order to evaluate our hypothesis that meaningful engagement of physicians in the design and development of CDSSs with transparent decision-making processes will result in higher acceptance rates.
A search of MEDLINE/PubMed, CINAHL, PsycInfo, IEEE Xplore, and Web of Science was conducted using the keywords “clinical decision support,” “decision support acceptance,” and “user acceptance.” No timeframe limits were applied to any database, and language filters were set to English only. Our initial search returned 186 papers; after removal of duplicates, 150 studies remained. To be included in this review, a paper had to meet the following inclusion criteria: investigate human interaction with a CDSS and evaluate user acceptance using the TAM questionnaire, focus groups, or interviews. Papers were excluded if they focused on decision support systems outside clinical care or did not empirically investigate user acceptance. Title and abstract review eliminated 111 studies. The remaining 39 studies underwent full-text review, resulting in a final count of 14 studies that met the inclusion criteria. The search results are summarized in
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Diagram.
Study findings were categorized as either showing favorable or unfavorable responses to CDSSs. The favorable and unfavorable categorization was based on interpretations of focus groups and interviews conducted by the researchers of the reviewed papers. Additionally, the type of CDSS was noted for each of the reviewed papers. If a study used the TAM questionnaire, the results were summarized separately.
To gain a greater understanding of the problems associated with CDSSs, we conducted a task analysis. Drawing on past research, the task analysis helped identify and describe the goals, user input, system output, knowledge requirements, and constraints from two different perspectives: the machine (ie, the CDSS engine) and the user (ie, the physician). The literature review and task analysis served as the basis for designing CDSS models intended to improve user acceptance.
The results of 14 articles were evaluated. The 11 articles that qualitatively evaluated user acceptance of CDSSs can be found in
Of the studies reviewed, three used the TAM questionnaire for assessing user acceptance of CDSSs (
Summary of user acceptance related to clinical decision support systems (CDSSs) from previous studies (N=11).
Study | Favorable response to CDSS | Unfavorable response to CDSS | CDSS description
Bergman & Fors (2005) [ | Can save time and provide structure | Not suitable to workflow and there is the risk of becoming dependent | CDSS for medical diagnosis of psychiatric diseases
Curry & Reed (2011) [ | Concept was supported | Interference with workflow and questionable validity | Prompts for adhering to diagnostic imaging guidelines
Gadd et al (1998) [ | Easy to use, limits the need for data entry, accurate, and relevant | Benefits are lost because it takes so long to use | Internet-based system that interactively presents clinical practice guidelines at point of care
Johnson et al (2014) [ | Longitudinal acceptance behavior, perceived ease of use, and perceived usefulness | Computer literacy, user satisfaction, and general optimism | Clinical reminders and alerts for patients with asthma, diabetes, hypertension, and hyperlipidemia
Rosenbloom et al (2004) [ | Can improve efficiency and quality of care; enhances education | Senior physicians did not think it was necessary | CDSS for computerized order entry system
Rousseau et al (2003) [ | Use of “active” CDSS can bridge the gap between own practice and best practice | Clinicians found it to be difficult to use and unhelpful clinically | CDSS for chronic disease in general practice
Shibl et al (2013) [ | Performance expectancy, usefulness, and effort expectancy | Trust in CDSS and need for the system | No specified CDSS; responses based on past and present experiences with multiple CDSSs
Sousa et al (2015) [ | Belief that the suggestions were good for the patient | Low confidence in the evidence | CDSS for nursing care plan
Terraz et al (2005) [ | Ease of use and easy access to information | Information that is presented is already known | Guidelines for colonoscopies
Wallace et al (1995) [ | Can improve patient outcomes | Alerts are ignored because there is not enough time to dedicate to forming an appropriate response | CDSS to standardize administration of supplemental oxygen
Zheng et al (2005) [ | Improves performance leading to better care, easy to use, and efficient | Iterative advisories, lack of relevance, a lot of data entry, and disruptive | Clinical reminders for chronic diseases and preventive care
Results of the technology acceptance model (TAM) questionnaire from prior studies evaluating user acceptance of CDSSs.
Study | Buenestado et al (2013) [ | Heselmans et al (2012) [ | Peleg et al (2009) [
CDSS description | Computerized clinical guidelines and protocols for asthma in children | Reminders and alerts for evidence-based guidelines | Guideline-based decision support system for diabetic patient foot problems
Participant description | 8 pediatricians | 39 Dutch-speaking family physicians | 8 family physicians
Rating scale | Seven-point scale | Seven-point scale | Five-point scale
Perceived usefulness | 5.80 (1.24) | 4.00 (1.37) | 4.00 (0.71)
Perceived ease of use | 6.17 (0.92) | 5.02 (1.41) | 4.40 (0.59)
Attitude toward using | 6.21 (0.59) | 4.84 (0.97) | N/A
Behavioral intention to use | 5.71 (1.24) | 5.91 (1.33) | 4.88 (0.23)
Note: The scores are based on a Likert scale (1=totally disagree; 5 or 7=totally agree).
Task analysis is conducted to keep pace with changes in professional practice (ie, health information technology) [
The goal of a CDSS is to supplement the physician, who would otherwise be the sole information processor in clinical decision making, and thereby help reduce medical errors. Yet, there is still much room for improvement. In part, this shortcoming may be due to the lack of physician acceptance of CDSSs in supplementing their decision making. To better understand the challenges in creating clinical decision processes, we first consider what information goes into this process.
A CDSS is based on an input-process-output (IPO) model. The inputs for the CDSS process include patient-specific information such as diagnoses, medications, symptoms, laboratory data, demographics, and other clinically relevant information. The inputs for knowledge-based CDSSs are often determined by clinical guidelines, whereas non-knowledge-based CDSSs use the most relevant information assessed by algorithm performance.
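The IPO flow described above can be sketched in code. The following minimal example, with invented patient fields, thresholds, and rules (none drawn from an actual clinical guideline), shows patient-specific inputs passing through a simple knowledge-based process step to produce outputs:

```python
# Minimal sketch of the input-process-output (IPO) flow of a knowledge-based
# CDSS. Patient fields, the HbA1c threshold, and both rules are illustrative.

def cdss_process(patient: dict) -> list[str]:
    """Process step: apply simple guideline-style checks to the inputs."""
    outputs = []
    # Hypothetical guideline rule: flag elevated HbA1c for diabetic patients
    if "diabetes" in patient.get("diagnoses", []) and patient.get("hba1c", 0) > 7.0:
        outputs.append("Recommend review of glycemic control (HbA1c > 7.0%)")
    # Hypothetical interaction rule: warfarin plus an NSAID raises bleeding risk
    meds = set(patient.get("medications", []))
    if {"warfarin", "ibuprofen"} <= meds:
        outputs.append("Alert: warfarin-NSAID interaction increases bleeding risk")
    return outputs

# Input: patient-specific information (diagnoses, medications, laboratory data)
patient = {
    "diagnoses": ["diabetes"],
    "medications": ["warfarin", "ibuprofen"],
    "hba1c": 8.2,
}
print(cdss_process(patient))  # one recommendation and one alert
```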
The CDSS process takes two different forms: knowledge-based and non-knowledge-based [
When CDSSs offer clinical suggestions, the supporting evidence, clinical guideline, or algorithm behind those suggestions is not provided. Whether the system is knowledge-based or non-knowledge-based, the physician is not aware of the inputs or processes the CDSS uses. Thus, the CDSS is a black box to the physician.
Physicians make clinical decisions based on the same patient information, in addition to social structures (acceptable behavior as determined by peer groups), institutions (the requirement to act according to mandated practices), and individual morality. One can conjecture that difficulties arise when automating such a complex network of inputs, which could never be fully encapsulated or realized by a machine.
The outputs from CDSSs and physicians result from the methods employed to process the inputs. The output may be a diagnosis, procedure, prescription, and so on. Ideally, when the computer and the physician are presented with the same information, the output from the CDSS should mirror the physician’s decision.
The level of control the CDSS has with regard to the output is inversely related to the level of control the user has over the output. A CDSS can be passive in situations where it only “highlights” information for the user but does not request acknowledgment or action [
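This spectrum of output control can be made explicit in software by tagging each output with the level of user action it demands. The mode names and messages below are illustrative assumptions, not categories defined by any particular CDSS:

```python
from enum import Enum

class AlertMode(Enum):
    """Spectrum of CDSS output control, from user-controlled to system-controlled."""
    PASSIVE = 1      # highlights information; no acknowledgment requested
    ACKNOWLEDGE = 2  # requires the user to acknowledge before continuing
    HARD_STOP = 3    # blocks the action until the conflict is resolved

def present(message: str, mode: AlertMode) -> str:
    """Render a CDSS output according to its control mode."""
    if mode is AlertMode.PASSIVE:
        return f"[info] {message}"
    if mode is AlertMode.ACKNOWLEDGE:
        return f"[confirm required] {message}"
    return f"[action blocked] {message}"

print(present("Warfarin-NSAID interaction", AlertMode.ACKNOWLEDGE))
```

As the mode moves from PASSIVE to HARD_STOP, control shifts from the user to the system, matching the inverse relationship described above.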
Physicians are more likely to accept a CDSS if the system matches their own decision-making processes. Forster [
Incorporating these two approaches into CDSSs can be challenging. Even though the heuristics that mediate decision processes are simple, the cognitive infrastructure underlying heuristic operations can be difficult to implement. Still, Forster [
Clark [
Holland and colleagues [
The system must also have two types of knowledge structures: mental models and condition-action rules. Holland and colleagues [
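A minimal sketch of these two knowledge structures, assuming a “mental model” represented as a working memory of facts and condition-action rules that fire when their conditions match those facts (the rule contents are invented for illustration):

```python
# Sketch of condition-action rules matched against a "mental model"
# (a working memory of facts). Rules and facts are illustrative only.

rules = [
    # (name, condition over the facts, action string)
    ("fever_rule",
     lambda f: f.get("temp_c", 0) >= 38.0,
     "Consider infection workup"),
    ("hypotension_rule",
     lambda f: f.get("systolic_bp", 120) < 90,
     "Evaluate for shock"),
]

def fire_rules(facts: dict) -> list[str]:
    """Return the actions of all rules whose conditions hold for the facts."""
    return [action for name, cond, action in rules if cond(facts)]

facts = {"temp_c": 38.6, "systolic_bp": 118}
print(fire_rules(facts))  # → ['Consider infection workup']
```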
To successfully implement and use CDSSs, these mental models have to be identified. Hayek [
The major limitation of CDSSs is that scaffolding cannot be fully captured by computers. The environmental, clinical, and social constraints within which physicians practice are difficult to include as inputs to a CDSS, and reproducing a physician’s tacit knowledge through mental models and condition-action rules is a formidable objective. Moreover, physicians must be able to support their decisions and are skeptical of recommendations or claims that lack supporting evidence or transparency. The fact that CDSSs do not reveal how output decisions are made may be a driving force behind the lack of user acceptance.
Studies have revealed that responses to CDSSs can be unfavorable when resulting improvements in patient outcomes are inconsistent [
If a user finds a product frustrating or perceives that the purpose of the product is to limit autonomy, the user may not use the product or do so inappropriately [
Based on the UTAUT, user expectations need to be taken into consideration for technology to be accepted [
Peleg et al [
Developers of CDSSs have attempted to bottle-up the decision-making capacity of physicians and place that knowledge into a computer. Current methods to achieve this feat employ rules and machine learning algorithms. However, the lack of user acceptance has impeded CDSS use. Research has shown that consideration of users’ needs and expectations in the design of the CDSS may help overcome this obstacle. We argue that this approach is only part of the solution.
We propose that CDSSs move away from the black-box process to a more transparent method within the IPO model. Simply put, tell the physician how the computer is making the decision. If the computer can become part of scaffolded knowledge, the physician may view the computer as an aid rather than a threat or hindrance. Research supports the idea that the rules governing alerts be specified to practitioners and the information be presented based on users’ needs and expectations [
We propose two models to improve CDSS development that may lead to increased utilization and, in turn, improved patient outcomes. The first is the user acceptance and system adaptation design (UASAD) model, which aims to involve end users early in the design and throughout the development of CDSSs. The second replaces the current IPO model of CDSS development with the input-process-output-engage (IPOE) model, which serves to “engage” the physician through CDSS process transparency.
The UASAD model demands early end-user involvement in CDSS development. User needs and expectations need to be fully realized prior to the development of a CDSS. Another consideration is to evaluate system preparedness to ensure that users can trust the security and privacy of the system. Prototypic designs should undergo an iterative design process following rigorous usability testing in a laboratory and natural setting (ie, pilot study) to ensure that the system works within the cognitive and environmental constraints with which the user functions.
Finally, user acceptance should be evaluated to ensure that the system is used appropriately. If user acceptance does not exceed a predefined threshold, the CDSS should be reevaluated against user needs and expectations and subjected to adaptive redesign, iterating until acceptance exceeds the threshold. To illustrate this process, we have developed a UASAD model (
The IPOE model offers users a window into the black-box IPO process. Through “engage,” physicians will see how the CDSS is making decisions. The IPOE window is called “engage” because it presents users with the rules that the machine followed to generate the output (
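The “engage” step can be sketched as attaching provenance to each output: alongside the recommendation, the user sees the rule and the inputs that produced it. The rule text, field names, and guideline reference below are hypothetical:

```python
# Sketch of the IPOE idea: each output carries an "engage" payload that
# exposes the rule the system followed. All rule details are illustrative.

def ipoe_process(patient: dict) -> list[dict]:
    outputs = []
    if patient.get("hba1c", 0) > 7.0:
        outputs.append({
            "output": "Recommend review of glycemic control",
            "engage": {  # transparency window shown to the user
                "rule": "IF HbA1c > 7.0% THEN recommend glycemic review",
                "inputs_used": {"hba1c": patient["hba1c"]},
                "source": "hypothetical diabetes guideline",
            },
        })
    return outputs

result = ipoe_process({"hba1c": 8.2})
print(result[0]["output"])
print(result[0]["engage"]["rule"])  # the physician sees why, not just what
```

Because the provenance travels with the output, a user interface can render the rule next to the recommendation, turning the black box into a visible chain from inputs to output.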
A limitation of the IPOE model is that, for the model to work successfully, the physician has to understand the process. Processes that rely on machine learning algorithms, such as neural networks, do not provide explicit rules; therefore, it is challenging to make all processes transparent.
Challenging decisions that are processed incorrectly can lead to poor clinical decisions. Many practicing physicians rely on their own medical experience, whereas others pursue consultations or filter through the jargon of relevant research. The most effective physicians, however, are those who couple their clinical judgment with computerized decision support tools to leverage the power of CDSSs. Many clinicians exhibit bias toward medical information they already know and therefore tend to focus on findings that agree with the clinical outcome they expect to see in their patients. The use of CDSSs is thus motivated by efforts to decrease medical errors by drawing on existing knowledge and technology. These systems are the result of long-term scientific research aimed at building efficient tools that supplement physicians’ clinical experience. Physicians should view CDSSs as added value for making the best decisions in day-to-day practice and better serving their patients; these systems seek to reduce medical errors by enabling physicians to make informed decisions that are both accurate and precise.
A user acceptance and system adaptation design (UASAD) model. CDSS: clinical decision support system; UTAUT: unified theory of acceptance and use of technology.
The input-process-output-engage (IPOE) model.
Implementation of CDSSs has demonstrated increased efficiency, reduced medical errors, and improved outcomes, but they continue to fall short of their full potential [
CDSS: clinical decision support system
IPO: input-process-output
IPOE: input-process-output-engage
TAM: technology acceptance model
UASAD: user acceptance and system adaptation design
UTAUT: unified theory of acceptance and use of technology
The authors would like to express appreciation to Dr Marla Broadfoot, PhD, and Ms Victoria Rand for their contribution to this paper.
None declared.