
Published on 18.04.18 in Vol 6, No 2 (2018): Apr-Jun

Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/8912, first published Sep 06, 2017.

This paper is in the following e-collection/theme issue:

    Review

    Reasons for Physicians Not Adopting Clinical Decision Support Systems: Critical Analysis

    1Carolina Health Informatics Program, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

    2School of Nursing, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

    3Health Informatics Graduate Program, College of Saint Scholastica, Duluth, MN, United States

    4Hamad Medical Corporation, Doha, Qatar

    Corresponding Author:

    Saif Khairat, PhD, MPH

    Carolina Health Informatics Program

    University of North Carolina at Chapel Hill

    428 Carrington Hall

    Chapel Hill, NC, 27514

    United States

    Phone: 1 9198435413

    Email: saif@unc.edu


    ABSTRACT

    Background: Clinical decision support systems (CDSSs) are an integral component of today’s health information technologies. They assist with interpretation, diagnosis, and treatment. A CDSS can be embedded throughout the patient safety continuum providing reminders, recommendations, and alerts to health care providers. Although CDSSs have been shown to reduce medical errors and improve patient outcomes, they have fallen short of their full potential. User acceptance has been identified as one of the potential reasons for this shortfall.

    Objective: The purpose of this paper was to conduct a critical review and task analysis of CDSS research and to develop a new framework for CDSS design in order to achieve user acceptance.

    Methods: A critical review of CDSS papers was conducted with a focus on user acceptance. To gain a greater understanding of the problems associated with CDSS acceptance, we conducted a task analysis to identify and describe the goals, user input, system output, knowledge requirements, and constraints from two different perspectives: the machine (ie, the CDSS engine) and the user (ie, the physician).

    Results: Favorability of CDSSs was based on user acceptance of clinical guidelines, reminders, alerts, and diagnostic suggestions. We propose two models: (1) the user acceptance and system adaptation design model, which includes optimizing CDSS design based on user needs/expectations, and (2) the input-process-output-engage model, which reveals to users the processes that govern CDSS outputs.

    Conclusions: This research demonstrates that the incorporation of the proposed models will improve user acceptance to support the beneficial effects of CDSS adoption. Ultimately, if a user does not accept technology, this not only poses a threat to the use of the technology but can also pose a threat to the health and well-being of patients.

    JMIR Med Inform 2018;6(2):e24

    doi:10.2196/medinform.8912


    Introduction

    The Agency for Healthcare Research and Quality [1] promotes a systems approach that aims “to catch human errors before they occur or block them from causing harm.” Clinical decision support systems (CDSSs) are at the forefront of this aim. A CDSS provides alerts, reminders, prescribing recommendations, therapeutic guidelines, image interpretation, and diagnostic assistance. Although studies have shown that CDSSs reduce medical errors and improve outcomes, they also demonstrate that CDSSs fall short of their full potential [2-9]. Research has attempted to home in on the cause of this shortfall. Coiera [10] identified providers' lack of willingness or ability to use the technological system as one of the primary reasons.

    Wendt et al [11] discussed several factors that may be related to the acceptance of CDSSs, including the relevance of the information provided by the system, perceived validity of the system, and the work and time expended on using the system. These factors are similar to those defined by Davis [9] in the technology acceptance model (TAM) and later refined by Venkatesh et al [12] in the unified theory of acceptance and use of technology (UTAUT). These models offer a potential explanation for how expectations of performance, effort, social influences, and facilitating conditions are determinants of user acceptance and technology usage [12]. Using the TAM, Van Schaik et al [13] evaluated a gastroenterology referral CDSS. The system assisted primary care providers by suggesting an appropriate subspecialty referral (medical vs surgical), prioritizing urgency, and offering real-time booking [13]. They found that physicians rated acceptance based on the potential merits of the system rather than their experience with the computer system [13]. This finding was concordant with Venkatesh et al’s [12] proposal of the UTAUT model, in which they demonstrated that performance expectancy is the strongest predictor of user acceptance of technology.

    The theory behind user acceptance and its impact on the adoption of technology has been thoroughly described. The purpose of this paper was to conduct a review of the literature in order to evaluate our hypothesis that meaningful engagement of physicians in the design and development of CDSSs with transparent decision-making processes will result in higher acceptance rates.


    Methods

    Critical Review

    A search of MEDLINE/PubMed, CINAHL, PsycInfo, IEEE Xplore, and Web of Science was conducted using the keywords “clinical decision support,” “decision support acceptance,” and “user acceptance.” No timeframe limits were included for any database, and the language filters were set to English studies only. In our initial search, we found 186 papers. After removal of duplicates, 150 studies remained. To be included in this review, the papers had to match the following inclusion criteria: investigate human interaction with a CDSS and evaluate user acceptance using the TAM questionnaire, focus groups, or interviews. Papers were excluded if the focus was on decision support systems that did not include clinical care or if they did not empirically investigate user acceptance. Title and abstract review eliminated 111 studies. The remaining 39 studies underwent a full-text review, resulting in a final count of 14 studies that met inclusion criteria. The search results are summarized in Figure 1.

    Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Diagram.

    Study findings were categorized as either showing favorable or unfavorable responses to CDSSs. The favorable and unfavorable categorization was based on interpretations of focus groups and interviews conducted by the researchers of the reviewed papers. Additionally, the type of CDSS was noted for each of the reviewed papers. If a study used the TAM questionnaire, the results were summarized separately.

    Task Analysis

    To gain a greater understanding of the problems associated with CDSSs, we conducted a task analysis. Using past research, the task analysis helped identify and describe the goals, user input, system output, knowledge requirements, and constraints from two different perspectives. We considered the perspective of the machine (ie, the CDSS engine). We also considered the perspective of the user (ie, the physician). The literature review and task analysis served as the basis for designing CDSS models that improved user acceptance.


    Results

    Critical Review

    The results of 14 articles were evaluated. The 11 articles that qualitatively evaluated user acceptance of CDSSs can be found in Table 1 and the three articles that quantitatively evaluated user acceptance of CDSSs using TAM can be found in Table 2. Favorable and unfavorable responses for the aspects of clinical guidelines, reminders, and diagnostic CDSSs were recorded. Favorable responses were due to ease of system use, perceived time savings, and perceived usefulness of the systems in improving care delivery and overall patient health [14]. Users with higher computer skills were reported to have greater acceptance; however, the majority of users had an unfavorable acceptance response [15]. These unfavorable responses were often related to workflow interference, questionable validity of the systems, excessive disturbances caused by the systems, and lack of efficiency. More specifically, the workflow constraints were related to the CDSSs causing excessive alerts, increased time in computer handling, and decreased face-to-face time with patients [14,16,17].

    Of the studies reviewed, three used the TAM questionnaire for assessing user acceptance of CDSSs (Table 2). The CDSSs in these studies included two different computerized clinical guideline systems and one system that offered reminders and alerts for evidence-based guidelines. The ranges of perceived usefulness overlapped across studies, but the CDSS with the highest perceived usefulness also had the highest perceived ease of use. Overall, the TAM questionnaire revealed moderate user acceptance on all scales. None of the reviewed papers evaluated the relationship between user acceptance of CDSSs and patient safety. However, Bergman and Fors [15] found that use of the technology was relatively low when user acceptance was low.

    Table 1. Summary of user acceptance related to clinical decision support systems (CDSSs) from previous studies (N=11).
    Table 2. Results of the technology acceptance model (TAM) questionnaire from prior studies evaluating user acceptance of CDSSs.

    Task Analysis

    Task analysis is conducted to stay current with changing professional practice (ie, health information technology) [29]. Task analysis applied to representative populations strengthens health systems by systematically evaluating the skills, knowledge, and behaviors of clinicians that impact clinical practice [30]. The use of CDSSs in health care has introduced new dynamics into practice and requires task analysis to understand users' perceptions of this new technology. For that reason, conducting a task analysis can inform designs that improve adoption levels. A task analysis includes goals, input, process, and output. The next sections discuss the purpose of each task analysis stage.

    Goals

    The goal of a CDSS is to supplement the physician as the sole information processor in clinical decision making and thereby aid in the reduction of medical errors. Yet, there is still much room for improvement. In part, this shortcoming may be due to the lack of physician acceptance of the CDSSs in supplementing their decision making. To get a better understanding of the challenges in creating clinical decision processes, we first consider what information goes into this process.

    Input

    A CDSS is based on an input-process-output (IPO) model. The inputs for the CDSS process include patient-specific information such as diagnoses, medications, symptoms, laboratory data, demographics, and other clinically relevant information. The inputs for knowledge-based CDSSs are often determined by clinical guidelines, whereas non-knowledge-based CDSSs use the most relevant information assessed by algorithm performance.
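    A minimal sketch of this IPO structure in Python follows; the field names, threshold, and single drug-lab rule below are illustrative assumptions, not clinical guidance. The input is a structured patient record, and the process stage maps it to an output list of alerts.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """Hypothetical CDSS input: patient-specific clinical information."""
    age: int
    diagnoses: list = field(default_factory=list)
    medications: list = field(default_factory=list)
    labs: dict = field(default_factory=dict)  # eg, {"creatinine": 2.1}

def process(record: PatientRecord) -> list:
    """Process stage: map the inputs to output alerts (the rule is illustrative only)."""
    alerts = []
    if record.labs.get("creatinine", 0) > 1.5 and "metformin" in record.medications:
        alerts.append("Review metformin dose: elevated creatinine")
    return alerts

# Output stage: the alerts produced for this example input
patient = PatientRecord(age=67, medications=["metformin"], labs={"creatinine": 2.1})
print(process(patient))  # one alert fires for this record
```

    A knowledge-based system would populate such rules from clinical guidelines; a non-knowledge-based system would instead learn which inputs matter from historical data.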

    Process

    The CDSS process takes two different forms: knowledge based and non-knowledge based [31]. Knowledge-based systems are governed by a set of rules. Non-knowledge-based systems, on the other hand, learn from historical information; as a result, these systems typically utilize machine learning algorithms.

    When CDSSs offer clinical suggestions, the supporting evidence, clinical guideline, or algorithm behind those suggestions is not provided. In both knowledge-based and non-knowledge-based systems, the physician is not aware of the inputs or processes the CDSS utilizes. Thus, the CDSS is a black box to the physician.

    Physicians make clinical decisions based on the same patient information in addition to social structures (acceptable behavior as determined by peer groups), institutions (the requirement to act according to mandated practices), and individual morality in decision making. One can conjecture that difficulties arise when automating such a complex network of inputs that could never be fully encapsulated or realized by a machine.

    Output

    The outputs from CDSSs and physicians are a result of the methods employed for processing the inputs. The output may be a diagnosis, procedure, prescription, or other clinical action. Ideally, in conditions where the computer and the physician are presented with the same information, the output from the CDSS should mirror the physician’s decision.

    The level of control the CDSS has with regard to the output is inversely related to the level of control the user has over the output. A CDSS can be passive in situations where it only “highlights” information for the user but does not request acknowledgment or action [32]. An example would be presenting abnormal laboratory values in a red font and normal laboratory values in a black font. Active CDSSs act independently and provide suggestions to guide the physician’s behavior [32]. An example would be a system that provides diagnostic assistance. The type of output then depends on the goal orientation of a task (eg, diagnoses, medication alerts, and clinical guidelines for preventive care).
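    The passive/active distinction can be sketched as follows; the reference ranges and suggestion text are illustrative placeholders, not clinical guidance. The passive function only annotates values for display, while the active function independently generates a suggestion.

```python
# Illustrative reference ranges only, not clinical guidance.
NORMAL_RANGES = {"potassium": (3.5, 5.0), "sodium": (135, 145)}

def passive_highlight(labs: dict) -> dict:
    """Passive CDSS: only mark abnormal values (eg, red font); no action requested."""
    colors = {}
    for name, value in labs.items():
        low, high = NORMAL_RANGES[name]
        colors[name] = "black" if low <= value <= high else "red"
    return colors

def active_suggest(labs: dict) -> list:
    """Active CDSS: independently offer suggestions that guide behavior."""
    suggestions = []
    if labs.get("potassium", 4.0) > 5.0:
        suggestions.append("Consider repeating potassium and reviewing for hyperkalemia")
    return suggestions

labs = {"potassium": 5.8, "sodium": 140}
print(passive_highlight(labs))  # {'potassium': 'red', 'sodium': 'black'}
print(active_suggest(labs))
```

    The inverse-control relationship is visible here: the passive function leaves every decision with the user, while the active function commits the system to a recommendation the user must accept or override.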

    Knowledge

    Physicians are more likely to accept a CDSS if the system matches their own decision-making processes. Forster [33] described how humans quickly act on information by using bounded and ecological rationality. Bounded rationality is based on the use of simple heuristics, allowing for fast, real-time decision making [34]. Ecological rationality is based on rational beliefs of things in a given environmental setting where conditions are fluid. Forster [33] argued that both bounded and ecological rationality need to be present in machine learning to mimic the human decision processes.

    Incorporating these two approaches into CDSSs can be challenging. Even though the heuristics that mediate decision processes are simple, the complexity of the cognitive infrastructure underlying heuristic operations can be difficult to implement. Still, Forster [33] argued that machine learning algorithms can be improved by incorporating the principles of bounded and ecological rationality. To carry out this task, Forster [33] suggested that a decision-making machine should have (1) a set of ad hoc rules (or biases) to act on and (2) a set of ecologically viable environmental factors to consider.

    Clark [35] extended this idea of mediating decision processes by bounded and ecological rationality through a concept he referred to as scaffolding. Clark [35] posited that human reasoning involves three aspects: (1) individual reasoning cast by some form of fast, pattern-completing style of computation (ie, bounded rationality); (2) substantial problem-solving work offloaded onto external structures and processes (eg, social and institutional structures); and (3) public language used as a means of coordinating social structures and mediating individual thought. Thus, decision making and cognition are largely dependent on the capacity to dissipate reasoning throughout the environment to reduce individual workload.

    Holland and colleagues [36] added additional elements that can be useful to understand physician decision making. These elements provide a cognitive framework for problem solving, which includes two distinct schemas: pragmatic reasoning schema and problem schema. Pragmatic reasoning schemas are clusters of abstract inferential rules that characterize relations over general classes of object kinds, event relationships, and problem goals. Problem schemas are used by experts to solve routine problems, where an expert retrieves an appropriate problem schema and provides it with problem-specific parameters.

    The system must also have two types of knowledge structures: mental models and condition-action rules. Holland and colleagues [36] assert that “mental models are transient, dynamic representations of particular, unique situations. They exist only implicitly, corresponding to the organized, multifaceted description of the current situation and the expectations that flow from it.” A condition-action rule can be thought of as an IF (condition)...THEN (action) statement. Together, these knowledge structures allow the mental schemas to operate in order to solve problems.
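    Such a condition-action rule can be sketched directly; the rule, the situation fields, and the alert text below are hypothetical. The condition tests the transient description of the current situation (the mental model), and the action fires when the condition holds.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ConditionActionRule:
    """An IF (condition) ... THEN (action) statement; contents are illustrative."""
    name: str
    condition: Callable[[dict], bool]
    action: str

# The mental model: a transient representation of the current, unique situation
situation = {"allergies": ["penicillin"], "pending_order": "amoxicillin"}

rules = [
    ConditionActionRule(
        name="allergy-check",
        condition=lambda s: s.get("pending_order") == "amoxicillin"
        and "penicillin" in s.get("allergies", []),
        action="Alert: pending order conflicts with a documented allergy",
    ),
]

# Fire every rule whose condition matches the current mental model
fired = [rule.action for rule in rules if rule.condition(situation)]
print(fired)
```

    Together, the mental model (the `situation` dictionary) and the condition-action rules reproduce the two knowledge structures Holland and colleagues describe.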

    To successfully implement and use CDSSs, these mental models have to be identified. Hayek [37] stated that knowledge is not given to anyone in its entirety. This statement underscores why CDSSs are so important: in theory, CDSSs lessen the cognitive resources a physician needs to make decisions.

    Constraints

    The major limitation of CDSSs is that scaffolding cannot be fully captured by computers. The environmental, clinical, and social constraints in which physicians practice are difficult to include as inputs into a CDSS. In addition, reproducing a physician’s tacit knowledge through mental models and condition-action rules is a formidable objective. Physicians must also be able to support their decisions and are skeptical of recommendations or claims that lack supporting evidence or transparency. The fact that CDSSs do not reveal how output decisions are made may be a driving force behind the lack of user acceptance.


    Discussion

    Means for Solving User Acceptance of Clinical Decision Support Systems

    Studies have revealed that responses to CDSSs can be unfavorable when resulting improvements in patient outcomes are inconsistent [6]. Also, some studies have reported incidents of patient harm associated with CDSS implementation [38]. Despite these findings, limited research has formally evaluated the impact of user acceptance. Based on our comprehensive review of the literature, we found both favorable and unfavorable user acceptance of CDSSs.

    If a user finds a product frustrating or perceives that the purpose of the product is to limit autonomy, the user may not use the product or may do so inappropriately [39]. Vashitz et al [40] explain the consequence of loss of autonomy as reactance. Reactance is an unpleasant motivational state whereby people react to situations to retain freedom and autonomy. Reactance may exist when physicians feel threatened by clinical reminders for fear that they are losing autonomy and freedom of choice in the presence of such systems. Physicians may have the perception that these systems are meant to replace or degrade their clinical duties. Vashitz et al [40] describe how unsolicited advice may lead to a reactance state if the advice contradicts a person’s original impression of choice options.

    Based on the UTAUT, user expectations need to be taken into consideration for technology to be accepted [12]. Therefore, in the design of CDSSs, the human element cannot be ignored. Reminders and alerts should be presented in such a way that the user does not find them threatening or obtrusive. User needs and expectations of a CDSS should be evaluated early and throughout the development lifecycle. For instance, Gadd et al [18] observed enhanced usability and usefulness by implementing usability testing in the early phases of CDSS development. They evaluated an evolving prototype of the system and observed user interactions over a 3-month period. In a series of sessions, they focused on evaluating user interactions with different sets of system features, such as screen layout, input/output, and links to educational materials. Finally, they considered user feedback on system recommendations in the design process. Compelling suggestions for system enhancements made by users during earlier sessions influenced the development of features that were evaluated in later sessions.

    Peleg et al [28] discussed the development process of their CDSS, in which clinically knowledgeable users worked alongside the developers to design and implement the CDSS. They also used a lifecycle model of user-centered design and evaluation for assessing users’ goals/expectations, workflow, environmental constraints, and tasks. Finally, they conducted usability testing (ie, heuristic evaluation of the user interface, keystroke-level modeling, and cognitive walkthroughs) prior to implementation.

    Developers of CDSSs have attempted to bottle up the decision-making capacity of physicians and place that knowledge into a computer. Current methods to achieve this feat employ rules and machine learning algorithms. However, the lack of user acceptance has impeded CDSS use. Research has shown that consideration of users’ needs and expectations in the design of the CDSS may help overcome this obstacle. We argue that this approach is only part of the solution.

    We propose that CDSSs move away from the black-box process to a more transparent method within the IPO model. Simply put, tell the physician how the computer is making the decision. If the computer can become part of scaffolded knowledge, the physician may view the computer as an aid rather than a threat or hindrance. Research supports the idea that the rules governing alerts be specified to practitioners and the information be presented based on users’ needs and expectations [41].

    Proposal of Models to Gain User Acceptance

    We propose two models to improve CDSS development that may lead to increased utilization and, in turn, improved patient outcomes. The first is the user acceptance and system adaptation design (UASAD) model, which aims to involve end users early in the design and throughout the development of CDSSs. The second replaces the current IPO model of CDSS development with the input-process-output-engage (IPOE) model, which serves to “engage” the physician through CDSS process transparency.

    The UASAD model demands early end-user involvement in CDSS development. User needs and expectations need to be fully realized prior to the development of a CDSS. Another consideration is to evaluate system preparedness to ensure that users can trust the security and privacy of the system. Prototypic designs should undergo an iterative design process following rigorous usability testing in a laboratory and natural setting (ie, pilot study) to ensure that the system works within the cognitive and environmental constraints with which the user functions.

    Finally, user acceptance should be evaluated to ensure that the system is used appropriately. If user acceptance is not achieved above a predefined threshold, the CDSS should be reevaluated from the point of view of user needs and expectations. It should also be subjected to adaptive redesign. This process should iterate until user acceptance exceeds a predefined threshold. To illustrate this process, we have developed a UASAD model (Figure 2). The purpose of the model is to include the user as the focal point of the design process of CDSS.
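    The iterative UASAD loop described above can be sketched as follows. The callables are placeholders that a design team would supply (needs assessment, prototyping, usability testing, and an acceptance survey such as a TAM/UTAUT questionnaire normalized to [0, 1]), and the 0.8 threshold is an arbitrary example.

```python
def uasad_cycle(assess_needs, build_prototype, usability_test, measure_acceptance,
                threshold=0.8, max_iterations=5):
    """Sketch of the UASAD loop: redesign until user acceptance exceeds a
    predefined threshold. All callables are hypothetical placeholders."""
    requirements = assess_needs()
    for _ in range(max_iterations):
        prototype = build_prototype(requirements)
        findings = usability_test(prototype)        # lab and pilot (natural setting) testing
        acceptance = measure_acceptance(prototype)  # eg, normalized survey score in [0, 1]
        if acceptance >= threshold:
            return prototype                        # accepted design
        # Acceptance below threshold: re-evaluate needs/expectations, adaptive redesign
        requirements = assess_needs(findings)
    return None  # acceptance never reached the threshold
```

    The loop makes the model's stopping rule explicit: the design is not released until measured acceptance clears the predefined threshold, and each failed iteration feeds its usability findings back into the needs assessment.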

    The IPOE model offers users a window into the black-box IPO process. Through “engage,” physicians can see how the CDSS makes decisions. The IPOE window is called “engage” because it presents users with the rules that the machine followed to generate the output (Figure 3). Therefore, the user can make informed decisions when deciding whether to accept or reject outputs. “Engage” displays the input, process, and output that led to the CDSS’s decision. The physician can then evaluate the relevancy, validity, supporting evidence, and strength of a recommendation. In this way, the system becomes a component of the physician’s scaffolded knowledge and enables them to act more confidently in accepting the technology and its role in their decision-making processes.
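    A minimal sketch of this idea follows; the rule, threshold, and field names are illustrative assumptions, not clinical guidance. The output is returned together with the inputs used and the rules that fired, so an “engage” view can display them for the physician to accept or reject.

```python
from dataclasses import dataclass

@dataclass
class EngagedOutput:
    """IPOE sketch: an output that carries its own provenance for the engage window."""
    output: str
    inputs_used: dict
    rules_fired: list  # the rules the machine followed to generate the output
    evidence: str      # pointer to the supporting guideline or evidence

def run_ipoe(inputs: dict) -> EngagedOutput:
    rules_fired = []
    output = "No recommendation"
    if inputs.get("hba1c", 0) >= 6.5:  # illustrative rule and threshold
        rules_fired.append("IF HbA1c >= 6.5% THEN suggest diabetes workup")
        output = "Suggest diabetes workup"
    return EngagedOutput(output=output, inputs_used=inputs, rules_fired=rules_fired,
                         evidence="placeholder citation to the underlying guideline")

result = run_ipoe({"hba1c": 7.2})
print(result.output)       # Suggest diabetes workup
print(result.rules_fired)  # the engage window shows why the output was produced
```

    Compared with a plain IPO pipeline that returns only the output string, the extra fields are what the engage window renders, letting the physician check relevancy, validity, and supporting evidence before accepting the recommendation.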

    A limitation of the IPOE model is that in order for the model to work successfully, the physician has to understand the process. Processes that utilize a machine learning algorithm, such as neural networks, do not provide rules. Therefore, it is challenging to make all processes transparent.

    Why Do We Make Bad Decisions?

    Challenging decisions that are processed incorrectly often lead to bad clinical decisions. Most practicing physicians tend to make decisions based on their own medical experience, whereas others pursue consultations or sift through the jargon of relevant research. The most effective physician, though, is the one who couples clinical judgment with computerized decision support tools to leverage the power of CDSSs. Most clinicians exhibit bias toward medical information they already know and therefore tend to focus on findings that agree with the clinical outcome they expect to see in their patients. The use of CDSSs is therefore motivated by efforts to decrease medical errors using existing knowledge and technology. These systems are the result of long-term scientific research into building efficient tools that supplement physicians' clinical experience. Physicians should view CDSSs as added value for making the best decisions in day-to-day practice and better serving their patients; such systems seek to reduce medical errors by enabling practicing physicians to make informed decisions that are both accurate and precise.

    Figure 2. A user acceptance and system adaptation design (UASAD) model. CDSS: clinical decision support system; UTAUT: unified theory of acceptance and use of technology.
    Figure 3. The input-process-output-engage (IPOE) model.

    Conclusion

    Implementation of CDSSs has demonstrated increased efficiency, reduced medical errors, and improved outcomes, but these systems continue to fall short of their full potential [2-9]. We believe this key shortcoming may be partly due to the lack of physician acceptance. In the past, CDSS designs have not incorporated input from physicians and have not revealed their decision-making processes. Consequently, many physicians are hesitant to accept CDSSs, leading to suboptimal implementation. Here we propose two models for designing CDSSs with the goal of improving efficacy and physician acceptance. One model, UASAD, focuses on including the physician in the design process by examining user needs and expectations and the usability of prototypic designs. The other model, IPOE, extends the existing IPO framework by adding an “engage” stage that displays the CDSS process to the physician. This approach allows the physician to include the CDSS as a component of their decisions while maintaining professional autonomy. There is still considerable work to be done to validate these models, yet user acceptance appears to be pertinent for successful CDSS use. Ultimately, if a physician does not accept the technology, it not only poses a threat to the use of the technology but can also pose a threat to the health and well-being of patients.

    Acknowledgments

    The authors would like to express appreciation to Dr Marla Broadfoot, PhD, and Ms Victoria Rand for their contribution to this paper.

    Conflicts of Interest

    None declared.

    References

    1. Patient Safety Network. Patient safety primer: systems approach   URL: https://psnet.ahrq.gov/primers/primer/21/systems-approach [WebCite Cache]
    2. Belard A, Buchman T, Forsberg J, Potter BK, Dente CJ, Kirk A, et al. Precision diagnosis: a view of the clinical decision support systems (CDSS) landscape through the lens of critical care. J Clin Monit Comput 2017 Apr;31(2):261-271. [CrossRef] [Medline]
    3. Castaneda C, Nalley K, Mannion C, Bhattacharyya P, Blake P, Pecora A, et al. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. J Clin Bioinforma 2015;5:4 [FREE Full text] [CrossRef] [Medline]
    4. Damberg C, Timbie J, Bell D, Hiatt L, Smith A, Schneider E. RAND Corporation. 2012. Developing a framework for establishing clinical decision support meaningful use objectives for clinical specialties   URL: https://www.rand.org/pubs/technical_reports/TR1129.html [accessed 2018-02-23] [WebCite Cache]
    5. Eberhardt J, Bilchik A, Stojadinovic A. Clinical decision support systems: potential with pitfalls. J Surg Oncol 2012 Apr 01;105(5):502-510. [CrossRef] [Medline]
    6. Jaspers MW, Smeulers M, Vermeulen H, Peute LW. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. J Am Med Inform Assoc 2011 May 01;18(3):327-334 [FREE Full text] [CrossRef] [Medline]
    7. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005 Apr 02;330(7494):765 [FREE Full text] [CrossRef] [Medline]
    8. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003 Jun 23;163(12):1409-1416. [CrossRef] [Medline]
    9. Davis FD. User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. Int J Man Mach Stud 1993 Mar;38(3):475-487 [FREE Full text] [CrossRef]
    10. Coiera E. Clinical decision support systems. In: Guide to Health Informatics. 2nd ed. Boca Raton, FL: CRC Press; 2003:331-344.
    11. Wendt T, Knaup-Gregori P, Winter A. Decision support in medicine: a survey of problems of user acceptance. Stud Health Technol Inform 2000;77:852-856. [Medline]
    12. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Quarterly 2003;27(3):425-478. [CrossRef]
    13. Van Schaik P, Flynn D, Van Wersch A, Douglass A, Cann P. The acceptance of a computerised decision-support system in primary care: A preliminary investigation. Behav Info Technol 2004 Sep;23(5):321-326. [CrossRef]
    14. Zheng K, Padman R, Johnson MP, Engberg J, Diamond HH. An adoption study of a clinical reminder system in ambulatory care using a developmental trajectory approach. Stud Health Technol Inform 2004;107(Pt 2):1115-1119. [Medline]
    15. Bergman LG, Fors UGH. Computer-aided DSM-IV-diagnostics - acceptance, use and perceived usefulness in relation to users' learning styles. BMC Med Inform Decis Mak 2005 Jan 07;5:1 [FREE Full text] [CrossRef] [Medline]
    16. Curry L, Reed MH. Electronic decision support for diagnostic imaging in a primary care setting. J Am Med Inform Assoc 2011 May 01;18(3):267-270 [FREE Full text] [CrossRef] [Medline]
    17. Zheng K, Padman R, Johnson MP, Diamond HS. Understanding technology adoption in clinical care: clinician adoption behavior of a point-of-care reminder system. Int J Med Inform 2005 Aug;74(7-8):535-543. [CrossRef] [Medline]
    18. Gadd CS, Baskaran P, Lobach DF. Identification of design features to enhance utilization and acceptance of systems for Internet-based decision support at the point of care. Proc AMIA Symp 1998:91-95 [FREE Full text] [Medline]
    19. Johnson M, Zheng K, Padman R. Modeling the longitudinality of user acceptance of technology with an evidence-adaptive clinical decision support system. Decision Support Systems 2014 Jan;57:444-453 [FREE Full text] [CrossRef]
    20. Rosenbloom ST, Talbert D, Aronsky D. Clinicians' perceptions of clinical decision support integrated into computerized provider order entry. Int J Med Inform 2004 Jun 15;73(5):433-441. [CrossRef] [Medline]
    21. Rousseau N, McColl E, Newton J, Grimshaw J, Eccles M. Practice based, longitudinal, qualitative interview study of computerised evidence based guidelines in primary care. BMJ 2003 Feb 08;326(7384):314 [FREE Full text] [Medline]
    22. Shibl R, Lawley M, Debuse J. Factors influencing decision support system acceptance. Decis Support Syst 2013 Jan;54(2):953-961. [CrossRef]
    23. Sousa VEC, Lopez KD, Febretti A, Stifter J, Yao Y, Johnson A, et al. Use of simulation to study nurses' acceptance and nonacceptance of clinical decision support suggestions. Comput Inform Nurs 2015 Oct;33(10):465-472 [FREE Full text] [CrossRef] [Medline]
    24. Terraz O, Wietlisbach V, Jeannot J, Burnand B, Froehlich F, Gonvers J, et al. The EPAGE internet guideline as a decision support tool for determining the appropriateness of colonoscopy. Digestion 2005;71(2):72-77 [FREE Full text] [CrossRef] [Medline]
    25. Wallace CJ, Metcalf S, Zhang X, Kinder AT, Greenway L, Morris AH. Cost effective computerized decision support: tracking caregiver acceptance at the point of care. Proc Annu Symp Comput Appl Med Care 1995:810-813 [FREE Full text] [Medline]
    26. Buenestado D, Elorz J, Pérez-Yarza EG, Iruetaguena A, Segundo U, Barrena R, et al. Evaluating acceptance and user experience of a guideline-based clinical decision support system execution platform. J Med Syst 2013 Apr;37(2):9910. [CrossRef] [Medline]
    27. Heselmans A, Aertgeerts B, Donceel P, Geens S, Van de Velde S, Ramaekers D. Family physicians' perceptions and use of electronic clinical decision support during the first year of implementation. J Med Syst 2012 Dec;36(6):3677-3684. [CrossRef] [Medline]
    28. Peleg M, Shachak A, Wang D, Karnieli E. Using multi-perspective methodologies to study users' interactions with the prototype front end of a guideline-based decision support system for diabetic foot care. Int J Med Inform 2009 Jul;78(7):482-493. [CrossRef] [Medline]
    29. Fullerton JT. A task analysis of American nurse-midwifery practice. J Nurse Midwifery 1987;32(5):291-296. [Medline]
    30. Oshio S, Johnson P, Fullerton J. The 1999-2000 task analysis of American nurse-midwifery/midwifery practice. J Midwifery Womens Health 2002;47(1):35-41. [Medline]
    31. Berner E, La Lande T. Overview of clinical decision support systems. In: Berner ES, editor. Clinical Decision Support Systems: Theory and Practice. New York: Springer Science+Business Media; 2007:3-22.
    32. Musen M, Shortliffe E, Cimino J. Clinical decision-support systems. In: Biomedical Informatics. New York: Springer Science; May 06, 2006:698-736.
    33. Forster M. How do simple rules fit to reality in a complex world? Minds Mach 1999;9(4):543-564. [CrossRef]
    34. Simon H. Models of Bounded Rationality: Behavioral Economics and Business Organization. Volume 2. Cambridge, MA: The MIT Press; 1982.
    35. Clark A. Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: The MIT Press; 1998.
    36. Holland J, Holyoak K, Nisbett R, Thagard P. A framework for induction. In: Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA: The MIT Press; 1989:1-28.
    37. Hayek F. The use of knowledge in society. Am Econ Rev 1945;35(4):519-530 [FREE Full text]
    38. Harrison MI, Koppel R, Bar-Lev S. Unintended consequences of information technologies in health care: an interactive sociotechnical analysis. J Am Med Inform Assoc 2007;14(5):542-549 [FREE Full text] [CrossRef] [Medline]
    39. Walter Z, Lopez MS. Physician acceptance of information technologies: role of perceived threat to professional autonomy. Decis Support Syst 2008 Dec;46(1):206-215. [CrossRef]
    40. Vashitz G, Meyer J, Parmet Y, Peleg R, Goldfarb D, Porath A, et al. Defining and measuring physicians' responses to clinical reminders. J Biomed Inform 2009 Apr;42(2):317-326 [FREE Full text] [CrossRef] [Medline]
    41. Stultz JS, Nahata MC. Computerized clinical decision support for medication prescribing and utilization in pediatrics. J Am Med Inform Assoc 2012;19(6):942-953 [FREE Full text] [CrossRef] [Medline]


    Abbreviations

    CDSS: clinical decision support system
    IPO: input-process-output
    IPOE: input-process-output-engage
    TAM: technology acceptance model
    UASAD: user acceptance and system adaptation design
    UTAUT: unified theory of acceptance and use of technology


    Edited by G Eysenbach; submitted 06.09.17; peer-reviewed by Y Gong, R Agrawal, L Zhou, T Saheb, KC Wong, R Robinson; comments to author 23.11.17; revised version received 02.03.18; accepted 19.03.18; published 18.04.18.

    ©Saif Khairat, David Marc, William Crosby, Ali Al Sanousi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 18.04.2018.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.