Published on 8.1.2025 in Vol 13 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/63703.
Patients’ Experienced Usability and Satisfaction With Digital Health Solutions in a Home Setting: Instrument Validation Study

1Outpatient Division, Amsterdam University Medical Center, Meibergdreef 9, Amsterdam, the Netherlands

2Department of Medical Psychology, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, the Netherlands

3Digital Health, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands

4Quality of Care, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands

5Personalized Medicine, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands

6Amsterdam Institute for Infection and Immunity, Amsterdam, the Netherlands

7Department of Medical Informatics, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, the Netherlands

8Department of Nephrology, Amsterdam University Medical Center, Amsterdam, the Netherlands

9Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Republic of Singapore

Corresponding Author:

Susan J Oudbier, MD


Abstract

Background: The field of digital health solutions (DHS) has grown rapidly in recent years. DHS include tools for self-management, which support individuals in taking charge of their own health. The usability of DHS, as experienced by patients, is pivotal to adoption. However, well-known questionnaires that evaluate usability and satisfaction use complex terminology derived from human-computer interaction and are therefore not well suited to assessing the usability experienced by patients using DHS in a home setting.

Objective: This study aimed to develop, validate, and assess an instrument that measures experienced usability and satisfaction of patients using DHS in a home setting.

Methods: The development of the “Experienced Usability and Satisfaction with Self-monitoring in the Home Setting” (GEMS) questionnaire followed several steps. Step I assessed content validity by conducting a literature review of current usability and satisfaction questionnaires, collecting statements and discussing these in an expert meeting, and translating each statement and adjusting it to the language level of the general population. This phase resulted in a draft version of the GEMS. Step II comprised assessing its face validity by pilot testing with Amsterdam University Medical Center’s patient panel. In step III, psychometric analysis was conducted and the GEMS was assessed for reliability.

Results: A total of 14 items were included for psychometric analysis and resulted in 4 reliable scales: convenience of use, perceived value, efficiency of use, and satisfaction.

Conclusions: Overall, the GEMS questionnaire demonstrated its reliability and validity in assessing experienced usability and satisfaction of DHS in a home setting. Further refinement of the instrument is necessary to confirm its applicability in other patient populations, in order to promote the development of a steering mechanism that can be applied longitudinally throughout implementation and used as a benchmarking instrument.

JMIR Med Inform 2025;13:e63703

doi:10.2196/63703




Introduction

The number of digital health solutions (DHS) has increased rapidly, with the potential to significantly enhance the way health care is delivered [1]. DHS include, among others, tools for self-management of clinical data such as blood pressure measurements, for medication adherence, and for education on health-related behaviors such as diet, smoking, and exercise [2]. These tools present the opportunity to increase access to health care and optimize disease management, and they ultimately aim to alleviate health care expenditure [3]. Self-management, as per the World Health Organization, encompasses the capacity of individuals to support and sustain their own health, prevent diseases, and cope with illness and disability, whether independently or with the assistance of a health care professional (HCP) [4,5]. The use of DHS serves a dual purpose in patient self-management: (1) facilitating proactive engagement of individuals in their health journey to optimize treatment outcomes and (2) enhancing prevention of negative health outcomes [6,7]. Consequently, ensuring accessibility and adoption of DHS among target users is crucial for effective implementation [8]. The experienced usability of DHS is pivotal to their adoption, especially for individuals with disabilities or those living with chronic diseases who need to make frequent use of a DHS within their care journey [9-11]. Measuring DHS usability and patient satisfaction is crucial to understand and improve accessibility and use of DHS, thereby fostering patient engagement.

The International Organization for Standardization defines usability as comprising effectiveness, efficiency, and satisfaction for a specified user in a specified context of use [12]. In the context of DHS, effectiveness refers to the capacity for thorough and accurate task completion, such as logging into a patient portal or setting personal preferences for medication reminders. Efficiency, on the other hand, involves accomplishing these tasks with minimal effort. Finally, satisfaction is expressed as the comfort and acceptability experienced by patients when using a DHS tool. Usability is often measured with (validated) usability and satisfaction questionnaires, as they allow efficient collection and structured assessment of data from a large number of individual users [13,14]. Usability questionnaires originate from the fields of human-computer interaction and user-centered design and have emerged as a means to evaluate the effectiveness, efficiency, and satisfaction of interactive systems, particularly software and digital interfaces, from the perspective of end users [15]. Therefore, existing well-known and widely applied usability questionnaires, such as the System Usability Scale (SUS) and the mHealth App Usability Questionnaire (MAUQ), use software terminology such as “the various functions in this system” or “navigation between screens” [16-18]. These statements are difficult to interpret for individuals lacking familiarity with software terminology, particularly for patients with low levels of digital literacy [19]. They are therefore not suited to measuring the usability of self-management tools in health care practice by all users.

In addition, introducing DHS in a self-management care journey may increase disparities, as using them requires particular skills comprising both health and digital literacy [20]. In terms of patient characteristics, patients with high health literacy, a higher educational level, and familiarity with DHS find it easier to use these tools [21]. Variability in digital literacy skills among patients is well recognized, posing challenges to DHS utilization [22]. Comprehensive research on the specific patient groups for which DHS are relevant, and our understanding of usability in this domain, are still in the nascent stages. Disparities arising from the utilization of technology might lead one group to adopt the technology while another group opts not to use it. With the increasing availability of and reliance on DHS [26], these tools should be usable for the majority of the patient population. Evaluations of patient experiences with DHS should therefore also be accessible to diverse groups of patients. Thus, to optimize health outcomes and deliver high-quality care, evaluating patients’ experienced DHS usability and satisfaction in a home setting is imperative for health care organizations and HCPs [1,23]. To ensure patient inclusivity, a general and accessible instrument is needed that can be applied as a steering mechanism, deployed at multiple points in time, to measure usability and satisfaction of DHS in a home setting.

The aim of this study is to develop, validate, and assess the reliability of an instrument that measures experienced usability of, and satisfaction with, DHS use, taking digital (language) literacy into account. In developing the Experienced Usability and Satisfaction with Self-monitoring in the Home Setting (GEMS) questionnaire, our goal is to find a middle ground between innovation and familiarity, drawing from established statements and questionnaires while tailoring them to evaluate patients’ experiences with DHS from an inclusive perspective. In doing so, we aim to advance DHS implementation and expand our understanding of end users’ needs for efficient, effective, and satisfying DHS use.


Methods

Ethical Considerations

The Medical Ethical Committee of Amsterdam University Medical Center (Academic Medical Center) declared that this study was not subject to the Medical Research Involving Human Subject Act and that further approval was not required (W22 291 # 22.352).

GEMS Questionnaire Development

To develop and validate the questionnaire “Gebruiksvriendelijkheid en Ervaring met Monitoren in de ThuisSetting” (GEMS), translated as Experienced Usability and Satisfaction with Self-monitoring in the Home Setting, we followed several steps, as depicted in Figure 1.

Figure 1. Flowchart of the development of experienced usability and satisfaction with digital health solutions in a home setting.

Step I: Content Validity - Collecting User Experience Statements

To design the GEMS questionnaire, we first searched for published literature on user experience questionnaires in the context of DHS in PubMed using the keywords “Digital Health Solutions,” “Digital Health Technologies,” “Self-Management tools,” “Digital health apps,” “mHealth apps,” AND (“Usability” OR “Satisfaction”) [24]. We searched for questionnaires that measured end-user experiences, and restricted our search to studies published in the last 5 years due to the rapidly evolving nature of the field.

After the literature review, an expert meeting was held, for which we invited several usability experts in the field. We went through the domains and statements from the validated questionnaires retrieved from the literature search. The outcome of this meeting was a list of requirements for domains with items that should be included in the GEMS questionnaire. This is in line with the 6 domains of usability, according to the general guidelines for usability assessment [12,25]: “Effectiveness,” “Efficiency,” “Satisfaction,” “Learnability,” “Perceived value,” and “Privacy and Security Issues.”

After the selection of the items during the expert meeting, we translated the items that were only available in English into Dutch. We applied a forward-backward translation (English to Dutch) procedure for each item, executed by 2 translators with native-level proficiency in both Dutch and English (DPN and Stephanie Medlock). A formal assessment of each item’s linguistic complexity using the Common European Framework of Reference for Languages was conducted, including translating items to B1 level where required, by an expert with experience in making patient instructions accessible (Marieke van Maanen) [26,27]. Items from 6 individual (validated) questionnaires were collected (Table S1 in Multimedia Appendix 1). In addition, insights from the article by Berkman and Karahoca [28] were integrated into the process: they describe that the sensitivity of a scale can vary with the responses, whereas in human-computer interaction a scale is expected to be sensitive to differences between systems rather than between people. This insight enriched the questionnaire development with current research findings and best practices in usability metrics. We therefore kept the item scoring consistent with the current scoring methodology across responses. This resulted in sufficient differentiation at the system level; however, further refinement is required to optimize the scoring of the GEMS.

Step II: Face Validity - Pilot Testing, Item Selection, and Adaptation

We recruited participants to take part in (1) the evaluation of the questionnaire itself and (2) the evaluation of DHS using the draft GEMS instrument (Figure S1 in Multimedia Appendix 1). Round I consisted of an appreciative inquiry to get feedback from stakeholders, ensuring that the instrument reflected their perspectives and values and that the questions were understandable [29]. We presented the questionnaire to the patient panel of the Amsterdam University Medical Center (n=8; Table S2 in Multimedia Appendix 1). After this round, an expert meeting including all authors (and Thomas Engelsma) was held to adjust the language and wording of the questions.

Step III: Construct Validity - Psychometric Analysis

Round II consisted of the validation of the questionnaire by applying it with users of 2 self-management tools within the Amsterdam University Medical Center patient portal, which are available from the electronic health record for patients under the nephrology department: (1) entering home measurements of kidney transplant patients’ vital signs, such as blood pressure, pulse, and temperature, and (2) medication reminders. Patients were included if they participated in home measurements or used medication reminders, could read and understand Dutch, and had downloaded the app from the patient portal to use one of these functionalities. Patients were invited to participate in this study by their HCP (physician or nurse practitioner). Participants provided informed consent online (e-consent). Patients who agreed to participate were contacted by a researcher (SJO or a supporting researcher) to administer the GEMS questionnaire by email. Data were collected using Castor EDC [30]. Patients who did not return or fully complete the questionnaire received a reminder after 2 weeks and, if necessary, a phone call after 4 weeks. After psychometric analysis, an expert meeting was held to discuss the findings, and adjustments were made to the instrument where necessary.

Assessing Acceptability

The data from the questionnaire were analyzed using SPSS Statistics (version 28.0.1.1; IBM) [31]. Respondents who missed more than one item of the GEMS were removed from the data set. Records missing other data, such as demographics, that were not part of the core of the GEMS questionnaire were not excluded. All items were recoded so that “1” was the most negative value on the Likert scale. To enable factor analysis, questions with scales ranging from 1-10 were recoded to 1-5 (1 and 2 were recoded to 1, 3 and 4 to 2, and so on). The question with a Likert scale from 1-7 was recoded to 1-5 by merging the extremes (1 and 2 were recoded to 1; 6 and 7 were recoded to 5).
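As a minimal sketch of this recoding (not the authors’ SPSS syntax; the DataFrame and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical responses: one 1-10 item (eg, the NPS) and one 1-7 item (eg, the SEQ)
df = pd.DataFrame({"q12_nps_1to10": [1, 4, 7, 10], "q5_seq_1to7": [1, 3, 5, 7]})

# 1-10 -> 1-5: adjacent pairs merged, (1,2)->1, (3,4)->2, ..., (9,10)->5
df["q12_recoded"] = (df["q12_nps_1to10"] + 1) // 2

# 1-7 -> 1-5: extremes merged, (1,2)->1 and (6,7)->5
seq_map = {1: 1, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 5}
df["q5_recoded"] = df["q5_seq_1to7"].map(seq_map)
```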

The Single Ease Question (SEQ) is a single-item measure that assesses the complexity of a task for a user, such as entering home blood pressure measurements into the patient portal [32,33]. The SEQ aligns with the main features available in the system [33]. The tasks that patients have to complete differ between the 2 DHS and are therefore difficult to compare; logging into the system is the only task that is consistent across our analyses. Consequently, in the psychometric evaluations, only the question regarding the ease or difficulty of “logging into the system” was included for both DHS assessments. For items where the nonresponse rate reached or exceeded 90%, it was inferred that patients chose not to answer the respective question; such items were deemed unnecessary and removed from the GEMS questionnaire [34]. With regard to the distribution of item scores, an item was considered redundant for inclusion in the GEMS questionnaire if more than 90% of respondents chose the same answer [34].
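The two screening rules can be sketched as follows (a hypothetical pandas Series of one item’s responses, with NaN for unanswered questions):

```python
import pandas as pd

def screen_item(responses: pd.Series) -> dict:
    # Removal rule: >=90% of patients left the item unanswered
    nonresponse = responses.isna().mean()
    answered = responses.dropna()
    # Redundancy rule: >90% of the answered responses are the same value
    top_share = answered.value_counts(normalize=True).max() if len(answered) else 1.0
    return {"drop_for_nonresponse": nonresponse >= 0.90, "flag_as_skewed": top_share > 0.90}

# Example: an item answered "1" by nearly everyone would be flagged as skewed
print(screen_item(pd.Series([1, 1, 1, 1, 1, 1, 1, 1, 1, 2])))
```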

Assessing Construct Validity

An item correlation analysis was performed using the Spearman rank-order correlation coefficient. All items were compared with each other to identify inter-item overlap, with rs>0.70 indicating possible singularity. Before performing a factor analysis, we tested whether the data set was suitable by assessing the Kaiser-Meyer-Olkin measure of sampling adequacy (>0.60) and the Bartlett test of sphericity (P<.05) [35,36]. A principal component analysis (PCA) with direct oblimin rotation was used for factor analysis. In addition, a scree plot was made of the PCA results, and the number of values above the scree plateau was taken as the number of factors. In case of no clear scree plateau, an eigenvalue threshold of 1.0 was used.
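A rough Python equivalent of these checks, using the factor_analyzer package (not the authors’ SPSS procedure; the item matrix is randomly generated purely as a stand-in for the real data, and factor_analyzer’s “principal” extraction stands in for SPSS’s PCA):

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Stand-in for the real 92 x 14 matrix of recoded item scores (1-5)
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(92, 14)),
                     columns=[f"Q{i}" for i in range(1, 15)])

spearman = items.corr(method="spearman")        # flag item pairs with rs > 0.70
chi2, p = calculate_bartlett_sphericity(items)  # suitability: want P < .05
_, kmo = calculate_kmo(items)                   # suitability: want KMO > 0.60

fa = FactorAnalyzer(n_factors=4, method="principal", rotation="oblimin")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()  # scree: values above the plateau (or > 1.0)
loadings = fa.loadings_                # loadings below 0.40 treated as not loading
```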

Assessing Reliability and Internal Consistency

For all factors extracted with PCA, reliability and internal consistency were assessed using the Cronbach α (>0.70) and item-total correlations (>0.40). Per factor, items were dropped one by one to see whether removing an item would raise the Cronbach α to the threshold of 0.70. Finally, the items were scrutinized in an expert meeting (SJO, LWDP, DPN, SAN, HJM, and EMAS) using the results of the aforementioned analyses to determine which items should be dropped and which should remain. In addition, we assigned labels to the constructs.
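These reliability statistics reduce to standard formulas; a minimal sketch (scale is a hypothetical DataFrame holding the items of one factor):

```python
import pandas as pd

def cronbach_alpha(scale: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = scale.shape[1]
    return k / (k - 1) * (1 - scale.var(ddof=1).sum() / scale.sum(axis=1).var(ddof=1))

def item_diagnostics(scale: pd.DataFrame) -> pd.DataFrame:
    # Corrected item-total correlation (item vs rest of scale, threshold >0.40)
    # and Cronbach alpha of the scale if the item is deleted (threshold >0.70)
    rows = {}
    for col in scale.columns:
        rest = scale.drop(columns=col)
        rows[col] = {
            "item_total_corr": scale[col].corr(rest.sum(axis=1)),
            "alpha_if_deleted": cronbach_alpha(rest) if rest.shape[1] > 1 else float("nan"),
        }
    return pd.DataFrame(rows).T
```

Note that for a 2-item factor, α if an item is deleted is undefined (only one item remains), which is consistent with the empty cells for the 2-item scales in Table 2.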


Results

Step I: Content Validity

In evaluations of DHS, researchers readily access numerous validated questionnaires from the literature, using them as tools for assessing usability and satisfaction in order to improve the product or system. Drawing from our literature review, the SUS is the most widely used usability evaluation instrument in the digital health industry [10,11]. For a long time, it has been standard procedure to evaluate the usability of digital technology using general benchmarking tools, which has led to the adoption of generic instruments like the SUS [11]. However, this questionnaire was developed in the early stages of the human-computer interaction field, at a time when digital health did not yet exist [16,37]. Newer questionnaires in the field, such as the MAUQ and the eHealth Usability Benchmarking Instrument, try to be more specific within their domain; however, these questionnaires are still extensive, not easy to deploy, and use terminology derived from human-computer interaction [11,18]. In addition, as questionnaires such as the SUS and the Usability Metric for User Experience (UMUX) are primarily designed for software development, they use complex software-related terminology, such as the functionalities of a system, that is often not understood by the general population [11].

We excluded statements regarding software interaction due to their complexity, which could potentially hinder understanding. We collected 14 unique statements from the identified questionnaires [12,25]. We chose to incorporate the 4-item UMUX (with Likert scale 1‐5), along with SEQ (Likert scale 1‐7). To include learnability, we added a question from the SUS on whether patients had to learn a lot about the specific DHS before they could use it (Likert scale 1‐5). Regarding perceived value, we added 2 questions from the MAUQ on whether the DHS contributed to the patient’s health, and whether patients had the feeling that the DHS improved health care (both Likert scale 1‐7). Finally, for perceived value, we added a question from Timmermans et al [38] on whether using the DHS reminded patients of being sick (Likert scale 1‐5). To assess privacy and security, we added a question from Timmermans et al [38] (Likert scale 1‐5). Regarding satisfaction, we opted to include the Net Promoter Score (NPS; Likert scale 1‐10), the Customer Satisfaction Score (CSAT; Likert scale 1‐5), and continued use, as we aimed to investigate whether satisfaction had an influence on continued use and vice versa (Likert scale 1‐10). We added demographics such as gender, age, educational level, and health literacy [39,40]. At a later stage, we also added one question on digital literacy. The final GEMS questionnaire for validation consisted of 14 items (Table S4 in Multimedia Appendix 1).

Step II: Face Validity

In total, 92 patients participated in the validation: 65.2% (n=58) were male, 38% (n=35) were aged between 40 and 59 years, and 32.6% (n=30) had a higher professional education (Table S3 in Multimedia Appendix 1). All 92 patients were included in the psychometric analysis. All items presented to patients had a response rate of over 95%. Regarding item skewness, no single answer option was chosen by more than 90% of respondents for any question. In the distribution of scores, the highest proportion of “not applicable” responses was 10.9%, on Q5 (question 5; “Q#” denotes the questions involved in this study), and the highest proportion of missing values was 17.4%, on Q13. No items of the GEMS were removed. Not all patients completed the question about digital literacy, as this question was added to the demographics at a later stage (n=43). Patients’ remarks and suggestions for improvement mainly concerned Q5: some patients were unfamiliar with the nondigital method of filling in home measurements on paper and were therefore unable to answer this question. In addition, for Q8, patients indicated that the disease process is much more intense for some people than for others and that this question is difficult to answer in the home setting (Table 1).

Table 1. Description of each measurement instrument found in explorative literature search.
| Measurement instrument | Abbreviation | Author | Items, n | Population validated | Scale | Reference where questionnaire has been used in health care context |
| --- | --- | --- | --- | --- | --- | --- |
| **Usability** | | | | | | |
| Questionnaire for User Interaction Satisfaction | QUIS | Chin et al [41] | 27 | 150 users | 1-9 | [42,43] |
| System Usability Scale | SUS | Brooke [16] | 10 | 184, aimed to include a diverse range of participants | 1-5 | [44-46] |
| mHealth App Usability Questionnaire | MAUQ | Zhou et al [18] | 20 | 128, majority were students with a bachelor’s degree | 1-7 | [47,48] |
| The Usability Metric for User Experience | UMUX | Finstad [49] | 4 | 255, not extensively described | 1-7 | [50] |
| Poststudy System Usability Questionnaire | PSSUQ | Lewis [37]; Lewis [51] | 16 | 48, and 210 in a second validation study | 1-7 | [44,52] |
| Technology Acceptance Model questionnaire | TAM | Davis [53] | 12 | 107 users | 1-7 | [54] |
| User version of the Mobile App Rating Scale | uMARS | Stoyanov et al [55] | 20 | 164 young people | 1-5 | [56,57] |
| Mobile App Rating Scale | MARS | Terhorst et al [58] | 23 | 1299 mobile health apps | 1-5 | [59] |
| eHealth Usability Benchmarking Instrument | HUBBI | Broekhuis et al [11] | 18 | 148 persons | 1-5 | [60] |
| **Satisfaction** | | | | | | |
| Net Promoter Score | NPS | Reichheld [61]; Mekonnen [62] | 1 | Not described | 1-10 | [63,64] |
| Client Satisfaction Questionnaire | CSQ-8 | Larsen et al [65] | 8 | Different populations, also in health care settings | 1-4 | [66] |
| Patient Satisfaction Questionnaire III | PSQ-III | Ware et al [67] | 50 | Various populations, including individuals with various medical conditions | 1-5 | [68] |
| **Other** | | | | | | |
| Single Ease Question | SEQ | Nielsen and Molich [25] | 1 | Not described | 1-7 | [69] |

Step III: Construct Validity

The Spearman rank correlation coefficient indicated that Q8 was redundant, as it showed a negative correlation with almost all items. The calculated UMUX score was also taken into consideration but did not show a significant correlation with items other than its own questions (Q1-Q4). None of the items was extremely skewed. Since none of the items was completed by less than 95% of the respondents, all items were included in the psychometric analyses. The data set consisted of 14 items that were used for psychometric analysis (Table S4 in Multimedia Appendix 1 presents the original Dutch items). The Kaiser-Meyer-Olkin measure was 0.72, and the Bartlett test of sphericity was significant (P<.01). PCA suggested a 5-factor solution; however, the fifth factor had an eigenvalue of 1.05, and we therefore decided not to include it. Q1 did not load onto any factor. Common factor analysis using 4 factors with a factor loading threshold of 0.40 resulted in Q1 and Q5 not loading onto any factor. Q7 cross-loaded onto factors 3 and 4 and was dropped from factor 4 because it lowered the Cronbach α. Q8, Q10, and Q13 were also dropped because these items lowered the Cronbach α of their respective factors. As shown in Table 2, the item-total correlation was considered sufficient (>0.40) for all items. Factors 1 and 3 had the lowest Cronbach α (0.66 and 0.67, respectively) and factors 2 and 4 the highest (0.77 and 0.78, respectively).

Table 2. Results of the GEMS validation.
| Item description | NA^a ≥25% | rs^b >0.70 | CFA^c loading | ITC^d | Cronbach α^e |
| --- | --- | --- | --- | --- | --- |
| **Factor 1: Convenience of use** (Cronbach α of scale=0.66; 95% CI^f 0.49-0.78) | | | | | |
| Q2: “Using [this DHS^g] is a frustrating experience.”^h (Het is vervelend om [digitale tool] te gebruiken.) | —^i | — | 0.85 | 0.52 | — |
| Q6: “I needed to learn a lot of things before I could get going with [this DHS].”^h (Ik moest veel over [digitale tool] leren voordat ik het goed kon gebruiken.) | — | — | 0.85 | 0.52 | — |
| **Factor 2: Satisfaction** (Cronbach α of scale=0.77; 95% CI 0.67-0.84) | | | | | |
| Q11: “Overall, how satisfied were you with [DHS]?”^h (Hoe tevreden bent u over [digitale tool]?) | — | — | −0.61 | 0.60 | 0.70 |
| Q12: “How likely is it that you would recommend [DHS] to a friend or colleague?”^h (Hoe waarschijnlijk is het dat u [digitale tool] aan iemand anders die deze zorg nodig heeft aanraadt?) | — | — | −0.59 | 0.63 | 0.67 |
| Q14: “I would use [this DHS] again.”^h (Hoe waarschijnlijk is het dat u de [digitale tool] blijft gebruiken?) | — | — | −0.44 | 0.62 | 0.70 |
| **Factor 3: Perceived value** (Cronbach α of scale=0.67; 95% CI 0.51-0.79) | | | | | |
| Q7: “The [DHS] would be useful for my health and well-being.”^h (Het gebruik van [digitale tool] draagt bij aan mijn gezondheid.) | — | — | 0.50 | 0.53 | — |
| Q9: “The [DHS] improved my access to health care services.” (Ik denk dat [digitale tool] de zorg verbetert.) | — | — | 0.53 | 0.53 | — |
| **Factor 4: Efficiency in use** (Cronbach α of scale=0.78; 95% CI 0.67-0.86) | | | | | |
| Q3: “[This DHS] is easy to use.”^h ([Digitale tool] is makkelijk te gebruiken.) | — | — | −0.62 | 0.65 | — |
| Q4: “I have to spend too much time correcting things with [this DHS].”^h (Ik ben te veel tijd kwijt aan het gebruik [van digitale tool].) | — | — | −0.43 | 0.65 | — |

^a NA: “I do not know or not applicable” responses ≥25%.

^b rs: Spearman rank correlation coefficient between items >0.70.

^c CFA: confirmatory factor analysis.

^d ITC: item-total correlation.

^e Cronbach α of scale if the item is deleted.

^f See Baumgartner and Chung [29].

^g DHS: digital health solution.

^h Original English item from the source questionnaire.

^i Not applicable.

After PCA, a collaborative expert meeting was held to determine the most appropriate labels for the factors based on existing usability terminology: convenience of use, perceived value, efficiency of use, and satisfaction. These constructs are known in the field of human-computer interaction. More complete definitions of the 4 factors as applied to the home setting are shown in Textbox 1. The final constructs of the GEMS are outlined in Figure 2.

Textbox 1. Description of the constructs of the GEMS questionnaire.

Constructs and their explanations

Convenience of use

This highlights the ease and comfort with which users can interact with a digital health solution at home. Convenience of use is a component of usability, emphasizing aspects that make the user experience more convenient, pleasant, and smooth [70]. This means tailoring the solution to fit patient preferences and expectations for self-management at home.

Perceived value

Perceived value refers to the extent to which a system or product fulfills users’ needs and goals, addressing the pragmatic utility it offers to its intended users [70]. It encompasses the relevance and value of a digital health solution’s features and functionalities in addressing user requirements in a home setting. In a health care setting, perceived value ultimately determines the practical utility and adoption of the digital health solution by patients [71,72].

Efficiency of use

In a home setting, efficiency of use highlights how quickly users can perform tasks in a digital health solution once they are familiar with it. Efficiency of use is influenced by factors such as learnability, memorability, and error prevention, as it pertains to how quickly and effortlessly users can achieve their goals when using a self-management tool at home [12].

Satisfaction

According to International Organization for Standardization standard 9241, satisfaction is the degree to which users experience comfort and have positive attitudes toward using the product [12]. For self-management tools, satisfaction goes beyond mere functionality and usability, extending to factors such as efficacy, empowerment, and emotional well-being [73].

Figure 2. Visual abstract of final results and named constructs of Experienced Usability and Satisfaction With Self-Monitoring in the Home Setting Questionnaire.

Discussion

Principal Findings

Our aim was to develop a steering instrument that enables the measurement of usability and satisfaction at various stages of adoption, with constructs that are relevant for a home setting, adapted to the language proficiency of the general population, and that might serve as a benchmarking instrument for usability and satisfaction with DHS. Following the initial translation phase of this study, it became evident that the items of the GEMS were easy for patients to understand. Although we designed the questionnaire for a broad population, our evaluation revealed that the majority of study participants had a higher level of education. In research, reaching those with lower health and digital literacy levels for evaluation is a known challenge [74]. The applicability of the DHS varies depending on the specific needs and characteristics of different users. The GEMS questionnaire has been tailored to a B1 language proficiency level, which enhances its accessibility. However, there is a risk of obtaining biased GEMS outcomes depending on the demographic profile (eg, age, education, digital literacy, and health literacy) of the respondents. Collecting these demographic data is therefore essential to understand whether DHS users with different profiles assess experienced usability and satisfaction differently. Gaining these insights may help in tailoring the DHS to user needs based on GEMS outcomes. This necessitates further refinement of the DHS to ensure its suitability across diverse populations.

Internal consistency of the GEMS was sufficient, and factor analysis confirmed 4 factors, to which we assigned the following labels: convenience of use, perceived value, efficiency of use, and satisfaction. For some scales, internal consistency, as measured with the Cronbach α, was slightly below the recommended minimum of 0.70 [75]. A possible explanation could lie in our sample characteristics, as several participants also used similar applications, such as smartwatches that provided reminders. This dual usage could have influenced their responses, leading to expressed preferences or aversions toward the use of medication reminders.

Given that the NPS was integrated into our satisfaction metric within the GEMS questionnaire, we opted to use the raw NPS as a component within our scoring scale. This approach incorporates the absolute values of promoters, passives, and detractors, rather than calculating the traditional NPS by subtracting the percentage of detractors from the percentage of promoters [76]. In a manner similar to the SUS questionnaire, we reversed scales in our questionnaire to enhance reliability and validity. This approach serves several key purposes: (1) mitigating response bias, (2) maintaining participant attention and engagement, (3) ensuring balance and consistency within the questionnaire, and (4) detecting random responding by participants [16]. For the factors and questions derived from the factor analysis, we carefully examined whether reversed scaling was still present in the questionnaire and concluded that it was retained in 2 of the 4 constructs.
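To illustrate the distinction (the ratings and the promoter/detractor cutoffs below are the conventional ones, not study data):

```python
# Hypothetical 1-10 recommendation ratings from the NPS item
ratings = [10, 9, 8, 6, 10, 7, 3, 9]

promoters = sum(r >= 9 for r in ratings) / len(ratings)   # ratings 9-10
detractors = sum(r <= 6 for r in ratings) / len(ratings)  # ratings <= 6
traditional_nps = round((promoters - detractors) * 100)   # ranges from -100 to +100

raw_scores = ratings  # the GEMS instead keeps the absolute item scores
```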

For the statements in the GEMS questionnaire, we decided to adopt, translate, and adapt the statements from the UMUX and adjust them to the use of DHS in a home setting. However, in some cases, we labeled the factors differently from those in the UMUX. Specifically, the statement “It is frustrating to use this digital tool” is classified under “convenience of use” in the GEMS questionnaire, while it is categorized under “satisfaction” in the UMUX; its interrelationship with the other GEMS questions aligns more closely with the definition of convenience. We decided to address the experiences related to the context in which the DHS are used, specifically deployment in a home setting: first, the difficulty of using the technology due to a lack of digital literacy or misunderstanding of terminology; second, ease of use, as the primary concern in a home setting is how conveniently the DHS can be integrated into daily routines. In addition, we translated and modified the UMUX question “I spend too much time correcting things with this system” to make it applicable at a higher conceptual level. The revised question no longer concerns the correction of errors but instead evaluates whether the DHS is usable within its intended context [28].

Closing the feedback loop between patients and HCPs through the utilization of DHS represents a pivotal strategy in enhancing health care delivery with DHS. By enabling patient self-management through communication and data exchange, digital tools foster a collaborative environment where patients can actively participate in their care and providers can make informed decisions [77]. Incorporating the GEMS questionnaire as part of a comprehensive evaluation of DHS may enhance usability and satisfaction, contributing to adoption and to the overall effectiveness of the DHS in improving health outcomes. The GEMS is therefore of relevance and value to HCPs, decision makers, health insurance companies, and public health institutions. The outcomes of the GEMS can assist these stakeholders in identifying important issues as perceived by patients and in developing strategies to address these issues and improve the quality of their DHS.

Strengths and Limitations

The strength of the GEMS questionnaire lies in the convergence of the 4 factors (convenience of use, perceived value, efficiency of use, and satisfaction), its concise format, its adaptation to the language proficiency of the general population, and its utility as a steering tool that can be used longitudinally during DHS implementation. The main strength of this study is that we applied a structured, 4-step methodology to develop the GEMS questionnaire, consisting of both qualitative and quantitative evaluation phases. We also included 2 functionalities of our electronic health record in our evaluation to ensure that the GEMS is applicable to a range of self-management tools. One limitation of this study is that a subset of patients may have been unable to participate in these (digital) evaluations due to requirements such as internet access, concentration, self-confidence, and proficient reading skills. We recognize that these evaluations cannot be used without considering potential issues of inequality [78]. According to the literature, this can be due to several reasons. First, the DHS may currently not be usable enough, for instance, because users were not involved during the design phase [79]. Second, health care professionals might be unfamiliar with the technology and not offer these tools to all patients [80]. Third, patients may feel they have inadequate knowledge to use these tools [81] or may have low (digital) literacy and therefore be unable to use them [82]. Hence, we recommend further evaluating and refining the GEMS questionnaire in populations characterized by low (digital) literacy. Currently, we are conducting such a validation study in a population of individuals with low socioeconomic status and chronic obstructive pulmonary disease using a self-management tool. For these groups, we will conduct the evaluation on paper, using concept cards and translating the questions into graphics that visually support them [83]. By adopting this method, we aim to facilitate a comprehensive understanding of usability and satisfaction tailored to the needs and preferences of this specific population.

Because we used statements from various questionnaires, some questions had different Likert scales during the initial validation phase of the GEMS. To ensure consistency in the analysis, the scales were converted. This might impact the interpretation of results, as participants may interpret and respond to items differently given an expanded or contracted range of options [84,85]. The literature supports rescaling of 5- and 7-point scales for comparison, although these scales may produce higher mean scores than a 10-point scale [84]. Finally, if the GEMS is used in another cultural setting, correct linguistic and cultural translation is needed to ensure content validity [86]. To facilitate this, an ongoing study is assessing a German translation of the GEMS questionnaire.

Conclusion

The GEMS questionnaire, comprising 9 items, has demonstrated its reliability and validity in assessing the usability and satisfaction of DHS within a home environment. It offers valuable insights into patient experiences with self-management tools, covering convenience of use, perceived value, efficiency of use, and satisfaction. This development and validation study was conducted with patient populations using medication reminders and home measurements. Further refinement is necessary to confirm the efficacy and applicability of the GEMS questionnaire in patient populations with low digital literacy. Using the GEMS questionnaire as a steering metric reflects a dedication to improving usability and satisfaction within DHS. In conclusion, the GEMS may promote the development of robust DHS, which enriches experienced usability and satisfaction and augments the efficacy of the DHS, thereby fostering positive health outcomes.

Acknowledgments

The authors would like to thank all experts who participated in the development rounds of the GEMS; Marieke van Maanen for her expertise on the adjustment of the items to the language proficiency of patients; Dr. Stephanie Medlock for contributing to the translation of the items; Hugo van Mens for his expertise on the Usability Metric for User Experience; Thomas Engelsma for the final expert meeting and questionnaire development process; and Ro Glasius for questionnaire support.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Flowchart of the inclusion process, original English or Dutch items and final version of translated Dutch version of GEMS, original GEMS, and demographics of included sample for validation (n=92).

PDF File, 221 KB

  1. Marwaha JS, Landman AB, Brat GA, Dunn T, Gordon WJ. Deploying digital health tools within large, complex health systems: key considerations for adoption and implementation. NPJ Digit Med. Jan 27, 2022;5(1):13. [CrossRef] [Medline]
  2. Li R, Liang N, Bu F, Hesketh T. The effectiveness of self-management of hypertension in adults using mobile health: systematic review and meta-analysis. JMIR Mhealth Uhealth. Mar 27, 2020;8(3):e17776. [CrossRef] [Medline]
  3. van de Vijver S, Tensen P, Asiki G, et al. Digital health for all: how digital health could reduce inequality and increase universal health coverage. Digit Health. 2023;9:20552076231185434. [CrossRef] [Medline]
  4. WHO guideline on self-care interventions for health and well-being, 2022 revision. World Health Organization. 2022. URL: https://www.who.int/publications/i/item/9789240052192 [Accessed 2024-12-16]
  5. Global strategy on digital health 2020-2025. World Health Organization; 2021. URL: https://www.who.int/docs/default-source/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf [Accessed 2024-12-16]
  6. Kraus S, Jones P, Kailer N, Weinmann A, Chaparro-Banegas N, Roig-Tierno N. Digital transformation: an overview of the current state of the art of research. Sage Open. Jul 2021;11(3):21582440211047576. [CrossRef]
  7. Moqri M, Herzog C, Poganik JR, et al. Biomarkers of aging for the identification and evaluation of longevity interventions. Cell. Aug 2023;186(18):3758-3775. [CrossRef]
  8. Maqbool B, Herold S. Potential effectiveness and efficiency issues in usability evaluation within digital health: a systematic literature review. J Syst Softw. Feb 2024;208:111881. [CrossRef]
  9. Jaspers MWM. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inform. May 2009;78(5):340-353. [CrossRef]
  10. Maramba I, Chatterjee A, Newman C. Methods of usability testing in the development of eHealth applications: a scoping review. Int J Med Inform. Jun 2019;126:95-104. [CrossRef] [Medline]
  11. Broekhuis M, van Velsen L, Hermens H. Assessing usability of eHealth technology: a comparison of usability benchmarking instruments. Int J Med Inform. Aug 2019;128:24-31. [CrossRef] [Medline]
  12. ISO 9241-11:20:2018 ergonomics of human-system interaction - part 11: usability: definitions and concepts. International Organization for Standardization. 2018. URL: https://www.iso.org/standard/63500.html [Accessed 2024-12-16]
  13. Albert B, Tullis T. Measuring the User Experience: Collecting, Analyzing, and Presenting UX Metrics. Morgan Kaufmann; 2022. ISBN: 0128180811
  14. Simola S, Hörhammer I, Xu Y, et al. Patients’ experiences of a national patient portal and its usability: cross-sectional survey study. J Med Internet Res. Jun 30, 2023;25:e45974. [CrossRef] [Medline]
  15. ISO/IEC 25002:2024: systems and software engineering — systems and software quality requirements and evaluation (square) — quality model overview and usage. International Organization for Standardization. 2024. URL: https://www.iso.org/standard/78175.html [Accessed 2024-12-16]
  16. Brooke J. SUS-a quick and dirty usability scale. In: Usability Evaluation In Industry. Taylor & Francis; 1996:189-194.
  17. Soltanzadeh L, Babazadeh Sangar A, Majidzadeh K. The review of usability evaluation methods on tele health or telemedicine systems. Front Health Inform. 2022;11(1):112. [CrossRef]
  18. Zhou L, Bao J, Setiawan IMA, Saptono A, Parmanto B. The mHealth App Usability Questionnaire (MAUQ): development and validation study. JMIR Mhealth Uhealth. Apr 11, 2019;7(4):e11500. [CrossRef] [Medline]
  19. Keogh A, Brennan C, Johnston W, et al. Six-month pilot testing of a digital health tool to support effective self-care in people with heart failure: mixed methods study. JMIR Form Res. Mar 1, 2024;8:e52442. [CrossRef] [Medline]
  20. Smith B, Magnani JW. New technologies, new disparities: the intersection of electronic health and digital health literacy. Int J Cardiol. Oct 1, 2019;292:280-282. [CrossRef] [Medline]
  21. Albert NM, Dinesen B, Spindler H, et al. Factors associated with telemonitoring use among patients with chronic heart failure. J Telemed Telecare. Feb 2017;23(2):283-291. [CrossRef] [Medline]
  22. Fitzpatrick PJ. Improving health literacy using the power of digital communications to achieve better health outcomes for patients and practitioners. Front Dig Health. 2023;5:1264780. [CrossRef] [Medline]
  23. Mathews SC, McShea MJ, Hanley CL, Ravitz A, Labrique AB, Cohen AB. Digital health: a path to validation. NPJ Digit Med. 2019;2(1):38. [CrossRef] [Medline]
  24. White J. PubMed 2.0. Med Ref Serv Q. 2020;39(4):382-387. [CrossRef] [Medline]
  25. Nielsen J, Molich R. Heuristic evaluation of user interfaces. Presented at: CHI ’90: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; Apr 1-5, 1990; Seattle, WA. [CrossRef]
  26. Council of Europe. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge University Press; 2001. ISBN: 0521005310
  27. Figueras N. The CEFR, a lever for the improvement of language professionals in Europe. The Mod Lang J. Dec 2007;91(4):673-675. [CrossRef]
  28. Berkman MI, Karahoca D. Re-assessing the Usability Metric for User Experience (UMUX) scale. J Usability Stud. 2016;11(3). URL: https://uxpajournal.org/assessing-usability-metric-umux-scale/ [Accessed 2024-12-12]
  29. Baumgartner TA, Chung H. Confidence Limits for Intraclass Reliability Coefficients. Meas Phys Educ Exerc Sci. Sep 2001;5(3):179-188. [CrossRef]
  30. Castor electronic data capture. Castor EDC. 2019. URL: https://castoredc.com [Accessed 2024-11-17]
  31. SPSS Statistics 28.0.0 - IBM documentation. IBM. URL: https://www.ibm.com/docs/en/spss-statistics/28.0.0 [Accessed 2024-12-16]
  32. Sauro J, Dumas JS. Comparison of three one-question, post-task usability questionnaires. Presented at: CHI ’09: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; Apr 4-9, 2009; Boston, MA. [CrossRef]
  33. Sauro J, Lewis JR. Quantifying the User Experience: Practical Statistics for User Research. Morgan Kaufmann; 2016. [CrossRef] ISBN: 0128025484
  34. de Vet HCW, Terwee CB, Mokkink LB, Knol DL. Measurement in Medicine: A Practical Guide. Cambridge University Press; 2011. ISBN: 1139497812
  35. Bartlett MS. Tests of significance in factor analysis. Br J Stat Psychol. Jun 1950;3(2):77-85. [CrossRef]
  36. Kaiser HF, Michael WB. Little Jiffy factor scores and domain validities. Educ Psychol Meas. Jul 1977;37(2):363-365. [CrossRef]
  37. Lewis JR. Psychometric evaluation of the post-study system usability questionnaire: the PSSUQ. Proc Hum Factors Soc Annu Meet. Oct 1992;36(16):1259-1260. [CrossRef]
  38. Timmermans I, Meine M, Szendey I, et al. Remote monitoring of implantable cardioverter defibrillators: patient experiences and preferences for follow-up. Pacing Clin Electrophysiol. Feb 2019;42(2):120-129. [CrossRef] [Medline]
  39. Chew LD, Griffin JM, Partin MR, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. May 2008;23(5):561-566. [CrossRef] [Medline]
  40. Fransen MP, Van Schaik TM, Twickler TB, Essink-Bot ML. Applicability of internationally available health literacy measures in the Netherlands. J Health Commun. 2011;16 Suppl 3:134-149. [CrossRef] [Medline]
  41. Chin JP, Diehl VA, Norman LK. Development of an instrument measuring user satisfaction of the human-computer interface. Presented at: CHI ’88: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; May 15-19, 1988; Washington, DC. [CrossRef]
  42. Salmani H, Nahvijou A, Sheikhtaheri A. Smartphone-based application for self-management of patients with colorectal cancer: development and usability evaluation. Support Care Cancer. Apr 2022;30(4):3249-3258. [CrossRef] [Medline]
  43. Dinari F, Bahaadinbeigy K, Moulaei K, Nemati A, Ershad Sarabi R. Designing and evaluating a mobile-based self-care application for patients with gastrointestinal cancer to manage chemotherapy side effects. Med J Islam Repub Iran. 2022;36:14. [CrossRef] [Medline]
  44. Bhanvadia SB, Brar MS, Delavar A, et al. Assessing usability of smartwatch digital health devices for home blood pressure monitoring among glaucoma patients. Informatics (MDPI). Dec 2022;9(4):79. [CrossRef] [Medline]
  45. Stamm-Balderjahn S, Bernert S, Rossek S. Promoting patient self-management following cardiac rehabilitation using a web-based application: a pilot study. Digit Health. 2023;9:20552076231211546. [CrossRef] [Medline]
  46. Bostrøm K, Børøsund E, Varsi C, et al. Digital self-management in support of patients living with chronic pain: feasibility pilot study. JMIR Form Res. Oct 23, 2020;4(10):e23893. [CrossRef] [Medline]
  47. Moorthy P, Weinert L, Harms BC, Anders C, Siegel F. German version of the mHealth app usability questionnaire in a cohort of patients with cancer: translation and validation study. JMIR Hum Factors. Nov 1, 2023;10:e51090. [CrossRef] [Medline]
  48. Fedkov D, Berghofen A, Weiss C, et al. Efficacy and safety of a mobile app intervention in patients with inflammatory arthritis: a prospective pilot study. Rheumatol Int. Dec 2022;42(12):2177-2190. [CrossRef] [Medline]
  49. Finstad K. The usability metric for user experience. Interact Comput. Sep 2010;22(5):323-327. [CrossRef]
  50. Sobnath D, Philip N, Kayyali R, Nabhani-Gebara S, Pierscionek B, Raptopoulos A. Mobile self-management application for COPD patients with comorbidities: a usability study. Presented at: 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom); Sep 14-16, 2016; Munich, Germany. [CrossRef]
  51. Lewis JR. Psychometric evaluation of the PSSUQ using data from five years of usability studies. Int J Hum Comput Interact. Sep 2002;14(3-4):463-488. [CrossRef]
  52. Bakogiannis C, Tsarouchas A, Mouselimis D, et al. A patient-oriented app (ThessHF) to improve self-care quality in heart failure: from evidence-based design to pilot study. JMIR Mhealth Uhealth. Apr 13, 2021;9(4):e24271. [CrossRef] [Medline]
  53. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. Sep 1989;13(3):319. [CrossRef]
  54. Greer DB, Abel WM. Exploring feasibility of mHealth to manage hypertension in rural black older adults: a convergent parallel mixed method study. Pat Prefer Adherence. 2022;16:2135-2148. [CrossRef] [Medline]
  55. Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and validation of the user version of the Mobile Application Rating Scale (uMARS). JMIR Mhealth Uhealth. Jun 10, 2016;4(2):e72. [CrossRef] [Medline]
  56. Agher D, Sedki K, Despres S, Albinet JP, Jaulent MC, Tsopra R. Encouraging behavior changes and preventing cardiovascular diseases using the prevent connect mobile health app: conception and evaluation of app quality. J Med Internet Res. Jan 20, 2022;24(1):e25384. [CrossRef] [Medline]
  57. Wong W, Ming D, Pateras S, et al. Outcomes of end-user testing of a care coordination mobile app with families of children with special health care needs: simulation study. JMIR Form Res. Aug 28, 2023;7:e43993. [CrossRef] [Medline]
  58. Terhorst Y, Philippi P, Sander LB, et al. Validation of the Mobile Application Rating Scale (MARS). PLoS ONE. 2020;15(11):e0241480. [CrossRef]
  59. Oakley-Girvan I, Yunis R, Fonda SJ, et al. Usability evaluation of mobile phone technologies for capturing cancer patient-reported outcomes and physical functions. Digit Health. 2023;9:20552076231186515. [CrossRef] [Medline]
  60. Barbarossa F, Amabili G, Margaritini A, et al. Design, development, and usability evaluation of a dashboard for supporting formal caregivers in managing people with dementia. Presented at: PETRA ’23: Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments; Jul 5-7, 2023; Corfu, Greece. [CrossRef]
  61. Reichheld FF. The one number you need to grow. Harv Bus Rev. Dec 2003;81(12):46-54. [Medline]
  62. Mekonnen A. The ultimate question: driving good profits and true growth. J Target Meas Anal Mark. Jul 2006;14(4):369-370. [CrossRef]
  63. Jonker LT, Plas M, de Bock GH, Buskens E, van Leeuwen BL, Lahr MMH. Remote home monitoring of older surgical cancer patients: perspective on study implementation and feasibility. Ann Surg Oncol. Jan 2021;28(1):67-78. [CrossRef] [Medline]
  64. Gelbman BD, Reed CR. An integrated, multimodal, digital health solution for chronic obstructive pulmonary disease: prospective observational pilot study. JMIR Form Res. Mar 17, 2022;6(3):e34758. [CrossRef] [Medline]
  65. Larsen DL, Attkisson CC, Hargreaves WA, Nguyen TD. Assessment of client/patient satisfaction: development of a general scale. Eval Program Plann. 1979;2(3). [CrossRef]
  66. Shiraishi M, Kamo T, Kumazawa R, et al. A multicenter, prospective, observational study to assess the satisfaction of an integrated digital platform of online medical care and remote patient monitoring in Parkinson’s disease. Neurol Clin Neurosci. May 2023;11(3):152-163. [CrossRef]
  67. Ware JE, Snyder MK, Wright WR, Davies AR. Defining and measuring patient satisfaction with medical care. Eval Program Plann. 1983;6(3-4):247-263. [CrossRef] [Medline]
  68. Temple-Oberle C, Yakaback S, Webb C, Assadzadeh GE, Nelson G. Effect of smartphone app postoperative home monitoring after oncologic surgery on quality of recovery: a randomized clinical trial. JAMA Surg. Jul 1, 2023;158(7):693-699. [CrossRef] [Medline]
  69. Nautiyal S, Shrivastava A. Designing a WhatsApp inspired healthcare application for older adults: a focus on ease of use. Comp Hum Interact Res Appl. 2023. [CrossRef]
  70. Shaw N, Sergueeva K. Convenient or useful? consumer adoption of smartphones for mobile commerce. 2016. Presented at: DIGIT 2016 Proceedings; Dec 11, 2016; Dublin, Ireland.
  71. Jo HS, Jung SM. Factors influencing use of smartphone applications for healthcare self-management: an extended technology acceptance model. Korean J Health Educ Promot. Oct 1, 2014;31(4):25-36. [CrossRef]
  72. Cho H, Porras T, Flynn G, Schnall R. Usability of a consumer health informatics tool following completion of a clinical trial: focus group study. J Med Internet Res. Jun 15, 2020;22(6):e17708. [CrossRef] [Medline]
  73. Peters D, Calvo RA, Ryan RM. Designing for motivation, engagement and wellbeing in digital experience. Front Psychol. 2018;9:797. [CrossRef] [Medline]
  74. Arias López MDP, Ong BA, Borrat Frigola X, et al. Digital literacy as a new determinant of health: a scoping review. PLOS Dig Health. Oct 2023;2(10):e0000279. [CrossRef] [Medline]
  75. Gliem JA, Gliem RR. Calculating, interpreting, and reporting cronbach’s alpha reliability coefficient for likert-type scales. Presented at: Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education; Mar 27-29, 2003; Columbus, OH. URL: https://scholarworks.indianapolis.iu.edu/items/63734e75-1604-45b6-aed8-40dddd7036ee [Accessed 2024-12-16]
  76. Adams C, Walpola R, Schembri AM, Harrison R. The ultimate question? Evaluating the use of Net Promoter Score in healthcare: a systematic review. Health Expect. Oct 2022;25(5):2328-2339. [CrossRef] [Medline]
  77. Weinhold I, Gastaldi L. From shared decision making to patient engagement in health care processes: the role of digital technologies. In: Challenges and Opportunities in Health Care Management. Springer; 2014:185-196. [CrossRef]
  78. Goedhart NS, Verdonk P, Dedding C. “Never good enough.” A situated understanding of the impact of digitalization on citizens living in a low socioeconomic position. Policy Internet. Dec 2022;14(4):824-844. [CrossRef]
  79. Reynoldson C, Stones C, Allsop M, et al. Assessing the quality and usability of smartphone apps for pain self-management. Pain Med. Jun 2014;15(6):898-909. [CrossRef] [Medline]
  80. Cher BP, Kembhavi G, Toh KY, et al. Understanding the attitudes of clinicians and patients toward a self-management eHealth tool for atrial fibrillation: qualitative study. JMIR Hum Factors. Sep 17, 2020;7(3):e15492. [CrossRef] [Medline]
  81. Halim NAA, Sopri NHA, Wong YY, Mustafa QM, Lean QY. Patients’ perception towards chronic disease self-management and its program: a cross-sectional survey. Chron Illn. Jul 4, 2023:17423953231185385. [CrossRef] [Medline]
  82. Marklund S, Tistad M, Lundell S, et al. Experiences and factors affecting usage of an ehealth tool for self-management among people with chronic obstructive pulmonary disease: qualitative study. J Med Internet Res. Apr 30, 2021;23(4):e25672. [CrossRef] [Medline]
  83. Agúndez Del Castillo R, Ferro L, Silva E. The use of digital technologies in the co-creation process of photo elicitation. QRJ. 2024. [CrossRef]
  84. Dawes J. Do data characteristics change according to the number of scale points used? An experiment using 5-point, 7-point and 10-point scales. Int J Market Res. Jan 2008;50(1):61-104. [CrossRef]
  85. Oylum K, Arslan F. Impact of the number of scale points on data characteristics and respondents’ evaluations: an experimental design approach using 5-point and 7-point Likert type scales. J Fac Pol Sci. 2016;55:1-20. [CrossRef]
  86. Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976). Dec 15, 2000;25(24):3186-3191. [CrossRef] [Medline]


DHS: digital health solutions
HCP: health care professional
MAUQ: mHealth App Usability Questionnaire
PCA: principal component analysis
SEQ: Single Ease Question
SUS: System Usability Scale
UMUX: Usability Metric for User Experience


Edited by Jennifer Hefner; submitted 27.06.24; peer-reviewed by Ghanshyam Parmar, Marcia Wright; final revised version received 04.10.24; accepted 06.10.24; published 08.01.25.

Copyright

© Susan J Oudbier, Ellen M A Smets, Pythia T Nieuwkerk, David P Neal, S Azam Nurmohamed, Hans J Meij, Linda W Dusseljee-Peute. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 8.1.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.