Abstract
Artificial intelligence (AI) scribes, which use ambient documentation technology to capture clinician-patient dialogue and auto-generate visit notes, promise to alleviate documentation burden and reduce clinician burnout. By discussing empirical evidence, highlighting research gaps, and emphasizing technology-related ethical issues beyond established AI and data ethics, we show how this promise carries epistemic and relational risks. We proceed in 5 steps: first, we conceptually distinguish ambient documentation from broader ambient intelligence, frame it as a “tech-fix” for documentation-related burnout, and establish the notion of AI scribes as epistemic agents rather than mere transcription tools; second, we summarize empirical evidence on AI scribes, especially with regard to their impact on physicians, highlighting risks such as cognitive deskilling, clinical deprofessionalization, and shifts in epistemic accountability; third, we analyze effects on the patient-physician relationship, focusing on relational and interpretive dimensions, including changes in communication patterns and the omission of narrative nuance; fourth, we highlight risks to patient agency and epistemic justice; and fifth, we propose a design framework for ethical deployment beyond techno-solutionism. We argue that the usefulness of AI scribes should not be justified by short-term effects, but must be assessed in the context of clinical reasoning to improve not only the working conditions of physicians, but also the quality of patient care. The paper proposes a research and design agenda to counter simple “tech-fixes” for systemic problems, envisioning AI scribes that safeguard clinical reasoning and honor patient narratives while delivering relief from documentation burdens.
JMIR Med Inform 2026;14:e88235. doi: 10.2196/88235
Keywords
artificial intelligence scribes; clinical care; clinical documentation; clinician burnout; ethical issues
The requirement for detailed clinical documentation imposes a heavy burden on health care professionals, manifesting both in time use and cognitive load []. Clinicians spend up to 50% of their shifts interacting with computers and up to 50% of that time documenting in electronic health record (EHR) systems [], depending on the medical specialty and clinical setting []. In observational and survey studies, high documentation burden is strongly associated with clinician burnout, job dissatisfaction, turnover intention, and reduced time for direct patient care [,]. Furthermore, increased documentation demands are linked to a higher frequency of documentation errors or omissions, risks to patient safety, and lower perceived quality of care, as clinicians may rush or shortcut parts of the record [].
Ambient documentation is envisioned to solve these problems with a combination of 2 artificial intelligence (AI)–based technologies, resulting in so-called “AI scribes” []. AI scribes combine ambient listening through audio recording, speech recognition, and natural language processing of the conversation during a patient visit with generative AI that extracts key clinical details from the transcript in real time, produces structured notes, drafts the documentation for the clinician to review, and then integrates it into the EHR [-]. Therefore, AI scribes are more than transcription tools [,]. In contrast to other ambient intelligence systems monitoring various practices at the patient’s bedside [], however, AI scribes are restricted to documenting and processing oral communication and should be assessed against the common practice of physician-only documentation and the alternative of human medical scribes ().
| | Physician | Human medical scribe | AI scribe^a |
| --- | --- | --- | --- |
| Presence in consultation | Physician and patient | Physician, patient, medical scribe (in-person or remote) | Physician, patient, AI scribe |
| Capture of conversation | Listens and records key points | Medical scribe listens and records | AI scribe records, transcribes, and processes |
| Note production | Writes or dictates note | Drafts note; physician edits | Generates summaries from transcripts; physician edits |
| Role in knowledge production | Sole epistemic agent | Recorder under supervision | Epistemic agent selects and rearranges information |
| Responsibility | Physician | Physician supervises scribe | Shared socio-technical process under physician’s responsibility |
| Scalability | Limited by physician time | Limited by staffing and costs | Highly scalable, automated, vendor-dependent |
^a AI: artificial intelligence.
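As a minimal sketch of the pipeline contrasted in the table (ambient listening and transcription, generative summarization into a SOAP-style draft, and physician review before EHR integration), the following Python fragment is purely illustrative: the function names, the `DraftNote` structure, and the placeholder outputs are assumptions, not any vendor's actual architecture.

```python
# Illustrative sketch only: all names and behaviors here are assumptions,
# not any vendor's actual AI scribe API.
from dataclasses import dataclass

@dataclass
class DraftNote:
    """A SOAP-structured draft awaiting physician review."""
    subjective: str = ""
    objective: str = ""
    assessment: str = ""
    plan: str = ""
    finalized: bool = False  # only the physician review step sets this

def transcribe(audio: bytes) -> str:
    """Speech recognition stage (placeholder output)."""
    return "Patient reports intermittent headaches for two weeks."

def summarize(transcript: str) -> DraftNote:
    """Generative stage: selects and restructures the transcript.
    This is where the scribe acts as an epistemic agent rather than a
    recorder, deciding what counts as clinically relevant."""
    return DraftNote(subjective=transcript,
                     assessment="(model-proposed assessment)",
                     plan="(model-proposed plan)")

def physician_review(note: DraftNote, edits: dict) -> DraftNote:
    """The physician remains the accountable author: edits are applied,
    and only then is the note eligible for EHR integration."""
    for section, text in edits.items():
        setattr(note, section, text)
    note.finalized = True
    return note

# Usage: the physician, not the model, finalizes the record.
note = physician_review(summarize(transcribe(b"...")),
                        {"assessment": "Tension-type headache"})
```

The design point the sketch encodes is that `finalized` can only be set in the human review step, mirroring the shared socio-technical responsibility listed in the table.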
Clinicians and nurses increasingly use AI scribe products to optimize their productivity []. While industry meets this demand in order to capture a promising market, the evidence on the prevalence of AI scribes is poor and mostly based on surveys conducted by companies, one of which states that nearly 40% of primary care clinicians use AI daily for clinical documentation []. A survey of general practitioners in Australia found that 11% had installed or tried an AI scribe and that regular use more than doubled within 4 months, from 3% to 8% in 2024 []. According to the weekly online polls conducted by the Royal Australian College of General Practitioners, the percentage of general practitioners currently using AI scribes increased from 22% in 2024 to 40% in 2025 [].
However, commentators warn that “the technology is currently too unsophisticated to improve productivity” []. It summarizes words spoken during a consultation but cannot replace clinical reasoning, and the time saved may be lost again because clinicians need to revise AI-generated summaries and add their own input. In addition to these significant objections, there are concerns stemming from AI and data ethics, such as issues of data protection and privacy, bias, patient consent, hallucinations, omissions, misinterpretations, and speaker attribution errors [,-]. On the one hand, AI scribes may lack higher levels of explicability in terms of algorithmic transparency, which poses an epistemic hurdle in identifying misinterpretations of medical terminology and can lead to inaccurate documentation [,]. On the other hand, even the most basic level of process transparency can be missing when, due to a lack of disclosure, patients are not aware that an AI scribe is used at all []. Furthermore, equity issues arise when some clinicians avoid using AI in shared offices because of privacy worries, and patients with speech impairments might be excluded []. Additionally, long-term effects such as “cognitive debt” from overreliance on AI are a matter of concern [,].
While many studies report effects such as reduction in burnout or “pajama time,” there is a lack of ethics research examining the normative challenges of deploying AI scribes for physicians, patients, and the patient-physician relationship. For example, although loss of physician autonomy is an important topic [], existing ethics literature currently focuses on institutional governance and justice rather than on professional autonomy []. Our approach will start from the concrete circumstances in clinical encounters and focus on health care providers and patients as relevant stakeholders, as well as their specific relationship in clinical consultations. We will map and describe ethical risks based on empirical evidence and ethical-epistemic concepts that are suitable for identifying the dimensions of these issues ().
| Ethical dimension | Current practice | New or intensified issues with AI scribes^a |
| --- | --- | --- |
| Professional autonomy | Documentation supports reflection and memory | Automation may shift physicians into editors |
| Epistemic agency | Physician authors and curates the record | AI influencing knowledge formation and reliability gaps |
| Narrative integrity | Physician summarizes patient narratives | AI may omit narrative nuance or filter “nonclinical” content |
| Epistemic justice | Risk of testimonial injustice by clinicians | Epistemic failures such as speaker misattribution or errors |
| Situational vulnerability | Context-specific privacy and trust issues | Specialty-specific AI modes and privacy safeguards |
| Political economy of data | Data stored within institutional systems | Data processed by vendors and consent to (commercial) secondary uses |
^a AI: artificial intelligence.
Some terminological clarity is required. We will conceive AI scribes as epistemic agents, that is, entities that participate in the pursuit of epistemic goals (ie, goals in knowledge production) within clinical practice and control the formation of beliefs [,]. Their outputs operate at a higher level in the data-information-knowledge-wisdom hierarchy []: rather than merely recording or transcribing speech, they structure, select, and formulate clinically relevant information, thereby shaping the record that underpins subsequent decision-making. As reasoning increasingly unfolds within hybrid systems of human-machine interaction, AI scribes become part of a socio-technical arrangement in which data are captured, filtered, accepted, or discarded through interactions between clinicians and algorithmic tools []. In such settings, epistemic agency should no longer be understood as being represented solely in humans or machines; rather, it emerges relationally through the ways humans and machines jointly shape what becomes recorded as clinically relevant knowledge.
At the same time, AI scribes do not qualify as full artificial agents in a strong philosophical sense. According to Dung’s 5 dimensions of agency (autonomy, goal-directedness, efficacy, planning, and intentionality), current systems at least lack intentionality [,]. Therefore, they cannot be considered independent epistemic subjects. Nevertheless, they perform epistemically relevant tasks, influence what becomes recorded as knowledge, and are treated as contributors to clinical reasoning, while their agency remains tethered to human design choices, institutional settings, and clinical use contexts [] (p. 162). On this basis, we analyze AI scribes as epistemic agents with respect to the ethical and epistemic issues they raise for physicians, patients, and the patient-physician relationship (; ).

Impacts of AI Scribe Use on Physician Practice
Initial evidence on how the use of AI scribes affects clinicians’ professional and personal well-being, as well as the quality of care, shows promising results, even if systematic research is still in its infancy. A survey of 1430 clinicians from 2 US centers, for example, found that ambient documentation technology was associated with reductions in burnout as well as improved well-being scores compared with baseline before ambient documentation technology use []. Olson et al [] found that after 30 days with an ambient AI scribe, burnout among clinicians working in ambulatory clinics decreased from 51.9% to 38.8%. In a retrospective cohort study, Pearlman et al [] report that clinicians who used an AI scribe spent less time in electronic documentation systems than a control group who did not use the AI assistant. No significant differences between the 2 groups were observed, however, for after-hours documentation time, mean appointment length, or monthly number of completed office visits [,].
Although prospective controlled research on the effects of AI scribes as health care interventions is still missing, such initial empirical studies hint at positive consequences for physician well-being, often using after-hours documentation time or burnout indices as end points. Regarding effects on physicians, risks of deprofessionalization and loss of autonomy are highlighted by authors who see documentation as an integral part of clinical reasoning. According to McCoy et al [], the process of taking notes, particularly in complex cases, is not “epiphenomenal” to clinical reasoning but constitutes an inherent part of it. They warn that text generated by a large language model (LLM) could reduce the overall quality of patient charts and draw a parallel: “EHR has shaped the way we think, practice, and record our patients’ stories—and so will LLMs” [] (p. 1564). At the same time, it would be simplistic to equate typing with reasoning. The counterargument holds that reducing clerical documentation load may free cognitive resources for higher-order tasks such as diagnostic reflection and attentive listening. From this perspective, AI scribes could support rather than undermine clinical reasoning if they relieve physicians of the mechanical aspects of documentation. The ethical concern, therefore, is not the delegation of typing itself but the possible erosion of epistemically productive practices, such as synthesizing information, articulating provisional assessments, and identifying knowledge gaps, if clinicians become passive recipients of AI-generated notes.
In addition, a “potential loss of physician autonomy” [] is feared to be associated with the introduction of AI scribes. According to Funer and Wiesing [], physician autonomy as an ethical principle is fundamentally anchored in the purpose of promoting a patient’s health and well-being. With the growing use of AI-based clinical decision support systems, physicians must be able to critically integrate AI outputs into their clinical reasoning to preserve professional judgment and discretion. With regard to AI scribes, the central issue is whether they retain control over how the clinical encounter is represented in the record. This requires that automatically generated transcripts and protocols remain transparent to the physician and can be revised without increasing the overall workload.
At a more foundational level, the ethical evaluation of AI scribes is intrinsically linked to the question of which purposes clinical note-taking actually serves. Only at first glance can AI-driven protocols be seen as mere documentation of what has been found, discussed, and decided in a concrete patient appointment []. A deeper analysis needs to account for the fact that clinical notes, like all written documents, have addressees, purposes, and contexts they serve. As indicated, AI scribes do not only (and not primarily) deliver verbatim transcripts but are designed to summarize them. Such summaries may be used not only by other health care professionals but also for insurance communication, institutional management, and statistics, and potentially by patients themselves.
Depending on the addressee and context, different information is needed, other parts must be removed, and the style of language must be adapted to the respective readership. Due to their flexibility, LLMs might perform well in producing such different styles of reports. Ethical trade-offs, however, can occur when customizing an AI scribe for such generative functions. For example, AI-generated protocols might serve the purposes of intraprofessional communication (and not be useful for a lay readership), or they might be appropriate for patients as readers (thereby potentially losing medical precision). Decisions for and against certain documentation styles, therefore, have an ethical dimension in that they potentially exclude certain stakeholder groups as a readership.
AI Scribes Affecting Patient-Physician Relationship
The relationship between patient and physician is a fundamental element of health care and is even more important in times of AI-driven applications [,]. It is critical for ensuring effective treatment, patient satisfaction, and overall well-being. At its core, the relationship builds on trust, communication, and mutual respect, enabling physicians to provide personalized care while patients engage actively in their health management, facilitated, for example, by informed consent procedures []. Trust in the physician facilitates openness and encourages patients to share honest and intimate information that is needed for accurate diagnosis and effective treatment planning. At first glance, AI scribes might not influence this trustful relationship, as clinical notes usually serve the aim of professional communication. On closer look, however, automated forms of documentation can alter patient-physician communication and their relationship: “When you are being recorded, you’re going to be more careful about what you say—that’s the Hawthorne effect” [] (p. 6).
On the one hand, initial evidence shows that AI scribes strengthen the therapeutic alliance and that patients tend to trust clinicians even if they use AI scribes []. A consistently reported benefit is improved clinician-patient interaction during visits: clinicians focus on the patient, maintaining eye contact and active listening instead of typing on a computer [,,]. An obvious advantage of using AI scribes, therefore, lies in the opportunity to build better rapport with the patient during the clinical encounter.
On the other hand, physicians’ trustworthiness can be compromised by epistemic hurdles that prevent patients from assessing how physicians reach clinical decisions. A reliability gap arises from the epistemic asymmetry that patients face when they cannot clearly determine whether to trust physicians’ judgments once AI systems are involved []. Applied to AI scribes, this reliability gap complicates the patient-physician relationship by obscuring who accounts for the phrasing, selection, or omission of particular information in the medical record, and how these choices are made. When AI scribes act as epistemic agents controlling clinical content, their output becomes part of the basis on which care is delivered. This epistemic opacity can undermine trust in clinical relationships, because trust depends not only on interpersonal rapport but also on the transparency and credibility of how knowledge is formed and communicated. AI scribes reconfigure these relational dynamics.
Furthermore, there are critical issues in using AI scribes that relate to the question of which parts of the consultation are documented and which are not. For example, Gordon Schiff [], a primary care practitioner, emphasizes the importance of informal social conversation (“chitchat”) as an integral part of medical consultations, serving both a fuller understanding of the patient’s health-related living situation and a sustainable patient-provider relationship. This “chitchat,” however, is deliberately omitted in AI-generated notes, not as a bug but as a feature of the system. Vendors of AI scribes often advertise their products by claiming that they only summarize “useful” information based on the SOAP (Subjective, Objective, Assessment, and Plan) categories. Schiff [], therefore, asks whether “our society want(s) an empathetic, highly relational care delivery system built around primary care and trusting relationships” or whether it will rather develop in the direction of efficient, convenient, highly transactional, and dehumanizing care.
It should be noted, however, that the omission of informal conversation from the formal medical record is not unique to AI-generated documentation. Clinical notes have always been selective, legally and administratively oriented documents, and social pleasantries are rarely recorded regardless of the documentation method. The ethical concern therefore does not lie primarily in the absence of such “chitchat” in the written record but in the possibility that the logic of AI-assisted documentation may gradually reshape the interaction itself in terms of communication and behavior patterns. If clinicians and patients adapt their communication styles to what is perceived as “recordable” or “clinically relevant,” informal exchanges may become less frequent even at the primary level of interaction.
Impacts of AI Scribes on Patients
In medical encounters, clinicians often chart while patients narrate their concerns. This practice can divide clinician attention, disrupt conversational flow, and impair rapport, ultimately compromising both documentation accuracy and the quality of the patient experience []. The introduction of AI scribes in the clinical encounter can directly affect patients’ experiences in terms of affective states such as shame, anger, or despair, which gives it ethical significance. One immediate effect is simply a reduced human presence in the exam room, which can improve patient comfort and privacy during sensitive examinations []. By taking over the role of a medical scribe—the human alternative to AI scribes—AI-driven ambient documentation can eliminate the need for additional staff, for example, in dermatology and venereology, meaning that patients may no longer have a third party observing intimate exams [,].
However, it is an open question which clinical contexts are more ethically sensitive or expose more situational vulnerability [,]. In acute emergencies, patients may be distressed, disoriented, or unable to consent meaningfully; in psychiatric care, communication often concerns highly intimate, identity-related, or legally sensitive issues; in pediatrics, the triadic constellation between child, parents, and clinician complicates questions of privacy, assent, and representation. Deploying AI scribes identically across all these settings risks ignoring context-specific vulnerabilities and may undermine both trust and epistemic justice, for instance, if patients censor themselves because they feel surveilled or if sensitive relational nuances are systematically filtered out by the system. Design principles for AI scribes should vary by these contexts, considering pertinent issues of patient comfort and privacy as well as other aims, for example, related to security and interprofessional care.
Although we know that patients tend to judge physicians who use AI as less competent and are significantly less willing to make an appointment with physicians who use AI in general [], there is little evidence on how patients perceive AI scribes, how much choice or control they wish to have, or how their own narratives are mediated by the AI. In an industry-initiated online survey, 57% of participants favored AI use if it reduces screen time and improves direct interaction with physicians, indicating that patients are willing to accept such technology when it enhances the human connection in care [].
An advantage lies in the potentially increased attention physicians can now devote to capturing their patients’ illness narratives. Kleinman [] distinguishes between disease (biomedical condition), illness (the person’s lived experience of symptoms, suffering, and changes to their identity), and sickness (societal meaning of being ill). By highlighting the patient’s story and the gap between physician and patient perspectives (disease vs illness), Kleinman argues that health care must attend to how people make sense of their illness, not just to how bodies malfunction. Although malfunctioning bodies are also a concern from the patient’s perspective, reducing health care solely to the professional perspective misrepresents the holistic human experience. This aligns with the principles of narrative medicine, which emphasize the importance of honoring patients’ stories of illness as a path to better health care and empathy []. Narrative medicine argues that medicine practiced with “narrative competence,” that is, the ability to absorb and reflect on patient stories, leads to more humane and effective care. The point is not merely that AI scribes record the transcript of the patient’s illness narrative correctly; the positive effect lies in enabling doctors to practice active listening. To unlock the ethos of narrative medicine, AI scribes must help physicians listen more deeply and ensure that what enters the medical record reflects not just data but the moral and experiential truth of the patient’s story.
Capturing patient narratives is not just an esthetic or humanistic goal but can also counter epistemic injustice in health care. Epistemic injustice refers to the unfair treatment of someone as a knowledge holder []. In health care settings, patients have historically faced testimonial injustice, that is, their symptoms or accounts are dismissed or disbelieved, and hermeneutical injustice, that is, gaps in collective understanding that leave their experiences poorly understood due to, for example, lack of medical terminology. The framework of epistemic injustice helps to explain why, for instance, women or minority patients with pain conditions are often not taken seriously: pervasive stereotypes lead some clinicians to give their testimonies less credence. Similarly, patients with illnesses that lack clear biomedical tests (such as fibromyalgia or chronic fatigue) frequently report not having the language or credibility to explain the reality of their suffering, that is, a hermeneutical gap that leaves them feeling misunderstood.
Beyond these structural forms of epistemic injustice, AI scribes may also introduce more immediate technical biases. Speech recognition and language models often perform less accurately for speakers with strong accents, nonstandard dialects, speech impairments, or atypical prosody. In such cases, patients’ accounts may be partially misrecognized, simplified, or omitted altogether, resulting in a technologically mediated form of testimonial injustice. These disparities are not merely technical shortcomings but carry ethical significance, as they systematically reduce the epistemic credibility of certain patient groups within the clinical record.
Integrating narrative-focused AI scribes could help address such epistemic injustices. Recording the patient’s narrative verbatim counters testimonial injustice by ensuring that the patient’s account is documented in their own voice, making it harder to ignore or downplay. Likewise, broader availability of patient narratives can start to fill hermeneutical gaps that are relevant to medical research: patterns in patient-described symptoms and experiences may emerge and be analyzed, expanding the understanding of conditions that patients struggle to articulate. Hence, if AI scribes are to contribute to epistemic justice, their design should prioritize the secure preservation of verbatim patient narratives. Deleting these records, as some vendors do, risks erasing precisely the voices that narrative medicine seeks to amplify. However, the ethical appeal of preserving verbatim patient narratives must be weighed against data minimization and storage-limitation requirements as well as cybersecurity risks when retaining identifiable raw audio or transcripts. Retention of raw records can also create medicolegal exposure if a discoverable verbatim transcript diverges from the finalized clinical note. In addition, AI-generated documentation introduces the risk of “hallucinated compliance,” in which standardized sections of a note are inserted even though they were not discussed or performed. Such fabricated entries represent a distinct epistemic failure, as they can overwrite the patient’s actual testimony and create a misleading clinical record that appears complete or compliant while lacking factual grounding.
It should be emphasized that the verbatim transcript of the patient’s account follows a different structure than the SOAP framework used for physicians’ purposes. From the patients’ perspective, there are also practical advantages for remembering and understanding medical information when they have access to AI scribe outputs tailored to their needs. Patients often struggle to remember what has been said during a clinical encounter: up to 80% of the medical information given by physicians is forgotten immediately, and much of what is remembered is incorrect [], a problem with significant disadvantages for the therapeutic alliance and adherence. Providing patients with a transcript or summary of the conversation can be a remedy.
Indeed, clinical studies have found that patients recall information much better when they receive audio recordings or transcripts of their visits []. Although this is not the core application area of AI scribes, it may be a beneficial technical design aspect. By automatically generating written summaries of the consultation, an AI scribe can help patients review the doctor’s instructions, medication changes, and explanations at their own pace after the appointment. This memory aid may benefit the patient’s autonomy, therapeutic adherence, and compliance, since patients who understand and remember their options are better equipped to participate in their care decisions []. However, any recommendation to increase the retention of transcripts must be reconciled with data minimization principles, cybersecurity risks, and the liability exposure created by discoverable verbatim records.
Ethical Design Recommendations for AI Scribes
From a critical perspective, the use of AI scribes can be conceived as a “tech-fix” [] for systemic problems in health care systems without correcting the underlying causes in organizational structures, documentation logic, or incentive systems. A well-known feature of technical fixes is that the solution to a problem may become a problem itself []. However, if we are willing to accept the consequences of technological deployment, then this tech-fixing may be ethically justified through its overall net benefits. We identified ethical risks ranging from potential deterioration of clinical reasoning and physician autonomy to threats to patients’ trust in physicians, reliability gaps, and the call for narrative integrity and epistemic justice. Against this background, we outline 4 key design recommendations for AI scribe developers. Implementing these recommendations could help ensure that AI scribes alleviate clinician burnout without undermining the ethical and epistemic foundations of clinical practice.
Mitigate Cognitive Deskilling and Support Clinical Reasoning
If physicians become passive editors of AI-produced notes, their reflective practice and case memory may atrophy over time, rendering documentation more extensive but less meaningful. Automation bias, that is, the tendency to uncritically accept AI outputs, further exacerbates the risk of deskilling by undermining independent verification and careful reasoning, a risk that patients themselves fear to witness []. Therefore, we recommend designing AI scribes as cognitive aids rather than replacements for clinical reasoning. AI scribes should require active clinician engagement in the documentation process so that clinicians maintain their abilities and competencies. For example, the AI scribe could prompt physicians to confirm or refine key assessments and conclusions rather than auto-populating them. In this scenario, clinicians are not merely the human in the loop who reviews and edits the AI’s note before finalizing it; active engagement also preserves the physician’s mental models, experience, and clinical judgment and serves as a check against AI errors.
Support Physician Autonomy and Epistemic Agency
Physician autonomy is a cornerstone of professional ethics, grounding the clinician’s responsibility to exercise independent judgment in the care of patients. Yet, the introduction of AI scribes may erode physician autonomy, resulting in a slippery slope of deprofessionalization: delegation of clinical reasoning to AI scribes. If AI systems assume too much control over how encounters are represented, they also impair physicians’ epistemic authority [], that is, the competence to define what counts as relevant, true, and meaningful in the clinical record. In doing so, AI scribes risk threatening the epistemic agency of physicians through their own epistemic agency. Therefore, AI scribes must be designed to reinforce, not replace, professional judgment.
Safeguard Patients’ Authentic Narratives
Beyond professional autonomy, AI scribes also raise epistemic concerns for patients. When AI scribes selectively capture, summarize, or “clean” patients’ illness narratives, they may inflict epistemic injustices on patients. If AI scribes effectively become the gatekeepers of what enters the medical record, they function as epistemic agents tethered to the design choices of their developers. Therefore, AI scribes should be designed for narrative preservation under data minimization constraints. Rather than retaining raw audio or transcripts long-term by default, AI scribe systems should preserve narrative data only (1) if patients have given informed consent to the processing of these data. Furthermore, (2) these data should be stored within a “contestability window” during which they can be accessed for verification and as a memory aid before default deletion. A further challenge is navigating situations in which selective pausing or revision should reasonably be offered at the patient’s or physician’s request.
However, preserving verbatim patient narratives also raises questions about data ownership and the political economy of clinical AI systems. Many AI scribes are developed and operated by private vendors who may gain access to large-scale datasets of intimate patient-physician conversations. Without appropriate governance, the ethical goal of narrative preservation could unintentionally enable the commercial accumulation and secondary use of highly sensitive data for model training or product development. Therefore, governance should aim at purpose limitation, institutional data stewardship, consent procedures for data protection, and explicit restrictions on secondary uses of conversational data.
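The retention policy sketched above (informed consent plus a contestability window with default deletion) can be illustrated in code. This is a minimal sketch under stated assumptions: the 30-day window, the `NarrativeRecord` structure, and the function name are illustrative choices by the editor, not part of any existing AI scribe system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Assumed policy value; real windows would be set by institutional governance.
CONTESTABILITY_WINDOW = timedelta(days=30)

@dataclass
class NarrativeRecord:
    transcript: str        # raw conversational narrative
    consented: bool        # informed consent to processing of these data
    created_at: datetime   # time of the clinical encounter

def retention_decision(record: NarrativeRecord, now: datetime) -> str:
    """Apply the two conditions from the text: (1) informed consent,
    and (2) retention only within the contestability window."""
    if not record.consented:
        return "delete"  # no consent: raw narrative must not be retained
    if now - record.created_at <= CONTESTABILITY_WINDOW:
        return "retain"  # accessible for verification and as a memory aid
    return "delete"      # default deletion once the window closes
```

The design choice here is that deletion, not retention, is the fallback: any record failing either condition is removed, which mirrors the data minimization default argued for in the text.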
Tailor AI Scribes to Clinical Settings and Situational Vulnerability
Different clinical contexts such as psychiatry, pediatrics, emergency medicine, oncology, or sexual and reproductive health expose patients to varying degrees and types of situational vulnerability []. Therefore, AI scribes should be designed and implemented as context-sensitive technologies, not as generic, one-size-fits-all tools. This includes enabling configurable “modes” that reflect different ethical priorities in different contexts, which must be weighed against excessive alerts or mandatory confirmations leading to workflow friction and “alert fatigue.” Complementary approaches such as periodic post hoc auditing or selective review mechanisms may provide safeguards without burdening the clinical encounter. In practice, however, encounter-level configuration may be difficult to implement in enterprise-wide EHR environments with standardized licensing and infrastructure. Context sensitivity may therefore need to be realized at different levels, for example, through institutional policies, specialty-specific templates, workflow settings, or deployment guidelines rather than through fully individualized system modes. For example, there could be default data minimization and stronger consent routines in psychiatry and sexual health; emphasis on continuity-of-care documentation in chronic care; or heightened attention to parental and surrogate roles in pediatrics. In high-vulnerability settings, defaults may need to be more restrictive, for example, with clearer in-room signaling when recording is active. Context-sensitive deployment also has a temporal dimension: some phases of care (eg, initial crisis intervention, breaking bad news, and end-of-life conversations) may justifiably be documented more sparsely or with delayed AI support, to protect emotional safety.
Conclusions
AI scribes promise meaningful relief from documentation burden, yet their integration into clinical practice raises ethical and epistemic questions that reach far beyond mere workflow efficiency. Viewing these systems as epistemic agents makes visible how they participate in shaping clinical reasoning, mediating patient narratives, and influencing trust within the patient-physician relationship. Their deployment must therefore be guided by context-sensitive design principles that preserve professional autonomy, safeguard patients’ epistemic agency, and respect the situational vulnerabilities inherent in different clinical settings. Rather than functioning as simple “tech-fixes,” AI scribes should be developed and evaluated as tools that augment clinicians’ cognitive work and strengthen the relational foundations of health care. If AI scribes are designed to do more than reduce documentation burden, they may also contribute to better health care.
Acknowledgments
ChatGPT (GPT-5.2) has been used to check the spelling, grammar, and style of the manuscript. Based on the manuscript’s abstract, Napkin has been used to draft , which has been revised and corrected by FU.
Funding
There was no funding supporting this work.
Data Availability
Data sharing is not applicable to this paper, as no datasets were generated or analyzed during the current study. All evidence is publicly available.
Authors' Contributions
Conceptualization: FU, SS
Investigation: FU, SS
Methodology: FU, SS
Visualization: FU
Writing – original draft: FU, SS
Writing – review & editing: FU, SS
Both authors approved the final version of the manuscript.
Conflicts of Interest
None declared.
References
- Wang Z, West CP, Vaa Stelling BE, et al. Measuring Documentation Burden in Healthcare. Agency for Healthcare Research and Quality; 2024. [CrossRef]
- Cox ML, Farjat AE, Risoli TJ, et al. Documenting or operating: where is time spent in general surgery residency? J Surg Educ. Nov 2018;75(6):e97-e106. [CrossRef] [Medline]
- Pinevich Y, Clark KJ, Harrison AM, Pickering BW, Herasevich V. Interaction time with electronic health records: a systematic review. Appl Clin Inform. Aug 2021;12(4):788-799. [CrossRef] [Medline]
- Levy DR, Withall JB, Mishuris RG, et al. Defining documentation burden (DocBurden) and excessive DocBurden for all health professionals: a scoping review. Appl Clin Inform. Oct 2024;15(5):898-913. [CrossRef] [Medline]
- Ball CG, McBeth PB. The impact of documentation burden on patient care and surgeon satisfaction. Can J Surg. Aug 2021;64(4):E457-E458. [CrossRef]
- You JG, Dbouk RH, Landman A, et al. Ambient documentation technology in clinician experience of documentation burden and burnout. JAMA Netw Open. Aug 1, 2025;8(8):e2528056. [CrossRef] [Medline]
- Olson KD, Meeker D, Troup M, et al. Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Netw Open. Oct 1, 2025;8(10):e2534976. [CrossRef] [Medline]
- Galloway JL, Munroe D, Vohra-Khullar PD, et al. Impact of an artificial intelligence-based solution on clinicians’ clinical documentation experience: initial findings using ambient listening technology. J Gen Intern Med. Oct 2024;39(13):2625-2627. [CrossRef] [Medline]
- Shah SJ, Crowell T, Jeong Y, et al. Physician perspectives on ambient AI scribes. JAMA Netw Open. Mar 3, 2025;8(3):e251904. [CrossRef] [Medline]
- Martinez-Martin N, Luo Z, Kaushal A, et al. Ethical issues in using ambient intelligence in health-care settings. Lancet Digit Health. Feb 2021;3(2):e115-e123. [CrossRef] [Medline]
- Cook DJ, Augusto JC, Jakkula VR. Ambient intelligence: technologies, applications, and opportunities. Pervasive Mob Comput. Aug 2009;5(4):277-298. [CrossRef]
- Gerke S, Yeung S, Cohen IG. Ethical and legal aspects of ambient intelligence in hospitals. JAMA. Feb 18, 2020;323(7):601-602. [CrossRef] [Medline]
- Topaz M. Invisible scribes: can nurses trust ambient AI for clinical documentation? J Contin Educ Nurs. Sep 2025;56(9):358-359. [CrossRef] [Medline]
- Rajaee L. Nearly 40% of primary care clinicians now use AI daily for clinical documentation, Elation survey finds. Elation. 2025. URL: https://www.elationhealth.com/resources/elation-health-ehr/state-of-ai-blog?utm_sourcedetail=BusinessWire&utm_source=PR&utm_medium=WebReferral&utm_campaign=Marketing-2025-Q3-StateofAI [Accessed 2026-03-15]
- Knibbs J. GP AI scribe use more than doubles in four months. The Medical Republic. 2025. URL: https://www.medicalrepublic.com.au/gp-ai-scribe-use-more-than-doubles-in-four-months/111429 [Accessed 2026-03-15]
- Wisbey M. How do patients feel about doctors using AI in consults? Royal Australian College of General Practitioners. 2025. URL: https://www1.racgp.org.au/newsgp/professional/how-do-patients-feel-about-doctors-using-ai-in-con [Accessed 2026-03-15]
- Whitaker P. Doctors, beware the AI scribe. The New Statesman. 2025. URL: https://www.newstatesman.com/politics/health/2025/07/doctors-beware-the-ai-scribe [Accessed 2026-03-15]
- Topaz M, Peltonen LM, Zhang Z. Beyond human ears: navigating the uncharted risks of AI scribes in clinical practice. NPJ Digit Med. Sep 24, 2025;8(1):569. [CrossRef] [Medline]
- Wang H, Yang R, Alwakeel M, et al. An evaluation framework for ambient digital scribing tools in clinical applications. NPJ Digit Med. Jun 13, 2025;8(1):358. [CrossRef] [Medline]
- Sezgin E, Sirrianni JW, Kranz K. Evaluation of a digital scribe: conversation summarization for emergency department consultation calls. Appl Clin Inform. May 15, 2024;15(3):600-611. [CrossRef] [Medline]
- Haberle T, Cleveland C, Snow GL, et al. The impact of nuance DAX ambient listening AI documentation: a cohort study. J Am Med Inform Assoc. Apr 3, 2024;31(4):975-979. [CrossRef] [Medline]
- Ursin F, Lindner F, Ropinski T, Salloch S, Timmermann C. Levels of explicability for medical artificial intelligence: what do we normatively need and what can we technically reach? Ethik Med. Jun 2023;35(2):173-199. [CrossRef]
- Nguyen OT, Turner K, Charles D, et al. Implementing digital scribes to reduce electronic health record documentation burden among cancer care clinicians: a mixed-methods pilot study. JCO Clin Cancer Inform. Mar 2023;7(7):e2200166. [CrossRef] [Medline]
- Leung TI, Coristine AJ, Benis A. AI scribes in health care: balancing transformative potential with responsible integration. JMIR Med Inform. Aug 1, 2025;13:e80898. [CrossRef] [Medline]
- Brosnahan H. Totalitarian technics: the hidden cost of AI scribes in healthcare. Med Health Care Philos. Mar 2025;29(1):155-166. [CrossRef] [Medline]
- Herington J, Cho MK. A justice-first approach to ambient intelligence in healthcare. Am J Bioeth. Feb 2026;26(2):10-21. [CrossRef] [Medline]
- Ponti M, Kasperowski D, Gander AJ. Narratives of epistemic agency in citizen science classification projects: ideals of science and roles of citizens. AI & Soc. Apr 2024;39(2):523-540. [CrossRef]
- Coeckelbergh M. AI and epistemic agency: how AI influences belief revision and its normative implications. Soc Epistemol. Jan 2, 2026;40(1):59-71. [CrossRef]
- Rowley J. The wisdom hierarchy: representations of the DIKW hierarchy. J Inf Sci. Apr 2007;33(2):163-180. [CrossRef]
- Dung L. Understanding artificial agency. Philos Q. Mar 24, 2025;75(2):450-472. [CrossRef]
- Hauswald R. Artificial epistemic authorities. Soc Epistemol. Nov 2, 2025;39(6):716-725. [CrossRef]
- Rubeis G. Ethics of Medical AI. Springer; 2024. URL: https://link.springer.com/book/10.1007/978-3-031-55744-6 [Accessed 2026-04-11]
- Pearlman K, Wan W, Shah S, Laiteerapong N. Use of an AI scribe and electronic health record efficiency. JAMA Netw Open. Oct 1, 2025;8(10):e2537000. [CrossRef] [Medline]
- Rotenstein L, Melnick ER, Iannaccone C, et al. Virtual scribes and physician time spent on electronic health records. JAMA Netw Open. May 1, 2024;7(5):e2413140. [CrossRef] [Medline]
- McCoy LG, Manrai AK, Rodman A. Large language models and the degradation of the medical record. N Engl J Med. Oct 31, 2024;391(17):1561-1564. [CrossRef] [Medline]
- Funer F, Wiesing U. Physician’s autonomy in the face of AI support: walking the ethical tightrope. Front Med (Lausanne). 2024;11:1324963. [CrossRef] [Medline]
- Emanuel EJ, Emanuel LL. Four models of the physician-patient relationship. JAMA. 1992;267(16):2221-2226. [CrossRef] [Medline]
- Zohny H, Allen JW, Wilkinson D, Savulescu J. Which AI doctor would you like to see? Emulating healthcare provider-patient communication models with GPT-4: proof-of-concept and ethical exploration. J Med Ethics. Mar 3, 2025:jme-2024-110256. [CrossRef] [Medline]
- Ludewigs S, Narchi J, Kiefer L, Winkler EC. Ethics of the fiduciary relationship between patient and physician: the case of informed consent. J Med Ethics. Jan 2024;51(1):59-66. [CrossRef]
- Lawrence K, Kuram VS, Levine DL, et al. Informed consent for ambient documentation using generative AI in ambulatory care. JAMA Netw Open. Jul 1, 2025;8(7):e2522400. [CrossRef] [Medline]
- Evans K, Papinniemi A, Ploderer B, et al. Impact of using an AI scribe on clinical documentation and clinician-patient interactions in allied health private practice: perspectives of clinicians and patients. Musculoskelet Sci Pract. Aug 2025;78:103333. [CrossRef] [Medline]
- Stults CD, Deng S, Martinez MC, et al. Evaluation of an ambient artificial intelligence documentation platform for clinicians. JAMA Netw Open. May 1, 2025;8(5):e258614. [CrossRef] [Medline]
- Buhr E, Onder O, Rudra P, Ursin F. Trust and artificial intelligence in the doctor-patient relationship: epistemological preconditions and reliability gaps. Ethics Inf Technol. Dec 2025;27(4):60. [CrossRef]
- Schiff GD. AI-driven clinical documentation - driving out the chitchat? N Engl J Med. May 15, 2025;392(19):1877-1879. [CrossRef] [Medline]
- Itauma O, Itauma I. AI scribes: boosting physician efficiency in clinical documentation. Int J Bioinform Biosci. Mar 10, 2024;14(1):09-18. [CrossRef]
- Chambers S, Gangal A, Stoff B. Ethics of ambient documentation. J Am Acad Dermatol. Jan 2026;94(1):e3-e4. [CrossRef] [Medline]
- Ghatnekar S, Faletsky A, Nambudiri VE. Digital scribes in dermatology: implications for practice. J Am Acad Dermatol. Apr 2022;86(4):968-969. [CrossRef] [Medline]
- Mackenzie C, Rogers W, Dodds S. Introduction: what is vulnerability, and why does it matter for moral theory. In: Vulnerability: New Essays in Ethics and Feminist Philosophy. Oxford University Press; 2014:1-29. [CrossRef]
- Luna F. Elucidating the concept of vulnerability: layers not labels. Int J Fem Approaches Bioeth. Mar 2009;2(1):121-139. [CrossRef]
- Reis M, Reis F, Kunde W. Public perception of physicians who use artificial intelligence. JAMA Netw Open. Jul 1, 2025;8(7):e2521643. [CrossRef] [Medline]
- Babia P. Patients want doctors, not data entry: ModMed finds nearly 60% support use of AI if it means more face time. Businesswire. 2025. URL: https://www.businesswire.com/news/home/20250624882329/en/Patients-Want-Doctors-Not-Data-Entry-ModMed-Finds-Nearly-60-Support-Use-of-AI-if-it-Means-More-Face-Time [Accessed 2026-03-15]
- Kleinman A. The Illness Narratives: Suffering, Healing, and the Human Condition. Basic Books; 1988. ISBN: 9780465032020
- Charon R. The patient-physician relationship. Narrative medicine: a model for empathy, reflection, profession, and trust. JAMA. Oct 17, 2001;286(15):1897-1902. [CrossRef] [Medline]
- Fricker M. Epistemic Injustice: Power and the Ethics of Knowing. Oxford Academic; 2007. URL: https://academic.oup.com/book/32817 [Accessed 2026-04-11]
- Kessels RPC. Patients’ memory for medical information. J R Soc Med. May 1, 2003;96(5):219-222. [CrossRef]
- Pitkethly M, Macgillivray S, Ryan R. Recordings or summaries of consultations for people with cancer. Cochrane Database Syst Rev. Jul 16, 2008;(3):CD001539. [CrossRef] [Medline]
- Sætra HS, Selinger E. Technological remedies for social problems: defining and demarcating techno-fixes and techno-solutionism. Sci Eng Ethics. Dec 2, 2024;30(6):60. [CrossRef] [Medline]
- Kuppanda PM, Janda M, Soyer HP, Caffery LJ. What are patients’ perceptions and attitudes regarding the use of artificial intelligence in skin cancer screening and diagnosis? Narrative review. J Invest Dermatol. Aug 2025;145(8):1858-1865. [CrossRef] [Medline]
Abbreviations
| AI: artificial intelligence |
| EHR: electronic health record |
| LLM: large language model |
| SOAP: Subjective, Objective, Assessment, and Plan |
Edited by Andrew Coristine; submitted 21.Nov.2025; peer-reviewed by Jos Aarts, Mark Iscoe; final revised version received 18.Mar.2026; accepted 18.Mar.2026; published 30.Apr.2026.
Copyright© Frank Ursin, Sabine Salloch. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 30.Apr.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.

