Abstract
Artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT, has rapidly evolved and is reshaping various fields, including clinical medicine. Emergency medicine stands to benefit from AI’s capacity for high-volume data processing, workflow optimization, and clinical decision support. However, important challenges exist, ranging from model “hallucinations” and data bias to questions of interpretability, liability, and ethical use in high-stakes environments. This updated viewpoint provides a structured overview of AI’s current capabilities in emergency medicine, highlights real-world applications, and explores concerns regarding regulatory requirements, safety standards, and transparency (explainable AI). We discuss the potential risks and limitations of LLMs, including their performance in rare or atypical presentations common in the emergency department and potential biases that could disproportionately affect vulnerable populations. We also address the regulatory landscape, particularly liability for AI-driven decisions, and emphasize the need for clear guidelines and human oversight. Ultimately, AI holds enormous promise for improving patient care and resource management in emergency medicine; however, ensuring safety, fairness, and accountability remains vital.
JMIR Med Inform 2025;13:e70903. doi: 10.2196/70903
Introduction
Artificial intelligence (AI), particularly large language models (LLMs) such as OpenAI’s ChatGPT, is experiencing rapid growth in adoption, with a user base of tens of millions on a monthly basis [ ]. While the United States and China are making significant advances in AI technology, Europe lags because of challenges in investment and regulatory frameworks [ , ]. This disparity limits the European public’s comprehension of AI’s potential applications and implications.

In medicine, AI demonstrates exceptional capabilities in processing complex health data [ ], advanced medical imaging [ , ], anatomopathology [ ], and electrocardiogram interpretation [ ]. Furthermore, AI exhibits proficiency in “internship competitions” [ ] and has the potential to surpass practitioners in medical diagnoses [ ]. AI also contributes substantially to medical research by aiding the writing of scientific articles [ ] and advancing new therapeutic areas, especially in infectiology [ ].

Nevertheless, the substantial impact of this technology on contemporary and future medicine is arguably underestimated. Recent advances in AI, propelled by extensive computational power rather than conceptual innovation, demonstrate that the potential of these systems depends primarily on the accumulation of data, the duration of training, and financial resources [ ]. This observation suggests that improving AI systems requires more high-quality data, longer training, and greater financial investment, rather than relying solely on the ingenuity of researchers.

Emergency medicine is a high-stakes field in which rapid and accurate decision-making is crucial. Overcrowded departments, resource constraints, and time pressure create significant challenges that advanced AI can help mitigate. However, the complexity and unpredictability of emergency medicine underscore the importance of safety, liability, and ethical considerations. This paper presents current applications of AI in emergency medicine, emphasizing both real-world implementations and critical challenges such as hallucination, bias, and interpretability.
Current AI Capabilities in Emergency Medicine
AI systems, particularly deep learning networks, can identify patterns in large, complex datasets, contributing significantly to medical science by analyzing variables to reliably predict outcomes [ , ]. In emergency departments (EDs), this enables improved triage, patient flow management, and resource allocation in overcrowded settings where resources are limited and time is critical [ - ]. Hospitals are also investigating AI’s potential to optimize resource allocation by forecasting bed availability and scheduling staff [ , ]. In one study, an AI tool analyzing historical admission patterns, vital signs, and laboratory data enhanced ED throughput by identifying high-risk patients for earlier admission [ ].

Moreover, AI has demonstrated high accuracy in clinical tasks such as the recognition of acute coronary syndromes and acute appendicitis and the rapid interpretation of imaging for fractures or head injuries [ - ]. These tasks involve pattern recognition in large data streams (vital signs, laboratory results, or imaging scans) and can reduce diagnostic delays. Some EDs have tested AI models for specific conditions, such as early sepsis detection, by monitoring vital signs and laboratory results [ ]. This approach has shown promise in reducing the time to antibiotic administration, although results vary across populations.

Conventional clinical decision support systems (CDSSs) have been widely used in health care settings to assist medical professionals in making informed decisions. These systems typically operate on the basis of predefined rules, scoring systems, or algorithms developed by experts in the field. Although effective in many scenarios, traditional CDSSs are limited in their ability to adapt to new information or to complex, multifaceted clinical situations [ ]. In contrast, AI models, particularly LLMs, represent a paradigm shift in clinical decision support. These models leverage vast amounts of data to learn patterns and associations without explicit rule-coding [ , ]. This data-driven approach allows AI models to potentially identify subtle or complex relationships within medical data that may be overlooked by traditional CDSSs or even by human experts [ ]. However, it introduces challenges in interpretability, as the “reasoning” behind AI outputs may be opaque.

Although these early successes are encouraging, robust prospective trials remain limited. Further evidence from large-scale, peer-reviewed studies is necessary to confirm whether AI-driven solutions consistently improve patient outcomes in emergency care settings.
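To make the contrast with rule-based CDSSs concrete, the sketch below shows how a data-driven admission-risk model of the kind described above might be trained on retrospective triage data. It is a minimal illustration under stated assumptions: the file name, feature columns, outcome label, and model choice are hypothetical and are not the tools used in the studies cited here.

```python
# Minimal, hypothetical sketch of a data-driven admission-risk model trained on
# retrospective ED triage data. File name, feature columns, and outcome label
# are assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# One row per ED visit: triage vital signs, selected laboratory values, and
# whether the visit ended in hospital admission.
df = pd.read_csv("ed_visits.csv")
features = ["age", "heart_rate", "resp_rate", "systolic_bp",
            "temperature", "spo2", "lactate", "wbc_count"]
X, y = df[features], df["admitted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# The model learns patterns directly from the data, with no hand-coded rules or scores.
model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05)
model.fit(X_train, y_train)

# Retrospective discrimination only; prospective, multisite validation would
# still be required before clinical use.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUROC: {auc:.2f}")
```

In practice, such a model would also require calibration checks, external and prospective validation, and careful integration into ED workflows before any clinical deployment.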
Challenges and Limitations
A widely documented phenomenon in LLMs is “hallucination,” in which the model confidently generates inaccurate or fabricated information [ - ]. While all diagnostic tools are susceptible to errors, AI hallucinations can be especially problematic because they are delivered convincingly, making them difficult to spot. In emergency settings, where time constraints are acute, misleading AI outputs can significantly affect patient care. Current research highlights the need for rigorous validation of LLM outputs, particularly when they are used for direct patient management. Because AI models are trained on large datasets, their performance is strongest for common presentations; infrequent conditions, such as rare genetic disorders or atypical manifestations of common pathologies, are prone to misclassification. LLMs therefore struggle with the rare and atypical cases that are common in emergency medicine, making them less effective in unconventional clinical situations. This limitation arises because LLMs rely on statistical correlations, favoring common cases over unique ones [ ]. Additionally, LLMs are not designed to indicate uncertainty, which increases the risk of misleading information in critical situations [ , ].

Data-driven models of emergency care can significantly improve patient outcomes and operational efficiency. However, their reliance on historical data can perpetuate existing biases, leading to unequal treatment across diverse patient populations. This is concerning in emergency settings, where rapid decision-making is crucial and biased algorithms can be life-threatening. An AI system trained predominantly on data from one ethnic group might misinterpret the symptoms presented by patients from other backgrounds, resulting in delayed or inappropriate care. A study of an AI-powered dermatological algorithm clearly demonstrated this problem: when applied to Fitzpatrick skin type 6 (dark skin) dermatological conditions, the AI system achieved only 17% diagnostic accuracy, compared with 69.9% for Caucasian skin types [ ]. Interestingly, AI systems can also be designed to detect their own limitations when faced with unfamiliar data. Conformal prediction techniques have been shown to flag unreliable predictions when an AI system encounters new data from different laboratories, scanners, or pathology assessments [ ]. This approach could help mitigate risks such as incorrect diagnoses for patients from underrepresented groups. Diversifying training datasets is critical to ensure that AI models are exposed to a wide range of patient demographics and clinical presentations. Implementing fairness-aware algorithms that consider and adjust for potential biases during decision-making is equally important. Continuous monitoring and auditing of AI system performance across demographic subgroups is essential for identifying and rectifying emerging disparities. This vigilance, combined with regular updates to models and training data, can help ensure that AI-driven emergency care systems serve all patients equally, regardless of their background or socioeconomic status.

Model Interpretability and Explainable AI (XAI)
AI-based triage systems in emergency medicine are powerful and complex tools for optimizing patient care. The “black-box” nature of these algorithms raises ethical concerns in high-stakes emergency settings, where rapid and accurate decision-making is crucial [ ]. When an opaque AI system designates a patient as “low priority” without clear justification, it may lead to prolonged wait times or reduced attention, potentially compromising patient outcomes.

Explainable artificial intelligence (XAI) is therefore important in emergency medicine. Transparent and interpretable AI models are essential to maintain trust, accountability, and safety in emergency care [ ]. XAI techniques such as Shapley Additive Explanations or Local Interpretable Model-Agnostic Explanations can provide insight into which variables most influence a model’s output, making AI-based recommendations more transparent to health care professionals [ ]. This interpretability is crucial in emergency settings, where clinicians must quickly understand and validate AI recommendations.

In the fast-paced environment of emergency medicine, the balance between model performance and explainability is critical. Although complex models may offer superior predictive power, their opacity can hinder clinical trust and integration [ ]. Developing AI systems that maintain high accuracy while providing clear explanations for their decisions is an active research area, particularly relevant to emergency care, where trust and clarity are paramount [ ].

To address these challenges, transparent and explainable triage mechanisms tailored to emergency medicine are crucial. These systems should provide immediate insight into the decision-making process, allowing health care professionals to quickly understand and validate AI recommendations in time-sensitive situations. Structured override processes must be established to enable rapid human intervention when necessary, ensuring that algorithmic decisions can be reviewed and adjusted based on clinical judgment and contextual factors.
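As a concrete illustration of the XAI techniques named above, the sketch below applies SHAP (Shapley Additive Explanations) to the hypothetical admission-risk model from the earlier sketch to surface the variables behind a single prediction; the model, data, and feature names remain illustrative assumptions rather than a validated triage system.

```python
# Illustrative use of SHAP to explain one patient-level prediction from the
# hypothetical admission-risk model sketched earlier (reuses `model` and `X_test`).
# The goal is to show which triage variables pushed the risk estimate up or down.
import shap

explainer = shap.TreeExplainer(model)        # explainer for tree ensembles
shap_values = explainer.shap_values(X_test)  # per-visit, per-feature contributions (log-odds)

# Rank features by absolute contribution for a single illustrative visit.
patient_idx = 0
contributions = sorted(
    zip(X_test.columns, shap_values[patient_idx]),
    key=lambda kv: abs(kv[1]),
    reverse=True,
)
for name, value in contributions[:5]:
    print(f"{name}: {value:+.3f}")  # positive values increase predicted admission risk
```

A bedside display of this kind, listing the few variables that most influenced a recommendation, is one way clinicians could quickly sanity-check an AI triage suggestion before acting on it.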
By prioritizing transparency, explainability, and human oversight in AI-based triage systems for emergency medicine, health care institutions can harness AI benefits, while maintaining fairness, accountability, and public trust. This approach enhances patient care quality, supports clinician confidence, facilitates training, and helps to identify potential failure modes in critical care situations.
The Evolving Regulatory Landscape
Comparative analyses of the European Union, United States, and international regulatory frameworks reveal distinct policy priorities. The European Union’s Artificial Intelligence Act and General Data Protection Regulation establish a structured, risk-based approach for health care AI, mandating transparency, accountability, and bias prevention [ , ]. The United States relies on sector-specific regulation, notably the Food and Drug Administration’s Total Product Life Cycle framework for Software as a Medical Device, which prioritizes flexible risk stratification to accommodate continuous AI model updates but provides weaker postmarket monitoring [ , ]. International bodies, including the United Nations Educational, Scientific and Cultural Organization, the World Health Organization, and the Organisation for Economic Co-operation and Development, have advanced frameworks for ethical AI governance, centered on fairness, transparency, and human oversight, but these lack enforceable mechanisms, limiting regulatory convergence across regions [ ].

Health care providers face liability concerns when AI-driven recommendations lead to misdiagnoses or suboptimal treatment, and the unpredictable nature of AI systems complicates the assignment of responsibility. As legal frameworks evolve, questions arise about responsibility when errors stem from “black-box” AI. Should providers be held accountable for following AI advice? Can liability extend to AI developers or institutions? Jurisdictions worldwide are addressing these issues [ , ]. The European Union’s Artificial Intelligence Act [ ] and the US Food and Drug Administration’s guidance on AI in medical devices represent initial steps [ ], but specific guidelines for LLM-driven decision support remain limited. The global health care community recognizes the need for rigorous validation and clear risk stratification before integrating AI tools into clinical workflows [ ]. Regulatory bodies emphasize robust clinical validation, decision-making traceability, and mechanisms to update AI tools as new data emerge. However, formalized legal precedents for AI liability in emergency medicine are still emerging, and this lack of established guidelines creates uncertainty for stakeholders. As AI is integrated into medical practice, stakeholders must collaborate to develop comprehensive frameworks that balance innovation with patient safety and legal protection, addressing transparency, ongoing monitoring, and clear delineation of responsibilities [ , ].

A pressing issue is the global disparity in the adoption of AI regulations. While the European Union has implemented binding legal frameworks and the United States is adapting existing structures to AI technologies, a significant regulatory gap persists in many regions; only 15.2% of countries have enacted AI-specific health care regulations, amplifying concerns over global inequities in AI development and patient safety [ ]. Scholars have increasingly advocated hybrid approaches that combine risk-based innovation enablement with enforceable ethical guardrails to bridge the regulatory divide between EU prescriptive models, US flexible guidelines, and international voluntary standards.

Toward Responsible and Fair AI Implementation
EDs serve diverse patient populations, necessitating a focus on fairness. Institutions employ several strategies to mitigate bias.
First, diversifying training data ensures representation across demographics, locations, and disease phenotypes. Second, implementing fairness-aware algorithms adjusts model parameters to reduce performance disparities. Third, conducting regular audits helps monitor AI outputs for patterns of discrimination.
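A minimal sketch of the third strategy, a regular subgroup audit, is shown below; it reuses the hypothetical admission-risk model and held-out data from the earlier sketches, and the demographic column, threshold, and metrics are illustrative assumptions. Real audits would need prespecified metrics and adequate subgroup sample sizes.

```python
# Illustrative subgroup audit: compare held-out performance of the hypothetical
# admission-risk model across demographic groups. The "ethnicity" column,
# decision threshold, and chosen metrics are assumptions for the sketch.
from sklearn.metrics import roc_auc_score, recall_score

def audit_by_subgroup(model, X, y, groups, threshold=0.5):
    """Report sample size, AUROC, and sensitivity for each subgroup."""
    scores = model.predict_proba(X)[:, 1]
    preds = (scores >= threshold).astype(int)
    report = {}
    for g in sorted(groups.unique()):
        mask = groups == g
        report[g] = {
            "n": int(mask.sum()),
            "auroc": roc_auc_score(y[mask], scores[mask]),
            "sensitivity": recall_score(y[mask], preds[mask]),
        }
    return report

# Usage with the earlier hypothetical data split:
# audit = audit_by_subgroup(model, X_test, y_test, df.loc[X_test.index, "ethnicity"])
# Large performance gaps between subgroups would trigger further data collection
# or model revision before continued use.
```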
While AI offers powerful capabilities, it should complement, rather than replace, clinical judgment. Humans excel in contextual understanding, ethical reasoning, and handling unique cases, whereas AI excels in recognizing patterns in large datasets. A collaborative approach combining AI recommendations with clinician review can help identify errors or biases. Some institutions have established AI oversight committees to evaluate performance, manage updates, and review near-miss incidents.
To address the risks of AI hallucinations, institutions should implement structured verification processes for AI-suggested diagnoses or treatments. Encouraging AI models to express confidence levels or flag uncertain outputs through uncertainty quantification is beneficial. Additionally, requiring critical decisions prompted by AI to be documented with explanations can increase trust.
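One way to operationalize such uncertainty flagging is split conformal prediction, mentioned earlier in the context of pathology; the sketch below applies it to the hypothetical binary admission-risk model from the previous examples, with the calibration split and coverage level chosen purely for illustration.

```python
# Minimal sketch of split conformal prediction to flag uncertain outputs for
# human review, assuming the hypothetical binary admission-risk model from the
# earlier sketches and a held-out calibration set (X_cal, y_cal).
import numpy as np

def conformal_threshold(model, X_cal, y_cal, alpha=0.1):
    """Nonconformity threshold targeting roughly (1 - alpha) coverage."""
    probs = model.predict_proba(X_cal)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - probs[np.arange(len(y_cal)), np.asarray(y_cal)]
    level = min(np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores), 1.0)
    return np.quantile(scores, level)

def prediction_sets(model, X_new, q_hat):
    """Classes whose predicted probability clears the conformal threshold."""
    probs = model.predict_proba(X_new)
    return [set(np.where(p >= 1.0 - q_hat)[0]) for p in probs]

# Usage sketch: any prediction set that does not contain exactly one class is
# flagged as uncertain and routed to clinician review rather than acted on.
# q_hat = conformal_threshold(model, X_cal, y_cal)
# uncertain = [len(s) != 1 for s in prediction_sets(model, X_test, q_hat)]
```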
AI-driven triage can expedite care, but it also raises questions about distributive justice and patient autonomy. Transparency in how AI models prioritize patients is crucial, and ED staff should retain the authority to override AI-driven triage decisions when necessary.
Future Directions and Conclusion
The integration of AI into emergency medicine holds tremendous potential to revolutionize clinical practice, from optimizing patient flow to aiding in complex diagnoses. However, these powerful tools are associated with significant risks and require careful governance. Ensuring model interpretability through XAI is crucial for building trust and validating clinical applications. Unresolved liability issues necessitate collaboration among health care providers, policy makers, and AI developers to clearly define responsibilities.
Moving forward, researchers and ED administrators should prioritize several key areas:
- Comprehensive, multisite validation: large-scale prospective studies should be conducted to assess the real-world impact of AI on patient outcomes and safety.
- Robust regulatory frameworks: clear guidelines addressing liability, performance monitoring, and transparency requirements should be developed.
- Bias mitigation: fairness-aware techniques and regular audits should be implemented to ensure equitable treatment across diverse patient populations.
- Educational initiatives: clinicians, residents, and administrators should be trained to use AI responsibly, critically interpret outputs, and maintain essential diagnostic skills.
The rapid evolution of AI underscores its transformative potential and the need for prudent caution. By implementing appropriate safeguards across the technical, legal, ethical, and educational domains, AI can become a powerful ally in emergency medicine, ultimately enhancing patient care and outcomes.
Acknowledgments
We would like to thank Dr Thibault Viard, Dr Eric Tellier, and Dr Nicolas Peschanski for their help in writing this manuscript. There were no sponsors or financial support for this work.
Authors' Contributions
FA was responsible for the data acquisition, literature review, and writing of this paper. BP was responsible for the data acquisition and literature review.
Conflicts of Interest
None declared.
References
- Hu K. ChatGPT sets record for fastest-growing user base – analyst note. Reuters. 2023. URL: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ [Accessed 2024-11-05]
- Regulation - EU - 2024/1689 - EN. EUR-Lex. URL: http://data.europa.eu/eli/reg/2024/1689/oj/eng [Accessed 2024-11-05]
- Porsdam Mann S, Cohen IG, Minssen T. The EU AI Act: implications for U.S. health care. NEJM AI. Oct 24, 2024;1(11):AIp2400449. [CrossRef]
- Boussina A, Krishnamoorthy R, Quintero K, et al. Large language models for more efficient reporting of hospital quality measures. NEJM AI. Oct 24, 2024;1(11):AIcs2400420. [CrossRef] [Medline]
- Dippel J, Prenißl N, Hense J, et al. AI-based anomaly detection for clinical-grade histopathological diagnostics. NEJM AI. Oct 24, 2024;1(11):AIoa2400468. [CrossRef]
- Lin C, Liu WT, Chang CH, et al. Artificial intelligence–powered rapid identification of ST-elevation myocardial infarction via electrocardiogram (ARISE) — a pragmatic randomized controlled trial. NEJM AI. Jun 27, 2024;1(7):AIoa2400190. [CrossRef]
- Katz U, Cohen E, Shachar E, et al. GPT versus resident physicians — a benchmark based on official board scores. NEJM AI. Apr 25, 2024;1(5):AIdbp2300192. [CrossRef]
- Goh E, Gallo R, Hom J, et al. Large language model influence on diagnostic reasoning: a randomized clinical trial. JAMA Netw Open. Oct 1, 2024;7(10):e2440969. [CrossRef] [Medline]
- Koller D, Beam A, Manrai A, et al. Why we support and encourage the use of large language models in NEJM AI submissions. NEJM AI. Jan 2024;1(1):AIe2300128. [CrossRef]
- Wong F, de la Fuente-Nunez C, Collins JJ. Leveraging artificial intelligence in the fight against infectious diseases. Science. Jul 14, 2023;381(6654):164-170. [CrossRef] [Medline]
- Kaplan J, McCandlish S, Henighan T, et al. Scaling laws for neural language models. arXiv. Preprint posted online on Jan 23, 2020. [CrossRef]
- Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. Jan 2019;25(1):44-56. [CrossRef] [Medline]
- Obermeyer Z, Emanuel EJ. Predicting the future - big data, machine learning, and clinical medicine. N Engl J Med. Sep 29, 2016;375(13):1216-1219. [CrossRef] [Medline]
- Tyler S, Olis M, Aust N, et al. Use of artificial intelligence in triage in hospital emergency departments: a scoping review. Cureus. May 2024;16(5):e59906. [CrossRef] [Medline]
- Zaboli A, Brigo F, Sibilio S, Mian M, Turcato G. Human intelligence versus Chat-GPT: who performs better in correctly classifying patients in triage? Am J Emerg Med. May 2024;79:44-47. [CrossRef] [Medline]
- Levin S, Toerper M, Hamrock E, et al. Machine-learning-based electronic triage more accurately differentiates patients with respect to clinical outcomes compared with the Emergency Severity Index. Ann Emerg Med. May 2018;71(5):565-574. [CrossRef] [Medline]
- Frost DW, Vembu S, Wang J, Tu K, Morris Q, Abrams HB. Using the electronic medical record to identify patients at high risk for frequent emergency department visits and high system costs. Am J Med. May 2017;130(5):601. [CrossRef] [Medline]
- Lauque D, Khalemsky A, Boudi Z, et al. Length-of-stay in the emergency department and in-hospital mortality: a systematic review and meta-analysis. J Clin Med. Dec 21, 2022;12(1):32. [CrossRef] [Medline]
- Roussel M, Teissandier D, Yordanov Y, et al. Overnight stay in the emergency department and mortality in older patients. JAMA Intern Med. Dec 1, 2023;183(12):1378-1385. [CrossRef] [Medline]
- Hong WS, Haimovich AD, Taylor RA. Predicting hospital admission at emergency department triage using machine learning. PLoS ONE. 2018;13(7):e0201016. [CrossRef] [Medline]
- Graham B, Bond R, Quinn M, Mulvenna M. Using data mining to predict hospital admissions from the emergency department. IEEE Access. 2018;6:10458-10469. [CrossRef]
- Akhlaghi H, Freeman S, Vari C, et al. Machine learning in clinical practice: evaluation of an artificial intelligence tool after implementation. Emerg Med Australas. Feb 2024;36(1):118-124. [CrossRef] [Medline]
- Kim D, Hwang JE, Cho Y, et al. A retrospective clinical evaluation of an artificial intelligence screening method for early detection of STEMI in the emergency department. J Korean Med Sci. Feb 8, 2022;37(10):e81. [CrossRef]
- Zhao Y, Xiong J, Hou Y, et al. Early detection of ST-segment elevated myocardial infarction by artificial intelligence with 12-lead electrocardiogram. Int J Cardiol. Oct 15, 2020;317:223-230. [CrossRef] [Medline]
- Zhang PI, Hsu CC, Kao Y, et al. Real-time AI prediction for major adverse cardiac events in emergency department patients with chest pain. Scand J Trauma Resusc Emerg Med. Sep 11, 2020;28(1):93. [CrossRef] [Medline]
- Taylor RA, Moore CL, Cheung KH, Brandt C. Predicting urinary tract infections in the emergency department with machine learning. PLOS ONE. 2018;13(3):e0194085. [CrossRef] [Medline]
- Ozkaya E, Topal FE, Bulut T, Gursoy M, Ozuysal M, Karakaya Z. Evaluation of an artificial intelligence system for diagnosing scaphoid fracture on direct radiography. Eur J Trauma Emerg Surg. Feb 2022;48(1):585-592. [CrossRef] [Medline]
- Tu KC, Eric Nyam TT, Wang CC, et al. A computer-assisted system for early mortality risk prediction in patients with traumatic brain injury using artificial intelligence algorithms in emergency room triage. Brain Sci. May 7, 2022;12(5):612. [CrossRef] [Medline]
- Klang E, Barash Y, Soffer S, et al. Promoting head CT exams in the emergency department triage using a machine learning model. Neuroradiology. Feb 2020;62(2):153-160. [CrossRef] [Medline]
- Issaiy M, Zarei D, Saghazadeh A. Artificial intelligence and acute appendicitis: a systematic review of diagnostic and prognostic models. World J Emerg Surg. Dec 19, 2023;18(1):59. [CrossRef] [Medline]
- Taylor RA, Pare JR, Venkatesh AK, et al. Prediction of in-hospital mortality in emergency department patients with sepsis: a local big data-driven, machine learning approach. Acad Emerg Med. Mar 2016;23(3):269-278. [CrossRef] [Medline]
- Bai E, Zhang Z, Xu Y, Luo X, Adelgais K. Enhancing prehospital decision-making: exploring user needs and design considerations for clinical decision support systems. BMC Med Inform Decis Mak. Jan 17, 2025;25(1):31. [CrossRef] [Medline]
- LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. May 28, 2015;521(7553):436-444. [CrossRef] [Medline]
- Qian N. On the momentum term in gradient descent learning algorithms. Neural Netw. Jan 1999;12(1):145-151. [CrossRef] [Medline]
- Wang D, Wang L, Zhang Z, et al. “Brilliant AI doctor” in rural clinics: challenges in AI-powered clinical decision support system deployment. Presented at: CHI ’21; May 8-13, 2021; Yokohama, Japan. [CrossRef]
- Bommasani R, Hudson DA, Adeli E, et al. On the opportunities and risks of foundation models. arXiv. Preprint posted online on Aug 16, 2021. [CrossRef]
- Bubeck S, Chandrasekaran V, Eldan R, et al. Sparks of artificial general intelligence: early experiments with GPT-4. arXiv. Preprint posted online on Mar 22, 2023. [CrossRef]
- Breum SM, Egdal DV, Gram Mortensen V, Møller AG, Aiello LM. The persuasive power of large language models. ICWSM. 2024;18:152-163. [CrossRef]
- Kadavath S, Conerly T, Askell A, et al. Language models (mostly) know what they know. arXiv. Preprint posted online on Jul 11, 2022. [CrossRef]
- LeBrun B, Sordoni A, O’Donnell TJ. Evaluating distributional distortion in neural language modeling. arXiv. Preprint posted online on Mar 24, 2022. [CrossRef]
- Yuksekgonul M, Zhang L, Zou J, Guestrin C. Beyond confidence: reliable models should also consider atypicality. arXiv. Preprint posted online on May 29, 2023. [CrossRef]
- Kamulegeya L, Bwanika J, Okello M, et al. Using artificial intelligence on dermatology conditions in Uganda: a case for diversity in training data sets for machine learning. Afr Health Sci. Jun 2023;23(2):753-763. [CrossRef] [Medline]
- Olsson H, Kartasalo K, Mulliqi N, et al. Estimating diagnostic uncertainty in artificial intelligence assisted pathology using conformal prediction. Nat Commun. Dec 15, 2022;13(1):7761. [CrossRef] [Medline]
- Upadhyay U, Gradisek A, Iqbal U, Dhar E, Li YC, Syed-Abdul S. Call for the responsible artificial intelligence in the healthcare. BMJ Health Care Inform. Dec 21, 2023;30(1):e100920. [CrossRef] [Medline]
- Amann J, Blasimme A, Vayena E, Frey D, Madai VI, Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. Nov 30, 2020;20(1):310. [CrossRef] [Medline]
- Rane J, Kaya Ö, Mallick SK, Rane NL. Future Research Opportunities for Artificial Intelligence in Industry 4.0 and 5.0. Deep Science Publishing; 2024. [CrossRef]
- Frasca M, La Torre D, Pravettoni G, Cutica I. Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review. Discov Artif Intell. Feb 27, 2024;4(1):15. [CrossRef]
- Okada Y, Ning Y, Ong MEH. Explainable artificial intelligence in emergency medicine: an overview. Clin Exp Emerg Med. Dec 2023;10(4):354-362. [CrossRef] [Medline]
- Li P, Williams R, Gilbert S, Anderson S. Regulating artificial intelligence and machine learning-enabled medical devices in Europe and the United Kingdom. Law Tech Hum. Nov 21, 2023;5(2):94-113. [CrossRef]
- Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based Software as a Medical Device (SaMD). US FDA. URL: https://tinyurl.com/ndv8mfta [Accessed 2025-03-06]
- Gilbert S, Fenech M, Hirsch M, Upadhyay S, Biasiucci A, Starlinger J. Algorithm change protocols in the regulation of adaptive machine learning-based medical devices. J Med Internet Res. Oct 26, 2021;23(10):e30545. [CrossRef] [Medline]
- Palaniappan K, Lin EYT, Vogel S. Global regulatory frameworks for the use of artificial intelligence (AI) in the healthcare services sector. Healthcare (Basel). Feb 28, 2024;12(5):562. [CrossRef] [Medline]
- Gerke S, Minssen T, Cohen IG. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial Intelligence in Healthcare. Elsevier; 2020. [CrossRef]
- Gilbert S, Anderson S, Daumer M, Li P, Melvin T, Williams R. Learning from experience and finding the right balance in the governance of artificial intelligence and digital health technologies. J Med Internet Res. Apr 14, 2023;25(1):e43682. [CrossRef] [Medline]
- Cha S. Towards an international regulatory framework for AI safety: lessons from the IAEA’s nuclear safety regulations. Humanit Soc Sci Commun. Apr 12, 2024;11(1):1-13. [CrossRef]
Abbreviations
AI: artificial intelligence
CDSS: clinical decision support system
ED: emergency department
LLM: large language model
XAI: explainable artificial intelligence
Edited by Alexandre Castonguay; submitted 05.01.25; peer-reviewed by Olalekan Kehinde, Rabie Adel El Arab, Sadhasivam Mohanadas, Shamnad Mohamed Shaffi; final revised version received 17.03.25; accepted 21.03.25; published 13.08.25.
Copyright © Félix Amiot, Benoit Potier. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 13.8.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.