@Article{info:doi/10.2196/68139,
  author   = "Shi, Qiming and Luzuriaga, Katherine and Allison, Jeroan J and Oztekin, Asil and Faro, Jamie M and Lee, Joy L and Hafer, Nathaniel and McManus, Margaret and Zai, Adrian H",
  title    = "Transforming Informed Consent Generation Using Large Language Models: Mixed Methods Study",
  journal  = "JMIR Med Inform",
  year     = "2025",
  month    = "Feb",
  day      = "13",
  volume   = "13",
  pages    = "e68139",
  keywords = "informed consent form; ICF; large language models; LLMs; clinical trials; readability; health informatics; artificial intelligence; AI; AI in health care",
  abstract = "Background: Informed consent forms (ICFs) for clinical trials have become increasingly complex, often hindering participant comprehension and engagement due to legal jargon and lengthy content. Recent advances in large language models (LLMs) present an opportunity to streamline the ICF creation process while improving readability, understandability, and actionability. Objectives: This study aims to evaluate the performance of the Mistral 8x22B LLM in generating ICFs with improved readability, understandability, and actionability. Specifically, we evaluate the model's effectiveness in generating ICFs that are readable, understandable, and actionable while maintaining accuracy and completeness. Methods: We processed 4 clinical trial protocols from the institutional review board of UMass Chan Medical School using the Mistral 8x22B model to generate the key information sections of ICFs. A multidisciplinary team of 8 evaluators, including clinical researchers and health informaticians, assessed the generated ICFs against human-generated counterparts for completeness, accuracy, readability, understandability, and actionability. The Readability, Understandability, and Actionability of Key Information indicators, which include 18 binary-scored items, were used to evaluate these aspects, with higher scores indicating greater accessibility, comprehensibility, and actionability of the information. Statistical analysis, including Wilcoxon rank sum tests and intraclass correlation coefficient calculations, was used to compare outputs. Results: LLM-generated ICFs demonstrated performance comparable to human-generated versions across key sections, with no significant differences in accuracy or completeness (P>.10). The LLM outperformed human-generated ICFs in readability (Readability, Understandability, and Actionability of Key Information score of 76.39{\%} vs 66.67{\%}; Flesch-Kincaid grade level of 7.95 vs 8.38) and understandability (90.63{\%} vs 67.19{\%}; P=.02). The LLM-generated content achieved a perfect score in actionability compared with the human-generated version (100{\%} vs 0{\%}; P<.001). The intraclass correlation coefficient for evaluator consistency was high at 0.83 (95{\%} CI 0.64-1.03), indicating good reliability across assessments. Conclusions: The Mistral 8x22B LLM showed promising capabilities in enhancing the readability, understandability, and actionability of ICFs without sacrificing accuracy or completeness. LLMs present a scalable, efficient solution for ICF generation, potentially enhancing participant comprehension and consent in clinical trials.",
  issn     = "2291-9694",
  doi      = "10.2196/68139",
  url      = "https://medinform.jmir.org/2025/1/e68139"
}