%0 Journal Article
%@ 2291-9694
%I JMIR Publications
%V 12
%N
%P e54345
%T Reference Hallucination Score for Medical Artificial Intelligence Chatbots: Development and Usability Study
%A Aljamaan,Fadi
%A Temsah,Mohamad-Hani
%A Altamimi,Ibraheem
%A Al-Eyadhy,Ayman
%A Jamal,Amr
%A Alhasan,Khalid
%A Mesallam,Tamer A
%A Farahat,Mohamed
%A Malki,Khalid H
%+ Department of Otolaryngology, College of Medicine, Research Chair of Voice, Swallowing, and Communication Disorders, King Saud University, 12629 Abdulaziz Rd, Al Malaz, Riyadh, P.BOX 2925 Zip 11461, Saudi Arabia, 966 114876100, kalmalki@ksu.edu.sa
%K artificial intelligence (AI) chatbots
%K reference hallucination
%K bibliographic verification
%K ChatGPT
%K Perplexity
%K SciSpace
%K Elicit
%K Bing
%D 2024
%7 31.7.2024
%9 Original Paper
%J JMIR Med Inform
%G English
%X Background: Artificial intelligence (AI) chatbots have recently gained use in medical practice by health care practitioners. Interestingly, the output of these AI chatbots was found to have varying degrees of hallucination in content and references. Such hallucinations generate doubts about their output and their implementation. Objective: The aim of our study was to propose a reference hallucination score (RHS) to evaluate the authenticity of AI chatbots’ citations. Methods: Six AI chatbots were challenged with the same 10 medical prompts, requesting 10 references per prompt. The RHS is composed of 6 bibliographic items and the reference’s relevance to prompts’ keywords. RHS was calculated for each reference, prompt, and type of prompt (basic vs complex). The average RHS was calculated for each AI chatbot and compared across the different types of prompts and AI chatbots. Results: Bard failed to generate any references. ChatGPT 3.5 and Bing generated the highest RHS (score=11), while Elicit and SciSpace generated the lowest RHS (score=1), and Perplexity generated a middle RHS (score=7). The highest degree of hallucination was observed for reference relevancy to the prompt keywords (308/500, 61.6%), while the lowest was for reference titles (169/500, 33.8%). ChatGPT and Bing had comparable RHS (β coefficient=–0.069; P=.32), while Perplexity had significantly lower RHS than ChatGPT (β coefficient=–0.345; P<.001). AI chatbots generally had significantly higher RHS when prompted with scenarios or complex format prompts (β coefficient=0.486; P<.001). Conclusions: The variation in RHS underscores the necessity for a robust reference evaluation tool to improve the authenticity of AI chatbots. Further, the variations highlight the importance of verifying their output and citations. Elicit and SciSpace had negligible hallucination, while ChatGPT and Bing had critical hallucination levels. The proposed AI chatbots’ RHS could contribute to ongoing efforts to enhance AI’s general reliability in medical research.
%R 10.2196/54345
%U https://medinform.jmir.org/2024/1/e54345
%U https://doi.org/10.2196/54345
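
The abstract above describes the RHS as a per-reference tally over 6 bibliographic items plus the reference's relevance to the prompt keywords, averaged per chatbot and per prompt type. The record does not give the authors' exact rubric, so the following is only a minimal sketch of how such a tally might be computed, assuming a simple 0/1 hallucination grade per item; the item names, the 0/1 grading, and the helper functions are illustrative assumptions, not the published scoring method.

```python
# Minimal sketch of a per-reference hallucination tally (NOT the
# authors' published rubric). ASSUMPTIONS: each of six bibliographic
# items plus keyword relevance is graded 0 (verified) or 1
# (hallucinated); the specific item names below are placeholders.
from statistics import mean

ITEMS = [
    "title", "authors", "journal", "year",
    "volume_pages", "doi",        # six hypothetical bibliographic items
    "keyword_relevance",          # relevance to the prompt's keywords
]

def reference_rhs(grades: dict) -> int:
    """Sum the hallucination grades over all verification items for one reference."""
    return sum(grades.get(item, 0) for item in ITEMS)

def average_rhs(references: list) -> float:
    """Average the per-reference scores for one chatbot or one prompt."""
    return mean(reference_rhs(r) for r in references)

# Toy usage with two fabricated references graded 0/1 per item.
refs = [
    {"title": 0, "authors": 1, "journal": 0, "year": 0,
     "volume_pages": 1, "doi": 1, "keyword_relevance": 1},
    {"title": 0, "authors": 0, "journal": 0, "year": 0,
     "volume_pages": 0, "doi": 0, "keyword_relevance": 1},
]
print(average_rhs(refs))  # 2.5 for these toy grades
```

Under this sketch, a higher average means more hallucinated reference elements, which matches the abstract's reading that Elicit and SciSpace (low RHS) hallucinated least while ChatGPT 3.5 and Bing (high RHS) hallucinated most; the actual score range and weighting used in the study are defined in the full article.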