RT Article
T1 Accuracy is inaccurate: Why a focus on diagnostic accuracy for medical chatbot AIs will not lead to improved health outcomes
JF Bioethics
VO 39
IS 2
SP 163
OP 169
A1 Milford, Stephen R.
LA English
YR 2025
UL https://ixtheo.de/Record/1915522285
AB Since its launch in November 2022, ChatGPT has become a global phenomenon, sparking widespread public interest in chatbot artificial intelligences (AIs) generally. While not approved for medical use, it is capable of passing all three United States medical licensing exams and offers diagnostic accuracy comparable to that of a human doctor. It seems inevitable that it, and tools like it, are already being used, and will continue to be used, by the general public to obtain medical diagnostic information or treatment plans. Before we are taken in by the promise of a golden age for chatbot medical AIs, it would be wise to consider the implications of using these tools as either supplements to, or substitutes for, human doctors. With the rise of publicly available chatbot AIs, there has been a keen research focus on the diagnostic accuracy of these tools. This, however, has left a notable gap in our understanding of their implications for health outcomes. Diagnostic accuracy is only part of good health care. For example, the doctor–patient relationship is crucial to positive health outcomes. This paper challenges the recent focus on diagnostic accuracy by drawing attention to the causal relationship between doctor–patient relationships and health outcomes, arguing that chatbot AIs may even hinder outcomes in numerous ways, including by subtracting the elements of perception and observation that are crucial to clinical consultations. The paper offers brief suggestions for improving chatbot medical AIs so that they positively impact health outcomes.
K1 medical diagnosis
K1 improved health outcomes
K1 doctor–patient relationships
K1 ChatGPT
K1 chatbot AIs
DO 10.1111/bioe.13365