Assessing the performance of ChatGPT in bioethics: a large language model’s moral compass in medicine

Chat Generative Pre-Trained Transformer (ChatGPT) has been a growing point of interest in medical education yet has not been assessed in the field of bioethics. This study evaluated the accuracy of ChatGPT-3.5 (April 2023 version) in answering text-based, multiple choice bioethics questions at the level of US third-year and fourth-year medical students. A total of 114 bioethical questions were identified from the widely utilised question banks UWorld and AMBOSS. Accuracy, bioethical categories, difficulty levels, specialty data, error analysis and character count were analysed. We found that ChatGPT had an accuracy of 59.6%, with greater accuracy in topics surrounding death and patient-physician relationships and performed poorly on questions pertaining to informed consent. Of all the specialties, it performed best in paediatrics. Yet, certain specialties and bioethical categories were under-represented. Among the errors made, it tended towards content errors and application errors. There were no significant associations between character count and accuracy. Nevertheless, this investigation contributes to the ongoing dialogue on artificial intelligence’s (AI) role in healthcare and medical education, advocating for further research to fully understand AI systems’ capabilities and constraints in the nuanced field of medical bioethics.


Bibliographic Details
Main Authors: Chen, Jamie (Author); Cadiente, Angelo (Author); Kasselman, Lora J. (Author); Pilkington, Bryan (Author)
Media Type: Electronic Resource, Article
Language: English
Check availability: HBZ Gateway
Interlibrary Loan: Interlibrary Loan for the Fachinformationsdienste (Specialized Information Services in Germany)
Published: 2024
In: Journal of medical ethics
Year: 2024, Volume: 50, Issue: 2, Pages: 97-101
Available Online: Full text (license required)

MARC

LEADER 00000naa a22000002c 4500
001 1918775540
003 DE-627
005 20250228102320.0
007 cr uuu---uuuuu
008 250228s2024 xx |||||o 00| ||eng c
024 7 |a 10.1136/jme-2023-109366  |2 doi 
035 |a (DE-627)1918775540 
035 |a (DE-599)KXP1918775540 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 1  |2 ssgn 
100 1 |a Chen, Jamie  |e VerfasserIn  |0 (orcid)0000-0003-0572-2291  |4 aut 
245 1 0 |a Assessing the performance of ChatGPT in bioethics: a large language model’s moral compass in medicine 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a Chat Generative Pre-Trained Transformer (ChatGPT) has been a growing point of interest in medical education yet has not been assessed in the field of bioethics. This study evaluated the accuracy of ChatGPT-3.5 (April 2023 version) in answering text-based, multiple choice bioethics questions at the level of US third-year and fourth-year medical students. A total of 114 bioethical questions were identified from the widely utilised question banks UWorld and AMBOSS. Accuracy, bioethical categories, difficulty levels, specialty data, error analysis and character count were analysed. We found that ChatGPT had an accuracy of 59.6%, with greater accuracy in topics surrounding death and patient-physician relationships and performed poorly on questions pertaining to informed consent. Of all the specialties, it performed best in paediatrics. Yet, certain specialties and bioethical categories were under-represented. Among the errors made, it tended towards content errors and application errors. There were no significant associations between character count and accuracy. Nevertheless, this investigation contributes to the ongoing dialogue on artificial intelligence’s (AI) role in healthcare and medical education, advocating for further research to fully understand AI systems’ capabilities and constraints in the nuanced field of medical bioethics. 
601 |a Performance 
601 |a ChatGPT 
700 1 |a Cadiente, Angelo  |e VerfasserIn  |4 aut 
700 1 |a Kasselman, Lora J.  |e VerfasserIn  |4 aut 
700 1 |a Pilkington, Bryan  |e VerfasserIn  |4 aut 
773 0 8 |i Enthalten in  |t Journal of medical ethics  |d London : BMJ Publ., 1975  |g 50(2024), 2, Seite 97-101  |h Online-Ressource  |w (DE-627)323607802  |w (DE-600)2026397-1  |w (DE-576)260773972  |x 1473-4257  |7 nnas 
773 1 8 |g volume:50  |g year:2024  |g number:2  |g pages:97-101 
856 4 0 |u https://doi.org/10.1136/jme-2023-109366  |x Resolving-System  |z lizenzpflichtig  |3 Volltext 
856 4 0 |u https://jme.bmj.com/content/50/2/97  |x Verlag  |z lizenzpflichtig  |3 Volltext 
951 |a AR 
ELC |a 1 
ITA |a 1  |t 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4675088260 
LOK |0 003 DE-627 
LOK |0 004 1918775540 
LOK |0 005 20250228102320 
LOK |0 008 250228||||||||||||||||ger||||||| 
LOK |0 040   |a DE-Tue135  |c DE-627  |d DE-Tue135 
LOK |0 092   |o n 
LOK |0 852   |a DE-Tue135 
LOK |0 852 1  |9 00 
LOK |0 935   |a ixzs  |a ixzo  |a ixrk 
OAS |a 1  |b inherited from superior work 
ORI |a SA-MARC-ixtheoa001.raw
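
As a rough illustration of how the MARC fields above could be consumed programmatically, the following sketch reads an exported copy of this record with the pymarc library and pulls out the title (field 245), the DOI (field 024) and the online access links (field 856). The file name record.mrc and the export step are assumptions, not part of the catalogue record; this is a minimal sketch under those assumptions.

from pymarc import MARCReader

# Minimal sketch, assuming the record above has been exported as binary MARC
# into a file named record.mrc (hypothetical file name).
with open("record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        # 245 $a: title proper
        field_245 = record["245"]
        title = field_245["a"] if field_245 else None
        # 024 $a: DOI (10.1136/jme-2023-109366 in this record)
        field_024 = record["024"]
        doi = field_024["a"] if field_024 else None
        # 856 $u: online access URLs (this record carries two 856 fields)
        urls = []
        for field in record.get_fields("856"):
            urls.extend(field.get_subfields("u"))
        print(title, doi, urls)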