Large language models in medical ethics: useful but not expert

Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates...


Bibliographic Details
Authors: Ferrario, Andrea 1981- (Author) ; Biller-Andorno, Nikola (Author)
Format: Electronic Article
Language: English
Published: 2024
In: Journal of medical ethics
Year: 2024, Volume: 50, Issue: 9, Pages: 653-654
Online Access: Presumably Free Access
Full text via DOI (subscription required)
Full text via publisher (subscription required)

MARC

LEADER 00000caa a22000002c 4500
001 1918786682
003 DE-627
005 20250301054351.0
007 cr uuu---uuuuu
008 250228s2024 xx |||||o 00| ||eng c
024 7 |a 10.1136/jme-2023-109770  |2 doi 
035 |a (DE-627)1918786682 
035 |a (DE-599)KXP1918786682 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 1  |2 ssgn 
100 1 |e VerfasserIn  |0 (DE-588)1340205874  |0 (DE-627)1899521771  |4 aut  |a Ferrario, Andrea  |d 1981- 
109 |a Ferrario, Andrea 1981- 
245 1 0 |a Large language models in medical ethics: useful but not expert 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears to be an interesting line of research. However, despite the promising trajectory of LLM technology in medicine, it is crucial to exercise caution and refrain from attributing expertise in medical ethics to them. Our thesis follows from an examination of the nature of expertise and the epistemic limitations that affect LLM technology. As a result, we propose two more fitting applications of LLMs in medical ethics: first, as tools for mining electronic health records or scientific literature, thereby supplementing evidence for resolving medical ethics cases, and second, as educational platforms to foster ethical reflection and critical thinking skills among students and residents. The integration of LLMs in medical ethics, while promising, requires careful consideration of their epistemic limitations. Consequently, a well-considered definition of their role in ethically sensitive decision-making is crucial. 
700 1 |a Biller-Andorno, Nikola  |e VerfasserIn  |4 aut 
773 0 8 |i Enthalten in  |t Journal of medical ethics  |d London : BMJ Publ., 1975  |g 50(2024), 9, Seite 653-654  |h Online-Ressource  |w (DE-627)323607802  |w (DE-600)2026397-1  |w (DE-576)260773972  |x 1473-4257  |7 nnas 
773 1 8 |g volume:50  |g year:2024  |g number:9  |g pages:653-654 
856 |u https://jme.bmj.com/content/medethics/early/2024/01/22/jme-2023-109770.full.pdf  |x unpaywall  |z Vermutlich kostenfreier Zugang  |h publisher [deprecated] 
856 4 0 |u https://doi.org/10.1136/jme-2023-109770  |x Resolving-System  |z lizenzpflichtig  |3 Volltext 
856 4 0 |u https://jme.bmj.com/content/50/9/653  |x Verlag  |z lizenzpflichtig  |3 Volltext 
951 |a AR 
ELC |a 1 
ITA |a 1  |t 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4675113583 
LOK |0 003 DE-627 
LOK |0 004 1918786682 
LOK |0 005 20250228113456 
LOK |0 008 250228||||||||||||||||ger||||||| 
LOK |0 040   |a DE-Tue135  |c DE-627  |d DE-Tue135 
LOK |0 092   |o n 
LOK |0 852   |a DE-Tue135 
LOK |0 852 1  |9 00 
LOK |0 935   |a ixzs  |a ixzo  |a ixrk 
OAS |a 1 
ORI |a SA-MARC-ixtheoa001.raw
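
For readers who want to work with the record programmatically, the following is a minimal sketch that parses the plain-text MARC dump shown above into Python structures. It assumes the "TAG indicators |code value" line layout of this particular export and is not a general MARC21/MARCXML parser; the helper name parse_marc_lines is hypothetical.

import re

def parse_marc_lines(lines):
    """Map each field line to (tag, {subfield_code: [values]})."""
    fields = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("LEADER"):
            continue  # the LEADER has fixed character positions, not subfields
        tag = line[:3]
        subfields = {}
        # Subfields look like "|a value"; a value runs until the next "|".
        for code, value in re.findall(r"\|(\w)\s+([^|]+)", line):
            subfields.setdefault(code, []).append(value.strip())
        fields.append((tag, subfields))
    return fields

# Two fields copied verbatim from the record above.
sample = [
    "245 1 0 |a Large language models in medical ethics: useful but not expert",
    "100 1 |e VerfasserIn  |0 (DE-588)1340205874  |0 (DE-627)1899521771  |4 aut  |a Ferrario, Andrea  |d 1981-",
]
for tag, subs in parse_marc_lines(sample):
    print(tag, subs["a"][0])  # 245 -> title, 100 -> main author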