Trustworthy medical AI systems need to know when they don’t know

Bibliographic Details
Main Author: Grote, Thomas (Author)
Format: Electronic Article
Language: English
Published: BMJ Publ. 2021
In: Journal of medical ethics
Year: 2021, Volume: 47, Issue: 5, Pages: 337-338
Online Access: Full text (subject to licence): https://doi.org/10.1136/medethics-2021-107463
Full text (subject to licence): http://jme.bmj.com/content/47/5/337.abstract

MARC

LEADER 00000caa a22000002 4500
001 1816164917
003 DE-627
005 20230428063552.0
007 cr uuu---uuuuu
008 220908s2021 xx |||||o 00| ||eng c
024 7 |a 10.1136/medethics-2021-107463  |2 doi 
035 |a (DE-627)1816164917 
035 |a (DE-599)KXP1816164917 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 1  |2 ssgn 
100 1 |a Grote, Thomas  |e VerfasserIn  |4 aut 
109 |a Grote, Thomas  |a Grotte, Thomas  |a Groten, Thomas 
245 1 0 |a Trustworthy medical AI systems need to know when they don’t know 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a There is much to learn from Durán and Jongsma’s paper.1 One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence (AI). In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which then need to be weighed by physicians. Hence, while the implementation of AI systems might give rise to ethical issues (for example, overtreatment, defensive medicine or paternalism2), the issue that lies at their heart is an epistemic problem: how can physicians know whether to trust decisions made by AI systems? Accordingly, various studies examining the interaction of AI systems and physicians have shown that, without being able to evaluate their trustworthiness, physicians, especially novices, become over-reliant on algorithmic support, and are ultimately led astray by incorrect decisions.3-5 This leads to a second insight from the paper, namely that even if some (deep learning-based) AI system happens to be opaque, it is still not built on the moon. To assess its trustworthiness, AI developers or physicians have different sorts of higher-order evidence at hand. Most importantly, … 
773 0 8 |i Enthalten in  |t Journal of medical ethics  |d London : BMJ Publ., 1975  |g 47(2021), 5, Seite 337-338  |h Online-Ressource  |w (DE-627)323607802  |w (DE-600)2026397-1  |w (DE-576)260773972  |x 1473-4257  |7 nnns 
773 1 8 |g volume:47  |g year:2021  |g number:5  |g pages:337-338 
856 4 0 |u https://doi.org/10.1136/medethics-2021-107463  |x Resolving-System  |z lizenzpflichtig  |3 Volltext 
856 4 0 |u http://jme.bmj.com/content/47/5/337.abstract  |x Verlag  |z lizenzpflichtig  |3 Volltext 
935 |a mteo 
951 |a AR 
ELC |a 1 
ITA |a 1  |t 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4185618867 
LOK |0 003 DE-627 
LOK |0 004 1816164917 
LOK |0 005 20220908053802 
LOK |0 008 220908||||||||||||||||ger||||||| 
LOK |0 035   |a (DE-Tue135)IxTheo#2022-08-03#55A9D56D7674E3C2B54737B8229B1FAF6306CDE6 
LOK |0 040   |a DE-Tue135  |c DE-627  |d DE-Tue135 
LOK |0 092   |o n 
LOK |0 852   |a DE-Tue135 
LOK |0 852 1  |9 00 
LOK |0 935   |a ixzs  |a ixrk  |a zota 
OAS |a 1  |b inherited from superior work 
ORI |a SA-MARC-ixtheoa001.raw