Trust does not need to be human: it is possible to trust medical AI

Bibliographic Details
Authors: Ferrario, Andrea (Author); Loi, Michele (Author); Viganò, Eleonora (Author)
Format: Electronic Article
Language: English
Published: BMJ Publ., 2021
In: Journal of medical ethics
Year: 2021, Volume: 47, Issue: 6, Pages: 437-438
Online Access: Full text (free of charge)

MARC

LEADER 00000caa a22000002 4500
001 1816165212
003 DE-627
005 20230428063555.0
007 cr uuu---uuuuu
008 220908s2021 xx |||||o 00| ||eng c
024 7 |a 10.1136/medethics-2020-106922  |2 doi 
035 |a (DE-627)1816165212 
035 |a (DE-599)KXP1816165212 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 1  |2 ssgn 
100 1 |a Ferrario, Andrea  |e VerfasserIn  |4 aut 
245 1 0 |a Trust does not need to be human: it is possible to trust medical AI 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human-human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. In this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not imply specific properties in the AI system that in fact only humans can have. This account of trust is applicable, in particular, to all cases where a physician relies on the medical AI predictions to support his or her decision making. 
700 1 |a Loi, Michele  |e VerfasserIn  |4 aut 
700 1 |a Viganò, Eleonora  |e VerfasserIn  |4 aut 
773 0 8 |i Enthalten in  |t Journal of medical ethics  |d London : BMJ Publ., 1975  |g 47(2021), 6, Seite 437-438  |h Online-Ressource  |w (DE-627)323607802  |w (DE-600)2026397-1  |w (DE-576)260773972  |x 1473-4257  |7 nnns 
773 1 8 |g volume:47  |g year:2021  |g number:6  |g pages:437-438 
856 |u https://jme.bmj.com/content/medethics/47/6/437.full.pdf  |x unpaywall  |z Vermutlich kostenfreier Zugang  |h publisher [open (via page says license)] 
856 4 0 |u https://doi.org/10.1136/medethics-2020-106922  |x Resolving-System  |z kostenfrei  |3 Volltext 
856 4 0 |u http://jme.bmj.com/content/47/6/437.abstract  |x Verlag  |z kostenfrei  |3 Volltext 
935 |a mteo 
951 |a AR 
ELC |a 1 
ITA |a 1  |t 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4185619162 
LOK |0 003 DE-627 
LOK |0 004 1816165212 
LOK |0 005 20220908053803 
LOK |0 008 220908||||||||||||||||ger||||||| 
LOK |0 035   |a (DE-Tue135)IxTheo#2022-08-03#185E3AA324460728D14EE843E06A0B9953D20904 
LOK |0 040   |a DE-Tue135  |c DE-627  |d DE-Tue135 
LOK |0 092   |o n 
LOK |0 852   |a DE-Tue135 
LOK |0 852 1  |9 00 
LOK |0 935   |a ixzs  |a ixrk  |a zota 
OAS |a 1 
ORI |a SA-MARC-ixtheoa001.raw