Limits of trust in medical AI

Bibliographic Details
Main Author: Hatherley, Joshua James (Author)
Format: Electronic Article
Language: English
Published: BMJ Publ. 2020
In: Journal of medical ethics
Year: 2020, Volume: 46, Issue: 7, Pages: 478-481
Online Access: Full text (license required)
Description
Summary: Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
ISSN: 1473-4257
Contained in: Journal of medical ethics
Persistent identifiers: DOI: 10.1136/medethics-2019-105935