Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI

Bibliographic Details
Main Author: Alvarado, Ramón
Format: Electronic Article
Language: English
Published: Wiley-Blackwell 2022
In: Bioethics
Year: 2022, Volume: 36, Issue: 2, Pages: 121-133
IxTheo Classification: NCH Medical ethics; NCJ Ethics of science
Further subjects: deep learning; epistemic opacity; radiology; medical AI; neural networks; error
Online Access: Full text (license required)
Description
Summary: The sudden rise in the ability of machine learning methodologies, such as deep neural networks, to identify and predict with great accuracy instances of malignant cell growth from radiological images has led prominent developers of this technology, such as Geoffrey Hinton, to hold the view that “[…] we should stop training radiologists.” Similar views exist in other contexts regarding the replacement of humans with artificial intelligence (AI) technologies. The assumption behind such views is that deep neural networks are better than human radiologists: more accurate, less costly, and more predictive than their human counterparts. In this paper, I argue that these considerations, even if true, are simply inadequate as reasons to allocate to these sorts of artifacts the kind of trust suggested by Hinton and others. In particular, I show that if the same considerations were true of something other than an AI device, say a pigeon, we would not have sufficient reason to trust it in the way suggested for deep neural networks in a medical setting. If this is the case, then these considerations are also insufficient grounds for trusting AI enough to replace radiologists. Furthermore, I argue that the reliability of AI methodologies such as deep neural networks, which is at the center of this argument, has not yet been established, and that establishing it faces fundamental challenges. Because of these challenges, it is not possible to ascribe to these methodologies the level of reliability expected of a deployed medical device. So, not only are the reasons cited in favor of deploying AI technologies in medical settings insufficient even if they are true, but knowing whether they are true at all faces non-trivial epistemic challenges. If this is so, then we have no good reasons to advocate replacing radiologists with AI methodologies such as deep neural networks.
ISSN:1467-8519
Contained in: Bioethics
Persistent identifier: DOI: 10.1111/bioe.12959