Should we be afraid of medical AI?

Bibliographic Details
Main Author: Di Nucci, Ezio (Author)
Format: Electronic Article
Language: English
Published: BMJ Publ. 2019
In: Journal of Medical Ethics
Year: 2019, Volume: 45, Issue: 8, Pages: 556-558
Online Access: Full text (licence required)
Description
Summary: I analyse an argument according to which medical artificial intelligence (AI) represents a threat to patient autonomy—recently put forward by Rosalind McDougall in the Journal of Medical Ethics. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: (1) it confuses AI with machine learning; (2) it misses machine learning’s potential for personalised medicine through big data; (3) it fails to distinguish between evidence-based advice and decision-making within healthcare. I conclude that how much and which tasks we should delegate to machine learning and other technologies within healthcare and beyond is indeed a crucial question of our time, but in order to answer it, we must be careful in analysing and properly distinguishing between the different systems and the different delegated tasks.
ISSN: 1473-4257
Contained in: Journal of Medical Ethics
Persistent identifiers: DOI: 10.1136/medethics-2018-105281