The selective deployment of AI in healthcare: An ethical algorithm for algorithms

Bibliographic Details
Main Authors: Vandersluis, Robert (Author) ; Savulescu, Julian (Author)
Format: Electronic Article
Language: English
Published: Wiley-Blackwell 2024
In: Bioethics
Year: 2024, Volume: 38, Issue: 5, Pages: 391-400
Further keywords: B Artificial Intelligence
B Melanoma
B Exclusion
B Bias
B Machine Learning
B Algorithm
Online access: Full text (free of charge)
Description
Summary: Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to the minimum. We offer an ethical algorithm for algorithms—showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.
ISSN: 1467-8519
Contained in: Bioethics
Persistent identifiers: DOI: 10.1111/bioe.13281