Selective Deployment of AI in Healthcare and the Problem of Declining Human Expertise


Full description

Bibliographic Details
Main Author: Feldblyum Le Blevennec, Marie Kerguelen (Author)
Format: Electronic Article
Language: English
Published: 2025
In: Bioethics
Year: 2025, Volume: 39, Issue: 7, Pages: 688-692
Further subjects: underrepresented groups; Algorithms; Bias; Artificial Intelligence; Expertise
Online Access: Full text (license required)
Description
Summary: Machine-learning algorithms are transforming healthcare diagnostics and prognostics. However, they sometimes underperform for groups underrepresented in their training data. Vandersluis and Savulescu have suggested selectively deploying these algorithms for populations well represented in the training data, while excluding underrepresented groups until the algorithms are improved. In this paper, I explore one long-term risk of such selective deployment for certain small underrepresented groups, such as those with rare diseases. The risk in question is a potential long-term decline in the human expertise critical for such small groups: because they are excluded from effective algorithmic care, they would continue to rely on non-algorithmic, human expertise even in the long run. I then discuss how best to preserve human expertise and maintain long-term access to quality care for excluded groups, and contend that such expertise preservation is essential for the ethical deployment of algorithmic processes in healthcare.
ISSN: 1467-8519
Contained in: Bioethics
Persistent identifiers: DOI: 10.1111/bioe.13424