Testimonial injustice in medical machine learning

Bibliographic Details
Main Author: Pozzi, Giorgia (Author)
Format: Electronic Article
Language: English
Published: 2023
In: Journal of medical ethics
Year: 2023, Volume: 49, Issue: 8, Pages: 536-540
Online Access: Presumably Free Access
Full text (licence required)
Description
Summary: Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems’ role in mediating patient-physician relations. I thereby consider how ML systems may silence patients’ voices and relativise the credibility of their opinions, which undermines their overall credibility status without valid moral and epistemic justification. More specifically, I argue that withholding credibility due to how ML systems operate can be particularly harmful to patients and, apart from adverse outcomes, qualifies as a form of testimonial injustice. I make my case for testimonial injustice in medical ML by considering ML systems currently used in the USA to predict patients’ risk of misusing opioids (automated Prediction Drug Monitoring Programmes, PDMPs for short). I argue that the locus of testimonial injustice in ML-mediated medical encounters is found in the fact that these systems are treated as markers of trustworthiness on which patients’ credibility is assessed. I further show how ML-based PDMPs exacerbate and further propagate social inequalities at the expense of vulnerable social groups.
ISSN: 1473-4257
Contained in: Journal of medical ethics
Persistent identifiers: DOI: 10.1136/jme-2022-108630