Transparent AI: reliabilist and proud

Bibliographic Details
Main Author: Mishra, Abhishek
Format: Electronic Article
Language: English
Published: BMJ Publishing Group, 2021
In: Journal of Medical Ethics
Year: 2021, Volume: 47, Issue: 5, Pages: 341-342
Online Access: Full text (license required)
Description
Summary: Durán et al argue in ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’1 that traditionally proposed solutions for making black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately not sufficient to establish their overall trustworthiness. This is because the transparency procedures currently employed, such as the use of an interpretable predictor (IP),2 cannot fully overcome the opacity of such models. Computational reliabilism (CR), an alternative approach to adjudicating trustworthiness that goes beyond transparency solutions, is argued to be more promising. CR can bring the benefits of traditional process reliabilism in epistemology to bear on this problem of model trustworthiness.

Durán et al’s explicitly reliabilist epistemology for assessing the trustworthiness of black box models is a timely addition to the current transparency-focused approaches in the literature. Their delineation of the epistemic from the ethical also serves the debate by clarifying the nature of the different problems. However, their overall account underestimates the epistemic value of certain transparency-enabling approaches by conflating different types of opacity, and it also oversimplifies the transparency-advocating arguments in the literature.

First, it is unclear why Durán et al consider transparency approaches insufficient to overcome epistemic opacity if their account of opacity is the traditional one from the machine learning literature: opacity stemming from the mismatch between (1) the mathematical optimisation in high dimensionality that is characteristic of machine learning and (2) the demands of human-scale reasoning and styles of semantic interpretation.3 …
ISSN: 1473-4257
Contained in: Journal of Medical Ethics
Persistent identifiers: DOI: 10.1136/medethics-2021-107352