Can We Bridge AI's Responsibility Gap at Will?

Bibliographic Details
Main Author: Kiener, Maximilian
Format: Electronic Article
Language: English
Published: Springer Science+Business Media B.V., 2022
In: Ethical theory and moral practice
Year: 2022, Volume: 25, Issue: 4, Pages: 575-593
Further subjects: Liability; Artificial Intelligence; Normative Powers; Responsibility gap; Answerability
Online Access: Full text (free of charge)
Description
Summary: Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as driving a car, fighting in war, or performing a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of the most sophisticated AI systems do indeed create responsibility gaps, and I ask whether we can bridge these gaps at will, viz. whether certain people could take responsibility for AI-caused harm simply by performing a certain speech act, just as people can give permission for something simply by performing the act of consent. So understood, taking responsibility would be a genuine normative power. I first discuss and reject the view of Champagne and Tonkens, who advocate a view of taking liability. According to this view, a military commander can and must, ahead of time, accept liability to blame and punishment for any harm caused by autonomous weapon systems under her command. I then defend my own proposal of taking answerability, viz. the view that people can make themselves morally answerable for the harm caused by AI systems, not only ahead of time but also when harm has already been caused.
ISSN:1572-8447
Contained in: Ethical theory and moral practice
Persistent identifier: DOI: 10.1007/s10677-022-10313-9