Can We Bridge AI’s Responsibility Gap at Will?

Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of the most sophisticated AI systems do indeed create responsibility gaps, and I ask whether we can bridge these gaps at will, viz. whether certain people could take responsibility for AI-caused harm simply by performing a certain speech act, just as people can give permission for something simply by performing the act of consent. So understood, taking responsibility would be a genuine normative power. I first discuss and reject the view of Champagne and Tonkens, who advocate a view of taking liability. According to this view, a military commander can and must, ahead of time, accept liability to blame and punishment for any harm caused by autonomous weapon systems under her command. I then defend my own proposal of taking answerability, viz. the view that people can make themselves morally answerable for the harm caused by AI systems, not only ahead of time but also when harm has already been caused.

Bibliographic Details
Main Author: Kiener, Maximilian (Author)
Format: Electronic Article
Language: English
Published: Springer Science + Business Media B.V., 2022
In: Ethical theory and moral practice
Year: 2022, Volume: 25, Issue: 4, Pages: 575-593
Further subjects: Liability; Artificial Intelligence; Normative Powers; Responsibility gap; Answerability
Online Access: Full text (free of charge)

MARC

LEADER 00000caa a22000002 4500
001 1819833933
003 DE-627
005 20230118120750.0
007 cr uuu---uuuuu
008 221024s2022 xx |||||o 00| ||eng c
024 7 |a 10.1007/s10677-022-10313-9  |2 doi 
035 |a (DE-627)1819833933 
035 |a (DE-599)KXP1819833933 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 1  |2 ssgn 
100 1 |e VerfasserIn  |0 (DE-588)127848387X  |0 (DE-627)1831403668  |4 aut  |a Kiener, Maximilian 
109 |a Kiener, Maximilian 
245 1 0 |a Can We Bridge AI’s Responsibility Gap at Will? 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of the most sophisticated AI systems do indeed create responsibility gaps, and I ask whether we can bridge these gaps at will, viz. whether certain people could take responsibility for AI-caused harm simply by performing a certain speech act, just as people can give permission for something simply by performing the act of consent. So understood, taking responsibility would be a genuine normative power. I first discuss and reject the view of Champagne and Tonkens, who advocate a view of taking liability. According to this view, a military commander can and must, ahead of time, accept liability to blame and punishment for any harm caused by autonomous weapon systems under her command. I then defend my own proposal of taking answerability, viz. the view that people can make themselves morally answerable for the harm caused by AI systems, not only ahead of time but also when harm has already been caused. 
650 4 |a Answerability 
650 4 |a Artificial Intelligence 
650 4 |a Liability 
650 4 |a Normative Powers 
650 4 |a Responsibility gap 
773 0 8 |i Enthalten in  |t Ethical theory and moral practice  |d Dordrecht [u.a.] : Springer Science + Business Media B.V, 1998  |g 25(2022), 4, Seite 575-593  |h Online-Ressource  |w (DE-627)320527093  |w (DE-600)2015306-5  |w (DE-576)104558555  |x 1572-8447  |7 nnns 
773 1 8 |g volume:25  |g year:2022  |g number:4  |g pages:575-593 
856 |u https://link.springer.com/content/pdf/10.1007/s10677-022-10313-9.pdf  |x unpaywall  |z Vermutlich kostenfreier Zugang  |h publisher [open (via crossref license)] 
856 4 0 |u https://doi.org/10.1007/s10677-022-10313-9  |x Resolving-System  |z kostenfrei  |3 Volltext 
951 |a AR 
ELC |a 1 
ITA |a 1  |t 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4201407888 
LOK |0 003 DE-627 
LOK |0 004 1819833933 
LOK |0 005 20221024165234 
LOK |0 008 221024||||||||||||||||ger||||||| 
LOK |0 040   |a DE-Tue135  |c DE-627  |d DE-Tue135 
LOK |0 092   |o n 
LOK |0 852   |a DE-Tue135 
LOK |0 852 1  |9 00 
LOK |0 935   |a ixzs  |a ixzo 
OAS |a 1 
ORI |a SA-MARC-ixtheoa001.raw