A new problem of evil?

This article examines whether artificial intelligence (AI)-driven harm can be classified as moral or natural evil, or whether a new category - artificial evil - is needed. Should AI’s harm be seen as a product of human design, thus maintaining moral responsibility for its creators, or do AI’s autonomous actions resemble natural evil, where harm arises unintentionally? The concept of artificial evil, combining elements of both moral and natural evil, is presented to better address this dilemma. Just as AI is still a form of intelligence (albeit non-biological), artificial evil would still be evil in the sense that it results in real harm or suffering - it is just that this harm is produced by AI systems rather than by nature or human moral agents directly. The discussion further extends into the realm of defence or theodicy, drawing parallels with the Free Will Defence and questioning whether AI autonomy may be justified even if it results in harm, much like human free will. Ultimately, the article calls for a re-evaluation of our ethical frameworks and glossary of terms to address the emerging challenges of AI autonomy and its moral implications.

Bibliographic Details
Main Author: Aslantatar, Nesim (Author)
Document Type: Electronic Article
Language: English
Check availability: HBZ Gateway
Interlibrary Loan: Interlibrary Loan for the Fachinformationsdienste (Specialized Information Services in Germany)
Published: 2025
In: Religious studies
Year: 2025, Volume: 61, Issue: 3, Pages: 746-748
Further keywords: problem of artificial evil
Moral Responsibility
free will defence
Autonomy
Online Access: Full text (license required)

MARC

LEADER 00000naa a22000002c 4500
001 1942601514
003 DE-627
005 20251126134836.0
007 cr uuu---uuuuu
008 251126s2025 xx |||||o 00| ||eng c
024 7 |a 10.1017/S003441252500023X  |2 doi 
035 |a (DE-627)1942601514 
035 |a (DE-599)KXP1942601514 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 0  |2 ssgn 
100 1 |a Aslantatar, Nesim  |e VerfasserIn  |0 (orcid)0000-0002-7817-8576  |4 aut 
245 1 2 |a A new problem of evil? 
264 1 |c 2025 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a This article examines whether artificial intelligence (AI)-driven harm can be classified as moral or natural evil, or whether a new category - artificial evil - is needed. Should AI’s harm be seen as a product of human design, thus maintaining moral responsibility for its creators, or do AI’s autonomous actions resemble natural evil, where harm arises unintentionally? The concept of artificial evil, combining elements of both moral and natural evil, is presented to better address this dilemma. Just as AI is still a form of intelligence (albeit non-biological), artificial evil would still be evil in the sense that it results in real harm or suffering - it is just that this harm is produced by AI systems rather than by nature or human moral agents directly. The discussion further extends into the realm of defence or theodicy, drawing parallels with the Free Will Defence and questioning whether AI autonomy may be justified even if it results in harm, much like human free will. Ultimately, the article calls for a re-evaluation of our ethical frameworks and glossary of terms to address the emerging challenges of AI autonomy and its moral implications. 
601 |a Problem 
650 4 |a Autonomy 
650 4 |a free will defence 
650 4 |a Moral Responsibility 
650 4 |a problem of artificial evil 
773 0 8 |i Enthalten in  |t Religious studies  |d Cambridge [u.a.] : Cambridge Univ. Press, 1965  |g 61(2025), 3, Seite 746-748  |h Online-Ressource  |w (DE-627)265785405  |w (DE-600)1466479-3  |w (DE-576)079718671  |x 1469-901X  |7 nnas 
773 1 8 |g volume:61  |g year:2025  |g number:3  |g pages:746-748 
856 4 0 |u https://doi.org/10.1017/S003441252500023X  |x Resolving-System  |z lizenzpflichtig  |3 Volltext  |7 1 
856 4 0 |u https://www.cambridge.org/core/journals/religious-studies/article/new-problem-of-evil/D2EE20D5DAFD2946D92CE8096C90EA82  |x Verlag  |z lizenzpflichtig  |3 Volltext  |7 1 
951 |a AR 
ELC |a 1 
ITA |a 1  |t 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4814243731 
LOK |0 003 DE-627 
LOK |0 004 1942601514 
LOK |0 005 20251126134836 
LOK |0 008 251126||||||||||||||||ger||||||| 
LOK |0 040   |a DE-Tue135  |c DE-627  |d DE-Tue135 
LOK |0 092   |o n 
LOK |0 852   |a DE-Tue135 
LOK |0 852 1  |9 00 
LOK |0 935   |a ixzs  |a ixzo 
LOK |0 939   |a 26-11-25  |b l01 
ORI |a SA-MARC-ixtheoa001.raw 
REL |a 1 
SUB |a REL