Moral Status for Malware! The Difficulty of Defining Advanced Artificial Intelligence

Bibliographic Details
Published in: Cambridge quarterly of healthcare ethics
Main Author: Mowbray, Miranda (Author)
Format: Electronic Article
Language: English
Published: Cambridge Univ. Press 2021
In: Cambridge quarterly of healthcare ethics
Further subjects:B malware
B artificial intelligence (AI)
B criteria for consciousness
B Code
B Robots
Online Access: Full text (license required)
Description
Summary: The suggestion has been made that future advanced artificial intelligence (AI) that passes certain consciousness-related criteria should be treated as having moral status, and that humans would therefore have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness, and argues against moral status for AI on the grounds that human malware authors may design malware to fake consciousness. Indeed, the article warns that malware authors have stronger incentives than authors of legitimate software to create code that passes some of the criteria. Thus, code that appears to be benign, but is in fact malware, might become the most common form of software to be treated as having moral status.
ISSN:1469-2147
Contained in: Cambridge quarterly of healthcare ethics
Persistent identifiers:DOI: 10.1017/S0963180120001061