Moral Status for Malware! The Difficulty of Defining Advanced Artificial Intelligence
Main Author:
Format: Electronic Article
Language: English
Check availability: HBZ Gateway
Journals Online & Print:
Interlibrary Loan: Interlibrary Loan for the Fachinformationsdienste (Specialized Information Services in Germany)
Published: 2021
In: Cambridge quarterly of healthcare ethics
Year: 2021, Volume: 30, Issue: 3, Pages: 517-528
Further subjects: malware; artificial intelligence (AI); criteria for consciousness; Code; Robots
Online Access: Full text (license required)
Summary: The suggestion has been made that future advanced artificial intelligence (AI) that passes certain consciousness-related criteria should be treated as having moral status, and that humans would therefore have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness, and argues against moral status for AI on the grounds that human malware authors may design malware to fake consciousness. In fact, the article warns that malware authors have stronger incentives than authors of legitimate software to create code that passes some of the criteria. Thus, code that appears benign but is in fact malware might become the most common form of software to be treated as having moral status.
ISSN: 1469-2147
Contained in: Cambridge quarterly of healthcare ethics
Persistent identifiers: DOI: 10.1017/S0963180120001061