Evaluation of artificial intelligence clinical applications: Detailed case analyses show value of healthcare ethics approach in identifying patient care issues
Format: Electronic Article
Language: English
Published: 2021
In: Bioethics
Year: 2021, Volume: 35, Issue: 7, Pages: 623-633
IxTheo Classification: NCH Medical ethics; NCJ Ethics of science
Further subjects: AI applications in healthcare; ethics of new technologies; healthcare ethics; Ethical Evaluation; Artificial Intelligence; ethical frameworks
Online access: Presumably free access; full text (license required)
Summary: This paper is one of the first to analyse the ethical implications of specific healthcare artificial intelligence (AI) applications, and the first to provide a detailed analysis of AI-based systems for clinical decision support. AI is increasingly being deployed across multiple domains. In response, a plethora of ethical guidelines and principles for general AI use have been published, with some convergence about which ethical concepts are relevant to this new technology. However, few of these frameworks are healthcare-specific, and there has been limited examination of actual AI applications in healthcare. Our ethical evaluation identifies context- and case-specific healthcare ethical issues for two applications, and investigates the extent to which the general ethical principles for AI-assisted healthcare expressed in existing frameworks capture what is most ethically relevant from the perspective of healthcare ethics. We provide a detailed description and analysis of two AI-based systems for clinical decision support (Painchek® and IDx-DR). Our results identify ethical challenges associated with potentially deceptive promissory claims, lack of patient and public involvement in healthcare AI development and deployment, and lack of attention to the impact of AIs on healthcare relationships. Our analysis also highlights the close connection between evaluation and technical development and reporting. Critical appraisal frameworks for healthcare AIs should include explicit ethical evaluation with benchmarks. However, each application will require scrutiny across the AI life-cycle to identify ethical issues specific to healthcare. This level of analysis requires more attention to detail than is suggested by current ethical guidance or frameworks.
ISSN: 1467-8519
Reference: Errata: "Corrigendum (2022)"
Contained in: Bioethics
Persistent identifiers: DOI: 10.1111/bioe.12885