Empowering Patient Autonomy: The Role of Large Language Models (LLMs) in Scaffolding Informed Consent in Medical Practice
| Authors: | ; ; |
|---|---|
| Format: | Electronic Article |
| Language: | English |
| Check availability: | HBZ Gateway |
| Interlibrary Loan: | Interlibrary Loan for the Fachinformationsdienste (Specialized Information Services in Germany) |
| Published: | 2026 |
| In: | Bioethics, Year: 2026, Volume: 40, Issue: 2, Pages: 183-193 |
| Further subjects: | Informed Consent; artificial intelligence ethics; medical decision-making; patient autonomy; large language models |
| Online Access: | Full text (free of charge) |
| Summary: | The principle of (respect for) patient autonomy has traditionally emphasized independence in medical decision-making, reflecting a broader commitment to epistemic individualism. However, recent philosophical work has challenged this view, suggesting that autonomous decisions are inherently dependent on epistemic and social supports. Wilkinson and Levy's “scaffolded model” of autonomy demonstrates how our everyday decisions rely on distributed cognition and various forms of epistemic scaffolding—from consulting others to using technological aids like maps or calculators. This paper explores how Large Language Models (LLMs) could operationalize scaffolded autonomy in medical informed consent. We argue that rather than undermining patient autonomy, appropriately designed LLM systems could enhance it by providing flexible, personalized support for information processing and value clarification. Drawing on examples from clinical practice, we examine how LLMs might serve as cognitive scaffolds in three key areas: enhancing information accessibility and comprehension, supporting value clarification, and facilitating culturally appropriate decision-making processes. However, implementing LLMs in consent procedures raises important challenges regarding epistemic responsibility, authenticity of choice, and the maintenance of appropriate human oversight. We analyze these challenges through the lens of scaffolded autonomy, arguing that successful implementation requires moving beyond simple questions of information provision to consider how technological systems can support genuinely autonomous decision-making. The paper concludes by proposing practical guidelines for LLM implementation while highlighting broader philosophical questions about the nature of autonomous choice in technologically mediated environments. |
| ISSN: | 1467-8519 |
| Contains: | Bioethics |
| Persistent identifiers: | DOI: 10.1111/bioe.70030 |