Benefits and Risks of Using AI Agents in Research

Bibliographic Details
Authors: Hosseini, Mohammad; Murad, Maya; Resnik, David B.
Format: Electronic Article
Language: English
Published: 2026
In: The Hastings Center report
Year: 2026, Volume: 56, Issue: 1, Pages: 13-17
Further subjects: Risks; LLMs; research integrity; AI agents; Bioethics; large language models; research ethics
Online Access: Full text (free access)
Description
Summary: Scientists have begun using AI agents in tasks such as reviewing the published literature, formulating hypotheses and subjecting them to virtual tests, modeling complex phenomena, and conducting experiments. Although AI agents are likely to enhance the productivity and efficiency of scientific inquiry, their deployment also creates risks for the research enterprise and society, including poor policy decisions based on erroneous, inaccurate, or biased AI works or products; responsibility gaps in scientific research; loss of research jobs, especially entry-level ones; the deskilling of researchers; AI agents’ engagement in unethical research; AI-generated knowledge that is unverifiable by or incomprehensible to humans; and the loss of the insights and courage needed to challenge or critique AI and to engage in whistleblowing. Here, we discuss these risks and argue that, for responsible management of them, reflection on which research tasks should and should not be automated is urgently needed. To ensure responsible use of AI agents in research, institutions should train researchers in AI and algorithmic literacy, bias identification, and output verification, and should encourage understanding of the risks and limitations of AI agents. Research teams may benefit from designating an AI-specific role, such as an AI validator expert or AI guarantor, to oversee and take responsibility for the integrity of AI-assisted contributions.
ISSN: 1552-146X
Contained in: Hastings Center, The Hastings Center Report
Persistent identifiers: DOI: 10.1002/hast.70025