Chat-IRB? How application-specific language models can enhance research ethics review.
Abstract
Institutional review boards (IRBs) play a crucial role in ensuring the ethical conduct of human subjects research, but face challenges including inconsistency, delays, and inefficiencies. We propose the development and implementation of application-specific large language models (LLMs) to facilitate IRB review processes. These IRB-specific LLMs would be fine-tuned on IRB-specific literature and institutional datasets, and equipped with retrieval capabilities to access up-to-date, context-relevant information. We outline potential applications, including pre-review screening, preliminary analysis, consistency checking, and decision support. While addressing concerns about accuracy, context sensitivity, and human oversight, we acknowledge remaining challenges such as over-reliance on artificial intelligence and the need for transparency. By enhancing the efficiency and quality of ethical review while maintaining human judgement in critical decisions, IRB-specific LLMs offer a promising tool to improve research oversight. We call for pilot studies to evaluate the feasibility and impact of this approach.
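Illustrative sketch (not part of the published record): to make the proposed architecture more concrete, the following hypothetical Python example shows one way a retrieval step could ground a pre-review screening prompt in institutional guidance before a fine-tuned model is consulted. The toy corpus, function names, and the stubbed model call are all assumptions introduced here for illustration, not the authors' implementation.

# Minimal, hypothetical sketch of retrieval-augmented pre-review screening
# for an IRB-specific assistant. The guidance snippets, helper names, and the
# stubbed model call are assumptions; a real deployment would use an
# institution's own documents, a proper vector store, and a fine-tuned LLM
# operating under human oversight.

from collections import Counter
import math

# Toy stand-in for the "IRB-specific literature and institutional datasets".
GUIDANCE = {
    "consent": "Protocols involving identifiable data must describe the informed consent process.",
    "minors": "Research with participants under 18 requires parental consent and child assent.",
    "data": "Data management plans must state retention period, storage location, and access controls.",
}

def term_freq(text: str) -> Counter:
    """Bag-of-words term frequencies (lower-cased, whitespace tokenised)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list:
    """Return the k guidance snippets most similar to the protocol excerpt."""
    q = term_freq(query)
    ranked = sorted(GUIDANCE.values(), key=lambda doc: cosine(q, term_freq(doc)), reverse=True)
    return ranked[:k]

def screen_protocol(protocol_excerpt: str) -> str:
    """Assemble a pre-review screening prompt; the model call itself is stubbed."""
    context = "\n".join(retrieve(protocol_excerpt))
    prompt = (
        "You are assisting an IRB pre-review screen. Flag missing elements only; "
        "do not make approval decisions.\n\n"
        f"Relevant institutional guidance:\n{context}\n\n"
        f"Protocol excerpt:\n{protocol_excerpt}\n"
    )
    # Placeholder: an institution-hosted, fine-tuned model would be called here,
    # with its output routed to a human reviewer rather than acted on directly.
    return prompt

if __name__ == "__main__":
    print(screen_protocol("We will survey adolescents about social media use and store responses for five years."))

In this sketch the retrieval step keeps the prompt tied to current institutional guidance, while the deliberately stubbed model call reflects the paper's emphasis that final judgements remain with human reviewers.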
Description
Peer reviewed: True
Acknowledgements: Any use of generative AI in this manuscript adheres to ethical guidelines for use and acknowledgment of generative AI in academic research [39]. Each author has made a substantial contribution to the work, which has been thoroughly vetted for accuracy, and assumes responsibility for the integrity of their contributions.
Publication status: Published
Journal ISSN
1473-4257
Sponsorship
Novo Nordisk Fonden (NNF23SA0087056)
Wellcome Trust (226801)