Abstract
This study investigates the perspectives of 12 journal reviewers from diverse academic disciplines on using large language models (LLMs) in the peer review process. Through qualitative analysis of verbatim responses to an open-ended questionnaire, we identified key themes regarding the integration of LLMs into peer review. Reviewers noted that LLMs can automate tasks such as preliminary screening, plagiarism detection, and language verification, thereby reducing workload and enhancing consistency in applying review standards. However, significant ethical concerns were raised, including potential biases, lack of transparency, and risks to privacy and confidentiality. Reviewers emphasized that LLMs should complement rather than replace human judgment, and that human oversight is essential to ensure the relevance and accuracy of AI-generated feedback. This study underscores the need for clear guidelines and policies, and for their proper dissemination among researchers, to address the ethical and practical challenges of using LLMs in academic publishing.
| Original language | English |
|---|---|
| Pages (from-to) | 1383-1397 |
| Number of pages | 15 |
| Journal | Journal of Academic Ethics |
| Volume | 23 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Sept 2025 |
Keywords
- Academic publishing
- Journal review
- Large language models (LLMs)
- Peer review
- Reviewer perspectives
ASJC Scopus subject areas
- Education
- Arts and Humanities (miscellaneous)
- Sociology and Political Science
- Philosophy