Exploring the Impact of Generative AI on Peer Review: Insights from Journal Reviewers

  • Saman Ebadi
  • Hassan Nejadghanbar (Corresponding Author)
  • Ahmed Rawdhan Salman
  • Hassan Khosravi

Research output: Journal article publication › Journal article › Academic research › peer-review

8 Citations (Scopus)

Abstract

This study investigates the perspectives of 12 journal reviewers from diverse academic disciplines on using large language models (LLMs) in the peer review process. Through qualitative analysis of verbatim responses to an open-ended questionnaire, we identified key themes regarding the integration of LLMs. Reviewers noted that LLMs can automate tasks such as preliminary screening, plagiarism detection, and language verification, thereby reducing workload and enhancing consistency in applying review standards. However, significant ethical concerns were raised, including potential biases, lack of transparency, and risks to privacy and confidentiality. Reviewers emphasized that LLMs should not replace human judgment but rather complement it, with human oversight essential to ensure the relevance and accuracy of AI-generated feedback. This study underscores the need for clear guidelines and policies, as well as their proper dissemination among researchers, to address the ethical and practical challenges of using LLMs in academic publishing.
Original language: English
Pages (from-to): 1383-1397
Number of pages: 15
Journal: Journal of Academic Ethics
Volume: 23
Issue number: 3
DOIs
Publication status: Published - Sept 2025

Keywords

  • Academic publishing
  • Journal review
  • Large language models (LLMs)
  • Peer review
  • Reviewer perspectives

ASJC Scopus subject areas

  • Education
  • Arts and Humanities (miscellaneous)
  • Sociology and Political Science
  • Philosophy
