Since the onset of generative artificial intelligence (AI) technologies, a rising number of researchers have been using these tools not only for research assistance but also for paper writing. A study conducted by Daniel Evanko, the director of journal operations at the American Association for Cancer Research (AACR), and his team found that 36 percent of papers published in AACR journals include abstracts containing AI-generated text. However, when researchers were asked to disclose their use of AI, only nine percent admitted to using it for paper drafting. The line between acceptable and unacceptable use of AI in journal writing remains blurred, as a broad consensus on the matter has yet to emerge.

Earlier, there was broad agreement that any use of artificial intelligence in research should be disclosed, but publishers and organizations have since released conflicting disclosure policies. For instance, Springer does not require AI-assisted copyediting to be disclosed, whereas Taylor & Francis requires any use of AI to be acknowledged. Discrepancies like these create inconsistent standards and may erode trust within the scientific community. The general public also holds varying views on AI's use in academia. In a survey conducted by Nature in May 2025, more than 90 percent of respondents believed that editing and translating papers using machine learning algorithms is acceptable. However, participants' opinions varied on the extent of disclosure. One biomedical researcher responded that broad adoption of AI would resemble that of calculators, making disclosure a trivial matter. Others, however, favored full transparency, including disclosure of the specific steps of the research and paper-drafting process in which AI was used.

Moreover, AI is being used by editors of peer-reviewed journals. Even before the introduction of ChatGPT, reviewers were using various machine learning tools to check statistics and summarize findings; now, large language models are generating entire reviews. In a poll conducted by Nature, 40 percent of respondents believed that AI was as helpful as, or even more helpful than, human reviewers. It is important to note, however, that AI-generated reviews often contain errors, underscoring the need for human proofreading. The technology can also work in the other direction, evaluating human-written reports and generating suggestions for reviewers' comments.

Though the scientific community has yet to reach a consensus, researchers have converged on some overlapping ideas about acceptable uses of AI. Notably, it can be used by researchers who are not native English speakers to edit and polish their manuscripts. Some have also proposed criteria for mandatory AI disclosure in scientific papers. Bioethicists David B. Resnik and Mohammad Hosseini put forward three such criteria. First, AI tools that make decisions directly affecting research results should be disclosed. Second, any form of AI used to generate or synthesize content, data, or images should be disclosed. Third, AI tools used to analyze content, data, or images should be acknowledged.

Though criteria like these can serve as guidelines for researchers and reviewers, the rapid development of artificial intelligence means that novel tools may fall through the cracks. Keeping such frameworks flexible is therefore crucial to preventing plagiarism and maintaining trust within and beyond the scientific community.