Between the Scylla of Prohibition and the Charybdis of Permissiveness: Journal Editorial Strategies in the Age of Generative AI Models

EDN: NCWVQL

Abstract

Objective. To synthesise current editorial policies governing the use of generative artificial intelligence (AI) models in scholarly publishing and to identify unresolved issues requiring further guidance and evidence.

Methods. A descriptive review of peer-reviewed publications (2023–2025) and openly accessible policies issued by publishers and journals was conducted.

Results. In the absence of a unified international standard, major global and Russian publishers have issued role-specific guidance for authors, reviewers, and editors on interacting with generative AI (e.g., GPT-class models). Areas of emerging consensus include: AI systems are not recognised as authors; accountability for content resides exclusively with human contributors; and the use and role of AI must be transparently disclosed. Notable heterogeneity persists in the boundaries of permitted practices, ranging from non-binding "fair-use" recommendations to formal checklists and mandatory disclosure fields embedded in editorial management systems. Guidance is most developed for authors and editors, whereas rules for reviewers are comparatively sparse. Disciplinary variation is evident in both the permissiveness and specificity of recommended practices.

Research gaps. There is no industry-wide consensus on acceptable uses of generative AI in research reporting or editorial workflows. Empirical evidence remains limited regarding the impact of generative AI on manuscript quality, the integrity and efficiency of peer review, and reader perception. Standards for provenance tracking and durable recording of AI-generated content are under-specified, and documented retractions explicitly involving AI-generated manuscripts are rare.

Conclusions. While norms around authorship, responsibility, and disclosure are converging, operationalisation across journals and disciplines is inconsistent. Coordinated standard-setting and rigorous empirical studies are needed to evaluate risks and benefits and to support evidence-based policy.

About the Author

V. A. Vasileva
Russian Presidential Academy of National Economy and Public Administration (North-West Institute of Management, Branch of RANEPA)
Russian Federation

Valeria A. Vasileva, PhD in Sociology, Senior Lecturer, Department of Comparative Political Studies

Saint Petersburg






For citations:


Vasileva V.A. Between the Scylla of Prohibition and the Charybdis of Permissiveness: Journal Editorial Strategies in the Age of Generative AI Models. Administrative Consulting. 2025;(6):192-210. (In Russ.) EDN: NCWVQL



ISSN 1726-1139 (Print)
ISSN 1816-8590 (Online)