[ RESEARCH INTEGRITY ] January 12, 2023

Using peer review to detect AI-generated scientific papers

We’re not the first blog to talk about ChatGPT and its potential impact on scholarly communication. With so much buzz around AI writing tools, we thought it was a perfect time to explore the connections between AI, peer review, and the interesting examples of journal articles written by machines.

What is an AI-generated scientific paper?

Artificial intelligence writing tools - usually based on advanced language generation models - are designed for creating conversations (and even mock interviews). Recent advancements in AI have raised concerns about the use of these tools in writing scientific papers, abstracts, and journal articles. Research papers generated by AI writing tools are sometimes plagiarized and may fail to accurately describe the methods and results, even though they appear sophisticated and human-like.

This challenge is not new to academic circles. Decades ago, three MIT students made headlines for their experiments with computer-generated papers. While this was little more than a hoax, it revealed an underlying struggle in the scientific community.


Risks associated with AI-generated papers

AI writing tools, such as the recently released ChatGPT, have shown great potential, but with that potential come research integrity threats. AI-generated papers raise ethical concerns because they are not original, professionally authored content.

Furthermore, reports have shown that AI-generated essays and content lack personality and may not capture the research perspective comprehensively. These issues could call into question the integrity of the research and the qualifications of the authors involved.

On the other hand, some in the industry have wondered whether AI should write research papers. Once the experiment is done and the findings analyzed, is composing a paper to meet the standards and requirements of journals an administrative burden? Opinions are divided, but perhaps approaching AI as a tool rather than a replacement might chart a path forward.


How peer review can detect AI-generated papers

As different aspects of science continue to advance, the process of peer review can’t be left behind. To help combat misconduct and ensure research integrity, the peer review process must improve to include the ability to detect AI-generated papers. Combining both technological and human approaches, these strategies can help peer reviewers catch AI-generated papers.

Maybe the only way to combat AI-related research misconduct is with AI-supported peer review manuscript checks.


Unusual or suspicious writing styles

Although AI-written text is highly sophisticated, it may have unusual writing styles that differentiate it from the personal touch in human-written papers. Peer reviewers can leverage this and check for irregularities such as repetition of sentences and incoherent structure to detect the possible use of AI writing tools. We can also use plagiarism detection software to analyze the text on a macro level: fight AI with AI.
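To make the idea concrete, here is a minimal sketch of one such irregularity check - flagging sentences that repeat verbatim within a manuscript. This is a hypothetical illustration, not any specific detection product; real tools are far more sophisticated.

```python
import re
from collections import Counter

def repeated_sentences(text, min_repeats=2):
    """Flag sentences that appear verbatim more than once in a manuscript.

    A crude heuristic: naive sentence splitting on ., !, or ? followed by
    whitespace, with case- and whitespace-insensitive comparison.
    """
    sentences = [s.strip().lower()
                 for s in re.split(r"(?<=[.!?])\s+", text)
                 if s.strip()]
    counts = Counter(sentences)
    return {s: n for s, n in counts.items() if n >= min_repeats}

sample = ("The results are significant. We observed a clear trend. "
          "The results are significant. Further work is needed.")
print(repeated_sentences(sample))  # {'the results are significant.': 2}
```

A production checker would layer in near-duplicate matching and structural analysis, but the principle is the same: surface patterns that a careful human author rarely produces.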


Lack of originality

Originality is an important factor in research integrity. The use of AI in writing threatens the originality of scientific papers because AI tools often include tortured phrases and plagiarized content, which undermines ethical standards. Furthermore, AI-generated papers do not represent the author's original ideas and may not adequately explain the research methods and results. Peer reviewers can use plagiarism detection tools to catch this unoriginal content and prevent the costs that may result from retractions further down the line.
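Dedicated plagiarism software compares a submission against vast corpora, but the core idea - measuring word n-gram overlap between a submission and a known source - can be sketched in a few lines. This is an illustrative toy, not how any particular commercial tool works.

```python
def ngrams(text, n=3):
    # Lowercase word trigrams; real tools also normalize punctuation and stemming.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=3):
    """Jaccard similarity of word n-gram sets: 0.0 (disjoint) to 1.0 (identical)."""
    a, b = ngrams(submission, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

print(overlap_score("the cell cultures were incubated for 24 hours",
                    "cell cultures were incubated for 24 hours at 37 degrees"))
```

A score near 1.0 signals heavy verbatim reuse; scores in between flag passages worth a closer human look.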


Scientifically inaccurate content and results

Using domain-specific and advanced integrity checkers, peer reviewers can detect inaccuracies in text and results generated with artificial intelligence writing tools. AI writing tools cannot provide the same depth and nuance as a human author, and this gap can be exploited to improve the detection of AI-generated scientific papers.



The use of AI to write scientific papers has huge implications for the future of scholarly publishing. If left unchecked, it could threaten the very foundation of ethical research, allow the dissemination of false results, and devalue the research industry. Used effectively, however, AI could level the playing field for researchers for whom English isn’t a first language, or support the drafting stage in a way that accelerates the process of writing a paper.  

It's crucial to implement effective means of identifying AI-generated papers, along with a clear set of ethical guidelines for where AI is an appropriate tool and where it is not.

At Morressier, we are constantly innovating our research integrity solutions to help curb misconduct, maintain the authenticity of published work, and ensure public trust in science.
