
[ RESEARCH INTEGRITY ] March 23, 2023

We can't improve peer review without publishing data

Peer review is a key element of publishing workflows, but the process needs to evolve to keep pace with changes in our industry. Discover how data-driven insights can transform peer review by strengthening its efficiency, transparency, and reliability.


The problem with peer review

Peer review was instituted to ensure the accuracy and quality of research, yet it was never intended to be static. As journal retractions rise and research misconduct continues to make headlines, it’s become clear that parts of the system are failing and we need to build new infrastructure to tackle the new integrity challenges that we’re facing. 

Better peer review begins with identifying the flaws in the process and transforming workflows for the better.


How can data accelerate the peer review process?

In our digital age, data is opening new doors in every industry, helping us create and identify new opportunities and solve problems at an unprecedented rate.
When we use technology to take advantage of key analytics, we can begin to optimize the peer reviewing experience for all academic stakeholders. 

Learn how data can support the scholarly publishing community in growing its reviewing pool, uncovering trends, and making strategic decisions around peer review.


Finding reviewers

Scholars receive article-review requests constantly, yet they are accepting them less and less often, given the limited time they can devote to reviewing. Studies have shown that between 69% and 94% of papers are reviewed by only 20% of scientists. This uneven burden may help explain the rise in costly retractions caused by inaccurate or unreliable reviews.

However, by monitoring engagement analytics around meetings and conference proceedings, societies and journals can easily identify the experts who are publishing high-quality findings and making strides in their communities. Early-stage researchers represent an untapped pool of potential reviewers. Further, conferences may also feature interdisciplinary research or emerging sub-disciplines, helping you find the people you need to review future papers.
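As an illustration, a society's submission system could turn engagement records like these into a ranked shortlist of reviewer candidates. This is a minimal sketch under assumed data: the field names and weights below are hypothetical, not drawn from any real platform.

```python
# Hypothetical sketch: rank potential reviewers by conference engagement.
# Field names and weights are illustrative, not from any real system.

def score_candidate(record):
    """Combine engagement signals into a single reviewer-fit score."""
    return (2.0 * record["presentations"]          # talks given at meetings
            + 1.0 * record["proceedings_papers"]   # papers in proceedings
            + 0.5 * record["session_attendance"])  # sessions attended

def rank_candidates(records, top_n=3):
    """Return the top-N candidates by engagement score."""
    return sorted(records, key=score_candidate, reverse=True)[:top_n]

candidates = [
    {"name": "A", "presentations": 3, "proceedings_papers": 2, "session_attendance": 10},
    {"name": "B", "presentations": 0, "proceedings_papers": 1, "session_attendance": 4},
    {"name": "C", "presentations": 1, "proceedings_papers": 4, "session_attendance": 8},
]

for c in rank_candidates(candidates):
    print(c["name"], score_candidate(c))
```

In practice the weights would be tuned to each community, and signals like topical fit to the submission would matter as much as raw engagement.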


Measuring the quality of peer review

Peer review has been described as the worst way to judge research, except for all the others. But what are the system's true weaknesses, and how can we use metrics to uncover them?

Data can be used to evaluate reviewer performance by tracking and analyzing the timeliness and consistency of reviewer analysis, the language used, the accuracy of the reviewer’s statements and recommendations, and the feedback quality overall. 

As we’ve explored in the past, AI has the potential to analyze data at an unparalleled level. Publishers can now use automated tools to collect, track, and analyze these insights in order to assess the quality of peer review in their communities. This creates space to identify areas for improvement and fill gaps in the system through effective strategies and solutions like reviewer training courses, workshops, and other member support methods.
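To make the idea concrete, two of the signals mentioned above, timeliness and feedback depth, can be computed from basic review records. The record format here is an assumption for the sketch; real editorial systems expose richer data, and report length is only a crude proxy for feedback quality.

```python
# Minimal sketch of reviewer-quality metrics: timeliness and report depth.
# The record format is hypothetical; real systems expose richer signals.
from datetime import date

def review_metrics(reviews):
    """Aggregate simple quality signals across a reviewer's reports."""
    n = len(reviews)
    on_time = sum(1 for r in reviews if r["submitted"] <= r["due"])
    avg_words = sum(len(r["report"].split()) for r in reviews) / n
    return {
        "on_time_rate": on_time / n,    # fraction delivered by the deadline
        "avg_report_words": avg_words,  # crude proxy for feedback depth
    }

reviews = [
    {"due": date(2023, 3, 1), "submitted": date(2023, 2, 27),
     "report": "Methods are sound but the sample size is small."},
    {"due": date(2023, 3, 15), "submitted": date(2023, 3, 20),
     "report": "Accept."},
]

m = review_metrics(reviews)
print(m["on_time_rate"])  # 0.5
```

Metrics like these only become meaningful in aggregate and over time; a single late or terse review says little about a reviewer.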


Identifying the patterns of bias

Although innovative workflows stand to strengthen the process, human perspectives lie at the heart of peer review. That human touch is crucial, but it also introduces unconscious errors and biases. Gathering demographic data on both authors and reviewers within your community can help you track the ways in which race, background, and gender influence reviews.
Just two years ago, it was found that peer reviewers are twice as likely to accept research conducted on men than the same research on women. This kind of analysis can also help your society or organization identify where it may need to boost diversity in its reviewing pool to combat these prejudices and create a more equitable process.
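The simplest version of this kind of analysis is comparing acceptance rates across groups. The sketch below assumes decision records tagged with a demographic group; the data and group labels are illustrative only, and a real analysis would also need statistical tests and careful handling of confounders.

```python
# Hypothetical sketch: compare acceptance rates across demographic groups
# to surface possible reviewer bias. Data and group labels are illustrative.
from collections import defaultdict

def acceptance_by_group(decisions):
    """Map each group to its acceptance rate."""
    totals = defaultdict(lambda: [0, 0])  # group -> [accepted, total]
    for d in decisions:
        totals[d["group"]][1] += 1
        if d["accepted"]:
            totals[d["group"]][0] += 1
    return {g: acc / tot for g, (acc, tot) in totals.items()}

decisions = [
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": False},
    {"group": "B", "accepted": True},
    {"group": "B", "accepted": False},
    {"group": "B", "accepted": False},
]

rates = acceptance_by_group(decisions)
print(rates)  # group A ~0.67, group B ~0.33
```

A gap like this is a prompt for investigation, not proof of bias on its own.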


Tracking progress and impact

While data can be a very powerful tool for strengthening peer review models, the possibilities don’t end there. It’s just as important to use analytics tools to trace the journey after peer review and evaluate the impact of your community’s work. 
Amid an ongoing research integrity crisis, tracking metrics like the reproducibility of research, engagement levels, and other altmetrics can give editors an idea of the quality and impact of research or conference proceedings that have gone through rigorous peer review and integrity checks. These statistics offer more insight than traditional citation counts and impact factors, giving editors a clearer understanding of the value that peer review brings to their community and of the need to invest in its success.



There is no future without peer review, but there is a world where the workflows and systems built around the process can work much better, smarter, and faster.

We’re using data-driven insights and evidence-based solutions to build that world. Discover it now.
