[ RESEARCH INTEGRITY ] October 5, 2023

Morressier’s Research Integrity Survey: your questions answered

In our recent webinar, we had so many questions that we ran out of time to answer all of the details of our survey insights. Take a look below to learn more.

Our Research Integrity Survey ran for four weeks in June 2023. We received responses from more than 250 members of the research community, with diverse geographical and generational representation. You can watch a recording of our webinar here.

Below are answers to the questions we didn’t get to during the webinar.

 

1. Can you please clarify what you mean by “ethics checks”? That seems very broad.

We gave several examples to our respondents, which we have combined into the “ethics checks” category. These include citation manipulation and conflicts of interest. Citation manipulation occurs when researchers over-cite certain researchers, or self-cite beyond a journal’s threshold for self-citation. Missing conflict-of-interest forms also fall into this category, as they concern validating the ethical responsibility and disclosures of each researcher and author.


2. I am surprised to see Transparent Peer Review ranked so highly in terms of preventing misconduct. It doesn't make sense to me that running the same peer review process, but having the details visible to the public, would somehow allow reviewers/editors to spot data fabrication or other issues better than in a non-transparent setting.


There’s a certain safety in anonymity. One possible explanation is that respondents who ranked transparent peer review so highly were thinking about the frequency of peer review bias. Transparent peer review doesn’t automatically eliminate bias, but when reviewers know their names are attached to their reviews, they may be more conscientious about subconscious bias.

 

3. Is there any evidence that journals using transparent review processes have fewer issues than other equivalent journals? 

The primary objectives of transparent review processes, or open peer review in general, are to increase reviewer recognition, increase the quality of reviews, and even disclose conflicts of interest. There are also a variety of different ways to define transparent or open peer review, from naming the reviewers to publishing the reviews themselves alongside the final article. Ultimately, publishers are still analyzing the potential impact and effectiveness of these processes.

 

4. What challenges did researchers encounter while conducting the survey on research integrity, and how were they addressed?

No respondents reported issues or challenges while completing the survey.

 

5. Is using the results of a research collaboration, with proper citation, considered plagiarism?

No. Plagiarism is defined as taking someone else’s work and using it without proper citation or credit. Plagiarism detection involves scanning a large body of published literature and checking for similarities that lack proper citation. While norms vary from discipline to discipline, it is also possible to rely too heavily on citations, whether by over-citing one’s own previous work or by citing certain studies or an individual author too heavily. Our Citation Checks point out these occurrences, but it is up to the editorial team to determine whether such citation is appropriate or exceeds the journal’s thresholds.

 

6. What are some effective strategies or initiatives that have emerged from the survey insights to enhance research integrity?


We will be expanding on these in the coming months. Research integrity is a complex problem, and no single solution can solve it all. But two major themes ran through many of the responses:

 

  • Embracing technology


    While there was some skepticism about the role of technology, and indeed a sense that technology is exacerbating the challenge of fraud, it's clear that the peer review process is under too much pressure to catch every instance of misconduct. We need more organizations and editorial teams testing and engaging with emerging research integrity solutions to evaluate their impact. We would never suggest that peer review lose the all-important ‘peer’ part of the equation; instead, we think technology should be embraced as a tool for support. That transition starts with a change in mindset.

  • Relieving the pressure to publish


    This one is quite a mountain to climb. But throughout our survey, respondents of all generations, geographies, and roles restated the impact of the pressure to publish. Changing this would require bringing more stakeholders into the conversation, from academic institutions to funders. Relieving the pressure to publish involves transforming tenure and promotion processes, funding requirements, and government mandates, and perhaps even shifting mindsets away from the article economy, or more broadly the attention economy. It's no small challenge. But a pressure to publish, rather than a pressure to discover and experiment, places emphasis on the wrong values.

 

Conclusion

We’re excited to keep sharing results and insights from our Research Integrity Survey. In fact, we published a blog earlier this week on technology readiness.
