Assessing diversity throughout the peer review process
With the vast majority (73%) of our journals on online submission and peer review systems (OPRS), Cambridge University Press can use these systems' reporting tools to assess progress and performance in a number of ways, ranging from basic information such as annual submission counts and acceptance rates to turnaround times for editors, authors, and reviewers.
Cambridge University Press already provides most of this information to our journals as a way to quantify their performance and impact. A more recent development is an interest in assessing diversity throughout the peer review process: who is submitting, and from where? Who is reviewing, or being asked to review? Do factors such as institution or gender influence the decision on a manuscript, or the time taken to reach it?
Our journals have taken different approaches to requesting this information, depending on their individual goals and subject areas. Some are driven by grant requirements, such as Canada’s SSHRC requesting information on rates of student and researcher submissions. Others use the systems’ tools to encourage diverse submissions, for example by offering a bilingual submission process or multilingual abstracts. Still others examine diversity in how content is distributed, giving authors options to share their work via Twitter or pertinent websites.
Journals within the Political Science field have become interested in gauging potential bias on the part of the editorial team. By asking questions during the submission process focused on academic position, gender, ethnicity, or institution, these journals can uncover the diversity of their submission pool.
Further, they can discover how acceptance rates, times to decision, and even the number of completed reviews compare across groups. With strong research showing biases against female authors, early career researchers (ECRs), and researchers from developing countries, recognizing potential bias is key to the healthy operation of a journal. Once a potential bias is diagnosed, concrete steps can be taken to address it, ranging from targeted special issues to new article types to adjustments to the editorial board.
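The kind of comparison described above can be sketched in a few lines of code. The following is a minimal illustration, assuming hypothetical submission records with a demographic field, an accept/reject outcome, and days to decision; the group labels and figures are invented for the example, not drawn from any real journal's data.

```python
from collections import defaultdict
from statistics import median

# Hypothetical submission records: (group, accepted, days_to_decision).
# All values here are illustrative, not real journal data.
submissions = [
    ("ECR", True, 95),
    ("ECR", False, 120),
    ("ECR", False, 88),
    ("established", True, 60),
    ("established", True, 75),
    ("established", False, 90),
]

def stats_by_group(records):
    """Return {group: (acceptance_rate, median_days_to_decision)}."""
    grouped = defaultdict(list)
    for group, accepted, days in records:
        grouped[group].append((accepted, days))
    result = {}
    for group, rows in grouped.items():
        accepted_count = sum(1 for accepted, _ in rows if accepted)
        result[group] = (
            accepted_count / len(rows),          # acceptance rate
            median(days for _, days in rows),    # median time to decision
        )
    return result

print(stats_by_group(submissions))
```

Large gaps between groups in either number would flag a pattern worth investigating, though a real analysis would also need to account for sample size and confounding factors such as subject area.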
Of course, it is vital to balance the collection of this information against the GDPR, which went into effect earlier this year. Questions must be crafted both to explain their purpose and to provide an option for authors to opt out of answering, should they wish to preserve their privacy.
With that in mind, it is also important to consider who needs access to this information, and who does not. If the point of gathering data on the position, gender, or ethnicity of authors is to gauge bias, then blinding the handling editors to that information is of the utmost importance to ensure manuscripts are processed fairly.
In terms of finding reviewers, editors have a number of strategies at their disposal. The OPRSs can suggest matches based on keywords or other classifications, and show prior review statistics so that individual reviewers are not overburdened. Tools like Publons’ Reviewer Connect can help identify high-quality potential reviewers for a manuscript, as can simpler efforts such as searching the various researcher databases. Incentives like Continuing Medical Education credits (in the case of medical ECRs) can also bring in reviewers who otherwise would not be considered, increasing diversity.
Assessing this diversity is crucial, as it can help reveal the dynamics and needs of the reviewer pool. ECRs around the world place a high value on conducting peer review, but also have concerns about how to do it properly. As peer review represents an avenue of growth and progression, providing that support should be a priority for journals that already draw on ECRs or plan to do so. The same drive pushes the need to assess diversity in other respects, with the overarching goal of advancing knowledge in a fair, rigorous, and accessible manner.