What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George. Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process in which your peers review something you’ve written, based on a set of criteria or benchmarks from an instructor, and give you constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Other interesting articles
  • Frequently asked questions about peer review

What is the purpose of peer review?

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Types of peer review

Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review—where the identities of the author, reviewers, and editors are all anonymized—does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

Collaborative peer review

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Open peer review

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor decides whether to:
      • Reject the manuscript and send it back to the author, or
      • Send it onward to the selected peer reviewer(s).
  • The peer review process then occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

The peer review process

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

Providing feedback to your peers

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive


Peer review example

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference (p > .05) between the number of hours of sleep for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the number of hours of sleep for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
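As a rough check, the t statistics in the example above can be recomputed from the reported means and standard deviations alone. The sketch below is an illustration, not part of the original study: it assumes equal group sizes of 100 (the example states only that 300 teens were split into 3 groups) and uses a pooled-variance (Student's) independent-samples t test; the function name is our own.

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic using a pooled variance estimate."""
    # Pooled variance across the two groups (assumes equal population variances)
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    # Standard error of the difference between the two group means
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# Summary statistics from the example; n = 100 per group is an assumption
t_g1_g2 = pooled_t(7.8, 0.6, 100, 7.0, 0.8, 100)  # Group 1 vs Group 2
t_g1_g3 = pooled_t(7.8, 0.6, 100, 6.1, 1.5, 100)  # Group 1 vs Group 3
print(round(t_g1_g2, 2), round(t_g1_g3, 2))
```

A p value would then be read from the t distribution with n1 + n2 − 2 degrees of freedom (for example, via scipy.stats.t.sf). Note that whether these statistics reach significance depends on the actual group sizes, which the example does not report.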

Advantages of peer review

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

Criticisms of peer review

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also a high risk of publication bias, where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Discourse analysis
  • Cohort study
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Frequently asked questions about peer review

Peer review is a process of evaluating submissions to an academic journal. Utilizing rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor decides whether to:
      • Reject the manuscript and send it back to the author, or
      • Send it onward to the selected peer reviewer(s).
  • The peer review process then occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

Cite this Scribbr article

George, T. (2023, June 22). What Is Peer Review? | Types & Examples. Scribbr. Retrieved April 15, 2024, from https://www.scribbr.com/methodology/peer-review/


  • Open access
  • Published: 12 November 2021

Demystifying the process of scholarly peer-review: an autoethnographic investigation of feedback literacy of two award-winning peer reviewers

  • Sin Wang Chong (ORCID: orcid.org/0000-0002-4519-0544)
  • Shannon Mason

Humanities and Social Sciences Communications, volume 8, Article number: 266 (2021)


A Correction to this article was published on 26 November 2021


Abstract

Peer reviewers serve a vital role in assessing the value of published scholarship and improving the quality of submitted manuscripts. To provide more appropriate and systematic support to peer reviewers, especially those new to the role, this study documents the feedback practices and experiences of two award-winning peer reviewers in the field of education. Adopting a conceptual framework of feedback literacy and an autoethnographic-ecological lens, findings shed light on how the two authors design opportunities for feedback uptake, navigate responsibilities, reflect on their feedback experiences, and understand journal standards. Informed by ecological systems theory, the reflective narratives reveal how they unravel the five layers of contextual influences on their feedback practices as peer reviewers (micro, meso, exo, macro, chrono). Implications related to peer reviewer support are discussed and future research directions are proposed.

Introduction

The peer-review process is the longstanding method by which research quality is assured. On the one hand, it aims to assess the quality of a manuscript, with the desired outcome being (in theory if not always in practice) that only research that has been conducted according to methodological and ethical principles be published in reputable journals and other dissemination outlets (Starck, 2017). On the other hand, it is seen as an opportunity to improve the quality of manuscripts, as peers identify errors and areas of weakness, and offer suggestions for improvement (Kelly et al., 2014). Whether or not peer review is actually successful in these areas is open to considerable debate, but in any case it is the “critical juncture where scientific work is accepted for publication or rejected” (Heesen and Bright, 2020, p. 2). In contemporary academia, where higher education systems across the world are contending with decreasing levels of public funding, there is increasing pressure on researchers to be ‘productive’, which is largely measured by the number of papers published and of funding grants awarded (Kandiko, 2010), both of which involve peer review.

Researchers are generally invited to review manuscripts once they have established themselves in their disciplinary field through publication of their own research. This means that for early career researchers (ECRs), their first exposure to the peer-review process is generally as an author. These early experiences influence the ways ECRs themselves conduct peer review. However, negative experiences can have a profound and lasting impact on researchers’ professional identity. This appears to be particularly true when feedback is perceived to be unfair, with feedback tone largely shaping author experience (Horn, 2016). In most fields, reviewers remain anonymous to ensure freedom to give honest and critical feedback, although there are concerns that a lack of accountability can result in ‘bad’ and ‘rude’ reviews (Mavrogenis et al., 2020). Such reviews can negatively impact all researchers, but disproportionately impact underrepresented researchers (Silbiger and Stubler, 2019). Regardless of career phase, no one is served well by unprofessional reviews, which contribute to the ongoing problem of bullying and toxicity prevalent in academia, with serious implications for the health and well-being of researchers (Keashly and Neuman, 2010).

Because of its position as the central process through which research is vetted and refined, peer review should play a similarly central role in researcher training, although it rarely features. In surveying almost 3000 researchers, Warne (2016) found that support for reviewers was mostly received “in the form of journal guidelines or informally as advice from supervisors or colleagues” (p. 41), with very few engaging in formal training. Among more than 1600 reviewers of 41 nursing journals, only one third received any form of support (Freda et al., 2009), with participants across both of these studies calling for further training. In light of the lack of widespread formal training, most researchers learn ‘on the job’, and little is known about how researchers develop their knowledge and skills in providing effective assessment feedback to their peers. In this study, we undertake such an investigation by drawing on our first-hand experiences. Through a collaborative and reflective process, we look to identify the forms and forces of our feedback literacy development, and seek to answer the following research questions:

  1. What are the exhibited features of peer reviewer feedback literacy?
  2. What are the forces at work that affect the development of feedback literacy?

Literature review

Conceptualisation of feedback literacy

The notion of feedback literacy originates from the research base of new literacy studies, which examines ‘literacies’ from a sociocultural perspective (Gee, 1999; Street, 1997). In the educational context, one of the most notable types of literacy is assessment literacy (Stiggins, 1999). Traditionally, assessment literacy is perceived as one of the indispensable qualities of a successful educator, referring to the skills and knowledge for teachers “to deal with the new world of assessment” (Fulcher, 2012, p. 115). Following this line of teacher-oriented assessment literacy, recent attempts have been made to develop more subject-specific assessment literacy constructs (e.g., Levi and Inbar-Lourie, 2019). Given the rise of student-centred approaches and formative assessment in higher education, researchers began to make the case for students to be ‘assessment literate’, comprising such knowledge and skills as understanding of assessment standards, the relationship between assessment and learning, peer assessment, and self-assessment skills (Price et al., 2012). Feedback literacy, as argued by Winstone and Carless (2019), is essentially a subset of assessment literacy because “part of learning through assessment is using feedback to calibrate evaluative judgement” (p. 24). The notion of feedback literacy was first extensively discussed by Sutton (2012) and more recently by Carless and Boud (2018). Focusing on students’ feedback literacy, Sutton (2012) conceptualised feedback literacy as a three-dimensional construct—an epistemological dimension (what do I know about feedback?), an ontological dimension (how capable am I of understanding feedback?), and a practical dimension (how can I engage with feedback?).

In close alignment with Sutton’s construct, the seminal conceptual paper by Carless and Boud (2018) further illustrated the four distinctive abilities of feedback literate students: the abilities to (1) understand the formative role of feedback, (2) make informed and accurate evaluative judgements against standards, (3) manage emotions, especially in the face of critical and harsh feedback, and (4) take action based on feedback. Since the publication of Carless and Boud (2018), student and teacher feedback literacy has been in the limelight of assessment research in higher education (e.g., Chong, 2021b; Carless and Winstone, 2020). These conceptual contributions expand the notion of feedback literacy to consider not only the manifestations of various forms of effective student engagement with feedback but also the confluence of contexts and individual differences of students in developing students’ feedback literacy, drawing upon various theoretical perspectives (e.g., ecological systems theory; sociomaterial perspective) and disciplines (e.g., business and human resource management). Others address practicalities of feedback literacy; for example, how teachers and students can work in synergy to develop feedback literacy (Carless and Winstone, 2020) and ways to maximise student engagement with feedback at a curricular level (Malecka et al., 2020). In addition to conceptualisation, advancement of the notion of feedback literacy is evident in the recent proliferation of primary studies. The majority of these studies are conducted in the field of higher education, focusing mostly on student feedback literacy in classrooms (e.g., Molloy et al., 2019; Winstone et al., 2019) and in the workplace (Noble et al., 2020), with a handful focused on teacher feedback literacy (e.g., Xu and Carless, 2016).

Some studies focusing on student feedback literacy adopt a qualitative case study research design to delve into individual students’ experience of engaging with various forms of feedback. For example, Han and Xu (2019) analysed the profiles of feedback literacy of two Chinese undergraduate students. Findings uncovered students’ resistance to engagement with feedback, which relates to the misalignment between the cognitive, social, and affective components of individual students’ feedback literacy profiles. Others reported interventions designed to facilitate students’ uptake of feedback, focusing on their effectiveness and students’ perceptions. Specifically, affordances and constraints of educational technology such as electronic feedback portfolios (Chong, 2019; Winstone et al., 2019) are investigated. Of particular interest is a recent study by Noble et al. (2020), which looked into student feedback literacy in the workplace by probing into the perceptions of a group of Australian healthcare students towards a feedback literacy training programme conducted prior to their placement. There is, however, a dearth of primary research in other areas where elicitation, processing, and enactment of feedback are vital; for instance, academics’ feedback literacy. In the ‘publish or perish’ culture of higher education, academics, especially ECRs, face immense pressure to publish in top-tiered journals in their fields and face the daunting peer-review process, while juggling other teaching and administrative responsibilities (Hollywood et al., 2019; Tynan and Garbett, 2007). Taking up the roles of authors and reviewers, researchers have to possess the capacity and disposition to engage meaningfully with feedback provided by peer reviewers and to provide constructive comments to authors.

Similar to students, researchers have to learn how to manage their emotions in the face of critical feedback, to understand the formative value of feedback, and to make informed judgements about the quality of feedback (Gravett et al., 2019). At the same time, the feedback literacy of academics also resembles that of teachers. When considering the kind of feedback given to authors, academics who serve as peer reviewers have to (1) design opportunities for feedback uptake, (2) maintain a professional and supportive relationship with authors, and (3) take into account the practical dimension of giving feedback (e.g., how to strike a balance between quality of feedback and time constraints due to multiple commitments) (Carless and Winstone, 2020). To address the above, one of the aims of the present study is to expand the application of feedback literacy as a useful analytical lens to areas outside the classroom, that is, scholarly peer-review activities in academia, by presenting, analysing, and synthesising the personal experiences of the authors as successful peer reviewers for academic journals.

Conceptual framework

We adopt a feedback literacy of peer reviewers framework (Chong, 2021a) as an analytical lens to analyse, systematise, and synthesise our own experiences and practices as scholarly peer reviewers (Fig. 1). This two-tier framework includes a dimension on the manifestation of feedback literacy, which categorises five features of feedback literacy of peer reviewers, informed by the student and teacher feedback literacy frameworks of Carless and Boud (2018) and Carless and Winstone (2020). When engaging in scholarly peer review, reviewers are expected to be able to provide constructive and formative feedback, which authors can act on in their revisions (engineer feedback uptake). Besides, peer reviewers, who are usually full-time researchers or academics, lead hectic professional lives; thus, when writing reviewers’ reports, it is important for them to consider practically and realistically the time they can invest and how their various degrees of commitment may have an impact on the feedback they provide (navigate responsibilities). Furthermore, peer reviewers should consider the emotional and relational influences their feedback exerts on the authors. It is crucial for feedback to be not only informative but also supportive and professional (Chong, 2018) (maintain relationships). Equally important, it is imperative for peer reviewers to critically reflect on their own experience in the scholarly peer-review process, including their experience of receiving and giving feedback to academic peers, as well as the ways authors and editors respond to their feedback (reflect on feedback experience). Lastly, acting as gatekeepers of journals to assess the quality of manuscripts, peer reviewers have to demonstrate an accurate understanding of the journals’ aims, remit, guidelines, and standards, and reflect those in their written assessments of submitted manuscripts (understand standards).

Situated in the context of scholarly peer review, this collaborative autoethnographic study conceptualises feedback literacy not only as a set of abilities but also as orientations (London and Smither, 2002; Steelman and Wolfeld, 2016), which refer to academics’ tendencies, beliefs, and habits in relation to engaging with feedback (London and Smither, 2002). According to Cheung (2000), orientations are influenced by a plethora of factors, namely experiences, cultures, and politics. It is important to understand feedback literacy as orientations because this takes into account that feedback is a convoluted process influenced by a plethora of contextual and personal factors. Informed by ecological systems theory (Bronfenbrenner, 1986; Neal and Neal, 2013) and synthesising existing feedback literacy models (Carless and Boud, 2018; Carless and Winstone, 2020; Chong, 2021a, 2021b), we consider feedback literacy as a malleable, situated, and emergent construct, influenced by the interplay of various networked layers of ecological systems (Neal and Neal, 2013) (Fig. 1). Also important is that conceptualising feedback literacy as orientations avoids dichotomisation (feedback literate vs. feedback illiterate), emphasises the developmental nature of feedback literacy, and better captures the multifaceted manifestations of feedback engagement.

Figure 1

The outer ring of the figure shows the components of feedback literacy while the inner ring concerns the layers of contexts (ecosystems) which influence the manifestation of feedback literacy of peer reviewers.

Echoing recent conceptual papers on feedback literacy which emphasise the indispensable role of contexts (Chong, 2021b; Boud and Dawson, 2021; Gravett et al., 2019), our conceptual framework includes an underlying dimension of networked ecological systems (micro, meso, exo, macro, and chrono), which portrays the contextual forces shaping our feedback orientations. Informed by the networked ecological systems theory of Neal and Neal (2013), we postulate that there are five systems of contextual influence which affect the feedback experience and the development of feedback literacy of peer reviewers. The five ecological systems refer to ‘settings’, defined by Bronfenbrenner (1986) as “place[s] where people can readily engage in social interactions” (p. 22). Even though Bronfenbrenner’s (1986) somewhat dated definition of ‘place’ is limited to physical space, we believe that ‘places’ should be defined more broadly in the 21st century to encompass physical and virtual, recent and dated, close and distant locations where people engage; as for ‘interactions’, from a sociocultural perspective, we understand that these can include not only social but also cognitive and emotional exchanges (Vygotsky, 1978). A *microsystem* refers to a setting in which people, including the focal individual, interact. A *mesosystem*, on the other hand, refers to the interactions between people from different settings and the influence they exert on the focal individual. An *exosystem*, like a microsystem, is a single setting, but one that excludes the focal individual, although its participants are likely to interact with the focal individual. The remaining two systems, the macrosystem and the chronosystem, refer not only to ‘settings’ but to ‘forces that shape the patterns of social interactions that define settings’ (Neal and Neal, 2013, p. 729).
The *macrosystem* is “the set of social patterns that govern the formation and dissolution of… interactions… and thus the relationship among ecological systems” (ibid.). Some examples of macrosystems given by Neal and Neal (2013) include political and cultural systems. Finally, the *chronosystem* captures “the observation that patterns of social interactions between individuals change over time, and that such changes impact on the focal individual” (ibid., p. 729). Figure 2 illustrates this networked ecological systems theory using a hypothetical example of an early career researcher who is involved in scholarly peer review for Journal A while completing a PhD and working as a faculty member at a university.

Fig. 2: A hypothetical example of an early career researcher who is involved in scholarly peer review for Journal A.

From the reviewed literature on the construct of feedback literacy, the investigation of feedback literacy as a personal, situated, and unfolding process is best done through an autoethnographic lens, which underscores critical self-reflection. Autoethnography refers to “an approach to research and writing that seeks to describe and systematically analyse (graphy) personal experience (auto) in order to understand cultural experience (ethno)” (Ellis et al., 2011, p. 273). Autoethnography stems from research in the field of anthropology and was later introduced to the field of education by Ellis and Bochner (1996). In higher education research, autoethnographic studies have been conducted to illuminate topics related to identity and teaching practices (e.g., Abedi Asante and Abubakari, 2020; Hains-Wesson and Young, 2016; Kumar, 2020). In this article, a collaborative approach to autoethnography is adopted. Drawing on Chang et al. (2013), Lapadat (2017) defines collaborative autoethnography (CAE) as follows:

… an autobiographic qualitative research method that combines the autobiographic study of self with ethnographic analysis of the sociocultural milieu within which the researchers are situated, and in which the collaborating researchers interact dialogically to analyse and interpret the collection of autobiographic data. (p. 598)

CAE is not only a product but also a worldview and a process (Wall, 2006). CAE is a distinct view of the world and of research, which straddles the paradigmatic boundaries of scientific and literary studies. Like traditional scientific research, CAE advocates systematicity in the research process, and consideration is given to such crucial research issues as reliability, validity, generalisability, and ethics (Lapadat, 2017). In closer alignment with work in the humanities and literature, the goal of CAE is not to uncover irrefutable universal truths or generate theories; instead, CAE researchers are interested in co-constructing and analysing their own personal narratives or ‘stories’ to enrich and/or challenge mainstream beliefs and ideas, embracing diverse rather than canonical ways of behaving, experiencing, and thinking (Ellis et al., 2011). Regarding the role of researchers, CAE researchers openly acknowledge the influence (and also the vulnerability) of researchers throughout the research process and interpret this juxtaposition of identities between researcher and participant as conducive to offering an insider’s perspective on sociocultural phenomena (Sughrua, 2019). For our CAE on the scholarly peer-review experiences of two ECRs, the purpose is to reconstruct, analyse, and publicise our lived experience as peer reviewers and to examine how multiple forces (i.e., ecological systems) interact to shape our identity, experience, and feedback practice. As a research process, CAE is a collaborative and dynamic reflective journey towards self-discovery, resulting in narratives that connect with and add to the existing literature base in a personalised manner (Ellis et al., 2011). The collaborators should go beyond personal reflection and engage in dialogues to identify similarities and differences in their experiences, throwing new light on sociocultural phenomena (Merga et al., 2018).
The iterative process of self- and collective reflection takes place when CAE researchers write about their own “remembered moments perceived to have significantly impacted the trajectory of a person’s life” and read each other’s stories (Ellis et al., 2011, p. 275). These ‘moments’ or vignettes are usually written retrospectively, selectively, and systematically to shed light on facets of personal experience (Hughes et al., 2012). In addition to personal stories, some autoethnographies and CAEs utilise multiple data sources (e.g., reflective essays, diaries, photographs, interviews with co-researchers) and various modes of expression (e.g., metaphors) to achieve a degree of triangulation and to present evidence in a ‘systematic’ yet evocative manner (Kumar, 2020). Overarching methodological principles are discussed here in lieu of a set of rigid, linear steps because the process of reconstructing experience through storytelling can be messy and emergent, and a certain degree of flexibility is necessary. However, autoethnographic studies, like other primary studies, address core research issues including reliability (the reader’s judgement of the credibility of the narrator), validity (the reader’s judgement that the narratives are believable), and generalisability (the resemblance between the reader’s experience and the narrative, or the enlightenment of the reader regarding unfamiliar cultural practices) (Ellis et al., 2011). Ethical issues also need to be considered: for example, authors are expected to be honest in reporting their experiences, and, to protect the privacy of the people who ‘participated’ in our stories, pseudonyms need to be used (Wilkinson, 2019). For the current study, we follow the CAE process outlined by Chang et al. (2013), which comprises four stages: *deciding on topic and method*, *collecting materials*, *making meaning*, and *writing*.
When deciding on the topic, we chose to focus on our experience as scholarly peer reviewers because doing peer review and having our work reviewed are an indispensable part of our academic lives. The next stage is to collect relevant autoethnographic materials. In this study, we follow Kumar (2020) in drawing on multiple data sources: (1) reflective essays, written separately through ‘recalling’, which Chang et al. (2013) describe as ‘a free-spirited way of bringing out memories about critical events, people, place, behaviours, talks, thoughts, perspectives, opinions, and emotions pertaining to the research topic’ (p. 113), and (2) discussion meetings. In our reflective essays, we included written records of reflection and excerpts of feedback from our peer-review reports. Material collection is followed by meaning making. CAE, as opposed to autoethnography, emphasises the importance of engaging in dialogues with collaborators, and through this process we identified similarities and differences in our experiences (Sughrua, 2019). To do so, we exchanged our reflective essays, read each other’s reflections, and added questions or comments in the margins. We then met online twice to share our experiences and exchange views on the two reflective essays we wrote. Each meeting lasted approximately 90 min and was audio-recorded and transcribed. After each meeting, we coded our stories and experiences with reference to the two dimensions of the ecological framework of feedback literacy (Fig. 1). In coding our data, we followed the model of Miles and Huberman (1994), which comprises four stages: data reduction (abstracting data), data display (visualising data in tabular form), conclusion-drawing, and verification.
The coding and writing processes were carried out collaboratively on Google Docs, and care was taken to address the aforementioned ethical (e.g., honesty, privacy) and methodological issues (e.g., validity, reliability, generalisability). As this is a CAE study, the participants are the researchers themselves, that is, the two authors of this paper. We acknowledge that the research data are collected from human subjects (the two authors); such data were collected in accordance with the standards and guidelines of the School Research Ethics Committee at the School of Social Sciences, Education and Social Work, Queen’s University Belfast (Ref: 005_2021). Despite our different experiences in our unique training and employment contexts, we share some common characteristics, both being ECRs (<5 years post-PhD), working in the field of education, and active in the scholarly publication process as both authors and peer reviewers. Importantly for this study, we were both recipients of the Reviewer of the Year Award 2019, awarded jointly by the journal *Higher Education Research & Development* and the publisher Taylor & Francis. This award, in recognition of the quality of our reviewing efforts as determined by the editorial board of a prestigious higher education journal, provided a strong impetus for this study and an opportunity to reflect on our own experiences and practices. The extent of our peer-review activities during our early careers leading up to the time of data collection is summarised in Table 1.

Findings and discussion

Analysis of the four individual essays (E1 and E2 for each participant) and the transcripts of the two subsequent discussions (D1 and D2) resulted in the identification of multiple descriptive codes and, in turn, a number of overarching themes (Supplementary Appendix 1). Our reporting of these themes is guided by our conceptual framework: we first focus on the five manifestations of feedback literacy to highlight the experiences that contributed to our growth as effective and confident peer reviewers. Then, we report on the five ecological systems to unravel how each contextual layer develops our feedback literacy as peer reviewers. (Note that the discussion of the chronosystem has been incorporated into that of each of the other four systems, namely the *microsystem*, *mesosystem*, *exosystem*, and *macrosystem*, in order to demonstrate temporal changes.) In particular, similarities and differences are underscored, and connections with manifested feedback beliefs and behaviours are made. We include quotes from both Author 1 (A1) and Author 2 (A2) to illustrate our findings and to show the richness and depth of the data collected (Corden and Sainsbury, 2006). Transcribed quotes may be lightly edited while retaining their meaning, for example through the removal of fillers and repetitions, which is generally accepted practice to ensure readability (ibid.).

Manifestations of feedback literacy

Engineering feedback uptake

The two authors have a strong sense of the purpose of peer review as promoting not only research quality but also the growth of researchers. One way that we engineer author uptake is to ensure that feedback is ‘clear’ (A2,E1), ‘explicit’ (A2,E1), ‘specific’ (A1,E1), and, importantly, ‘actionable… to ensure that authors can act on this feedback so that their manuscripts can be improved and ultimately accepted for publication’ (A1,E1). When the outcome is less favourable for authors, we make reference to the role of the feedback in promoting the development of the manuscript, which A1 refers to as ‘promotion of a growth mindset’ (A1,E1). For example, after requesting a second round of major revisions, A2 ‘acknowledged the frustration that the author might have felt on getting further revisions by noting how much improvement was made to the paper, but also making clear the justification for sending it off for more work’ (A2,E1). We both note that we tend to write longer reviews when a rejection is the recommended outcome, as our ultimate goal is to aid in the development of a manuscript.

Rejection doesn’t mean a paper is beyond repair. It can still be fixed and improved; a rejection simply means that the fix may be too extensive even for multiple review cycles. It is crucial to let the authors whose manuscripts are rejected know that they can still act on the feedback to improve their work; they should not give up on their own work. I think this message is especially important to first-time authors or early career researchers. (A1,E1)

In promoting a growth mindset and in providing actionable feedback, we hope to ‘show the authors that I’m not targeting them, but their work’ (A1,D1). We particularly draw on our own experiences as ECRs, with first-hand understanding that ‘everyone takes it personally when they get rejected. Yeah. Moreover, it is hard to separate (yourself from the paper)’ (A2,D1).

Navigating responsibilities

As with most academics, the two authors have multiple pressures on their time, and there ‘isn’t much formal recognition or reward’ (A1,E1) and ‘little extrinsic incentive for me to review’ (A2,E1). Nevertheless, we both view our roles as peer reviewers as ‘an important part of the process’ (A2,E1) and ‘a modest way for me to give back to the academic community’ (A1,E1). Through peer review we have built a sense of ‘identity as an academic’ (A1,D1) through ‘being a member of the academic community’ (A2,D1). While A1 commits to ‘review as many papers as possible’ (A1,E1) and A2 will usually accept offers to review, there are still limits on our time; we therefore consider the topic and methods employed when deciding whether or not to accept an invitation, as well as the journal itself, as we feel we can review more efficiently for journals with which we are more familiar. A1 and A2 have different processes for conducting their reviews that are most efficient for their own situations. For A1, the process begins with reading the whole manuscript in one go, adding notes to the PDF document along the way, which he then reviews before making a tentative decision, including ‘a few reasons why I have come to this decision’ (A1,E1). After waiting at least one day, he reviews all of the notes and begins writing the report, which is divided into the sections of the paper. He notes it ‘usually takes me 30–45 min to write a report. I then proofread this report and submit it to the system. So it usually takes me no more than three hours to complete a review’ (A1,E1). For A2, the process for reviewing and structuring the report is quite different, with a need to ‘just find small but regular opportunities to work on the review’ (A2,E1). As was the case during her PhD, which involved juggling research and raising two babies, ‘I’ve trained myself to be able to do things in bits’ (A2,D1).
A2 thus also begins by reading the paper once through, although generally without making initial comments. The next phase involves going through the paper at various points in time whenever possible, building up the report along the way, which makes its structure slightly different to that of A1’s.

What my reviews look like are bullet points, basically. And they’re not really in a particular order. They generally… follow the flow (of the paper). But I mean, I might think of something, looking at the methods and realise, hey, you haven’t defined this concept in the literature review so I’ll just add you haven’t done this. And so I will usually preface (the review)… Here’s a list of suggestions. Some of them are minor, some of them are serious, but they’re in no particular order. (A2,D1)

As such, both reviewers engage in personalised strategies to make more effective use of their time. Both A1 and A2 give explicit but not exhaustive examples of an area of concern, and they also pose questions for the author to consider, in both cases placing the onus back on the author to take action. As A1 notes, ‘I’m not going to do a summary of that reference for you. I’m just going to include that there. If you’d like you can check it out’ (A1,D1). For A2, a lack of adequate reporting of the methods employed in a study makes it difficult to proceed, and in such cases she will not invest further time, sending the manuscript back to the editor, because ‘I can’t even comment on the findings… I can’t go on. I’m not gonna waste my time’ (A2,D1). In cases where we may be ‘on the fence’ about a particular review, we use the confidential comments to the editor to help work through difficult cases, as editors ‘are obviously very experienced reviewers’ (A1,D1). Delegating tasks to the expertise of the editorial teams when appropriate also ensures time is used more prudently.

Maintaining relationships

Except in a few cases where A2 has reviewed for journals with a single-blind model, the vast majority of the reviews that we have completed have been double-blind. This means that we are unaware of the identity of the author(s), and we are unknown to them. However, ‘even with blind reviews I tend to think of it as a conversation with a person’ (A2,E1). A1 talks about the need to have respect for the author and their expertise and effort ‘regardless of the quality of the submission (which can be in some cases subjective)’ (A1,E1). A2 writes similarly about the ‘privilege’ and ‘responsibility’ of being able to review manuscripts that authors ‘have put so much time and energy into possibly over an extended period’ (A2,E1). In this way it is possible to develop a sort of relationship with an author even without knowing their identity. In trying to articulate the nature of that relationship (which we struggle to do definitively), we note that it is more than just that of a reviewer, and A2 reflected on a recent review which went through a number of rounds of resubmission, where ‘it felt like we were developing a relationship, more like a mentor than a reviewer’ (A2,E1).

I consider this role as a peer reviewer more than giving helpful and actionable feedback; I would like to be a supporter and critical friend to the authors, even though in most cases I don’t even know who they are or what career stage they are at (A1,E1).
In any case, as A1 notes, ‘we don’t even need to know who that person is because we know that people like encouragement’ (A1,D1), and we are very conscious of the emotional impact that feedback can have on authors and of the inherent power imbalance in the relationship. For this reason, A1 is ‘cautious about the way I write so that I don’t accidentally make the authors the target of my feedback’. As A2 notes, ‘I don’t want authors feeling depressed after reading a review’ (A2,E1). While we note that we try to deliver our feedback with ‘respect’ (A1,E1; A1,E2; A2,D1), ‘empathy’ (A1,E1), and ‘kindness’ (A2,D1), we both noted that we do not ‘sugar coat’ our feedback: A1 describes himself as ‘harsh’ and ‘critical’ (A1,E1), while A2 describes herself as ‘pretty direct’ (A2,E1). In our discussion, we tried to delve into this seeming contradiction:

… the encouragement, hopefully, is to the researcher, but the directness it should be, I hope, is related directly to whatever it is, the methods or the reporting or the scope of the literature review. It’s something specific about the manuscript itself. And I know myself, being an ECR and being reviewed, that it’s hard to separate yourself from your work… And I want to make it really explicit. If it’s critical, it’s not about the person. It’s about the work, you know, the weakness of the work, but not the person. (A2,D1)

A1 explains that at times his initial report may be highly critical, and at times he will ‘sit back and rethink… With empathy, I will write feedback, which is more constructive’ (A1,E1). However, he adds that ‘I will never try to overrate a piece or sugar-coat my comments just to sound “friendly”’ (A1,E1), with the ultimate goal being to uphold academic rigour. Thus, honesty is seen as the best strategy for maintaining a strong, professional relationship with authors. Another strategy, employed by A2, is showing explicit commitment to the review process. One way this is communicated is by prefacing a review with a summary of the paper, not only ‘to confirm with the author that I am interpreting the findings in the way that they intended, but also importantly to show that I have engaged with the paper’ (A2,E1). Further, if the recommendation is for a further round of review, she will state directly to the authors ‘that I would be happy to review a revised manuscript’ (A2,E1).

Reflecting on feedback experience

As ECRs we have engaged in the scholarly publishing process initially as authors, subsequently as reviewers, and most recently as Associate Editors. Insights gained in each of these roles have influenced our feedback practices, and have interacted to ‘develop a more holistic understanding of the whole review process’ (A1,E1).

We reflect on our experiences as authors, beginning in our doctoral candidatures, with reviews that ranged from ‘the most helpful to the most cynical’ (A1,E1). A2 reflected on two particular experiences, both of which resulted in rejection: one was ‘snarky’ and ‘unprofessional’ with ‘no substance’, while the other provided ‘strong encouragement … the focus was clearly on the paper and not me personally’ (A2,E1). It was this experience that showed the divergence possible between the tone and content of reviews despite the same outcome, and as a result A2 committed to being ‘the amazing one’. A1 also drew on a negative experience, noting that ‘I remember the least useful feedback as much as I do with the most constructive one’ (A1,E1). This was particularly the case when a reviewer made apparently politically-motivated judgements that A1 ‘felt very uncomfortable with’ and flagged with the editor (A1,E1). Through these experiences, both authors wrote in their essays about the need to focus on the work and not on the individual, with an understanding that a review ‘can have a really serious impact’ (A2,D1) on an author.

It is important to note that neither author has been involved in any formal or informal training on how to conduct peer review, although A1 expresses appreciation of the regular practice of one journal for which he reviews, where ‘the editor would write an email to the reviewers giving feedback on the feedback we have given’ (A1,E1). For A2, an important source of learning is comparing her reviews with those of others who have reviewed the same manuscript, the norm for some journals being to send all reports to all reviewers along with the final decision.

I’m always interested to see how [my] review compares with others. Have I given the same recommendation? Have I identified the same areas of weakness? Have I formatted my review in the same way? How does the tone of delivery differ? I generally find that I give a similar if not the same response to other reviews, and I’m happy to see that I often pick up the same issues with methodology. (A2,E1)

For A2 there is comfort in seeing reviews that are similar to others’, although we both draw on experiences where our recommendations diverged from those of other reviewers, a source of assurance being the ultimate decision of the editor.

So it’s like, I don’t think it can be published and that [other] reviewer thinks it’s excellent. So usually, what the editor would do in this instance is invite the third one. Right, yeah. But then this editor told me… that they decided to go with my decision to reject because they find that my comments are more convincing. (A1,D1)

A2 was also surprised to read another report on a manuscript she had reviewed, which raised similar concerns and gave the same recommendation of major revisions, but whose ‘wording is soooo snarky. What need?’ (A2,E1). In one case that A1 detailed in our first discussion, significant but improbable changes made to the methodology section of a resubmitted paper caused him to question the honesty of the reporting, making him ‘uncomfortable’; as a result, he reported his concerns to the editor. In this case the review took some time to craft, as he tried to balance the ‘fine line between catering for the emotion [of the author], right, and upholding the academic standards’ (A1,D1). He conceded that initially his report was ‘kind of too harsh… later I think I rephrased it a little bit, I kind of softened (it)’.

While the role of Associate Editor is very new to A2, who was thus not yet able to comment on it, A1 views favourably the ‘opportunity to read various kinds of comments given by reviewers’ (A1,E1). This includes not only how reviewers structure their feedback, but also how they use the confidential comments to the editors to express their thoughts more openly, providing important insights into parts of the process that are largely hidden.

Understanding standards

While our reviewing practices are informed more broadly ‘according to more general academic standards of the study itself, and the clarity and fullness of the reporting’ (A2,E1), we look in the first instance to advice and guidelines from journals to develop an understanding of journal-specific standards, although A2 notes that a lack of review guidelines for one of the earliest journals she reviewed for led her to ‘searching Google for standard criteria’ (A2,E1). However, our development in this area seems mainly to come from gaining familiarity with a journal, particularly through engagement with the journal as an author.

In addition to reading the scope and instructions for authors to obtain such basic information as readership, length of submissions, citation style, the best way for me to understand the requirements and preferences of the journals is my own experience as an author. I review for journals which I have published in and for those which I have not. I always find it easier to make a judgement about whether the manuscripts I review meet the journal’s standards if I have published there before. (A1,E1)

Indeed, it seems that journal familiarity is connected closely to our confidence in reviewing, and while both authors ‘review for journals which I have published in and for those which I have not’ (A1,E1), A2 states that she is reluctant to ‘readily accept an offer to review for a journal that I’m not familiar with’, and A1 takes extra time to ‘do more preparatory work before I begin reading the manuscript and writing the review’ when reviewing for an unfamiliar journal.

Ecological systems

Microsystem

Three microsystems exert influence on A1’s and A2’s development of feedback literacy: university, journal community, and Twitter.

With regard to the university, we are full-time academics in research-intensive universities in the UK and Japan, where expectations for academics include publishing research in high-impact journals, ‘which is vital to promotion’ (A1,E2). This is especially true in A2’s context, where the national higher education agenda is to increase the world rankings of universities. Thus, ‘there is little value placed on peer review, as it is not directly related to the broader agenda’ (A2,E2). When considering his recent relocation to the UK together with the current pandemic, A1 navigated his responsibilities within the university context and decided to allocate more time to his university-related responsibilities, especially providing learning and pastoral support to his students, who are mostly international students. In addition, A2 observed that there is a dearth of institution-wide support on conducting peer review, although ‘there are a lot of training opportunities related to how to write academic papers in English, how to present at international conferences, how to write grant applications’, etc. (A2,E2). As a result, she ‘struggled for a couple of years’ because of the lack of institutional support for her development as a peer reviewer (A2,D2); but this helplessness also motivated her to seek her own ways to learn how to give feedback, such as ‘seeing through glimpses of other reviews, how others approach it, in terms of length, structure, tone, foci etc.’ (A2,E2). A1 shares the same view that no training is available at his institution to support his development as a peer reviewer. However, his postgraduate supervision experiences enabled him to reflect on how his feedback can benefit researchers. In our second online discussion, A1 shared that he held individual advising sessions with some postgraduate students, which made him realise that it is important for feedback to inspire rather than to ‘give them right answers’ (A1,D2).

Because of the lack of formal training provided by universities, both authors searched for other professional communities to help us develop our expertise in giving feedback as peer reviewers, with journal communities being the next microsystem. We found that international journals provide valuable opportunities for us to understand more about the whole peer-review process, in particular the role of feedback. For A1, the training he received from the editor-in-chief when he took up the associate editorship of a language education journal two years ago was particularly useful. A1 benefited greatly from meetings with the editor, who walked him through every stage of the review process and provided ‘hands-on experience on how to handle delicate scenarios’ (A1,E2). Since then, A1 has had plenty of opportunities to oversee various stages of peer review and to read a large number of reviewers’ reports, which helped him gain ‘a holistic understanding of the peer-review process’ (A1,E2) and gradually made him more cognisant of how he wants to give feedback. Although there was no explicit instruction on the technical aspects of giving feedback, A1 found that being an associate editor developed his ‘consciousness’ and ‘awareness’ of giving feedback as a peer reviewer (A1,D2). Further, he felt that his editorial experiences gave him the awareness to constantly refine and improve his ways of giving feedback, especially ways to make his feedback ‘more structured, evidence-based, and objective’ (A1,E2). While A2 could not reflect from the perspective of an editor, she recalled her experience as an author who received in-depth and constructive feedback from a reviewer, which really impacted the way she viewed the whole review process. She understood from this experience that even though a paper under review may not be particularly strong, peer reviewers should always aim to provide formative feedback which helps the authors to improve their work.
These positive experiences have shaped the ways the two authors give feedback as peer reviewers. In addition, close engagement with a specific journal has helped A2 to develop a sense of belonging, making it ‘much more than a journal, but also a way to become part of an academic community’ (A2,E2). With such a sense of belonging, she is more likely to be ‘pulled towards that journal than others’ when she can only review a limited number of manuscripts (A2,D2).

Another professional community in which we are both involved is Twitter. We regard Twitter as a platform for self-learning, reflection, and inspiration, and as a space where we get to learn from others’ peer-review experiences and disciplinary practices. For example, A1 found tweets on peer review informative ‘because they are written by different stakeholders in the process—the authors, editors, reviewers’ and offer ‘different perspectives and sometimes different versions of the same story’ (A1,E2). A2 recalled a tweet she came across about the ‘infamous Reviewer 2’ and how she learned not to make the same mistakes (A2,D2). Reading other people’s experiences helps us reconsider our own feedback practices and, more broadly, the whole peer-review system because we ‘get a glimpse of the do’s and don’ts for peer reviewers’ (A1,E2).

Further to our three common microsystems, A2 also draws on a unique microsystem, that of her former profession as a teacher, which shapes her feedback practices in three ways. First, in her four years of teacher training, a lot of emphasis was placed on assessment and feedback such as ‘error correction’; this understanding related to giving feedback to students and was solidified through ‘learning on the job’ (A2,D2). Second, A2 acknowledges that as a teacher, she has a passion to ‘guide others in their knowledge and skill development… and continue this in our review practices’ (A2,E2). Finally, her teaching experience prepared her to consider the authors’ emotional responses in her peer-review feedback practices, constantly ‘thinking there’s a person there who’s going to be shattered getting a rejection’ (A2,D2).

The mesosystem considers the confluence of our interactions across various microsystems. In particular, we experienced a lack of support from our institutions, which pushed us to seek alternative paths to acquiring the art of giving feedback. This has made us realise the importance of self-learning in developing feedback literacy as peer reviewers, especially in how to develop constructive and actionable feedback. Both authors self-learn how to give feedback by reading others’ feedback. A1 felt ‘fortunate to be involved in journal editing and Twitter’ because he gets ‘a glimpse of how other peer reviewers give feedback to authors’ (A1,E2). A2, on the other hand, learned through her correspondence with a journal editor who made her stop ‘looking for every word’ and move away from ‘over proofreading and over editing’ (A2,D2).

Focusing on the chronosystem, we notice that both authors have adjusted how they give feedback over time because of the aggregated influence of their microsystems. What stands out is that they have become more strategic in giving feedback, chiefly by focusing their comments on the arguments of the manuscripts instead of burning the midnight oil correcting errors.

The exosystem concerns environments in which the focal individuals do not interact directly with the people involved but about which they have access to information. In A1’s case, his understanding of the advising techniques promoted by a self-access language learning centre is conducive to the cultivation of his feedback literacy. Although A1 is not a part of the language advising team, he has a working relationship with the director. He was especially impressed by the learner-centeredness of the advising process:

The primary duty of the language advisor is not to be confused with that of a language teacher. Language teachers may teach a lecture on a linguistic feature or correct errors on an essay, but language advisors focus on designing activities and engaging students in dialogues to help them reflect on their own learning needs… The advisors may also suggest useful resources to the students which cater to their needs. In short, language advisors work in partnership with the students to help them improve their language while language teachers are often perceived as more authoritative figures (A1, E2).

This understanding of advising has affected how A1 provides feedback as a peer reviewer in a number of ways. First, A1 places much more emphasis on humanising his feedback, for example by considering ‘ways to work in partnership with the authors and making this “partnership mindset” explicit to the authors through writing’ (A1,E2). One way to operationalise this ‘partnership mindset’ in peer review is to ‘ask a lot of questions’ and provide ‘multiple suggestions’ for the authors to choose from (A1,E2). Furthermore, his knowledge of the difference between feedback as advice and feedback as instruction has led him to include feedback that points authors to additional resources. Below is a feedback point A1 gave in one of his reviews:

The description of the data analysis process was very brief. While we are not aiming at validity and reliability in qualitative studies, it is important for qualitative researchers to describe in detail how the data collected were analysed (e.g. iterative coding, inductive/deductive coding, thematic analysis) in order to ascertain that the findings were credible and trustworthy. See Johnny Saldaña’s ‘The Coding Manual for Qualitative Researchers’.

Another exosystem that we have knowledge about is the formal peer-review training courses provided by publishers. These online courses are usually run asynchronously. Even though we did not enrol in these courses, our interest in peer review has led us to skim their content. Both of us questioned the value of formal peer-review training in developing the feedback literacy of peer reviewers. For example, A2 felt that opportunities to review are more important because they ‘put you in that position where you have responsibility and have to think critically about how you are going to respond’ (A2,D2). To A1, formal peer-review training mostly focuses on developing peer reviewers’ ‘understanding of the whole mechanism’ rather than providing ‘training on how to give feedback… For example, do you always ask a question without giving the answers you know? What is a good suggestion?’ (A1,D2).

Macrosystem

The two authors have diverse sociocultural experiences because of their family backgrounds and work contexts. When reflecting on these experiences, A1 focused on his upbringing in Hong Kong, where both of his parents are school teachers, and on his professional experience as a language teacher in secondary and tertiary education in Hong Kong. A2 discussed her experience of working in academia in Japan as an anglophone.

Observing his parents’ interactions with their students in schools, A1 was immersed in an Asian educational discourse characterised by ‘mutual respect and all sorts of formality’ (A1,E2). After he finished university, A1 became a school teacher and then a university lecturer (equivalent to a teaching fellow in the UK), continuously immersed in the etiquette of educational discourse in Hong Kong. Because of this, A1 knows that being professional means being ‘formal and objective’ and that there is a constant expectation to ‘treat people with respect’ (A1,E2). At the same time, his parents are unlike typical Asian parents; they are ‘more open-minded’, which made him more willing to listen and ‘consider different perspectives’ (A1,D2). Additionally, social hierarchy also impacted his approach to giving feedback as a peer reviewer. A1 started his career as a school teacher and then a university lecturer in Hong Kong with no formal research training; after obtaining his BA and MA, it was not until recently that he obtained his PhD by Prior Publication. Perhaps because of his background as a frontline teacher, A1 did not regard himself as ‘a formally trained researcher’ and perceived himself as not ‘elite enough to give feedback to other researchers’ (A1,E2). Both his upbringing and his self-perceived identity have led to the formation of two feedback strategies: asking questions and providing a structured report mimicking the sections of the manuscript. A1 frequently asks questions in his reports ‘in a bid to offset some of the responsibilities to the authors’ (A1,E2). He has also struggled to decide whether to address authors using second- or third-person pronouns: he consistently used third-person pronouns in his feedback because he wanted to sound ‘very formal’ (A1,D2), but he has recently started using second-person pronouns to make his feedback more interactive.

A2, on the other hand, pondered her sociocultural experiences as a school teacher in Australia, her position as an anglophone in a Japanese university, and her status as a first-generation high school graduate. Reflecting on her career as a school teacher, A2 shared that her students had high expectations of her feedback:

So if you give feedback that seems unfair, you know … they’ll turn around and say, ‘What are you talking about’? They’re going to react back if your feedback is not clear. I think a lot of them [the students] appreciate the honesty. (A2,D2)

A2 acknowledges that her identity as a native English speaker has given her an advantage in publishing extensively in international journals because of her high level of English proficiency and her access to ‘data from the US and from Australia which are more marketable’ (A2,D2). At the same time, as a native English speaker, she has empathy for her Japanese colleagues who struggle to write proficiently in English, some of whom even ‘pay thousands of dollars to have their work translated’ (A2,D2). Therefore, when giving feedback as a peer reviewer, she tries not to make a judgement on an author’s English proficiency and will not reject a paper based on the standard of English alone. Finally, as a first-generation scholar without any previous connections to academia, she struggles with belonging and self-confidence. As a result, she notes that it usually takes her a long time to complete a review because she would like to be sure that what she is saying is ‘right or constructive and is not on the wrong track’ (A2,D2).

Implications and future directions

In investigating the manifestations of the authors’ feedback literacy development and the ecological systems in which this development occurs, this study unpacks the various sources of influence behind our feedback behaviours as two relatively new but highly commended peer reviewers. The findings show that our feedback literacy development is highly personalised and contextualised, and that the sources of influence are diverse and interconnected, albeit largely informal. Our peer-review practices are influenced by our experiences within academia, but the influences are much broader and begin much earlier. Our peer-review skills were enhanced through direct experience not only in peer review but also in other activities related to the peer-review process; as such, more hands-on, on-site feedback training for peer reviewers may be more appropriate than knowledge-based training. The authors gained valuable insights from seeing the reviews of others, and as this is often not possible until scholars take on more senior roles within journals, co-reviewing is a potential way for ECRs to gain experience (McDowell et al., 2019). We draw practical and moral support from various communities, particularly online, to promote ‘intellectual candour’, which refers to honest expressions of vulnerability for learning and trust building (Molloy and Bearman, 2019, p. 32); in response to this finding, we have developed an online community of practice, specifically as a space for discussing issues related to peer review (a Twitter account called ‘Scholarly Peers’). Importantly, our review practices are a product not only of how we review but why we review; as such, training should not focus solely on the mechanics of review but extend to its role within academia and its impact not only on the quality of scholarship but also on the growth of researchers.

The significance of this study is its insider perspective, and the multifaceted framework that allows the capturing of the complexity of factors that influence individual feedback literacy development of two recognised peer reviewers. It must be stressed that the findings of this study are highly idiosyncratic, focusing on the experiences of only two peer reviewers and the educational research discipline. While the research design is such that it is not an attempt to describe a ‘typical’ or ‘expected’ experience, the scope of the study is a limitation, and future research could be expanded to studies of larger cohorts in order to identify broader trends. In this study, we have not included the reviewer reports themselves, and these reports provide a potentially rich source of data, which will be a focus in our continued investigation in this area. Further research could also investigate the role that peer-review training courses play in the feedback literacy development and practices of new and experienced peer reviewers. Since journal peer review is a communication process, it is equally important to investigate authors’ perspectives and experiences, especially pertaining to how authors interpret reviewers’ feedback based on the ways that it is written.

Data availability

Because of the sensitive nature of the data, they are not made available.

Change history

26 November 2021

A Correction to this paper has been published: https://doi.org/10.1057/s41599-021-00996-3

Abedi Asante L, Abubakari Z (2020) Pursuing PhD by publication in geography: a collaborative autoethnography of two African doctoral researchers. J Geogr High Educ 45(1):87–107. https://doi.org/10.1080/03098265.2020.1803817


Boud D, Dawson P (2021) What feedback literate teachers do: an empirically-derived competency framework. Assess Eval High Educ. Advance online publication. https://doi.org/10.1080/02602938.2021.1910928

Bronfenbrenner U (1986) Ecology of the family as a context for human development. Res Perspect Dev Psychol 22:723–742. https://doi.org/10.1037/0012-1649.22.6.723

Carless D, Boud D (2018) The development of student feedback literacy: enabling uptake of feedback. Assess Eval High Educ 43(8):1315–1325. https://doi.org/10.1080/02602938.2018.1463354

Carless D, Winstone N (2020) Teacher feedback literacy and its interplay with student feedback literacy. Teach High Educ, 1–14. https://doi.org/10.1080/13562517.2020.1782372

Chang H, Ngunjiri FW, Hernandez KC (2013) Collaborative autoethnography. Left Coast Press

Cheung D (2000) Measuring teachers’ meta-orientations to curriculum: application of hierarchical confirmatory factor analysis. The J Exp Educ 68(2):149–165. https://doi.org/10.1080/00220970009598500

Chong SW (2021a) Improving peer-review by developing peer reviewers’ feedback literacy. Learn Publ 34(3):461–467. https://doi.org/10.1002/leap.1378

Chong SW (2021b) Reconsidering student feedback literacy from an ecological perspective. Assess Eval High Educ 46(1):92–104. https://doi.org/10.1080/02602938.2020.1730765

Chong SW (2019) College students’ perception of e-feedback: a grounded theory perspective. Assess Eval High Educ 44(7):1090–1105. https://doi.org/10.1080/02602938.2019.1572067

Chong SW (2018) Interpersonal aspect of written feedback: a community college students’ perspective. Res Post-Compul Educ 23(4):499–519. https://doi.org/10.1080/13596748.2018.1526906

Corden A, Sainsbury R (2006) Using verbatim quotations in reporting qualitative social research: the views of research users. University of York Social Policy Research Unit

Ellis C, Adams TE, Bochner AP (2011) Autoethnography: an overview. Hist Soc Res 12:273–290

Ellis C, Bochner A (1996) Composing ethnography: Alternative forms of qualitative writing. Sage

Freda MC, Kearney MH, Baggs JG, Broome ME, Dougherty M (2009) Peer reviewer training and editor support: results from an international survey of nursing peer reviewers. J Profession Nurs 25(2):101–108. https://doi.org/10.1016/j.profnurs.2008.08.007

Fulcher G (2012) Assessment literacy for the language classroom. Lang Assess Quart 9(2):113–132. https://doi.org/10.1080/15434303.2011.642041

Gee JP (1999) Reading and the new literacy studies: reframing the national academy of sciences report on reading. J Liter Res 3(3):355–374. https://doi.org/10.1080/10862969909548052

Gravett K, Kinchin IM, Winstone NE, Balloo K, Heron M, Hosein A, Lygo-Baker S, Medland E (2019) The development of academics’ feedback literacy: experiences of learning from critical feedback via scholarly peer review. Assess Eval High Educ 45(5):651–665. https://doi.org/10.1080/02602938.2019.1686749

Hains-Wesson R, Young K (2016) A collaborative autoethnography study to inform the teaching of reflective practice in STEM. High Educ Res Dev 36(2):297–310. https://doi.org/10.1080/07294360.2016.1196653

Han Y, Xu Y (2019) Student feedback literacy and engagement with feedback: a case study of Chinese undergraduate students. Teach High Educ, https://doi.org/10.1080/13562517.2019.1648410

Heesen R, Bright LK (2020) Is Peer Review a Good Idea? Br J Philos Sci, https://doi.org/10.1093/bjps/axz029

Hollywood A, McCarthy D, Spencely C, Winstone N (2019) ‘Overwhelmed at first’: the experience of career development in early career academics. J Furth High Educ 44(7):998–1012. https://doi.org/10.1080/0309877X.2019.1636213

Horn SA (2016) The social and psychological costs of peer review: stress and coping with manuscript rejection. J Manage Inquiry 25(1):11–26. https://doi.org/10.1177/1056492615586597

Hughes S, Pennington JL, Makris S (2012) Translating autoethnography across the AERA standards: toward understanding autoethnographic scholarship as empirical research. Educ Res 41(6):209–219

Kandiko CB (2010) Neoliberalism in higher education: a comparative approach. Int J Art Sci 3(14):153–175. http://www.openaccesslibrary.org/images/BGS220_Camille_B._Kandiko.pdf

Keashly L, Neuman JH (2010) Faculty experiences with bullying in higher education-causes, consequences, and management. Adm Theory Prax 32(1):48–70. https://doi.org/10.2753/ATP1084-1806320103

Kelly J, Sadegieh T, Adeli K (2014) Peer review in scientific publications: benefits, critiques, & a survival guide. J Int Fed Clin Chem Labor Med 25(3):227–243. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975196/


Kumar KL (2020) Understanding and expressing academic identity through systematic autoethnography. High Educ Res Dev, https://doi.org/10.1080/07294360.2020.1799950

Lapadat JC (2017) Ethics in autoethnography and collaborative autoethnography. Qual Inquiry 23(8):589–603. https://doi.org/10.1177/1077800417704462

Levi T, Inbar-Lourie O (2019) Assessment literacy or language assessment literacy: learning from the teachers. Lang Assess Quarter 17(2):168–182. https://doi.org/10.1080/15434303.2019.1692347

London MS, Smither JW (2002) Feedback orientation, feedback culture, and the longitudinal performance management process. Hum Res Manage Rev 12(1):81–100. https://doi.org/10.1016/S1053-4822(01)00043-2

Malecka B, Boud D, Carless D (2020) Eliciting, processing and enacting feedback: mechanisms for embedding student feedback literacy within the curriculum. Teach High Educ, 1–15. https://doi.org/10.1080/13562517.2020.1754784

Mavrogenis AF, Quaile A, Scarlat MM (2020) The good, the bad and the rude peer-review. Int Orthopaed 44(3):413–415. https://doi.org/10.1007/s00264-020-04504-1

McDowell GS, Knutsen JD, Graham JM, Oelker SK, Lijek RS (2019) Co-reviewing and ghostwriting by early-career researchers in the peer review of manuscripts. ELife 8:e48425. https://doi.org/10.7554/eLife.48425


Merga MK, Mason S, Morris JE (2018) Early career experiences of navigating journal article publication: lessons learned using an autoethnographic approach. Learn Publ 31(4):381–389. https://doi.org/10.1002/leap.1192

Miles MB, Huberman AM (1994) Qualitative data analysis: An expanded sourcebook (2nd edn.). Sage

Molloy E, Bearman M (2019) Embracing the tension between vulnerability and credibility: ‘Intellectual candour’ in health professions education. Med Educ 53(1):32–41. https://doi.org/10.1111/medu.13649


Molloy E, Boud D, Henderson M (2019) Developing a learning-centred framework for feedback literacy. Assess Eval High Educ 45(4):527–540. https://doi.org/10.1080/02602938.2019.1667955

Neal JW, Neal ZP (2013) Nested or networked? Future directions for ecological systems theory. Soc Dev 22(4):722–737. https://doi.org/10.1111/sode.12018

Noble C, Billett S, Armit L, Collier L, Hilder J, Sly C, Molloy E (2020) “It’s yours to take”: generating learner feedback literacy in the workplace. Adv Health Sci Educ Theory Pract 25(1):55–74. https://doi.org/10.1007/s10459-019-09905-5

Price M, Rust C, O’Donovan B, Handley K, Bryant R (2012) Assessment literacy: the foundation for improving student learning. Oxford Centre for Staff and Learning Development

Silbiger NJ, Stubler AD (2019) Unprofessional peer reviews disproportionately harm underrepresented groups in STEM. PeerJ 7:e8247. https://doi.org/10.7717/peerj.8247


Starck JM (2017) Scientific peer review: guidelines for informative peer review. Springer Spektrum

Steelman LA, Wolfeld L (2016) The manager as coach: the role of feedback orientation. J Busi Psychol 33(1):41–53. https://doi.org/10.1007/s10869-016-9473-6

Stiggins RJ (1999) Evaluating classroom assessment training in teacher education programs. Educ Meas: Issue Pract 18(1):23–27. https://doi.org/10.1111/j.1745-3992.1999.tb00004.x

Street B (1997) The implications of the ‘new literacy studies’ for literacy Education. Engl Educ 31(3):45–59. https://doi.org/10.1111/j.1754-8845.1997.tb00133.x

Sughrua WM (2019) A nomenclature for critical autoethnography in the arena of disciplinary atomization. Cult Stud Crit Methodol 19(6):429–465. https://doi.org/10.1177/1532708619863459

Sutton P (2012) Conceptualizing feedback literacy: knowing, being, and acting. Innov Educ Teach Int 49(1):31–40. https://doi.org/10.1080/14703297.2012.647781


Tynan BR, Garbett DL (2007) Negotiating the university research culture: collaborative voices of new academics. High Educ Res Dev 26(4):411–424. https://doi.org/10.1080/07294360701658617

Vygotsky LS (1978) Mind in society: The development of higher psychological processes. Harvard University Press

Wall S (2006) An autoethnography on learning about autoethnography. Int J Qual Methods 5(2):146–160. https://doi.org/10.1177/160940690600500205


Warne V (2016) Rewarding reviewers-sense or sensibility? A Wiley study explained. Learn Publ 29:41–40. https://doi.org/10.1002/leap.1002

Wilkinson S (2019) The story of Samantha: the teaching performances and inauthenticities of an early career human geography lecturer. High Educ Res Dev 38(2):398–410. https://doi.org/10.1080/07294360.2018.1517731

Winstone N, Carless D (2019) Designing effective feedback processes in higher education: a learning-focused approach. Routledge

Winstone NE, Mathlin G, Nash RA (2019) Building feedback literacy: students’ perceptions of the developing engagement with feedback toolkit. Front Educ 4:1–11. https://doi.org/10.3389/feduc.2019.00039

Xu Y, Carless D (2016) ‘Only true friends could be cruelly honest’: cognitive scaffolding and social-affective support in teacher feedback literacy. Assess Eval High Educ 42(7):1082–1094. https://doi.org/10.1080/02602938.2016.1226759


Author information

Authors and affiliations

Queen’s University Belfast, Belfast, UK

Sin Wang Chong

Nagasaki University, Nagasaki, Japan

Shannon Mason


Corresponding author

Correspondence to Sin Wang Chong .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

We acknowledge that research data were collected from human subjects (the two authors); such data were collected in accordance with the standards and guidelines of the School Research Ethics Committee at the School of Social Sciences, Education and Social Work, Queen’s University Belfast (Ref: 005_2021).

Informed consent

Since the participants are the two authors, there is no informed consent form.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplemental material file #1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Chong, S.W., Mason, S. Demystifying the process of scholarly peer-review: an autoethnographic investigation of feedback literacy of two award-winning peer reviewers. Humanit Soc Sci Commun 8, 266 (2021). https://doi.org/10.1057/s41599-021-00951-2


Received : 02 August 2021

Accepted : 12 October 2021

Published : 12 November 2021

DOI : https://doi.org/10.1057/s41599-021-00951-2



What is peer review?

Peer review is ‘a process where scientists (“peers”) evaluate the quality of other scientists’ work. By doing this, they aim to ensure the work is rigorous, coherent, uses past research and adds to what we already know.’ You can learn more in this explainer from the Social Science Space.  


Peer review brings academic research to publication in the following ways:

  • Evaluation – Peer review is an effective form of research evaluation to help select the highest quality articles for publication.
  • Integrity – Peer review ensures the integrity of the publishing process and the scholarly record. Reviewers are independent of journal publications and the research being conducted.
  • Quality – The filtering process and revision advice improve the quality of the final research article as well as offering the author new insights into their research methods and the results that they have compiled. Peer review gives authors access to the opinions of experts in the field who can provide support and insight.

Types of peer review

  • Single-anonymized  – the name of the reviewer is hidden from the author.
  • Double-anonymized  – names are hidden from both reviewers and the authors.
  • Triple-anonymized  – names are hidden from authors, reviewers, and the editor.
  • Open peer review comes in many forms . At Sage we offer a form of open peer review on some journals via our Transparent Peer Review program , whereby the reviews are published alongside the article. The names of the reviewers may also be published, depending on the reviewers’ preference.
  • Post-publication peer review can offer useful interaction and a discussion forum for the research community. This form of peer review is not usual or appropriate in all fields.

To learn more about the different types of peer review, see page 14 of ‘ The Nuts and Bolts of Peer Review ’ from Sense about Science.

Please double check the manuscript submission guidelines of the journal you are reviewing in order to ensure that you understand the method of peer review being used.


Peer Review in Academia

  • Open Access
  • First Online: 03 January 2022


  • Eva Forsberg,
  • Lars Geschwind,
  • Sara Levander &
  • Wieland Wermke


In this chapter, we outline the notion of peer review and its relation to the autonomy of the academic profession and the contract between science and society. This is followed by an introduction of some key themes regarding the practices of peer review. Next, we specify some reasons to further explore different practices of peer review. Briefly, the state of the art is presented. Finally, the structure of this volume and its individual contributions are presented.



Keywords: Peer review; Scientific communication

Introduction

Over the past few decades, peer review has become an object of great professional and managerial interest (Oancea, 2019) and, increasingly, of academic scrutiny (Bornmann, 2011; Grimaldo et al., 2018). Nevertheless, calls for further research are numerous (Tennant & Ross-Hellauer, 2020). This volume answers such interest and appeals. We aim to present a variety of peer-review practices in contemporary academic life as well as the principled foundation of peer review in scientific communication and authorship. This volume is unique in that it covers many different practices of peer review and their theoretical foundations, providing both an introduction to this very complex field and new empirical and conceptual accounts of peer review for the interested reader. The contributions are produced by internationally recognized scholars, almost all of whom participated in the conference ‘Scientific Communication and Gatekeeping in Academia in the 21st Century’, held in 2018 at Uppsala University, Sweden. The overall objective of this volume is explorative; framings relevant to the specific contexts, practices and discourses examined are set by the authors of each chapter. However, some common conceptual points of departure may be laid down at the outset.

Peer review is a context-dependent, relational concept that is increasingly used to denote a vast number of evaluative activities engaged in by a wide variety of actors both inside and outside of academia. By peer review, we refer to peers’ assessments and valuations of the merits and performances of academics, higher education institutions, research organizations and higher education systems. Mostly, these activities are part of more encompassing social evaluation practices, such as reviews of manuscripts, grant proposals, tenure and promotion and quality evaluations of institutions and their research and educational programmes. Thus, scholarly peer review comprises evaluation practices within both the wider international scientific community and higher education systems. Depending on differences related to scientific communities and national cultures, these evaluations may include additional gatekeepers, internal as well as external to academia, and thus the role of the peer may vary.

The roots of peer review can be found in the assessment practices of reviewers and editors of scholarly journals in deciding on the acceptance of papers submitted for publishing. Traditionally, only peers (also known as referees) with recognized scholarly standing in a relevant field of research were acknowledged as experts (Merton, 1942/1973). Due to the differentiation and increased use of peer review, the notion of a peer employed in various evaluation practices may be extended. Who qualifies as an expert in different peer-review practices, and with what implications, are empirical questions.

Even though peer review is a familiar phenomenon in most scholarly evaluations, there is a paucity of studies on peer review within the research field of evaluation. Peer review has, however, been described as the most familiar collegial evaluation model, with academic research and higher education as its paradigm area of application and with an ability to capture and judge qualities as its main advantage (Vedung, 2002 ). Following Scriven ( 2003 ), we define evaluation as a practice ‘determining the merit, worth or significance of things’ (p. 15). Scriven ( 1980 ) identifies four steps involved in evaluation practices, which are also frequently used in peer review, either implicitly enacted and negotiated or explicitly stated (Ozeki, 2016 ). These steps concern (1) the criteria of merit, that is, the dimensions of an object being evaluated; (2) the standards of merit, that is, the level of performance in a given dimension; (3) the measuring of performance relative to standards; and (4) a value judgement of the overall worth.

Consequently, the notion of peer review refers to evaluative activities in academia conducted by equals that distribute merit, value and worth. In these processes of selection and legitimation, issues referring to criteria, standards, rating and ranking are significant. Often, peer reviews are embedded in wider evaluation practices of research, education and public outreach. To capture contemporary evaluations of academic work, we will include a number of different review practices, including some in which the term peer is employed in a more extended sense.

The Many Face(t)s of Peer-Review Practices

Depending on the site in which peer review is used, the actors involved differ, as do their roles. The same applies to potential guidelines, purposes, discourses, use of professional judgement and metrics, processes and outcome of the specific peer-review practice. These are all relative to the site in which the review is used and will briefly be commented upon below.

The Interplay of Primary and Secondary Peer Review

It is possible to make a distinction between primary and secondary peer reviews (British Academy, 2007). As stated, the primary role of peer review is to assess manuscripts for publishing, followed by the examination and judgement of grant applications. Many other peer-review practices, so-called secondary peer review, typically involve summaries of the outcomes of primary reviews. Thus, we might view primary and secondary reviews as folded into each other, where, for example, reviews of journal articles are a prerequisite to the later evaluation of the research quality of an institution, in recruitment and promotion, and so forth (Helgesson, 2016). Hence, the consequences of primary reviews can hardly be overstated.

Traditionally, both forms of primary peer review (assessment of manuscripts and grant applications) are ex ante evaluations; that is, they are conducted prior to the activity (e.g. publishing and research). With open science, open access journals and changes in the transparency of peer review, open and public peer reviews have partly opened the black box of reviews and the secrecy of the process and its actors (Sabaj Meruane et al., 2016 ). Accordingly, publishing may include both ex ante and ex post evaluations. These forms of evaluation can also be found among secondary reviews, with degree-awarding accreditation an example of the former and reviews of disciplines an example of the latter.

Sites and Reviewer Tasks and Roles

Without being exhaustive, we can list a number of sites where peer review is conducted as part of more comprehensive evaluations: international, regional and national higher education agencies conduct accreditation, quality audits and evaluations of higher education institutions; funding agencies distribute grants for projects and fellowships; higher education institutions evaluate their research, education and public outreach at different levels and assess applications for recruitment, tenure and promotion; the scientific community assesses manuscripts for publication, evaluates doctoral theses and conference papers and allocates awards. The evaluation roles are concerned with the provision of human and financial resources, the evaluation of research products and the assessment of future strategies as a basis for policy and priorities. All of these activities are regularly performed by researchers and interlinked in an evaluation spiral in which the same research may be reviewed more than once (Langfeldt & Kyvik, 2015 ). If we consider valuation and assessment more generally, the list can be extended almost infinitely, with supervision and seminar discussions being typical activities in which valuation plays a central part. Hence, scholars are accustomed to being assessed and to evaluating others.

The role and the task of the reviewer differ also in relation to whether the act of reviewing is performed individually, in teams or in a blending of the two forms. In the evaluation of research grants, the latter is often the case, with reviewers first individually rating or ranking the applications, followed by panel discussions and joint rankings as bases for the final decision made by a committee. In peer review for publishing, there might be a desk rejection by the editor, but if not, two or more external reviewers assess a manuscript and recommend that the editor accept, revise or reject it. It is then up to the editor to decide what to do next and to make the final decision. The process and the expected roles of the involved editor, reviewer and authors may vary depending on whether it is a private publisher or a journal linked to a scientific association, for example. Whether the reviewer should be considered an advisor, an independent assessor, a juror or a judge depends on the context and the task set for the reviewer within the specific site and its policies and practices as well as on various praxes developed over time (Tennant & Ross-Hellauer, 2020 ).

Power-making in the Selection of Expertise

The selection process is at the heart of peer review. Through valuations and judgements, peers are participants in decisions on inclusion and exclusion: What project has the right qualities to be allocated funding? Which paper is good enough to be published? And who has the right track record to be promoted or offered a fellowship? When higher education institutions and scholars increasingly depend on external funding, peer review becomes key in who gets an opportunity to conduct research and enter or continue a career trajectory as a researcher and, in many systems, a higher education teacher. In other words, peer review is a cornerstone of the academic career system (Merton, 1968 ; Boyer, 1990 ) and heavily influences what kinds of scientific knowledge will be furthered (Lamont, 2009 ; Aagaard et al., 2015 ).

The interaction involved in peer review may be remote, online or local, including face-to-face collaboration, and it may involve actors with different interests. Moreover, interaction may be extended to the whole evaluation enterprise. For example, evaluations of higher education institutions and their research and education often include members of national agencies, scholarly experts and external stakeholders. Scholarly experts may be internal or external to the higher education institutions and of lower, comparable or higher rank than the subjects of evaluation, and reviewers may be blind or known to those being evaluated and vice versa. Scholarly expertise may also refer to a variety of specialists, for example, to scholars with expertise in a specific research topic, in evaluation technology, in pedagogy or public outreach. A more elaborated list of features to be considered in the allocation of experts to various review practices can be found in a peer-review guide by the European Science Foundation ( 2011 ). At times the notion of peer is extended beyond the classical idea to one with demonstrated competence to make judgements within a particular research field. Who qualifies as a reviewer is contingent on who has the authority to regulate the activity in which the evaluation takes place and who is in the position to suggest and, not least, to select reviewers. This is a delicate issue, imbued with power, and one that we need to further explore, preferably through comparative studies involving different peer-review practices in varying contexts.

Acting as a peer reviewer has become a valuable asset in the scholarly track record. This makes participating as a reviewer important for junior researchers. Therefore, such participation is not only a question of being selected but also increasingly involves self-selection. More opportunities are provided by ever more review activities and the prevalence of evaluation fatigue among senior researchers. The limited credit, recognition and rewards for reviewers may also contribute to limited enthusiasm amongst seniors (Research Information Network CIC, 2015). Moreover, several tensions embedded in review practices can add to the complexity of the process and influence the readiness to review. These tensions involve potential conflicts between the role of the reviewer or evaluator and the role of the researcher: time conflict (research or evaluate), peer expertise versus impartiality (especially qualified colleagues are often excluded under conflict-of-interest rules), neutral judge versus promoter of research interests (a double expectation), deviant assessments versus unanimous conclusions, peer review versus quantitative indicators, and scientific autonomy versus social responsibility (Langfeldt & Kyvik, 2015). Despite these noted challenges, classical peer review is still the key mechanism by which professional autonomy and the guarding of research quality are achieved. Thus, it is argued that it is an academic duty and obligation, in particular for senior scholars, to accept tasks as reviewers (Caputo, 2019). Nevertheless, the scholarly exchange value should be addressed in future discussions of gatekeeping in academia.

The Academic Genres of Peer Review

Peer reviews are rooted in more encompassing discourses, such as those concerning the norms of science, involving notions of quality and excellence founded in different sites endogenous or exogenous to science. Texts subject to, employed in or produced by peer-review practices represent a variety of academic genres, including review reports, editors’ letters, applicants’ proposals, submitted manuscripts, guidelines, applicant dossiers and curriculum vitae (CVs), testimonials, portfolios and so on. Different genres are interlinked in chains, creating systems of genres. A significant aspect of such systems is intertextuality, that is, the fact that texts within a specific system refer to, anticipate and shape each other. The interdependence of texts is about how they relate to situational and formal expectations, in this case those of the specific peer-review practice. It is also about how one text makes references to another; for example, review reports often refer to guidelines, calls, announcements or texts in application dossiers. The interdependence can also be seen in how the texts interact in academic communities (Chen & Hyon, 2005): who the intended readers of a given text are, what the purpose of the text is, how the text is used in the review and decision process, and so on. In sum, the genre systems of peer review vary depending on epistemic traditions, national culture and the regulation of higher education systems and institutions.

Given this diversity, we are dealing with a great number of genre systems involving different kinds of texts and interrelations embedded in power and hierarchies. A significant feature of peer-review texts as a category is the occluded genres, that is, genres that are more or less closed to the public (Swales, 1996 ). Depending on the context, the list of occluded genres varies. For example, the submission letters, submitted manuscripts, review reports and editor–author correspondence involved in the eventual publication of articles in academic journals are not made publicly available, while in the context of recruitment and promotion, occluded genres include application letters, testimonials and evaluation letters to committees. And for research grants, the research proposals, individual review reports and panel reports tend to remain entirely internal to the grant-making process. However, in some countries (e.g. in Sweden, due to the principle of openness, or offentlighetsprincipen ), several of these types of texts may be publicly available.

The push for open science has also initiated changes to the occluded genres of peer review. After a systematic examination, Ross-Hellauer (2017) proposed ‘open peer review’ as an umbrella term for a variety of review models in line with open science, ‘including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process’ (p. 1). From 2005 onwards, there has been a marked upswing in such definitions, correlating with the rise of the openness agenda, which is most visible in the review of journal articles and within STEM and interdisciplinary research.

Time and space are central categories in most peer-review genres and the systems to which they belong. While review practices often look to the past, imagined futures also form the background for valuation. The future orientation is definitely present in audits, in assessments of grant proposals and in reviews of candidates’ track records. The CV, a key text in many review practices, may be interpreted in terms of an applicant’s career trajectory, thus emphasizing how temporality and spatiality interact within a narrative infrastructure, for example how scholars move between different academic institutions over time (Hammarfelt et al., 2020 ). Texts may also feed both backwards and forwards in the peer-review process. For example, guidelines and policy on grant evaluations and distribution may be negotiated and acted upon by both applicants and reviewers. Candidates may also address reviewers as significant others in anticipating the forthcoming reviewer report (Serrano Velarde, 2018 ). These expectations on the part of the applicant can include prior experiences and perceptions of specific review practices, processes and outcomes in specific circumstances.

Turning to reviewer reports, it is worth noting that they are often written in English, especially those assessing manuscripts and frequently those on research proposals and recruitment applications as well. Commonly seen within the academic genre of peer review is the use of indirect speech, which can be linked to the review report’s significance for the identity of the person being evaluated (Paltridge, 2017). Two key notions, politeness and face, have been used to describe the evaluative language of review reports and how reviewers interact with evaluees. There are differences related to content and to whether a report is positive or negative overall in its evaluation. For example, reviewers of manuscripts invoke certain structures of knowledge, using different structures when suggesting to reject, revise or accept and when asking for changes. To maintain social relationships, reviewers draw on different politeness strategies to save an author’s face. Strategies employed may include ‘apologizing (“I am sorry to have to”) and impersonalizing an issue (“It is generally not acceptable to”)’ (Paltridge, 2017, p. 91). Largely, requests for changes are made as directions, suggestions, clarifications and recommendations. Thus, for both evaluees and researchers of peer review, particular genre competences are required to decode and act upon the reports. For beginning scholars unfamiliar with the world of peer review, or for scholars from a different language or cultural background than the reviewer, it may be challenging to interpret, negotiate and act upon reviewer reports.

Criteria and the Professional Judgement of Quality

According to the classical idea of peer review, only a peer can properly recognize quality within a given field. Although shortcomings have been emphasized, in both research and scholarly debate, regarding the trustworthiness, efficacy, expense, burden and delay of peer review (Bornmann, 2013; Research Information Network CIC, 2015), many critics still regard peer review as the least-worst system in the absence of viable alternatives. Overall, scholars stand behind the idea of peer review even though they often have concerns regarding its different practices (Publons, 2018).

Calls for accountability and social relevance have been made, and there have been requests for formalization, standardization, transparency and openness (Tennant & Ross-Hellauer, 2020 ). While the idea of formalization of peer review refers to rules, including the development of policy and guidelines for different forms of peer review, standardization rather emphasizes the setting of standards through the employment of specific tools for evaluation (i.e. criteria and indicators used for assessment, rating or ranking and decision-making). An interesting question is whether standardization will impact the extent and the way peers are used in different sites of evaluation (Westerheijden et al., 2007 ). We may add, who will be considered a peer and what will the matching between the evaluator and the evaluation object or evaluee look like?

It is widely acknowledged that criteria are an essential element of any procedure for judging merit (Scriven, 1980; Hug & Aeschbach, 2020). This is the case regardless of whether criteria are determined in advance and whether they are explicitly expressed or implicitly manifested in the process of assessment. The notion of peer review has been supplemented in various ways, implicating changes to the practice and records of peer review. Increasingly, review reports combine classical peer review with metrics of different kinds. Accordingly, quantitative measures, taken as proxies for quality, have entered practices of peer review. Today, blended forms are rather common, especially in evaluations of higher education institutions, where narrative and metric summaries often supplement each other and inform a judgement.

In general, quantitative indicators (e.g. number of publications, journal impact factors, citations) are increasingly applied, even though their capacity to capture quality is questioned, especially within the social sciences, humanities and the arts. Among the main reasons given for the rapid growth in demand for metrics is that classical peer review alone cannot meet the quest for accountability and transparency, while bibliometric evaluations may appear cheaper, more objective and more easily legitimated. Moreover, metrics may give an impression of accessibility for policy and management (Gläser & Laudel, 2007; Söderlind & Geschwind, 2019). However, tensions between classical peer review and quantitative indicators have been identified and are hotly debated (Langfeldt & Kyvik, 2011). The dramatic expansion of the use of metrics has brought with it gaming and manipulation practices to enhance reputation and status, ‘including coercive citation, forced joint authorship, ghostwriting, h-index manipulation, and many others’ (Oravec, 2019, p. 859). Warnings have also been issued against the use of bibliometric indicators at the individual level. A combination of peer narratives and metrics is, however, considered a possible way to improve an overall evaluation, given due awareness of the limitations of quantitative data as proxies for quality.

The literature on peer review has focused more on the weighting of criteria than on the meaning referees assign to the criteria they use (Lamont, 2009 ). Even though some criteria, such as originality, trustworthiness and relevance, are frequently used in the assessment of academic work and proposals, our knowledge of how reviewers ascribe value to, assess and negotiate them remains limited (Hug & Aeschbach, 2020 ). However, Joshua Guetzkow, Michèle Lamont and Grégoire Mallard ( 2004 ) show that panellists in the humanities, history and the social sciences define originality much more broadly than what is usually the case in the natural sciences.

Criteria, indicators and comparisons are unstable: they are situational and dependent on context and a referee’s personal experience of scientific work (Kaltenbrunner & de Rijcke, 2020). We are dealing here with assessments made in situations of uncertainty and of entities not easily judged or compared. The concept of judgement devices has been used to capture how reviewers delegate the judgement of quality to proxies, reducing the complexity of comparison. For example, the employment of central categories in a CV, which reference both temporal and spatial aspects of scholars’ trajectories, makes comparison possible (Hammarfelt, 2017). In a similar way, the theory of anchoring effects has been used to explore reviewers’ abilities to discern, assess, compare and communicate what scientific quality is or may be (Roumbanis, 2017). Anchoring effects have their roots in heuristic principles used as shortcuts in everyday problem solving, especially when a judgement involves intuition. Reduction of complexity is also visible in how reviewers first apply criteria that serve an eliminatory function; only then do they search for positive signs of evidence in order to make a final judgement (Musselin, 2002). Depending on context and situation, reviewers tend to select different criteria from a repertoire of criteria (Hug & Aeschbach, 2020).

On the one hand, the complexity of academic evaluations requires professional judgement: scholars sufficiently grounded in a field of research and higher education are entrusted with interpreting and negotiating criteria, indicators and merits. Still, the practice of peer review has to be safeguarded against the risk of conservatism as well as epistemic and social biases (Kaltenbrunner & de Rijcke, 2020 ). On the other hand, changes in the governance of higher education institutions and research, as well as marketization, managerialism, digitalization and calls for accountability, have increased the diversity of peer review and introduced new ways to capture and employ criteria and indicators. The long-term consequences of these changes need to be monitored, not least because of how they challenge the self-regulation and autonomy of the academic profession (Oancea, 2019 ).

How to understand, assess, measure and value quality in research, in the career of a scholar or in the performances of a higher education institution are complex issues. Turning to the notion of quality in a general sense will not solve the problem, since it has many facets and has been perceived in many different ways, including as fitness for purpose, as eligible, as excellent and as value for money (Westerheijden et al., 2007), all notions that require contextualization and further elaboration to be meaningful (see also Elken & Wollscheid, 2016).

When presenting a framework for studying research quality, Langfeldt et al. (2020) identify three key dimensions: (1) quality notions originating in research fields and in research policy spaces; (2) three attributes of good research, drawn from existing studies, namely originality/novelty, plausibility/reliability and value or usefulness; and (3) five sites where notions of research quality emerge, are contested and become institutionalized, comprising researchers, knowledge communities, research organizations, funding agencies and national policy arenas. This multidimensional framework and its components highlight issues that are especially relevant to studies of peer review. The sites identify arenas where peer review functions as a mechanism through which notions of research quality are negotiated and established. The consideration of notions of quality endogenous and exogenous to scientific communities, and of the various attributes of good research, can also be directly linked to referees’ distribution of merit, value and worth in peer-review practices under changing circumstances.

The Autonomy of a Profession and a Challenged Contract

Historical analyses link peer review to the distribution of authority and the negotiations and reformulations of the public status of science (Csiszar, 2016 ). At stake in renegotiations of the contract between science and society are the professional autonomy of scholars and their work. Peer review is contingent on the prevailing contract and is critical in maintaining the credibility and legitimacy of research and higher education (Bornmann, 2011 ). The professional autonomy of scholars raises the issue of self-regulation. Its legitimacy ultimately comes down to who decides what, particularly concerning issues of research quality and scientific communication (Clark, 1989 ).

Over the past 40 years, major changes have taken place in many OECD (Organisation for Economic Co-operation and Development) countries in the governance of public science and higher education, changes which have altered the relative authority of different groups and organizations (Whitley, 2011). The former ability of scientific elites to exercise endogenous control over science has, particularly since the 1960s, become more contested and subject to public policy priorities. A more heterogeneous and complex higher education system has been accompanied by more exogenous governance mechanisms, formal rules and procedures, and the institutionalization of quality assurance procedures and performance monitoring. Expectations of excellence, competition for resources and reputation, and the coordination of research priorities and intellectual judgement have changed across disciplinary and national boundaries to varying degrees (Whitley, 2011). These developments can be seen as expressions of the evaluative state (Neave, 1998), the audit society (Power, 1997) and as part of an institutionalized evaluation machinery (Dahler-Larsen, 2012).

Changes in the principles of governance are underpinned by persistent tensions around accountability, evaluation, measurement, demarcation, legitimation, agency and identity in research (Oancea, 2019). Besides the primary form of recognition through peer review, the weakened autonomy of academic fields has brought with it new evaluative procedures and institutions. Academic evaluations such as accreditations, audits, quality assurance and evaluations of research performance and social impact now exist alongside more traditional forms (Hansen et al., 2019).

Higher education institutions worldwide have experienced the emergence and manifestations of the quality movement, which is part of interrelated processes such as massification, marketization and managerialism. Through organizations at international, national and institutional levels, a variety of technologies have been introduced to identify, measure and compare the performance of higher education institutions (Westerheijden et al., 2007 ). These developments have emphasized external standards and the use of bibliometrics and citation indexes, which have been criticized for rendering the evaluations more mechanical (Hamann & Beljean, 2017 ). Mostly, peer review, often in combination with self-evaluation, is also employed in the more recently introduced forms of evaluation (Musselin, 2013 ). Accordingly, peer review, in one form or another, is still a key mechanism monitoring the flow of scientific knowledge, ideas and people through the gates of the scientific community and higher education institutions (Lamont, 2009 ).

Autonomy may be defined as ‘the quality or state of being self-governing’ (Ballou, 1998, p. 105). Autonomy is thus the capacity of an agent to determine their own actions through independent choice, in this case within a system of principles and laws to which the agent is dedicated. The academic profession governs itself by controlling its members. Academics control academics, peers control peers, in order to maintain the status and indeed the autonomy of the profession. Fundamentally, professionals are licensed to act within a valuable knowledge domain. Through training, examination and acknowledgement, professionals are legitimated (at least politically) as experts in their domain. The rationale of licence and the esotericism of professional knowledge raise the question of how professionals and their work can be evaluated and by which standards. There are rules of conduct and ethical norms, but these are ultimately owned and controlled by the academic profession. From this perspective, we can understand peer review as the structural element that holds academia together.

The increase in peer-review practices in academia can be compared with developments in other professions, which also must work harder than before to maintain their status and autonomy. In many cases, their competence and quality must be displayed much more visibly today. Pluralism and individualism in society have also resulted in a plurality of expertise and a decrease in mono-vocational functional systems. A mystique of academic knowledge (as in ‘the research says’) is not as acceptable in public opinion today as it once was. The term ‘postmodern professionals’ has been suggested to describe experts who expend more effort in the dramaturgy of their competences than people in their positions might have in the past, in order to generate trust in clients and in society (Pfadenhauer, 2003). The media make professional competences, performances and failures much more visible and contribute to trust or mistrust in professions. In a pluralist society, extensive use of peer review may indeed function as a strategy to make quality visible and to secure the autonomy of the academic profession, which owns the practice of peer review and knows how to adjust it to its needs.

While most academic evaluations exist across scientific communities and disciplines, the criteria of evaluation can differ substantially between and within communities (Hamann & Beljean, 2017). Thus, research on peer review needs to take disciplinary and interdisciplinary similarities and differences seriously. Obviously, the impact of the intellectual and social organization of the sciences (Whitley, 1984), the mode of research (Nowotny et al., 2001), the tribes and territories (Becher, 1989; Becher & Trowler, 2001; Trowler et al., 2014) and the epistemic cultures (Knorr Cetina, 1999) needs to be better represented in future research. Examinations of peer review may then also contribute to a fuller understanding of the contract between science and society and the challenges directed towards the professional autonomy of academics.

Why Study Peer Review?

As an ideal, peer review has been described as ‘the linchpin of science’ (Ziman, 1968, p. 148) and a key mechanism in the distribution of status and recognition (Merton, 1968) as well as part and parcel of collegiality and meritocracy (Cole & Cole, 1973). Above all, peer review is considered a gatekeeper of the quality of science, both in various specialized knowledge communities and in research policy spaces (Langfeldt et al., 2020). Peer review is often taken as a hallmark of quality, expected both to guard and to enhance it. Early on, peer review, or refereeing, was linked to institutionalized moral imperatives. Perhaps best known are those formulated in the Ethos of Science by Merton (1942/1973): communism, universalism, disinterestedness and organized scepticism, or CUDOS. These norms and their counter-norms (individualism, particularism, interestedness and dogmatism) have frequently been the focus of peer-review studies. Norms concerning how scientific work is or should be carried out and how researchers should behave reflect the purpose of science and ideas of how science should be governed, and are thus directly linked to the autonomy of the academic profession (Panofsky, 2010). In short, research into peer review goes to the very heart of academia and its relation to society. This calls for scrutiny.

With changing circumstances, peer review is employed more often, and its purposes, forms and functions are increasingly diversified. Today, academic evaluations permeate every corner of the scientific enterprise, and the traditional form of peer review, rooted in scientific communication, has migrated. Thus, we have seen peer review come to be undertaken in all key aspects of academic life: research, teaching, service and collaboration with society (Tennant & Ross-Hellauer, 2020). Increasingly, peer review is regarded as the standard, not only for published scholarship but also for academic evaluations in general. Ideally, peer review is considered to guarantee quality in research and education while upholding the norms of science and preserving the contract between science and society. The diversity and migration of review practices and their consequences should be followed closely.

In the course of a career, scholars are recurrently involved as both reviewers and reviewees, and this is becoming more and more frequent. As stated in a report on peer review by the British Academy ( 2007 ), the principle of judge not, that ye be not judged is impossible to follow in academic life. On the contrary, the selection of work for publishing, the allocation of grants and fellowships, decisions on tenure and promotion, and quality evaluations all depend upon the exercise of judgement. ‘The distinctive feature of this academic judgement is that it is reciprocal. Its guiding motto is: judge only if you in turn are prepared to be judged’ (British Academy, 2007 , p. vii).

Admittedly, we lack comprehensive statistics on peer review and the involvement of scholars in its diverse practices. However, investigations like the Wiley study (Warne, 2016) and Publons’ Global State of Peer Review (2018), both focused on reviews of manuscripts, indicate the widespread and increasing use of peer review. In 2016, roughly 2.9 million peer-reviewed articles were indexed in Web of Science, and a total of 2.5 million manuscripts were rejected. An estimated 13.7 million reviews are carried out each year. Together, the continuous rise of submissions and the increase in evaluations using peer review expose the system and its actors to ever more pressure.

Peer-review activities produce an incredible amount of talk and gossip in academia. In particular, academic appointments have contributed to the organizational ‘sagas’ described by Clark ( 1972 ). In systems where fierce competition for a limited number of chairs (professorships) is the norm, much is at stake. A single decision, one way or another, can make or break an academic career, and the same is true in relation to recurring judgements and decisions on tenure and promotion (Gunneriusson, 2002 ). Research on the emotional and socio-psychological consequences of peer rejection or low ratings and rankings is seldom conducted. While rejection may function as either a threat or a challenge to scholarly identities, Horn ( 2016 ) argues that rejection is a source of stigmatization pervading the entire academic community. In a similar vein, scholars have to adjust to the maxim of ‘publish or perish’ and the demands of reviewers, even when these are against the scholars’ own convictions. Some researchers consider this a form of ‘intellectual prostitution’ (Frey, 2003 ), and reviewer fatigue is spreading through the scientific community. For example, it is widely recognized that editors sometimes have trouble finding reviewers. Obviously, peer review has become a concern to scholars of all kinds and to their identities and everyday practices and careers.

The mundane reality of peer-review practice is quite different from the ideology of peer review, and our knowledge of it is rather restricted and fragmented (Grimaldo et al., 2018). The roots of peer review can be traced back through seventeenth-century book censorship, the development of academic journals in the eighteenth century and the gatekeeping of scientific communication. As a regular activity, however, peer review is a latecomer in the scientific community, and it is unevenly distributed across nations and disciplines (Biagioli, 2002). For example, publication practices, discourses and the lingua franca differ between knowledge communities. Traditional peer review is a more prominent feature of the natural sciences and medicine than of the humanities, the social sciences and the arts. This is also reflected in research on peer review. In a similar way, data show that US researchers supply by far the most reviews of manuscripts for journals, while researchers in China supply substantially fewer. Nevertheless, review output is increasing in all regions, and especially so in emerging regions (Publons, 2018).

Even though there are differences, peer review is a fundamental tool in the negotiation and establishment of a scholar’s merits and research, of higher education quality and of excellence. Peer review is also considered a tool to prevent misconduct, such as the fraudulent presentation of findings or plagiarism. Thus, peer review may fulfil functions of gatekeeping, maintenance and enhancement. Peer reviews can also be linked to struggles over which form of capital should be the gold standard and over gaining as much capital as possible (Maton, 2005). At stake is, on the one hand, scholastic capital and, on the other hand, academic capital linked to administrative power and control over resources (Bourdieu, 1996).

The introduction of ever-new sites for peer review, changing qualifications of reviewers and calls for open science, as well as the increased use of metrics, increase the need for further research. Moreover, the cost and the amount of time spent on different kinds of reviews, and their potential impact on the identity, recognition and status of scholars and higher education institutions, make peer review especially worthy of systematic studies beyond professional narratives and anecdotes. Peer review has both advocates and critics, although the great majority of researchers are positive about the idea of peer review. Many critics find peer review costly, time consuming, conservative and socially and epistemically biased. In sum, there are numerous reasons to study peer review. It is almost impossible to overstate its central role in the academic enterprise, yet the empirical evidence is inconclusive and the research field emergent and fragmented (Bornmann, 2011; Batagelj et al., 2017).

State of the Art of Research on Peer Review

There is a lack of consensus on what peer review is and on its purposes, practices, outcomes and impact on the academic enterprise (Tennant & Ross-Hellauer, 2020 ). The term peer review was relatively unknown before 1970. Referee was the more commonly applied notion, used primarily in relation to the evaluation of manuscripts and scientific communication (Batagelj et al., 2017 ). This lack of clarity has affected how the research field of peer review has been identified and described.

During the past few decades, a number of researchers have provided syntheses of research on peer review in the forms of quantitative meta- and network analyses as well as qualitative configurative analyses. Some are more general in character (Sabaj Meruane et al., 2016 ; Batagelj et al., 2017 ; Grimaldo et al., 2018 ), though the main focus is often research in the natural and medical sciences and peer review for publishing and, to some extent, for grant funding. Others are more concerned with either a specific practice of peer review or different critical topics. Below, we mainly use these recent systematic reviews to depict the research field of peer review, to identify the limits of our knowledge on the subject and to elaborate why we need to study it further.

Academic evaluations, like peer reviews, have been examined from a number of perspectives (Hamann & Beljean, 2017 ). From a functionalist approach, we can explore how well evaluative procedures serve their purposes—especially those of validity, reliability and fairness—and how well they handle various potential biases. The power-analytical perspective makes critical inquiries into dysfunctional effects of structural inequalities like nepotism and unequal opportunities for resource accumulation. The perspective on the performativity of evaluations and evaluative devices focuses on the organizational impact of the devices, on ranking and on the ways indicators incite strategic behaviour. The social-constructive perspective on evaluation emphasizes that ideas such as merits and originality are socially and historically context dependent. There is also a pragmatist perspective that stresses the situatedness of evaluative practices and interactions (e.g. how panellists reach consensus). More and more frequently used are analytical tools from the field of the sociology of valuation and evaluation, which emphasizes knowledge production as contextualization and the existence and impact of insecurities in the performative situations (Lamont, 2012 ; Mallard et al., 2009 ; Serrano Velarde, 2018 ). Some researchers highlight the variety of academic communities and the intradisciplinary, interdisciplinary and transdisciplinary aspects of research today as significant explanatory factors for evaluative practices (Hamann & Beljean, 2017 ). We may add changes in the governance of higher education institutions and research and the introduction of new evaluation practices as equally important (Whitley, 2011 ; Oancea, 2019 ).

In a network analysis of research on peer review from 1950 to 2016, Batagelj et al. (2017) identified 23,000 indexed records in Web of Science and, above all, a main corpus of 47 articles and books. These texts, which were cited in the most influential publications on peer review, focus on science, scholarship, systematic reviews, peers, peer reviews and quantitative and qualitative analysis. The most cited article allows for an expansion of this list to include the institutionalization of evaluation in science, open peer reviews, bias and the effects of peer review on the quality of research. Most items belonging to the corpus were published relatively early, with only a few published after the year 2000. Overview papers, however, were published more recently, mainly in the past decade.

The research field of peer review has been described as an emergent field marked by three development stages (Batagelj et al., 2017). The first stage, before 1983, includes seminal work mostly presented in social science and philosophy journals. Main topics include scientific productivity, bibliographies, knowledge, citation measures as measures of scientific accomplishment, scientific output and recognition, evaluations in science, referee systems, journal evaluations, the peer-evaluation system, review processes and peer-review practices. During the second stage, 1983–2002, biomedical journals were influential. Key topics focused on the effects of blinding on review quality, research into peer review, guidelines for peer reviewing, monitoring peer-review performance, open peer review, bias in the peer-review system, measuring the quality of editorial peer review, and the development of meta-analysis and systematic review approaches. Finally, in the third stage, 2003–2016, we find research on peer review mainly in specialized science studies journals such as Scientometrics. The most frequent topics include peer review of grant proposals, bias, referee selection and links between editors, referees and authors.

Another quantitative analysis (Grimaldo et al., 2018) of articles published in English from 1969 to 2015 and indexed in the citation database Scopus found very few publications before 1970, and fewer than around 100 per year until 2004. From 2004 to 2015, the numbers increased rapidly, by 12% per year on average. Half the records were journal articles, books, chapters and conference papers; the rest were mostly editorial notes, commentaries, letters and literature reviews. Scholars from English-speaking countries, especially the United States, predominated, but authors from prominent European institutions were also found. The authors identified a fragmented, potentially interdisciplinary research field dominated by medicine, sociology and behavioural sciences, with signs of uneven sharing of knowledge. The research was typically pursued in small collaborative networks. Articles on peer review were published mostly in JAMA, Behavioral and Brain Sciences and Scientometrics. The most important topics were peer review in relation to quality assurance and improvement, publishing, research, open access, evaluation and assessment, bibliometrics and ethics. Among the authors of the top five most influential articles we find Merton, Zuckerman, Horrobin, Bornmann and Siegelmann. Grimaldo et al.’s (2018) analysis revealed the presence of structural problems, such as difficulties in accessing data, partly due to confidentiality and a lack of interest from editorial boards, administrative bodies and funding agencies. More positively, the analysis pointed to digitalization and open science as favourable tools for increases in research, cooperation and knowledge sharing.

In an overview (Sabaj Meruane et al., 2016 ) of empirical studies on peer-review processes, almost two thirds of the first-named authors had doctoral backgrounds in medicine, psychology, bibliometrics or scientometrics, and around one fifth in sociology of science or science and technology studies. There is definitely a lack of integration of other fields, such as those within the social sciences, the humanities and the arts and education in the study of peer-review processes. The following topics were empirically researched, in descending order: sociodemographic variables (83%), sociometric or scientometric data (47%), evaluation criteria (36%), bias (31%), rates of acceptance/rejection/revision (25%), predictive validity (24%), consensus among reviewers (17%) and discourse analysis of isolated or related texts (14%). The analysis indicates that ‘the texts interchanged by the actors in the process are not prominent objects of study in the field’ (Sabaj Meruane et al., 2016 , p. 188). Further, the authors identified a number of gaps in the research: The field conceives of peer review more as a system than as a process. Moreover, bibliometric studies constitute an independent field of empirical research on peer review. Only a few studies combine analysis of indicators with content or functional analysis. In a similar way, research on science production, reward systems and evaluation patterns rarely includes actual texts that are interchanged in the peer-review process. Discourse analysis, in turn, rarely uses data other than the reviewer report and socio-demographics. Due to ethical issues and confidentiality, discourse studies and text analyses of reviewer reports are less frequent.

It might be risky to state that peer review is an under-studied object of research, considering the vast number of publications devoted to the topic. Nevertheless, it appears that the field of peer-review research has yet to be fully defined, and empirical research in the field needs to be pursued more comprehensively. A common problem the authors consider important to examine is the consequences of the same actor being able to fulfil different roles (e.g. author, reviewer, editor) in various single reviews. Above all, the field requires not only further but also more comprehensive approaches, and the black box of peer review needs to be fully opened (Sabaj Meruane et al., 2016).

Among syntheses focusing on specific topics, relatively common themes are trustworthiness and bias, as well as how criteria are employed, negotiated and ascribed meaning in various evaluation practices and disciplines. One review of the literature on peer review analyses the state of research on journal, fellowship and grant peer review, focusing on three quality criteria: reliability, fairness and predictive validity (Bornmann, 2011). The interest was directed towards the norms of science: ensuring that results were not incidental, that certain groups or individuals were not favoured or disadvantaged, and that the selection of publications and scholars was aligned with scientific performance. Predictive validity was far less studied in primary research than reliability and fairness. Another overview articulates and critiques conceptions and normative claims of bias (Lee et al., 2013). The authors raise questions about existing norms and conclude that peer review is social and that a diversity of norms and opinions among communities and referees may be desirable and beneficial. Bias is also studied in research on who gets tenure with respect to both meritocratic and non-meritocratic factors, such as ascription and social and academic capital (Lutter & Schröder, 2016). These authors show that network size, individual reputation and gender matter.

Epistemic differences point to the necessity of studying peer review within a variety of disciplines and transdisciplinary contexts. An interview study of panellists serving on fellowship grants within the social sciences and humanities shows that evaluators generally draw on four epistemological styles: constructivist, comprehensive, positivist and utilitarian (Mallard et al., 2009 ). Moreover, peer reviewers employ the epistemological style most appropriate to the field of the proposal under review. In the future, more attention has to be paid to procedural fairness, including from a comparative perspective. In another systematic review of criteria used to assess grant applications, it is suggested that forthcoming research should also focus on the applicant, include data from non-Western countries and examine a broad spectrum of research fields (Hug & Aeschbach, 2020 ).

As shown in this introductory chapter, the research field devoted to peer review covers a great number of evaluation practices embedded in different contexts. As it is an emergent and fragmented field in need of integration, there are certainly many possible ways to make contributions to the research field of peer review. On the agenda we find issues related to the foundation of science: the ethos of science and the ideology of peer review, the production and dissemination of knowledge, professional self-regulation and open science. There are also questions concerning the development of theoretical framing and methodological tools adapted to the study of diverse review practices in shifting contexts and at various interacting levels. Not least, in response to calls for more comprehensive and integrated research, it is necessary to open the black boxes of peer review and analyse, in empirical studies, the different purposes, discourses, genres, relations and processes involved.

A single book cannot take on all the above-mentioned challenges ahead of us. However, following this brief introduction to the field, the volume brings together research on review practices often studied in isolation. We include studies ranging from the practice of assessing manuscripts submitted for publication to the more recent practice of open review. In addition, more encompassing and general issues are considered, as well as specificities of different peer-review practices. This is further developed below, where the structure of the volume and the contributions of each chapter are presented.

The Structure and Content of the Volume

The structure of the volume falls into three main parts. In the first part, Rudolf Stichweh and Raf Vanderstraeten continue the introduction begun in this chapter. They discuss the term peer review and the contexts of its emergence. In Chap. 2 , Rudolf Stichweh explains the genesis of inequalities and hierarchies in modern science. He illuminates the forms and mechanisms of scientific communication on the basis of which the social structures of science are built: publications, co-authorships and multiple authorships, citations as units of information and as social rewards, and peer review as an evaluation of publications (and of projects and careers). Stichweh demonstrates how, in all institutional dimensions of higher education, differences arise between successful and less successful participations. Success generates influence and social attractiveness (e.g. as a co-author). Influential and attractive participants are recruited into positions where they assess the achievements of others and thereby limit and control inclusion in publications, funding and careers.

Vanderstraeten, in Chap. 3 , puts forward that with the expansion of educational research in the twentieth century, interested ‘amateurs’ have been driven out of the field, and the scientific community of peers has become the dominant point of orientation. Authorship and authority became more widely distributed; peer review was institutionalized to monitor the flow of ideas within scientific literature. Reference lists in journals demonstrated the adoption of cumulative ideals about science. Vanderstraeten’s historical analysis of education journals shows the social changes that contributed to the ascent of an ‘imagined’ community of expert peers in the course of the twentieth century.

Part II of this volume focuses mainly on how peer-review practices have emerged in many parts of higher education institutions. From its origins as a scholarly publication practice, peer review has become perhaps the most significant performative practice in higher education and research internationally. In this part, the various scholars provide insight into such processes. Don F. Westerheijden, in Chap. 4, revisits the policy issue of the balance between peer review and performance indicators as the means to assess quality in higher education. He shows the paradoxes and unintended effects that emerge when peer review is the main method in the quality assurance procedures of higher education institutions as a whole. Westerheijden argues that attempted solutions of using self-assessments and performance indicators, as well as specifically trained assessors, increase complaints about bureaucracy from within the academic community.

In Chap. 5, Hanne Foss Hansen sheds light on how peer review as an evaluation concept has developed over time and discusses which roles peer review plays today. She presents a typology distinguishing between classical peer review, informed and standard-based peer review, modified peer review and extended peer review. Peer review today can be found with all these faces. In Chap. 6, Peter Dahler Larsen argues that gatekeepers in institutional review processes who know the future and use this knowledge in a pre-emptive or precautionary way play a key role in constructing the reality that emerges from Bibliometric Research Indicators, which are widely used internationally. By showing that human judgement sometimes enhances or multiplies the effects of ‘evaluation machineries’, this chapter contributes to an understanding of the mechanisms that lead to constitutive effects of evaluation systems in research.

In Chap. 7 , Agnes Ers and Kristina Tegler Jerselius explore a national framework for quality assurance in higher education and argue that such systems’ forms are dynamic, since they change over time. Ers and Tegler Jerselius show how the method of peer review has evolved over time and in what way it has been affected by changes made in the system. Gustaf Nelhans engages in Chap. 8 with the performative nature of bibliometric indicators and explores how they influence scholarly practice at macro levels (in national funding systems), meso levels (within universities) and individual levels (in the university employees’ practice). Nelhans puts forward that the common-sense ‘representational model of bibliometric indicators’ is questionable in practice, since it cannot capture the qualities of research in any unambiguous way.

In Chap. 9, Lars Geschwind and Kristina Edström discuss the loyalty of academic staff to their disciplines or scientific fields. They show how this loyalty is reflected in evaluation practices and elaborate on the extent to which peer reviewers act as advocates for those they evaluate. By doing so, Geschwind and Edström problematize potential evaluator roles. In Chap. 10, Malcolm Tight closes Part II of this book. Drawing on his extensive review experience in various areas of higher education institutions, he assesses how ‘fit for purpose’ peer review is in twenty-first-century academe. He focuses on different practices of peer review in the contemporary higher education system and questions how well they work, how they might be improved and what the alternatives are.

Whereas Part II of this volume focuses on higher education institutions in relation to education quality and research output, Part III illuminates particular peer-review practices. In Chap. 11, Eva Forsberg, Sara Levander and Maja Elmgren examine peer-review practices in the promotion of what are called ‘excellent’ or ‘distinguished’ university teachers. While research merits have long been the prioritized criteria in the recognition of institutions and scholars, teaching is often downplayed. To counteract this tendency, higher education institutions on a global scale have introduced various systems to upgrade the value of education and to promote teaching excellence. The authors show that the intersection between promotion, peer review and excellent teaching affects not only the peer-review process but also the notion of the excellent or distinguished university teacher.

In Chap. 12, Tine S. Prøitz analyses and discusses the role of scholarly peers in systematic review. Peer evaluation is an essential element of quality assurance of the strictly defined methods of systematic review. The involvement of scholarly peers in systematic review processes has similarities with traditional peer-review processes in academic publishing, but there are also important differences. In systematic review, peers are not only re-judging already reviewed and published research but also gatekeeping the given standards, guidelines and procedures of the review method.

In Chap. 13, Liv Langfeldt presents processes of grant peer review. There are no clear norms for assessments, and there may be large variation in which criteria reviewers emphasize and how they emphasize them. Langfeldt argues that rating scales and budget restrictions can be more important than review guidelines for the kinds of criteria applied by reviewers. The decision-making methods applied by review panels when ranking proposals are found to have substantial effects on the outcome. Chapters 14 and 15 focus on peer-review practices in the recruitment of professors. First, Sara Levander, Eva Forsberg, Sverker Lindblad and Gustav Jansson Bjurhammer analyse the initial step of the typecasting process in the recruitment of full professors. They show that the field of professorial recruitment is characterized by heterogeneity and no longer has a basis in one single discipline. New relations between research, teaching and society have emerged. Moreover, the authority of the professorship has narrowed while the range of responsibilities has increased. Then, Björn Hammarfelt focuses on discipline-specific practices for evaluating publication oeuvres. He examines how ‘value’ is enacted, with special attention to the kinds of tools, judgements, indicators and metrics that are used. Value is indeed enacted differently in the various disciplines.

In the last chapter of the book, Chap. 16, Tea Vellamo, Jonna Kosonen, Taru Siekkinen and Elias Pekkola investigate practices of tenure track recruitment. They show that the criteria of this process can exceed notions of individual merit and include assessments of the strategic visions of universities and departments. The use of the tenure track model can be seen as a shift both for identity building related to a university’s strategy and towards using more managerial power in recruitment more generally.

We dedicate this book to our beloved colleague and friend, professor Rita Foss Lindblad, who was involved in the project but passed away in 2018.

Funded by Riksbankens Jubileumsfond (F17-1350:1). The keynotes of the conference are accessible on video at https://media.medfarm.uu.se/play/kanal/417 . For more information on the conference, see www.konferens.edu.uu.se/scga2018-en .

Aagaard, K., Bloch, C., & Schneider, J. W. (2015). Impacts of performance-based research funding systems: The case of the Norwegian Publication Indicator. Research Evaluation, 24 (2), 106–117.


Ballou, K. A. (1998). A concept analysis of autonomy. Journal of Professional Nursing, 14 (2), 102–110.

Batagelj, V., Ferligoj, A., & Squazzoni, F. (2017). The emergence of a field: A network analysis of research on peer review. Scientometrics, 113 (1), 503–532. https://doi.org/10.1007/s11192-017-2522-8

Becher, T. (1989). Academic tribes and territories: Intellectual inquiry and the cultures of disciplines . Society for Research into Higher Education.


Becher, T., & Trowler, P. R. (2001). Academic tribes and territories. Intellectual inquiry and the culture of disciplines . Open University Press.

Biagioli, M. (2002). From book censorship to academic peer review. Emergences Journal for the Study of Media & Composite Cultures, 12 (1), 11–45. https://doi.org/10.1080/1045722022000003435

Bornmann, L. (2011). Scientific peer review. Annual Review of Information, Science and Technology, 45 , 197–245. https://doi.org/10.1002/aris.2011.1440450112

Bornmann, L. (2013). Evaluations by peer review in science. Springer Science Reviews, 1, 1–4. https://doi.org/10.1007/s40362-012-0002-3

Bourdieu, P. (1996). Homo academicus . Polity.

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate . The Carnegie Foundation for the Advancement of Teaching.

British Academy. (2007). Peer review: The challenges for the humanities and social sciences. Retrieved December 1, 2020, from https://www.thebritishacademy.ac.uk/documents/197/Peer-review-challenges-for-humanities-social-sciences.pdf

Caputo, R. K. (2019). Peer review: A vital gatekeeping function and obligation of professional scholarly practice. Families in Society: The Journal of Contemporary Social Services, 100 (1), 6–16. https://doi.org/10.1177/1044389418808155

Chen, R., & Hyon, S. (2005). Faculty evaluation as a genre system: Negotiating intertextuality and interpersonality. Journal of Applied Linguistics, 2 (2), 153–184. https://doi.org/10.1558/japl.v2i2.153

Clark, B. R. (1972). The organizational saga in higher education. Administrative Science Quarterly, 17 , 178–184.

Clark, B. R. (1989). The academic life: Small worlds, different worlds. Educational Researcher, 18 (5), 4–8. https://doi.org/10.2307/1176126

Cole, J. R., & Cole, S. (1973). Social stratification in science . University of Chicago Press.

Csiszar, A. (2016). Peer review: Troubled from the start. Nature, 532 (7599), 306–308. https://doi.org/10.1038/532306a

Dahler Larsen, P. (2012). The evaluation society . Stanford University Press.

Elken, M., & Wollscheid, S. (2016). The relationship between research and education: Typologies and indicators. A literature review . Nordic Institute for Innovative Studies in Research and Education (NIFU).

European Science Foundation. (2011). European peer review guide. Integrating policies and practices into coherent procedures .

Frey, B. S. (2003). Publishing as prostitution? Choosing between one’s own ideas and academic success. Public Choice, 116 (1/2), 205–223. https://doi.org/10.1023/A:1024208701874

Gläser, J., & Laudel, G. (2007). The social construction of bibliometric evaluations. In R. Whitley & J. Gläser (Eds.), The changing governance of the sciences. The advent of research evaluation systems . Springer.

Grimaldo, F., Marušić, A., & Squazzoni, F. (2018). Fragments of peer review: A quantitative analysis of the literature (1969–2015). PLOS ONE, 13 (2), e0193148. https://doi.org/10.1371/journal.pone.0193148

Guetzkow, J., Lamont, M., & Mallard, G. (2004). What is originality in the humanities and the social sciences? American Sociological Review 2004, 69 , 190. https://doi.org/10.1177/000312240406900203

Gunneriusson, H. (2002). Det historiska fältet: svensk historievetenskap från 1920-tal till 1957 . Uppsala: Acta Universitatis Upsaliensis.

Hamann, J., & Beljean, S. (2017). Academic evaluation in higher education. In J. C. Shin & P. Teixeira (Eds.), Encyclopedia of international higher education systems and institutions . https://doi.org/10.1007/978-94-017-9553-1_295-1

Chapter   Google Scholar  

Hammarfelt, B. (2017). Recognition and reward in the academy: Valuing publication oeuvres in biomedicine, economics and history. Aslib Journal of Information Management, 69 (5), 607–623. https://doi.org/10.1108/AJIM-01-2017-0006

Hammarfelt, B., Rushforth, D., & de Rijcke, S. (2020). Temporality in academic evaluation: ‘Trajectoral thinking’ in the assessment of biomedical researchers. Valuation Studies, 7 (1), 33–63. https://doi.org/10.3384/VS.2001-5992.2020.7.1.33

Hansen, H. F., Aarrevaara, T., Geschwind, L., & Stensaker, B. (2019). Evaluation practices and impact: Overload? In R. Pinheiro, L. Geschwind, H. Foss Hansen, & K. Pulkkinen (Eds.), Reforms, organizational change and performance in higher education: A comparative account from the Nordic countries . Palgrave Macmillan.

Helgesson, C.-F. (2016). Folded valuations? Valuation Studies, 4 (2), 93–102. https://doi.org/10.3384/VS.2001-5992.164293

Horn, S. A. (2016). The social and psychological costs of peer review: Stress and coping with manuscript rejection. Journal of Management Inquiry, 25 (1), 11–26. https://doi.org/10.1177/1056492615586597

Hug, S. E., & Aeschbach, M. (2020). Criteria for assessing grant applications: A systematic review. Palgrave Communications, 6 (30). https://doi.org/10.1057/s41599-020-0412-9

Kaltenbrunner, W., & de Rijcke, S. (2020). Filling in the gaps: The interpretation of curricula vitae in peer review. Social Studies of Science, 49 (6), 863–883. https://doi.org/10.1177/0306312719864164

Knorr Cetina, K. (1999). Epistemic cultures . Harvard University Press.

Book   Google Scholar  

Lamont, M. (2009). How professors think. Inside the curious world of academic judgment . Harvard University Press.

Lamont, M. (2012). Toward a comparative sociology of valuation and evaluation. Annual Review of Sociology, 38 (21), 201–221. https://doi.org/10.1146/annurev-soc-070308-120022

Langfeldt, L., & Kyvik, S. (2011). Researchers as evaluators: Tasks, tensions and politics. Higher Education, 62 (2), 199–212. https://doi.org/10.1007/s10734-010-9382-y

Langfeldt, L., & Kyvik, S. (2015). Intrinsic tensions and future challenges of peer review. In RJ Yearbook 2015/2016 . Riksbankens Jubileumsfond & Makadam Publishers.

Langfeldt, L., Nedeva, M., Sörlin, S., & Thomas, D. A. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58 , 115–137. https://doi.org/10.1007/s11024-019-09385-2

Lee, C. J., Sugimoto, G. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64 (1), 2–17. https://doi.org/10.1002/asi.22784

Lutter, M., & Schröder, M. (2016). Who becomes a tenured professor, and why? Panel data evidence from German sociology, 1980–2013. Research Policy, 45 , 999–1013. https://doi.org/10.1016/j.respol.2016.01.019

Mallard, G., Lamont, M., & Guetskow, J. (2009). Fairness as appropriateness: Negotiating epistemological differences in peer review. Science Technology Human Values . https://doi.org/10.1177/0162243908329381

Maton, K. (2005). A question of autonomy: Bourdieu’s field approach and higher education policy. Journal of Education Policy, 20 (6), 687–704. https://doi.org/10.1080/02680930500238861

Merton, R. K. (1968). The Matthew effect in science. Science, 159 (3810), 56–63. https://doi.org/10.1126/science.159.3810.56

Merton R. K. (1973). The sociology of science: Theoretical and empirical investigations (Norman W. Storer, Ed.). University of Chicago Press. (Original work published 1942)

Musselin, C. (2002). Diversity around the profile of the ‘good’ candidate within French and German universities. Tertiary Education and Management, 8 (3), 243–258. https://doi.org/10.1080/13583883.2002.9967082

Musselin, C. (2013). How peer review empowers the academic profession and university managers: Changes in relationships between the state, universities and the professoriate. Research Policy, 42 (5), 1165–1173. https://doi.org/10.1016/j.respol.2013.02.002

Neave, G. (1998). The evaluative state reconsidered. European Journal of Education, 33 (3), 265–284. https://www.jstor.org/stable/1503583

Nowotny, H., Scott, P. B., & Gibbons, M. T. (2001). Re-thinking science: Knowledge and the public in an age of uncertainty . Polity Press.

Oancea, A. (2019). Research governance and the future(s) of research assessment. Palgrave Communications, 5 , 27. https://doi.org/10.1057/s41599-018-0213-6

Oravec, A. (2019). Academic metrics and the community engagement of tertiary education institutions: Emerging issues in gaming, manipulation, and trust. Tertiary Education and Management. https://doi.org/10.1007/s11233-019-09026-z

Ozeki, S. (2016). Three Empirical Investigations into the Logic of Evaluation and Valuing Practices . Dissertations. 2470. https://scholarworks.wmich.edu/dissertations/2470

Paltridge, B. (2017). The discourse of peer review. Reviewing submission to academic journals . Macmillan Publishers.

Panofski, A. L. (2010). In C. J. Calhoun (Ed.), Robert K. Merton: Sociology of science and sociology as science . Columbia University Press.

Pfadenhauer, M. (2003). Professionalität. Eine wissenssoziologische Rekonstruktion institutionalisierter Kompetenzdarstellungskompetenz [Professionalism. A reconstruction of institutionalized proficiency in displaying competence]. Springer.

Power, M. (1997). The Audit Society. Rituals of verification . Oxford University Press.

Publons. (2018). Global state of peer review. Online.

Research Information Network CIC. (2015). Scholarly communication and peer review. The current landscape and future trends. A report commissioned by the Wellcome Trust. Retrieved May 2015, from https://wellcome.org/sites/default/files/scholarly-communication-and-peer-review-mar15.pdf

Ross-Hellauer, T. (2017). What is open peer review? A systematic review (version 2; peer review: 4 approved). F1000Research, 2017, 6 (588). Last updated: 17 May 2019. Included in Science Policy Research Gateway. https://doi.org/10.12688/f1000research.11369.2

Roumbanis, L. (2017). Academic judgments under uncertainty: A study of collective anchoring effects in Swedish research council panel groups. Social Studies of Science, 47 (1), 95–116. https://doi.org/10.1177/0306312716659789

Sabaj Meruane, O., González Vergara, C., & Pina-Stranger, Á. (2016). What we still don’t know about peer review. Journal of Scholarly Publishing, 47 (2), 180–212. https://doi.org/10.3138/jsp.47.2.180

Scriven, M. (1980). The logic of evaluation . Edgepress.

Scriven, M. (2003). Evaluation theory and metatheory. In T. Kellaghan, D. L. Stufflebeam, & L. A. Wingate (Eds.), International handbook of educational evaluation (pp. 15–30). Kluwer Academic Publishers.

Serrano Velarde, K. (2018). The way we ask for money… The emergence and institutionalization of grant writing practices in academia. Minerva, 56 (1), 85–107. https://doi.org/10.1007/s11024-018-9346-4

Söderlind, J., & Geschwind, L. (2019). Making sense of academic work: The influence of performance measurement in Swedish universities. Policy Reviews in Higher Education, 3 (1), 75–93. https://doi.org/10.1080/23322969.2018.1564354

Swales, J. M. (1996). Occluded genres in the academy. The case of the submission letter. In E. Ventola & A. Mauranen (Eds.), Academic writing: Intercultural and textual issues . ProQuest Ebook Central. http://ebookcentral.proquest.com/lib/uu/detail.action?docID=680373

Tennant, J. P., & Ross-Hellauer, T. (2020). The limitations to our understanding of peer review. Research Integrity and Peer Review, 5 (6). https://doi.org/10.1186/s41073-020-00092-1

Trowler, P., Saunders, M., & Bamber, V. (Eds.). (2014). Tribes and territories in the 21st century. Rethinking the significance of disciplines in higher education . Routledge.

Vedung, E. (2002). Utvärderingsmodeller [Evaluation models]. Socialvetenskaplig tidskrift, 9 (2–3), 118–143.

Warne, V. (2016). Rewarding reviewers—sense or sensibility? A Wiley study explained. Learned Publishing, 29 , 41–50. https://doi.org/10.1002/leap.1002

Westerheijden, D. F., Stensaker, B., & Joao Rosa, M. (Eds.). (2007). Quality assurance in higher education. Trends in regulation, translation and transformation . Springer.

Whitley, R. (1984). The intellectual and social organization of the sciences . Clarendon Press.

Whitley, R. (2011). Changing governance and authority relationships in the public sciences. Minerva, 49 , 359–385. https://doi.org/10.1007/s11024-011-9182-2

Ziman, J. M. (1968). Public knowledge . The University of Chicago Press.


The peer review process

The peer review process can be broadly summarized in 10 steps, although these can vary slightly between journals. Explore what's involved below.

Editor Feedback: “Reviewers should remember that they are representing the readers of the journal. Will the readers of this particular journal find this informative and useful?”


1. Submission of Paper

The corresponding or submitting author submits the paper to the journal. This is usually via an online system such as ScholarOne Manuscripts. Occasionally, journals may accept submissions by email.

2. Editorial Office Assessment

The Editorial Office checks that the paper adheres to the requirements described in the journal’s Author Guidelines. The quality of the paper is not assessed at this point.

3. Appraisal by the Editor-in-Chief (EIC)

The EIC assesses the paper, considering its scope, originality, and merits. The EIC may reject the paper at this stage.

4. EIC Assigns an Associate Editor (AE)

Some journals have Associate Editors (or equivalent) who handle the peer review; if so, one is assigned at this stage.

5. Invitation to Reviewers

The handling editor sends invitations to individuals they believe would be appropriate reviewers. As responses are received, further invitations are issued, if necessary, until the required number of reviewers is secured. Commonly this is two, but there is some variation between journals.

6. Response to Invitations

Potential reviewers weigh the invitation against their own expertise, conflicts of interest, and availability, then accept or decline. When declining, they may also suggest alternative reviewers.

7. Review is Conducted

The reviewer sets time aside to read the paper several times. The first read is used to form an initial impression of the work. If major problems are found at this stage, the reviewer may feel comfortable rejecting the paper without further work. Otherwise, they will read the paper several more times, taking notes to build a detailed point-by-point review. The review is then submitted to the journal, with the reviewer’s recommendation (e.g. to revise, accept or reject the paper).

8. Journal Evaluates the Reviews

The handling editor considers all the returned reviews before making a decision. If the reviews differ widely, the editor may invite an additional reviewer to obtain an extra opinion.

9. The Decision is Communicated

The editor sends a decision email to the author, including any relevant reviewer comments. Comments are anonymous if the journal follows a single-anonymous or double-anonymous peer review model. Journals following an open or transparent peer review model share the identities of the reviewers with the author(s).

10. Next Steps


If accepted, the paper is sent to production. If the article is rejected or sent back for either major or minor revision, the handling editor should include constructive comments from the reviewers to help the author improve the article. At this point, reviewers should also be sent an email or letter letting them know the outcome of their review. If the paper was sent back for revision, the reviewers should expect to receive a new version, unless they have opted out of further participation. However, where only minor changes were requested, this follow-up review might be done by the handling editor.
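Steps 5 to 8 above amount to a simple control loop: invite reviewers until enough accept, collect their recommendations, and seek an extra opinion when they disagree. The sketch below is illustrative only; the function names, the two-reviewer default, and the disagreement rule are assumptions made for the example, not any journal platform's actual API.

```python
from collections import Counter

REQUIRED_REVIEWERS = 2  # a common default; varies between journals

def recruit_reviewers(candidates, accepts_invitation):
    """Issue invitations until the required number of reviewers is
    secured (step 5). `accepts_invitation` stands in for a real
    reviewer's response to the invitation (step 6)."""
    secured = []
    for candidate in candidates:
        if len(secured) == REQUIRED_REVIEWERS:
            break
        if accepts_invitation(candidate):
            secured.append(candidate)
    return secured

def editor_decision(recommendations):
    """Aggregate reviewer recommendations (step 8). If the reviews
    differ, signal that an additional opinion is needed."""
    tally = Counter(recommendations)
    verdict, votes = tally.most_common(1)[0]
    if votes < len(recommendations):  # reviewers disagree
        return "invite additional reviewer"
    return verdict                    # unanimous: accept / revise / reject

reviewers = recruit_reviewers(
    ["alice", "bob", "carol"],
    accepts_invitation=lambda name: name != "alice",  # alice declines
)
print(reviewers)                              # ['bob', 'carol']
print(editor_decision(["revise", "revise"]))  # revise
print(editor_decision(["accept", "reject"]))  # invite additional reviewer
```

In practice the editor's judgment is not a vote count, of course; the point is only that recruitment continues until the quota is met and that split reviews trigger an extra reviewer rather than an automatic decision.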

Peer Review Process: Understanding The Pathway To Publication

Demystifying peer review process: Insights into the rigorous evaluation process shaping scholarly research and ensuring academic quality.


The process of peer review plays a vital role in the world of academic publishing, ensuring the quality and credibility of scholarly research. It is a critical evaluation system in which experts in the field assess the merit, validity, and originality of research manuscripts before publication. Through a comprehensive examination of the peer review process, this article explains its stages, importance, and best practices. By understanding the process, researchers and aspiring authors can navigate the evaluation effectively, enhance the integrity of their work, and contribute to the advancement of scientific knowledge.

What Is Peer Review?

Peer review is a critical evaluation process that academic work undergoes before being published in a journal. It serves as a filter, fact-checker, and redundancy-detector, ensuring that the published research is original, impactful, and adheres to the best practices of the field. The primary purposes of peer review are twofold. Firstly, it acts as a quality control mechanism, ensuring that only high-quality research is published, especially in reputable journals, by assessing the validity, significance, and originality of the study. Secondly, it aims to improve the quality of manuscripts deemed suitable for publication by providing authors with suggestions for improvement and identifying any errors that need correction. The process subjects the manuscript to the scrutiny of experts (peers) in the field, who review and provide feedback in one or more rounds of review and revision, depending on the journal’s policies and the topic of the work.


The Importance Of Peer Review In Science

Peer review in science is important for several reasons. It ensures quality, validates research findings, provides constructive feedback, fosters collaboration, and maintains public trust in scientific research. It provides valuable insights, suggestions, and alternative perspectives that can enhance the quality of the research. Authors benefit from this iterative process, as it allows them to address any weaknesses or gaps in their work and improve the clarity and coherence of their findings.


Additionally, peer review serves as a platform for constructive criticism and feedback, and it contributes to the advancement of scientific knowledge by fostering intellectual dialogue and collaboration. Through the critical assessment of research manuscripts, reviewers may identify potential areas for further investigation or propose alternative hypotheses, stimulating further research and discovery.

Types Of Peer Review Process

Peer review has various models, and the specific type employed can differ between journals, even within the same publisher. Before submitting a paper, it is crucial to become acquainted with the peer review policy of the selected journal; this ensures that the review process aligns with expectations. To explain the differences, we outline the most prevalent types of peer review below.

Single-Anonymous Peer Review

Single-anonymous peer review, also known as single-blind review, is a prevalent model employed by science and medicine journals. In this process, the reviewers are aware of the author’s identity, but the author remains unaware of the reviewers’ identities. This approach maintains a level of anonymity to ensure impartial evaluation and minimize biases. The reviewers assess the manuscript based on its merits, scientific rigor, and adherence to the journal’s guidelines. Single-anonymous peer review helps maintain objectivity and fairness in the review process, allowing for an unbiased assessment of the research work.


Double-Anonymous Peer Review

Double-anonymous peer review, also known as double-blind review, is a method employed in many humanities and social sciences journals. In this process, the identities of both the author and the reviewers are concealed. The reviewers are unaware of the author’s identity, and vice versa. This type of review aims to minimize bias and ensure a fair evaluation of the manuscript based solely on its content and merit. By maintaining anonymity, double-anonymous peer review promotes impartiality and enhances the credibility and objectivity of the peer review process.

Triple-Anonymized Peer Review

Triple-anonymized review, also known as triple-blind review, ensures anonymity for the reviewers, the editor, and the author. At the submission stage, articles are anonymized to minimize any potential bias toward the author(s), and neither the editor nor the reviewers know the author's identity. However, it is important to note that fully anonymizing articles and authors at this level can be challenging: as in double-anonymized review, the editor and/or reviewers may still deduce the author's identity from writing style, subject matter, citation patterns, or methodology.

Open Peer Review

Open peer review is a diverse and evolving model with various interpretations. It generally involves reviewers being aware of the author’s identity and, at some stage, their identities being disclosed to the author. However, there is no universally accepted definition for open peer review, with over 122 different definitions identified in a recent study. This approach introduces transparency to the peer review process by allowing authors and reviewers to engage in a more direct and open dialogue. The level of openness may vary, with some forms of open peer review including public reviewer comments and even post-publication commentary. Open peer review aims to foster collaboration, accountability, and constructive feedback within the scientific community.
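The anonymity models described above differ mainly in who can see whose identity. That can be captured in a small lookup table for comparison. This is a sketch only: the field names are invented for the example, journals vary in the details, and open review in particular has many variants.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReviewModel:
    reviewers_know_author: bool   # can reviewers see the author's identity?
    author_knows_reviewers: bool  # can the author see the reviewers' identities?
    editor_knows_author: bool     # can the handling editor see the author?

MODELS = {
    "single-anonymous": ReviewModel(True,  False, True),
    "double-anonymous": ReviewModel(False, False, True),
    "triple-anonymous": ReviewModel(False, False, False),
    "open":             ReviewModel(True,  True,  True),
}

# The defining feature of triple-anonymized review: even the editor
# does not know who the author is.
assert not MODELS["triple-anonymous"].editor_knows_author
```

Reading down the columns makes the trade-off explicit: each step from single- to triple-anonymous removes one channel through which identity (and therefore bias) can enter, while open review removes anonymity entirely in favor of transparency.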

Post-Publication Peer Review

Post-publication peer review is a distinct model where the review process takes place after the initial publication of the paper. It can occur in two ways: either the paper undergoes a traditional peer review before being published online, or it is published online promptly after basic checks without undergoing extensive pre-publication review. Once the paper is published, reviewers, including invited experts or even readers, have the opportunity to contribute their comments, assessments, or reviews. This form of peer review allows for ongoing evaluation and discussion of the research, providing a platform for additional insights, critiques, and discussions that can contribute to the refinement and further understanding of the published work. Post-publication peer review emphasizes the importance of continued dialogue and engagement within the scientific community to ensure the quality and validity of published research.

Registered Reports

Registered Reports is a unique peer review process that involves two distinct stages. The first stage occurs after the study design has been developed but before data collection or analysis has taken place. At this point, the manuscript undergoes peer review, providing valuable feedback on the research question and the experimental design. If the manuscript successfully passes this initial peer review, the journal grants an in-principle acceptance (IPA), indicating that the article will be published contingent upon the completion of the study according to the pre-registered methods and the submission of an evidence-based interpretation of the results. This approach ensures that the research is evaluated based on its scientific merit rather than the significance or outcome of the findings. Registered Reports aim to enhance the credibility and transparency of research by focusing on the quality of the research question and methodology rather than the outcome, reducing bias and providing a more robust foundation for scientific knowledge.

Peer Review Process

The peer review process is a critical component of academic publishing that ensures the quality, validity, and integrity of scholarly research. It involves a rigorous evaluation of research manuscripts by experts in the same field to determine their suitability for publication. While the specific steps may vary among journals, the general process follows several key stages.

Submission: Authors submit their research manuscript to a journal, adhering to the journal’s guidelines and formatting requirements.

Editorial Evaluation: The editor assesses the manuscript’s alignment with the journal’s scope, relevance, and overall quality. They may reject the manuscript at this stage if it does not meet the journal’s criteria.

Peer Review Assignment: If the manuscript passes the initial evaluation, the editor selects appropriate experts in the field to conduct the peer review. Reviewers are chosen based on their expertise, ensuring a thorough and unbiased evaluation.

Peer Review: The reviewers carefully examine the manuscript, assessing its methodology, validity of results, clarity of writing, and contribution to the field. They provide constructive feedback, identify strengths and weaknesses, and recommend revisions.

Decision: Based on the reviewers’ feedback, the editor makes a decision on the manuscript. The decision can be acceptance, acceptance with revisions, major revisions, or rejection. The author(s) are notified of the decision along with any specific feedback.

Revision: If the manuscript requires revisions, the author(s) make necessary changes based on the reviewers’ comments and suggestions. They address each point raised by the reviewers and provide a detailed response outlining the modifications made.

Final Decision: The editor re-evaluates the revised manuscript to ensure that all requested changes have been adequately addressed. The editor then makes the final decision regarding its acceptance.

Publication: Once accepted, the manuscript undergoes the final stages of copyediting, formatting, and proofreading before being published in the journal. It becomes accessible to the wider academic community, contributing to the body of knowledge in the respective field.
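The stages above form a loop rather than a straight line: a revision decision sends the manuscript back through review until it is finally accepted or rejected. A toy state machine makes the allowed order explicit (the state names are hypothetical, not taken from any real submission system):

```python
# Allowed transitions between manuscript states in the workflow above.
TRANSITIONS = {
    "submitted":            {"editorial_evaluation"},
    "editorial_evaluation": {"rejected", "in_review"},
    "in_review":            {"accepted", "rejected", "revision_requested"},
    "revision_requested":   {"in_review"},  # revised manuscript is re-reviewed
    "accepted":             {"published"},
    "published":            set(),          # terminal states
    "rejected":             set(),
}

def advance(state, next_state):
    """Move a manuscript to `next_state`, enforcing the workflow order."""
    if next_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state!r} to {next_state!r}")
    return next_state

# One pass through the loop: one round of revision, then acceptance.
state = "submitted"
for step in ["editorial_evaluation", "in_review",
             "revision_requested", "in_review", "accepted", "published"]:
    state = advance(state, step)
print(state)  # published
```

The useful property of writing it this way is that shortcuts become errors: a paper cannot jump from "submitted" straight to "published" without passing editorial evaluation and review.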

Potential Problems Of Peer Review

While peer review is an essential component of the scholarly publishing process, it is not without its potential problems. Some of the key challenges and limitations of peer review include:

Bias and Subjectivity: Reviewers may hold personal biases that influence their assessment of a manuscript, potentially leading to unfair evaluations or inconsistent judgments. Subjectivity in the interpretation of research findings and methodology can also affect the review process.

Delays in Publication: Peer review can be a time-consuming process, with reviewers taking varying lengths of time to provide feedback. This can result in delays in the publication of research, potentially hindering the timely dissemination of important findings.

Lack of Standardization: Reviewers’ expertise, qualifications, and reviewing criteria may vary, leading to inconsistencies in the evaluation process. The lack of standardized guidelines for reviewing can result in discrepancies in the quality and rigor of the peer review process across different journals and disciplines.

Inefficiency and Burden: Reviewers are typically unpaid volunteers who dedicate their time and expertise to reviewing manuscripts. The increasing volume of submissions and shortage of qualified reviewers can place a significant burden on the peer review system, potentially leading to delays and compromised quality.

Limited Scope for Detecting Errors: While peer review aims to identify and rectify errors or methodological flaws in manuscripts, it is not foolproof. Reviewers may not always have access to the raw data or the resources to conduct a thorough replication of the study, making it challenging to detect certain types of errors or misconduct.

Publication Bias: Peer review can inadvertently contribute to publication bias, as journals may have a preference for publishing positive or statistically significant results, potentially neglecting studies with null or negative findings. This can create an imbalanced representation of research in the literature.



Peer review isn’t perfect – I know because I teach others how to do it and I’ve seen firsthand how it comes up short


Director of the Center for Teaching and Learning, Quinnipiac University

Disclosure statement

JT Torres does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


When I teach research methods, a major focus is peer review . As a process, peer review evaluates academic papers for their quality, integrity and impact on a field, largely shaping what scientists accept as “knowledge.” By instinct, any academic follows up a new idea with the question, “Was that peer reviewed?”

Although I believe in the importance of peer review – and I help do peer reviews for several academic journals – I know how vulnerable the process can be. Not only have academics questioned peer review reliability for decades, but the retraction of more than 10,000 research papers in 2023 set a new record.

I had my first encounter with the flaws in the peer review process in 2015, during my first year as a Ph.D. student in educational psychology at a large land-grant university in the Pacific Northwest.

My adviser published some of the most widely cited studies in educational research. He served on several editorial boards. Some of the most recognized journals in learning science solicited his review of new studies. One day, I knocked on his office door. He answered without getting up from his chair, a printed manuscript splayed open on his lap, and waved me in.

“Good timing,” he said. “Do you have peer review experience?”

I had served on the editorial staff for literary journals and reviewed poetry and fiction submissions, but I doubted much of that transferred to scientific peer review.

“Fantastic.” He smiled in relief. “This will be real-world learning.” He handed me the manuscript from his lap and told me to have my written review back to him in a week.

I was too embarrassed to ask how one actually does peer review, so I offered an impromptu plan based on my prior experience: “I’ll make editing comments in the margins and then write a summary about the overall quality?”

His smile faded, either because of disappointment or distraction. He began responding to an email.

“Make sure the methods are sound. The results make sense. Don’t worry about the editing.”

Ultimately, I fumbled my way through, saving my adviser time on one less review he had to conduct. Afterward, I did receive good feedback and eventually became a confident peer reviewer. But at the time, I certainly was not a “peer.” I was too new in my field to evaluate methods and results, and I had not yet been exposed to enough studies to identify a surprising observation or to recognize the quality I was supposed to control. Manipulated data or subpar methods could easily have gone undetected.

Effects of bias

Knowledge is not self-evident. A survey can be designed with a problematic amount of bias, even if unintentional.

Observing a phenomenon in one context, such as an intervention helping white middle-class children learn to read, may not necessarily yield insights for how best to teach reading to children in other demographics. Debates over “the science of reading” in general have lasted decades, with researchers arguing over constantly changing “recommendations,” such as whether to teach phonics or the use of context cues.

A correlation – say, a student who bullies other students also plays violent video games – may not be causation. We do not know whether the student became a bully because of playing violent video games. Only experts within a field would be able to notice such differences, and even then, experts do not always agree on what they notice.


As individuals, we can very often be limited by our own experiences. Let’s say in my life I only see white swans. I might form the knowledge that only white swans exist. Maybe I write a manuscript about my lifetime of observations, concluding that all swans are white. I submit that manuscript to a journal, and a “peer,” someone who also has observed a lot of swans, says, “Wait a minute, I’ve seen black swans.” That peer would communicate back to me their observations so that I can refine my knowledge.

The peer plays a pivotal role in evaluating observations, with the overall goal of advancing knowledge. For example, if the above scenario were reversed, and peer reviewers who all believed that all swans were white came across the first study observing a black swan, the study would receive a lot of attention as researchers scrambled to replicate that observation. So why was a first-year graduate student getting to stand in for an expert? Why would my review count the same as a veteran’s review? One answer: The process relies almost entirely on unpaid labor.

Despite the fact that peers are professionals, peer review is not a profession.

As a result, the same overworked scholars often receive the bulk of the peer review requests. Besides the labor inequity, a small pool of experts can narrow what counts as publishable, or even what counts as knowledge, directly threatening the diversity of perspectives and scholars.

Without a large enough reviewer pool, the process can easily fall victim to politics: in a small community whose members recognize one another’s work, conflicts of interest are hard to avoid. Many of the issues with peer review could be addressed by professionalizing the field, either through official recognition or compensation.

Value despite challenges

Despite these challenges, I still tell my students that peer review offers the best method for evaluating studies and advancing knowledge. Consider the statistical phenomenon suggesting that groups of people are more likely to arrive at “right answers” than individuals.

In his book “The Wisdom of Crowds,” author James Surowiecki tells the story of a county fair in 1906, where fairgoers guessed the weight of an ox. Sir Francis Galton averaged the 787 guesses and arrived at 1,197 pounds. The ox weighed 1,198 pounds.
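
The arithmetic behind this story is easy to check in simulation. The sketch below is purely illustrative (the normal error distribution and its 150-pound spread are assumptions for illustration, not Galton's data): hundreds of noisy but unbiased guesses, averaged together, land far closer to the true weight than a typical individual guess does.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible
TRUE_WEIGHT = 1198  # pounds, the ox's weight in Surowiecki's account

# 787 fairgoers, each guessing with large but unbiased individual error
# (a 150 lb standard deviation is an illustrative assumption).
guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(787)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)
```

Because unbiased errors tend to cancel, the error of the mean shrinks roughly with the square root of the number of guesses, so the crowd estimate lands within a few pounds of the truth even though individual guesses are typically off by more than a hundred.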

When it comes to science and the reproduction of ideas, the wisdom of the many can account for individual outliers. Fortunately, and ironically, this is how science discredited Galton’s take on eugenics, which has overshadowed his contributions to science.

As a process, peer review theoretically works. The question is whether the peer will get the support needed to effectively conduct the review.



Saudi J Anaesth. 2019 Apr; 13(Suppl 1)

The peer review process

Dmitry Tumin

1 Department of Anesthesiology and Pain Medicine, Nationwide Children's Hospital, Columbus, Ohio, USA

2 Department of Pediatrics, Nationwide Children's Hospital and The Ohio State University, Columbus, Ohio, USA

Joseph Drew Tobias

3 Department of Anesthesiology and Pain Medicine, The Ohio State University College of Medicine, Columbus, Ohio, USA

The peer review process provides a foundation for the credibility of scientific findings in medicine. The following article discusses the history of peer review in scientific and medical journals, the process for the selection of peer reviewers, and how journal editors arrive at a decision on submitted manuscripts. To aid authors who are invited to revise their manuscripts for further consideration, we outline steps for considering reviewer comments and provide suggestions for organizing the author's response to reviewers. We also examine ethical issues in peer review and provide recommendations for authors interested in becoming peer reviewers themselves.

Introduction

The review of research articles by peer experts prior to their publication is considered a mainstay of publishing in the medical literature.[ 1 , 2 ] This peer review process serves at least two purposes. For journal editors, peer review is an important tool for evaluating manuscripts submitted for publication. Reviewers assess the novelty and importance of the study, the validity of the methods, including the statistical analysis, the quality of the writing, the presentation of the data, and the connections drawn between the study findings and the existing literature. For authors, peer review is an important source of feedback on scientific writing and study design, and may aid in professionalization of junior researchers still learning the conventions of their field. Nevertheless, peer review can be frustrating, intimidating, or mysterious. This can deter authors from publishing their work or lead them to seek publication in less credible venues that use less rigorous peer review or do not subject manuscripts to peer review at all. In this article, we trace the origins of the scientific peer review system, explain its contemporary workings, and present authors with a brief guide on shepherding their manuscripts through peer review in medical journals.

The History of Scientific Peer Review

The introduction of peer review has been popularly attributed to the Royal Society of Edinburgh, which compiled a collection of articles that had undergone peer review in 1731.[ 2 , 3 ] However, this initial process did not meet the criteria of peer review in its modern form, and well into the twentieth century, external and blinded peer review was still far from a requisite for scientific publication. Albert Einstein protested to the editor of an American journal in 1936 that his article was sent out for review, whereas this was not the practice of the German journals to which he had previously contributed.[ 4 ] Nevertheless, by the 1960s, the scientific value of peer review was becoming widely accepted, and in recent years, publication in a peer-reviewed journal has become a standard metric of scientific productivity (for the researchers) and validity (for the study).[ 5 , 6 ] In fact, publication in peer-reviewed quality journals is used to evaluate the quality of research during the academic promotion process. Today, peer review continues to evolve with the introduction of open review (reviewer comments posted publicly with the final article), postpublication review (reviews solicited from readers in an open forum after article publication), and journal review networks (where reviews are transferred from one journal to another when an article is rejected).[ 7 , 8 , 9 ] The constant at the center of this change remains the individual reviewer, who is asked to contribute their expertise to evaluating a manuscript that may or may not ever be shared with a wider scientific audience.

Reviewer Selection

The opacity of the peer review process is due, in part, to the anonymity of the reviewers and authors' lack of familiarity with how reviewers are selected. Typically, reviewers are selected by an editor of the journal; depending on the size and organization of the journal, this may be the Editor-in-Chief, an Associate Editor, a Managing Editor, or an Editorial Assistant. Some journals permit authors to suggest their own reviewers, although the extent to which editors use these suggestions varies. Authors may also be asked, or simply allowed, to oppose certain reviewers if they feel those scholars cannot grant their manuscript an unbiased hearing. Again, it is at the editors' discretion whether these requests are heeded. It has been suggested that these “opposed” reviewers may even be deliberately selected to ensure critical evaluation of a controversial manuscript. Alternatively, for very specific and narrow subject areas, there may be a limited number of appropriately qualified reviewers.

In general, reviewers may be of any academic rank and from a wide range of medical disciplines. A reviewer may be selected for their expertise in the topic of the study, but also for their general methodological expertise, or because they have been a reliable reviewer for the journal in the past. Qualified reviewers may not be invited if they cannot be reached by the editorial team, if they tend to submit late or uninformative reviews, or if they are too closely connected with the manuscript authors (e.g., colleagues at the same institution) and therefore may not provide an unbiased review. The reviewers initially selected by the editors may decline the invitation to review, requiring the editors to seek other reviewers. Unfortunately, waiting for responses to these invitations (apart from the time taken to perform the review itself) is one of the more common causes of delay between submission and the journal's response. An invited reviewer may pass the review on to a junior faculty member to allow them to participate in and experience the academic peer review process. This may be done with the editor's permission and noted when the review is submitted, identifying that another person has participated in the process.

The initially received reviews may conflict with one another, leading the editors to cast a wider net for experts who will agree to review a submission. Because many factors may delay the completion of the review process, editors may proactively invite more reviews than they require and decide on the manuscript after a minimum number of reviews have been completed. Email and the internet have greatly facilitated communication during the review process, which used to be accomplished via telephone and postal mail. In most instances, an initial email is sent to the reviewer inquiring about their availability and interest; once they agree to review, a second email is sent with a link to the journal site, the manuscript, and the review forms.

How Reviewers Assess a Manuscript

From the reviewer's perspective, participation in the review process begins with an invitation from the journal editors to consider reviewing a submitted manuscript. If they accept, the reviewers will be able to access the submitted manuscript files, and sometimes the authors' cover letter, and other article metadata (e.g., the authors' list of preferred reviewers, figures, tables, etc.). Some journals ask reviewers to complete a structured questionnaire regarding the manuscript, rating its attributes on a numeric scale, or answering specific questions about each article section. All journals permit the submission of free-response evaluations. It is these evaluations that typically carry the greatest weight in the editors' final decision. The free-text reviewer reports also give the authors specific instructions about revising their manuscript and responding to the concerns that are raised. Reviewers may also submit confidential free-response comments to the editors (not seen by the authors) and indicate to the editors if they would be willing to review a revised version of the manuscript. In the end, the reviewer is asked to indicate their final recommendation to accept the manuscript without changes, accept after minor revisions, reconsider after major revisions, or reject. Some journals may offer additional variations on these recommendations, such as “reject but allow resubmission,” discussed below.

Regardless of the requested format for reviews, reviewers will typically evaluate several key aspects of submitted manuscripts. For original research studies, these will include the importance of the research question, the rigor of the methods, the completeness, accuracy, and novelty of the study and its results, and the validity of conclusions drawn from the data. The presentation of the manuscript, including the writing style, structure, grammar, and syntax also determine how the manuscript is received by the reviewers. Although the study design and results may be valid, these findings may be lost if the presentation is not precise or if there are grammar and spelling errors.

Reviewers also consider whether the study adds to existing knowledge in the field, whether it was ethically conducted, and whether it may be subject to any conflicts of interest. The editor and the reviewers also evaluate the study content and decide whether it is valuable and relevant to the readers of the journal. Although the study may be valid and well performed, it may be decided that the subject matter fits more appropriately in a journal of a different specialty. Along those lines, there may be overlap in the interests and fit of journals in different specialties, so that common topics in anesthesiology research may be of interest to journals from surgical specialties, pain medicine, or healthcare quality and patient safety, depending on the article content.

Some reviewers may submit their comments in paragraph form, building a narrative of the study's strengths and weaknesses section by section, whereas others may submit a short summary of the study followed by a list of criticisms or suggested corrections. Less commonly, reviewers may annotate the original manuscript with specific changes and questions, or use the track-changes function of the word-processing software. Although the reviewers may recommend a specific editorial decision (e.g., recommend accepting an article with revisions, recommend rejection) in their comments to authors, this is generally discouraged by most journals and does not override the final decision reached by the editorial team. The ultimate decision generally resides with the section editor or the editor-in-chief, once they have seen and evaluated the comments of the reviewers. Depending on the format of the journal, the manuscript may be reviewed by one to five individuals. When there are specific statistical questions or advanced methods used, a separate review of the analytic methods may be required. For high-profile journals with high Impact Factors, a recommendation to accept may be required from all reviewers to receive a favorable editorial decision. At times, if there is a split decision, an additional reviewer or member of the editorial board may be asked to evaluate the manuscript to break the tie.

Almost all journals practice blinded review, in which the reviewers' identities are not revealed to the authors. Double-blind review, in which the authors' identities are also concealed from the reviewers, was previously uncommon in medical journals but is increasingly used. The editors communicate their decision and the reviewers' evaluations to the authors in a decision letter (e-mail), informing them of the manuscript's acceptance or rejection.

Reviews and the Editorial Decision

The comments submitted by external reviewers are collected by the editorial team and considered when determining the overall decision on the submitted article. The reviews may be read directly by the Editor-in-Chief, or by one or more Associate or Section Editors. The first editor reading the reviews might provide a recommendation that is then considered by the more senior editor; or the editors may convene to discuss the reviews and reach a decision as a group. In some journals, editors may write their own summary of the reviewers' criticism (sometimes adding their own) or may point out the critiques they consider most important to their decision. In other journals, editors weigh the number of positive and negative reviews or may reject an article unless all reviewers endorse its acceptance or revision.

Based on the external reviews and their own reading of the manuscript, the editors will reach one of several possible decisions regarding the manuscript. Unconditional acceptance of an article on its first submission to a journal (without any requested revisions) is very rare. Sometimes, articles will be conditionally accepted or accepted with minor revisions, meaning that the editors wish the authors to make changes to their manuscript based on the reviewers' comments but will not send the revised manuscript for a further round of external review. Rather, if the comments are generally minor, the editor will ensure that the comments are appropriately addressed in the authors' revision. The more common decision is “major revision,” where editors are willing to consider a revised version of the article but will subject it to further external review, by the original reviewers, a new set of reviewers, or a combination of both. Some journals also use a “reject and resubmit” decision, indicating lower enthusiasm for a resubmitted version of the article but still permitting resubmission, perhaps in an alternative format (e.g., brief report or letter to the editor, vs. full article) or with extensive revisions. For this latter decision, a full review will be accomplished as the revised manuscript is handled in much the same way as a new submission.

If the editors feel an article is a poor fit for their journal or falls too far below its standards, they may reject the submission outright without sending the manuscript for external review. This “desk reject” should not be confused with articles being “unsubmitted” by a managing editor or editorial assistant. The latter can happen due to style or formatting issues with the initial submission, which the author is asked to correct before the manuscript proceeds to review. Having a manuscript “unsubmitted” does not preclude resubmission of a corrected manuscript and is unlikely to affect the reviewers' assessment or the eventual editorial decision.

Revising the Manuscript

When the initial editorial decision is positive, but not an unconditional acceptance, authors may elect to revise their manuscript and resubmit it to the same journal with a point-by-point response to the reviewers (discussed in the next section). The primary aim of the authors for this revision should be to address the criticisms and concerns raised during the initial review. Yet, this may be easier said than done when faced with conflicting recommendations, hostile reviews, or simply a large number of suggestions to be accommodated within a strict manuscript word limit. To streamline the process of responding to reviews, we offer the following roadmap as a suggestion.

Address the “fatal flaws”

Reviewers or editors may point out critical weaknesses of the study that prevent it from drawing the intended conclusions or even any conclusions at all. For example, an inaccuracy in the data, a bias in patient recruitment, a limitation of sample size, or a lack of follow-up may be so severe that the manuscript cannot provide credible evidence on the treatment or exposure it is meant to study. In particular, a lack of appropriate ethical approval would disqualify a study from publication, no matter how methodologically rigorous it may have been. In systematic reviews and analyses of existing databases, prior publication of a near-identical paper by a different group may also fundamentally preclude a paper from acceptance. On the rare occasions when the paper's central conclusions are found invalid and cannot be corrected through new analysis or a different framing of the authors' argument, reconceiving the study may be a better approach than attempting to revise and resubmit. At other times, some of these issues may be addressed, and the editor and reviewers satisfied, by adding text to the discussion outlining the limitations of the current study. This may allow authors to acknowledge the concerns expressed by the reviewers without redoing their study from the beginning.

Amend the data analysis

More commonly, reviewers ask for changes to the data analysis without implying that these requests invalidate the entire study. We recommend making these changes before any further edits to the manuscript, because the intent is often to see if the paper's original findings are robust. In the best case scenario, any additional analysis will only confirm and strengthen the central conclusions. However, additional analyses sometimes reveal contradicting findings, which the authors should frankly address in the revised manuscript, by pointing out the contradiction and speculating about why different analyses of their data may have reached different conclusions. Especially when the study design was prospectively registered, the authors should explain in the manuscript which analyses were planned a priori and which were added post hoc . In these studies, authors should also avoid changing the pre-specified primary outcome, which would have been used for any a priori power or sample size calculation.

Decline infeasible or inappropriate suggestions

Some requests may not be feasible, for example, when requested data were not collected for a prospective study, or when collecting the data would mean starting chart review from scratch for a retrospective study. At other times, it may not be feasible to comply with the reviewers' requests if they disagree with the study type, the study cohort, or make other requests that would require a new or different study to address. Reviewers could also request changes to the statistical analysis that are not appropriate for the data at hand or for the study aims. In these cases, authors have the choice of rebutting the reviewers' comments while making no change in their manuscript, but an argumentative revision that leans too heavily on this option may be received poorly on re-review, resulting in rejection of the manuscript. In our experience, authors may be successful in responding to the reviews while rebutting one or two of the reviewers' suggestions, but a legitimate argument must be made for the rebuttal, and the reasons clearly stated.

Explain the study rationale and methods

Having completed the revision of the data analysis, authors should check that their methods section includes a complete and correct explanation of how the data were collected and explains how the analysis was performed. It may be appropriate to end the introduction by stating the hypothesis of the study. In the methods section, reviewers will often ask about the ethical committee approval of the study, the site(s) where the study was conducted, patient inclusion and exclusion criteria, the consent process, the procedures involved and the protocol for anesthetic management, and the specific data points that were collected during the study. For prospective clinical studies, authors should also indicate whether the study was submitted to a trial registry (such as ClinicalTrials.gov), and whether this was done before or after study enrollment had started. Clearly stated ethical approval and trial registration information must be provided for all submissions. Explanations may be sought if the editors and reviewers believe that the study did not meet standards for ethical approval, patient consent, or trial registration. Other requests related to methods may ask to clarify how the primary and secondary aims outlined in the introduction were addressed in the analysis, and how the sample size was determined, whether based on a statistical power analysis or logistical considerations (e.g., how many patients could be recruited with available resources). When a statistical power analysis is performed, reviewers may ask for more detail about the assumptions of this analysis and any supporting data from pilot studies or previous publications.

Check the conclusions and limitations

Having revised the introduction, methods, and results, the authors should revise the discussion to make any changes to the conclusions required by new or different study findings. We recommend that authors start the discussion with a review of what the study found, and then discuss how the study findings relate to similar work that has been previously published. An excessively long discussion does not ensure that a study will be published and, in fact, may detract from the quality of the manuscript. For a scientific study (retrospective or prospective), the discussion should not read like a comprehensive review of the literature. Typically, the discussion of study limitations will be expanded in the revised manuscript to include additional study weaknesses pointed out by reviewers, acknowledge suggested changes that could not be made to the study methods, and mention other suggestions for future studies that would build on the current results or answer questions left unanswered by the current study. Reviewers may ask that the conclusions be more specific in addressing the primary aim or hypothesis of the study (stated in the introduction), but they may also encourage authors to go further afield in their discussion, connecting their findings to results from previous publications and describing how their findings support or challenge current clinical practice.

Writing the Response to Reviewers

As seen above, manuscript revision can require more writing and (re)analysis than even the initial submission. Therefore, the aim of the revision memo (response to reviewers) is to summarize for the editors and reviewers how each change addresses the concerns raised on the initial review. This document is handled differently by different journals; some require it to be uploaded as a separate file, others require that the revision memo be entered in a text-box during the online submission process, and still others require that the response to review be included in the cover letter for the resubmitted manuscript. Therefore, authors should pay close attention to the decision letter and its instructions as to how they should submit their response to reviewers and how they should refer to manuscript edits in the revision memo (e.g., by page number, by line number, or copying sections of the revised manuscript into the memo).

Typically, the reviewers' comments should be copied and entered in the response memo so that each comment is numbered and the response clearly listed after it, in a different font style or color. It is equally important to determine how the journal would like the changes tracked in the revised manuscript. Some journals will ask that the authors use the track-changes mode in the word processing software, whereas others may ask for changes to be highlighted or be added in a different color font. Deleted manuscript text may need to be shown in strike-through font or simply removed from the revised submission, depending on the journal. Journals may ask for two copies of the revised manuscript: one showing the changes and one in a clean format that is ready for copyediting.

A typical revision memo will include a short paragraph acknowledging the editorial decision and reviewer comments and briefly summarizing key changes made to the manuscript. This would be followed by a numbered list of comments from the editors and reviewers (as received in the decision letter), with the authors' response to each one. Although not all reviewers and editors submit their comments as a numbered list, the authors may want to break up long sections or paragraphs of the reviews into shorter, numbered comments, to separately describe how each one was addressed in the revision. The authors' responses need not be excessively ingratiating but should respect the reviewers' effort in evaluating the manuscript, and concisely explain what was changed or why a change was not or could not be made. Different reviewers may have conflicting recommendations for revision. This may be as simple as one asking for a more concise definition of a method while another asks for a more detailed explanation. With conflicting reviews, the authors may consider taking the recommendation that is endorsed in the editor's comments (if this is provided), the one that is best aligned with the study aims, or the one that best matches the methods and writing style used in other contemporary papers in the field, explaining this rationale when responding to the reviewers.

What to Do with a Rejected Manuscript

Based on reviewer reports or their own judgment, editors may reject a manuscript with no option to resubmit. It is essential to read the decision letter closely as some journals will state that they cannot publish a manuscript in its current form but offer to consider a new submission of a substantially revised manuscript (“reject and resubmit,” as mentioned above, in contrast to “revise and resubmit”). When the manuscript is rejected with no option of resubmission, authors may appeal this decision, but this option is rarely exercised and may not change the editors' decision. Appeals are also generally only successful when made by experienced and recognized scholars in the field.

Unless the study is discovered to be so flawed as to preclude publication in any venue, authors will usually consider submitting it to another journal after the initial rejection. Taking a single rejection and tabling a manuscript without further submission is rarely a good option. It is possible that multiple rejections will precede an eventual acceptance for valuable work. Given the amount of time taken to devise, implement, and write up a study, we encourage authors to consider resubmission to a new journal, if the study is well conceived and addresses an important problem or question. In this case, the criticisms in the initial review are not binding, but still worth the authors' consideration. In particular, authors should address any major flaws in the study's approach and conclusions (distinct from reviewers' preferences for additional data analysis unrelated to the primary aims), and correct any factual, spelling, or grammatical errors prior to resubmission. Adding recommended secondary analyses could sometimes strengthen the next submission, although just as often, the reviewers at the next journal may find these additional analyses superfluous, and will have their own set of analyses to recommend.

Becoming a Reviewer

Like any complex skill, navigating the peer review process is best learned through repetition. Becoming a peer reviewer for scientific journals is an important way to hone this skill, as well as providing a service to the scientific community, and adding to one's academic credentials as an expert whose opinion is sought by journal editors. The most common entry point to becoming a reviewer is through scientific publication; the authors of published articles can be contacted by another journal to provide a review on a related submission. One's expertise in a specific area may be noted by the editor who performs a topic search of key words when looking for reviewers. Alternatively, editors and associate editors may call on colleagues who they know are recognized experts in a particular field. Academic mentorship is also important, as mentors may ask junior colleagues and faculty to help them with reviewing article submissions, or may pass their name along to journal editors to be considered for inclusion in the reviewer pool. Once one has successfully reviewed for a journal, they are frequently called upon to review other submissions, especially if their review was returned in a timely manner. Many journals will give a specific timeframe within which the review is to be completed, while others will not. In most cases, a response within 2–4 weeks is considered acceptable. Some journals have now started editorial fellowships that aim to provide an immersive experience in the peer review and publishing process for early-career scientists. Lastly, researchers wishing to become peer reviewers may contact journal editors themselves, or register reviewer accounts in journal online submission systems. Although the general structure of peer review reports is described above, more specific guidance on performing peer review is available in other publications.[ 10 , 11 ]

Peer Review Ethics

Authors, reviewers, and editors have a shared responsibility for the ethical conduct of peer review. This is necessary to sustain the professional and public trust in peer review, as a system of evaluation that is accurate, constructive, and free from bias. Recently reported ethics violations have included authors misrepresenting the identity of suggested reviewers, reviewers plagiarizing a manuscript sent to them for review or recommending its rejection and then conducting a similar study, and editors inappropriately pressuring authors to cite articles published in their journal.[ 12 , 13 , 14 ] Some journals and publishers have also been criticized for circumventing the peer review process for submitted manuscripts.[ 15 ] For reviewers, it is most important that they be unbiased and not have any hidden agendas or personal vendettas to settle. For authors, ethical conduct in peer review includes disclosing the study's ethics committee approval, trial registration, and consent process; disclosing any related or overlapping prior publications; disclosing any actual or potential conflicts of interest; and submitting the manuscript only to one journal. These requirements are typically stated in the journal's guidelines for authors, and may need to be acknowledged in the cover letter accompanying the manuscript. In responding to reviews, authors should also carefully consider whether their revisions still fall within the scope of the ethics committee approval for the study and the informed consent that was obtained, and whether the revised manuscript remains faithful to the aims and study design of any pre-registered trial protocol.

Scientific research is not complete until it is published, but not all research can or should be published; it falls to peer review to determine the difference. By engaging with the process of peer review, authors can improve the quality of their work and gain confidence that it is published in a reputable venue. Furthermore, the fact that a study has been peer reviewed will increase its stature and potential for recognition, although the peer review process does not guarantee this. Responding to reviews can be challenging, but we hope that the suggestions sketched out in this article will help authors plan their approach to manuscript revision and resubmission. We also encourage authors to participate in this process as reviewers, so that the labor of peer review is properly shared among the community of scientists.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


What is Academic Research?

After completing this module you will be able to:

  • recognize why information exists, who creates it, and how information of all kinds can be valuable, even when it’s biased;
  • understand what scholarly research is, how to find it, how the process of peer review works, and how it gets published;
  • identify types of databases and understand why databases are critical for academic research.

How to use this module

This module is organized into a number of pages. To navigate, you can either:

  • use the “Previous” and “Next” buttons at the bottom of each page (suggested)


  • follow the links in the “Contents” menu on the left side of the page


Introduction to Academic Research Copyright © by matt0341; ampala; and heitz106. All Rights Reserved.


National University Library

Scholarly and Peer-Reviewed Journals

Library FAQs

How do I find scholarly, peer reviewed journal articles?


Scholarly journals are well respected for the information and research they provide on a particular subject. They are written by experts in a particular field or discipline, and their purpose is to advance the ongoing body of work within that discipline. Their articles might present original research data and findings, or take a position on a key question within the field. They can be difficult to read, because their intended audience is other experts and academics, but they are the gold standard for authoritative information.

Scholarly journals are oftentimes peer reviewed or refereed. A peer-reviewed or refereed article has gone through a process in which other scholars in the author’s field or discipline critically assess a draft of the article. The actual evaluations are similar to editing notes, in which the author receives detailed and constructive feedback from the peer experts; however, these reviews are not made available publicly. For an example peer review of a fictitious article, see the Sample Peer-Review of a Fictitious Manuscript link below.

Please keep in mind that not all scholarly journals go through the peer-review process. However, it is safe to assume that a peer-reviewed journal is also scholarly. In short, “scholarly” means the article was written by an expert for an audience of other experts, researchers or students. “Peer-reviewed” takes it one step further and means the article was reviewed and critiqued by the author’s peers who are experts in the same subject area. The vast majority of scholarly articles are peer reviewed.

However, because there are many different types of peer-review, be sure to evaluate the resource itself to determine if it is suitable for your research needs. For example, law reviews may indicate that they are peer-reviewed, but their "peers" are other students. Please see the Law Reviews FAQ below for more explanation.

If you need help determining whether a scholarly journal is peer reviewed or refereed we recommend using the Ulrichsweb database. Ulrichsweb is the authoritative source of bibliographic and publisher information on more than 300,000 periodicals of all types, including academic and scholarly journals. Find out more about how to use and access Ulrichsweb through NU Library by watching the Ulrichsweb Quick Tutorial Video (link below).

For additional instruction on scholarly vs. peer reviewed journals, please see the Library's Scholarly vs. Peer-Reviewed Journals Quick Tutorial Video (link below).

For information about how to limit your database searches to scholarly/peer-reviewed journals, see the following resources:

  • Sample Peer-Review of a Fictitious Manuscript
  • Law Reviews FAQ
  • Ulrichsweb Quick Tutorial Video
  • Scholarly vs. Peer-Reviewed Journals Quick Tutorial Video

Peer Review Process

For scholarly information on the peer review process, see the following resources:

  • Chenail, R. (2008). Peer review. In L. M. Given (Ed.), The SAGE encyclopedia of qualitative research methods (pp. 605-606). Thousand Oaks, CA: SAGE Publications Ltd. doi: 10.4135/9781412963909.n313
  • Constantine, N. (2008). Peer review process. In S. Boslaugh (Ed.), Encyclopedia of epidemiology (Vol. 2, pp. 795-796). Thousand Oaks, CA: SAGE Publications Ltd. doi: 10.4135/9781412953948.n343
  • Mark, M. & Chua, P. (2005). Peer review. In S. Mathison (Ed.), Encyclopedia of evaluation (p. 299). Thousand Oaks, CA: SAGE Publications Ltd. doi: 10.4135/9781412950558.n404

  • Last Updated: Apr 14, 2024 12:14 PM
  • URL: https://resources.nu.edu/researchprocess


Appalachian State University


LIT/SPE 5040 Teacher as Researcher: Research Review


Where do you find information?

Most people find information and do research online. But there are many layers to the web that go beyond what you typically see via Google. Explore the various layers by clicking through the Search Sphere.

Library Research Tools

How does the university library work?

Generally, the university library purchases or subscribes to all sorts of specialized information to support the research that happens in the various majors here at App State - research being done by both students and professors.

This information takes many forms: books, ebooks, streaming films, peer-reviewed journals (mostly online), and much more.

Because it supports the majors, the information is mostly organized by discipline or subject in our physical and online spaces.

In other words, all of our physical books about biology 'live' together on the shelves in Belk Library. And all of our online content about biology can be accessed through our databases for biology, available via Belk Library's website.

What's a library database?

Databases are just searchable online collections of information that the library subscribes to. Because these are subscription-based, or "behind the paywall," this is information that only App State students and professors have access to. It's also helpful to know that some larger databases contain research from multiple disciplines.

What is APPsearch?

APPsearch is Belk Library's portal that allows students to search most of our research databases at one time. You can find it on our homepage . Think of it as "Google for the library" - it's a great place to start and is intended to save students time and effort. It allows you to quickly find and access books, ebooks, journal articles, and more. 

Click here to watch a video tutorial on how to use APPsearch.

Anatomy of an APPsearch Results Page

This is an interactive sample search of an APPsearch results page. In this case, the search used the keywords college student and anxiety. Click the "I" icons to learn more about the different parts of the page.

Scholarly/Peer Reviewed Articles

What are "scholarly" or "peer-reviewed" articles?

  • Written by scholars or experts on the topic
  • Content has been critically evaluated by other experts
  • Contain citations (footnotes and/or bibliography) documenting sources

How do you find them?

  • Within the search, choose the limit for Scholarly/Peer-reviewed; or
  • When looking at a citation within a database, click the journal title until you reach the Publication Details, then look for the "Peer reviewed" field; or
  • Look up the journal title in the Serials Directory and check the "Refereed" field (Refereed = Peer Reviewed).


  • Last Updated: Feb 7, 2024 9:46 AM
  • URL: https://guides.library.appstate.edu/re5040


  • Short report
  • Open access
  • Published: 12 April 2024

A modified action framework to develop and evaluate academic-policy engagement interventions

  • Petra Mäkelä   ORCID: orcid.org/0000-0002-0938-1175 1 ,
  • Annette Boaz   ORCID: orcid.org/0000-0003-0557-1294 2 &
  • Kathryn Oliver   ORCID: orcid.org/0000-0002-4326-5258 1  

Implementation Science, volume 19, article number 31 (2024)


There has been a proliferation of frameworks with a common goal of bridging the gap between evidence, policy, and practice, but few aim to specifically guide evaluations of academic-policy engagement. We present the modification of an action framework for the purpose of selecting, developing and evaluating interventions for academic-policy engagement.

We build on the conceptual work of an existing framework known as SPIRIT (Supporting Policy In health with Research: an Intervention Trial), developed for the evaluation of strategies intended to increase the use of research in health policy. Our aim was to modify SPIRIT (i) to be applicable beyond health policy contexts, for example encompassing social, environmental, and economic policy impacts, and (ii) to address broader dynamics of academic-policy engagement. We used an iterative approach through literature reviews and consultation with multiple stakeholders from Higher Education Institutions (HEIs) and policy professionals working at different levels of government and across geographical contexts in England, alongside our evaluation activities in the Capabilities in Academic-Policy Engagement (CAPE) programme.

Our modifications expand upon Redman et al.’s original framework, for example adding a domain of ‘Impacts and Sustainability’ to capture continued activities required in the achievement of desirable outcomes. The modified framework fulfils the criteria for a useful action framework, having a clear purpose, being informed by existing understandings, being capable of guiding targeted interventions, and providing a structure to build further knowledge.

The modified SPIRIT framework is designed to be meaningful and accessible for people working across varied contexts in the evidence-policy ecosystem. It has potential applications in how academic-policy engagement interventions might be developed, evaluated, facilitated and improved, to ultimately support the use of evidence in decision-making.


Contributions to the literature

There has been a proliferation of theories, models and frameworks relating to translation of research into practice. Few specifically relate to engagement between academia and policy.

Challenges of evidence-informed policy-making are receiving increasing attention globally. There is a growing number of academic-policy engagement interventions but a lack of published evaluations.

This article contributes a modified action framework that can be used to guide how academic-policy engagement interventions might be developed, evaluated, facilitated, and improved, to support the use of evidence in policy decision-making.

Our contribution demonstrates the potential for modification of existing, useful frameworks instead of creating brand-new frameworks. It provides an exemplar for others who are considering when and how to modify existing frameworks to address new or expanded purposes while respecting the conceptual underpinnings of the original work.

Academic-policy engagement refers to ways that Higher Education Institutions (HEIs) and their staff engage with institutions responsible for policy at national, regional, county or local levels. Academic-policy engagement is intended to support the use of evidence in decision-making and in turn, improve its effectiveness, and inform the identification of barriers and facilitators in policy implementation [ 1 , 2 , 3 ]. Challenges of evidence-informed policy-making are receiving increasing attention globally, including the implications of differences in cultural norms and mechanisms across national contexts [ 4 , 5 ]. Although challenges faced by researchers and policy-makers have been well documented [ 6 , 7 ], there has been less focus on actions at the engagement interface. Pragmatic guidance for the development, evaluation or comparison of structured responses to the challenges of academic-policy engagement is currently lacking [ 8 , 9 ].

Academic-policy engagement exists along a continuum of approaches from linear (pushing evidence out from academia or pulling evidence into policy), relational (promoting mutual understandings and partnerships), and systems approaches (addressing identified barriers and facilitators) [ 4 ]. Each approach is underpinned by sets of beliefs, assumptions and expectations, and each raises questions for implementation and evaluation. Little is known about which academic-policy engagement interventions work in which settings, with scarce empirical evidence to inform decisions about which interventions to use, when, with whom, or why, and how organisational contexts can affect motivation and capabilities for such engagement [ 10 ]. A deeper understanding through the evaluation of engagement interventions will help to identify inhibitory and facilitatory factors, which may or may not transfer across contexts [ 11 ].

The intellectual technologies [ 12 ] of implementation science have proliferated in recent decades, including models, frameworks and theories that address research translation and acknowledge difficulties in closing the gap between research, policy and practice [ 13 ]. Frameworks may serve overlapping purposes of describing or guiding processes of translating knowledge into practice (e.g. the Quality Implementation Framework [ 14 ]); or helping to explain influences on implementation outcomes (e.g. the Theoretical Domains Framework [ 15 ]); or guiding evaluation (e.g. the RE-AIM framework [ 16 , 17 ]). Frameworks can offer an efficient way to look across diverse settings and to identify implementation differences [ 18 , 19 ]. However, the abundance of options raises its own challenges when seeking a framework for a particular purpose, and the use of a framework may mean that more weight is placed on certain aspects, leading to a partial understanding [ 13 , 17 ].

‘Action frameworks’ are predictive models that intend to organise existing knowledge and enable a logical approach for the selection, implementation and evaluation of intervention strategies, thereby facilitating the expansion of that knowledge [ 20 ]. They can guide change by informing and clarifying practical steps to follow. As flexible entities, they can be adapted to accommodate new purposes. Framework modification may include the addition of constructs or changes in language to expand applicability to a broader range of settings [ 21 ].

We sought to identify one organising framework for evaluation activities in the Capabilities in Academic-Policy Engagement (CAPE) programme (2021–2023), funded by Research England. The CAPE programme aimed to understand how best to support effective and sustained engagement between academics and policy professionals across the higher education sector in England [ 22 ]. We first searched the literature and identified an action framework that was originally developed between 2011 and 2013, to underpin a trial known as SPIRIT (Supporting Policy In health with Research: an Intervention Trial) [ 20 , 23 ]. This trial evaluated strategies intended to increase the use of research in health policy and to identify modifiable points for intervention.

We selected the SPIRIT framework due to its potential suitability as an initial ‘road map’ for our evaluation of academic-policy interventions in the CAPE programme. The key elements of the original framework are catalysts, organisational capacity, engagement actions, and research use. We wished to build on the framework’s embedded conceptual work, derived from literature reviews and semi-structured interviews, to identify policymakers’ views on factors that assist policy agencies’ use of research [ 20 ]. The SPIRIT framework developers defined its “locus for change” as the policy organisation ( [ 20 ], p. 151). They proposed that it could offer the beginning of a process to identify and test pathways in policy agencies’ use of evidence.

Our goal was to modify SPIRIT to accommodate a different locus for change: the engagement interface between academia and policy. Instead of imagining a linear process in which knowledge comes from researchers and is transmitted to policy professionals, we intended to extend the framework to multidirectional relational and system interfaces. We wished to include processes and influences at individual, organisational and system levels, to be relevant for HEIs and their staff, policy bodies and professionals, funders of engagement activities, and facilitatory bodies. Ultimately, we seek to address a gap in understanding how engagement strategies work, for whom, how they are facilitated, and to improve the evaluation of academic-policy engagement.

We aimed to produce a conceptually guided action framework to enable systematic evaluation of interventions intending to support academic-policy engagement.

We used a pragmatic combination of processes for framework modification during our evaluation activities in the CAPE programme [ 22 ]. The CAPE programme included a range of interventions: seed funding for academic and policy professional collaboration in policy-focused projects, fellowships for academic placements in policy settings, or for policy professionals with HEI staff, training for policy professionals, and a range of knowledge exchange events for HEI staff and policy professionals. We modified the SPIRIT framework through iterative processes shown in Table  1 , including reviews of literature; consultations with HEI staff and policy professionals across a range of policy contexts and geographic settings in England, through the CAPE programme; and piloting, refining and seeking feedback from stakeholders in academic-policy engagement.

A number of characteristics of the original SPIRIT framework could be applied to academic-policy engagement. While keeping the core domains, we modified the framework to capture dynamics of engagement at multiple academic and policy levels (individuals, organisations and systems), extending beyond the original unidirectional focus on policy agencies’ use of research. Components of the original framework, the need for modifications, and their corresponding action-oriented implications are shown in Table 2. We added a new domain, ‘Impacts and Sustainability’, to consider transforming and enduring aspects at the engagement interface. The modified action framework is shown in Fig. 1.

figure 1

SPIRIT Action Framework Modified for Academic-Policy Engagement Interventions (SPIRIT-ME), adapted with permission from the Sax Institute. Legend: The framework acknowledges that elements in each domain may influence other elements through mechanisms of action and that these do not necessarily flow through the framework in a ‘pipeline’ sequence. Mechanisms of action are processes through which engagement strategies operate to achieve desired outcomes. They might rely on influencing factors, catalysts, an aspect of an intervention action, or a combination of elements

Identifying relevant theories or models for missing elements

Catalysts and capacity

Within our evaluation of academic-policy interventions, we identified a need to develop the original domain of catalysts beyond ‘policy/programme need for research’ and ‘new research with potential policy relevance’. Redman et al. characterised a catalyst as “a need for information to answer a particular problem in policy or program design, or to assist in supporting a case for funding” in the original framework (p. 149). We expanded this “need for information” to a perceived need for engagement, by either HEI staff or policy professionals, linking to the potential value they perceived in engaging. Specifically, there was a need to consider catalysts at the level of individual engagement, for example HEI staff wanting research to have real-world impact, or policy professionals’ desires to improve decision-making in policy, where productive interactions between academic and policy stakeholders are “necessary interim steps in the process that lead to societal impact” ( [ 24 ], p. 214). The catalyst domain expands the original emphasis on a need for research, to take account of challenges to be overcome by both the academic and policy communities in knowing how, and with whom, to engage and collaborate [ 25 ].

We used a model proposing that there are three components for any behaviour: capability, opportunity and motivation, which is known as the COM-B model [ 26 ]. Informed by CAPE evaluation activities and our discussions with stakeholders, we mapped the opportunity and motivation constructs into the ‘catalysts’ domain of the original framework. Opportunity is an attribute of the system that can facilitate engagement. It may be a tangible factor such as the availability of seed funding, or a perceived social opportunity such as institutional support for engagement activities. Opportunity can act at the macro level of systems and organisational structures. Motivation acts at the micro level, deriving from an individual’s mental processes that stimulate and direct their behaviours; in this case, taking part in academic-policy engagement actions. The COM-B model distinguishes between reflective motivation through conscious planning and automatic motivation that may be instinctive or affective [ 26 ].

We presented an early application of the COM-B model to catalysts for engagement at an academic conference, enabling an informal exploration of attendees’ subjective views on its clarity and appropriateness while developing the framework. This application introduces possibilities for intervention development and support by highlighting ‘opportunities’ and ‘motivations’ as key catalysts in the modified framework.

Within the ‘capacity’ domain, we retained the original levels of individuals, organisations and systems. We introduced individual capability as a construct from the COM-B model, describing knowledge, skills and abilities to generate behaviour change as a precursor of academic-policy engagement. This reframing extends the applicability to HEI staff as well as policy professionals. It brings attention to different starting conditions for individuals, such as capabilities developed through previous experience, which can link with social opportunity (for example, through training or support) as a catalyst.

Engagement actions

We identified a need to modify the original domain ‘engagement actions’ to extend the focus beyond the use of research. We added three categories of engagement actions described by Best and Holmes [ 27 ]: linear, relational, and systems. These categories were further specified through a systematic mapping of international organisations’ academic-policy engagement activities [ 5 ]. This framework modification expands the domain to encompass: (i) linear ‘push’ of evidence from academia or ‘pull’ of evidence into policy agencies; (ii) relational approaches focused on academic-policy-maker collaboration; and (iii) systems’ strategies to facilitate engagement for example through strategic leadership, rewards or incentives [ 5 ].

We retained the elements in the original framework’s ‘outcomes’ domain (instrumental, tactical, conceptual and imposed), which we found could apply to outcomes of engagement as well as research use. For example, discussions between a policy professional and a range of academics could lead to a conceptual outcome by considering an issue through different disciplinary lenses. We expanded these elements by drawing on literature on engagement outcomes [ 28 ] and through sense-checking with stakeholders in CAPE. We added capacity-building (changes to skills and expertise), connectivity (changes to the number and quality of relationships), and changes in organisational culture or attitude change towards engagement.

Impacts and sustainability

The original framework contained endpoints described as: ‘Better health system and health outcomes’ and ‘Research-informed health policy and policy documents’. For modification beyond health contexts and to encompass broader intentions of academic-policy engagement, we replaced these elements with a new domain of ‘Impacts and sustainability’. This domain captures the continued activities required in achievement of desirable outcomes [ 29 ]. The modification allows consideration of sustainability in relation to previous stages of engagement interventions, through the identification of beneficial effects that are sustained (or not), in which ways, and for whom. Following Borst [ 30 ], we propose a shift from the expectation that ‘sustainability’ will be a fixed endpoint. Instead, we emphasise the maintenance work needed over time, to sustain productive engagement.

Influences and facilitators

We modified the overarching ‘Policy influences’ (such as public opinion and media) in the original framework, to align with factors influencing academic-policy engagement beyond policy agencies’ use of research. We included influences at the level of the individual (for example, individual moral discretion [ 31 ]), the organisation (for example, managerial practices [ 31 ]) and the system (for example, career incentives [ 32 ]). Each of these processes takes place in the broader context of social, policy and financial environments (that is, potential sources of funding for engagement actions) [ 29 ].

We modified the domain ‘Reservoir of relevant and reliable research’ underpinning the original framework, replacing it with ‘Reservoir of people skills’, to emphasise intangible facilitatory work at the engagement interface, in place of concrete research outputs. We used the ‘Promoting Action on Research Implementation in Health Services’ (PARiHS) framework [ 33 , 34 ], which gives explicit consideration to facilitation mechanisms for researchers and policy-makers [ 13 ]. Here, facilitation expertise includes mechanisms that focus on particular goals (task-oriented facilitation) or enable changes in ways of working (holistic-oriented facilitation). Task-oriented facilitation skills might include, for example, the provision of contacts, practical help or project management skills, while holistic-oriented facilitation involves building and sustaining partnerships or supporting skills development across a range of capabilities. These conceptualisations aligned with our consultations with facilitators of engagement in CAPE. We further extended these to include aspects identified in our evaluation activities: strategic planning, contextual awareness and entrepreneurial orientation.

Piloting and refining the modified framework through stakeholder engagement

We piloted an early version of the modified framework to develop a survey for all CAPE programme participants. During this pilot stage, we sought feedback from the CAPE delivery team members across HEI and policy contexts in England. CAPE delivery team members are based at five collaborating universities with partners in the Parliamentary Office for Science and Technology (POST) and Government Office for Science (GO-Science), and Nesta (a British foundation that supports innovation). The HEI members include academics and professional services knowledge mobilisation staff, responsible for leading and coordinating CAPE activities. The delivery team comprised approximately 15–20 individuals (with some fluctuations according to individual availabilities).

We assessed appropriateness and utility, refined terminology, added domain elements and explored nuances. For example, stakeholders considered the multi-layered possibilities within the domain ‘capacity’, where some HEI or policy departments may demonstrate a belief that it is important to use research in policy, but this might not be the perception of the organisation as a whole. We also sought stakeholders’ views on the utility of the new domains, for example, the identification of facilitator expertise such as acting as a knowledge broker or intermediary; providing training, advice or guidance; facilitating engagement opportunities; creating engagement programmes; and sustainability of engagement that could be conceptualised at multiple levels: personally, in processes or through systems.

Testing against criteria for useful action framework

The modified framework fulfils the properties of a useful action framework [ 20 ]:

  • It has a clearly articulated purpose: development and evaluation of academic-policy engagement interventions through linear, relational and/or system approaches. It has identified loci for change, at the level of the individual, the organisation or system.
  • It has been informed by existing understandings, including conceptual work of the original SPIRIT framework, conceptual models identified from the literature, published empirical findings, understandings from consultation with stakeholders, and evaluation activities in CAPE.
  • It can be applied to the development, implementation and evaluation of targeted academic-policy engagement actions, the selection of points for intervention and identification of potential outcomes, including the work of sustaining them and unanticipated consequences.
  • It provides a structure to build knowledge by guiding the generation of hypotheses about mechanisms of action in academic-policy engagement interventions, or by adapting the framework further through application in practice.

The proliferation of frameworks to articulate processes of research translation reveals a need for their adaptation when applied in specific contexts. The majority of models in implementation science relate to translation of research into practice. By contrast, our focus was on engagement between academia and policy. There are a growing number of academic-policy engagement interventions but a lack of published evaluations [ 10 ].

Our framework modification provides an exemplar for others who are considering how to adapt existing conceptual frameworks to address new or expanded purposes. Field et al. identified the multiple, idiosyncratic ways that the Knowledge to Action Framework has been applied in practice, demonstrating its ‘informal’ adaptability to different healthcare settings and topics [ 35 ]. Others have reported on specific processes for framework refinement or extension. Wiltsey Stirman et al. adapted a framework that characterised forms of intervention modification, using a “pragmatic, multifaceted approach” ( [ 36 ], p.2). The authors later used the modified version as a foundation to build a further framework to encompass implementation strategies in a range of settings [ 21 ]. Ouimet et al. took the approach of borrowing from a different disciplinary field for framework adaptation, using a model of absorptive capacity from management science to develop a conceptual framework for civil servants’ absorption of research knowledge [ 37 ].

We also took the approach of “adapting the tools we think with” ( [ 38 ], p.305) during our evaluation activities on the CAPE programme. Our conceptual modifications align with the literature on motivation and entrepreneurial orientation in determining policy-makers’ and researchers’ intentions to carry out engagement in addition to ‘usual’ roles [ 39 , 40 ]. Our framework offers an enabler for academic-policy engagement endeavours, by providing a structure for approaches beyond the linear transfer of information, emphasising the role of multidirectional relational activities, and the importance of their facilitation and maintenance. The framework emphasises the relationship between individuals’ and groups’ actions, and the social contexts in which these are embedded. It offers additional value by capturing the organisational and systems level factors that influence evidence-informed policymaking, incorporating the dynamic features of contexts shaping engagement and research use.

Conclusions

Our modifications extend the original SPIRIT framework’s focus on policy agencies’ use of research, to encompass dynamic academic-policy engagement at the levels of individuals, organisations and systems. Informed by the knowledge and experiences of policy professionals, HEI staff and knowledge mobilisers, it is designed to be meaningful and accessible for people working across varied contexts and functions in the evidence-policy ecosystem. It has potential applications in how academic-policy engagement interventions might be developed, evaluated, facilitated and improved, and it fulfils Redman et al.’s criteria as a useful action framework [ 20 ].

We are testing the ‘SPIRIT-Modified for Engagement’ framework (SPIRIT-ME) through our ongoing evaluation of academic-policy engagement activities. Further empirical research is needed to explore how the framework may capture ‘additionality’, that is, to identify what is achieved through engagement actions in addition to what would have happened anyway, including long-term changes in strategic behaviours or capabilities [ 41 , 42 , 43 ]. Application of the modified framework in practice will highlight its strengths and limitations, to inform further iterative development and adaptation.

Availability of data and materials

Not applicable.

Stewart R, Dayal H, Langer L, van Rooyen C. Transforming evidence for policy: do we have the evidence generation house in order? Humanit Soc Sci Commun. 2022;9(1):1–5.

Sanderson I. Complexity, ‘practical rationality’ and evidence-based policy making. Policy Polit. 2006;34(1):115–32.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gülmezoglu M, et al. Using Qualitative Evidence in Decision Making for Health and Social Interventions: An Approach to Assess Confidence in Findings from Qualitative Evidence Syntheses (GRADE-CERQual). PLOS Med. 2015;12(10):e1001895.

Bonell C, Meiksin R, Mays N, Petticrew M, McKee M. Defending evidence informed policy making from ideological attack. BMJ. 2018;10(362):k3827.

Hopkins A, Oliver K, Boaz A, Guillot-Wright S, Cairney P. Are research-policy engagement activities informed by policy theory and evidence? 7 challenges to the UK impact agenda. Policy Des Pract. 2021;4(3):341–56.

Head BW. Toward More “Evidence-Informed” Policy Making? Public Adm Rev. 2016;76(3):472–84.

Walker LA, Lawrence NS, Chambers CD, Wood M, Barnett J, Durrant H, et al. Supporting evidence-informed policy and scrutiny: A consultation of UK research professionals. PLoS ONE. 2019;14(3):e0214136.

Graham ID, Tetroe J, the KT Theories Group. Planned action theories. In: Knowledge Translation in Health Care. John Wiley and Sons, Ltd; 2013. p. 277–87. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118413555.ch26 Cited 2023 Nov 1

Davies HT, Powell AE, Nutley SM. Mobilising knowledge to improve UK health care: learning from other countries and other sectors – a multimethod mapping study. Southampton (UK): NIHR Journals Library; 2015. (Health Services and Delivery Research). Available from: http://www.ncbi.nlm.nih.gov/books/NBK299400/ Cited 2023 Nov 1

Oliver K, Hopkins A, Boaz A, Guillot-Wright S, Cairney P. What works to promote research-policy engagement? Evid Policy. 2022;18(4):691–713.

Nelson JP, Lindsay S, Bozeman B. The last 20 years of empirical research on government utilization of academic social science research: a state-of-the-art literature review. Adm Soc. 2023;28:00953997231172923.

Bell D. Technology, nature and society: the vicissitudes of three world views and the confusion of realms. Am Sch. 1973;42:385–404.

Milat AJ, Li B. Narrative review of frameworks for translating research evidence into policy and practice. Public Health Res Pract. 2017; Available from: https://apo.org.au/sites/default/files/resource-files/2017-02/apo-nid74420.pdf Cited 2023 Nov 1

Meyers DC, Durlak JA, Wandersman A. The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol. 2012;50(3–4):462–80.

Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7(1):37.

Glasgow RE, Battaglia C, McCreight M, Ayele RA, Rabin BA. Making implementation science more rapid: use of the RE-AIM framework for mid-course adaptations across five health services research projects in the veterans health administration. Front Public Health. 2020;8. Available from: https://www.frontiersin.org/articles/10.3389/fpubh.2020.00194 Cited 2023 Jun 13

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4406164/ Cited 2020 May 4

Sheth A, Sinfield JV. An analytical framework to compare innovation strategies and identify simple rules. Technovation. 2022;1(115):102534.

Birken SA, Powell BJ, Shea CM, Haines ER, Alexis Kirk M, Leeman J, et al. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12(1):124.

Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, et al. The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015;136:147–55.

Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16(1):36.

CAPE. CAPE. 2021. CAPE Capabilities in Academic Policy Engagement. Available from: https://www.cape.ac.uk/ Cited 2021 Aug 3

CIPHER Investigators. Supporting policy in health with research: an intervention trial (SPIRIT)—protocol for a stepped wedge trial. BMJ Open. 2014;4(7):e005293.

Spaapen J, Van Drooge L. Introducing ‘productive interactions’ in social impact assessment. Res Eval. 2011;20(3):211–8.

Williams C, Pettman T, Goodwin-Smith I, Tefera YM, Hanifie S, Baldock K. Experiences of research-policy engagement in policymaking processes. Public Health Res Pract. 2023. Online early publication. https://doi.org/10.17061/phrp33232308 .

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42.

Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy J Res Debate Pract. 2010;6(2):145–59.

Edwards DM, Meagher LR. A framework to evaluate the impacts of research on policy and practice: A forestry pilot study. For Policy Econ. 2020;1(114):101975.

Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101(11):2059–67.

Borst RAJ, Wehrens R, Bal R, Kok MO. From sustainability to sustaining work: What do actors do to sustain knowledge translation platforms? Soc Sci Med. 2022;1(296):114735.

Zacka B. When the state meets the street: public service and moral agency. Harvard University Press; 2017. Available from: https://books.google.co.uk/books?hl=en&lr=&id=3KdFDwAAQBAJ&oi=fnd&pg=PP1&dq=zacka+when+the+street&ots=x93YEHPKhl&sig=9yXKlQiFZ0XblHrbYKzvAMwNWT4 Cited 2023 Nov 28

Torrance H. The research excellence framework in the United Kingdom: processes, consequences, and incentives to engage. Qual Inq. 2020;26(7):771–9.

Rycroft-Malone J. The PARIHS framework—a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297–304.

Stetler CB, Damschroder LJ, Helfrich CD, Hagedorn HJ. A guide for applying a revised version of the PARIHS framework for implementation. Implement Sci. 2011;6(1):99.

Field B, Booth A, Ilott I, Gerrish K. Using the knowledge to action framework in practice: a citation analysis and systematic review. Implement Sci. 2014;9(1):172.

Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.

Ouimet M, Landry R, Ziam S, Bédard PO. The absorption of research knowledge by public civil servants. Evid Policy. 2009;5(4):331–50.

Martin D, Spink MJ, Pereira PPG. Multiple bodies, political ontologies and the logic of care: an interview with Annemarie Mol. Interface - Comun Saúde Educ. 2018;22:295–305.

Sajadi HS, Majdzadeh R, Ehsani-Chimeh E, Yazdizadeh B, Nikooee S, Pourabbasi A, et al. Policy options to increase motivation for improving evidence-informed health policy-making in Iran. Health Res Policy Syst. 2021;19(1):91.

Athreye S, Sengupta A, Odetunde OJ. Academic entrepreneurial engagement with weak institutional support: roles of motivation, intention and perceptions. Stud High Educ. 2023;48(5):683–94.

Bamford D, Reid I, Forrester P, Dehe B, Bamford J, Papalexi M. An empirical investigation into UK university–industry collaboration: the development of an impact framework. J Technol Transf. 2023 Nov 13; Available from: https://doi.org/10.1007/s10961-023-10043-9 Cited 2023 Dec 20

McPherson AH, McDonald SM. Measuring the outcomes and impacts of innovation interventions assessing the role of additionality. Int J Technol Policy Manag. 2010;10(1–2):137–56.

Hind J. Additionality: a useful way to construct the counterfactual qualitatively? Eval J Australas. 2010;10(1):28–35.

Acknowledgements

We are very grateful to the CAPE Programme Delivery Group members, for many discussions throughout this work. Our thanks also go to the Sax Institute, Australia (where the original SPIRIT framework was developed), for reviewing and providing helpful feedback on the article. We also thank our reviewers who made very constructive suggestions, which have strengthened and clarified our article.

The evaluation of the CAPE programme, referred to in this report, was funded by Research England. The funding body had no role in the design of the study, analysis, interpretation or writing the manuscript.

Author information

Authors and affiliations

Department of Health Services Research and Policy, Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, Kings Cross, London, WC1H 9SH, UK

Petra Mäkelä & Kathryn Oliver

Health and Social Care Workforce Research Unit, The Policy Institute, Virginia Woolf Building, Kings College London, 22 Kingsway, London, WC2B 6LE, UK

Annette Boaz

Contributions

PM conceptualised the modification of the framework reported in this work. All authors made substantial contributions to the design of the work. PM drafted the initial manuscript. AB and KO contributed to revisions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Petra Mäkelä .

Ethics declarations

Ethics approval and consent to participate

Ethics approval was granted for the overarching CAPE evaluation by the London School of Hygiene and Tropical Medicine Research Ethics Committee (reference 26347).

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mäkelä, P., Boaz, A. & Oliver, K. A modified action framework to develop and evaluate academic-policy engagement interventions. Implementation Sci 19 , 31 (2024). https://doi.org/10.1186/s13012-024-01359-7

Received : 09 January 2024

Accepted : 20 March 2024

Published : 12 April 2024

DOI : https://doi.org/10.1186/s13012-024-01359-7


Keywords

  • Evidence-informed policy
  • Academic-policy engagement
  • Framework modification

Implementation Science

ISSN: 1748-5908


The Scholarly Kitchen

What’s Hot and Cooking In Scholarly Publishing

Unveiling Perspectives on Peer Review and Research Integrity: Survey Insights

  • Artificial Intelligence
  • Peer Review
  • Research Integrity

I recently co-authored a paper published in Learned Publishing, “Adapting Peer Review for the Future: Digital Disruptions and Trust in Peer Review”, which addresses the intersection of digital disruptions and trust in peer review. The ongoing discourse on the integration of artificial intelligence (AI) in academic publishing has generated diverse opinions, with individuals expressing both support and opposition to its adoption.

The scrutiny of peer review and research integrity has raised questions both in the presence and absence of AI. Is the current inquiry into research integrity during peer review solely prompted by the advent of AI, or has it always been a concern, considering past incidents involving retractions, plagiarism, and other unethical practices?

Harini Calamur, the co-author of the paper, says:

“The research integrity problem has many facets. At the top is the sheer volume of papers to be peer reviewed, and the lack of enough peer reviewers. It is next to impossible to be true to each and every paper, and to ensure that everything is as per requirement. Unless we fix that, we are not going to know which paper has dodgy ethics, and which does not. And therefore, institutional science will always be playing a catch-up game in terms of research integrity. The integration of AI offers potential solutions, but it’s important to recognize that technological advancements alone cannot fully address these structural challenges. Fundamental changes in the peer review process are essential for a proactive stance on research integrity.”

Indeed, research integrity has consistently been a cause for concern. The blame therefore lies not with AI but rather with the established processes, particularly as the dynamics of academia undergo transformation, pushing against traditional boundaries and introducing a multitude of new opportunities and challenges.

In the era of digital advancements and the expansive information landscape, it becomes imperative to establish robust mechanisms that uphold research integrity and ensure the trustworthiness of academic output. While this is not a novel concern, adapting to this evolving landscape requires time and resources. Take Google, for example. Despite being a valuable information resource, the sheer volume of data makes it challenging to discern which information it offers via search results is credible. The realm of research increasingly resembles this scenario, with continually growing numbers of academic papers published annually. This expansion of the literature has strained the peer review process, burdening experienced reviewers and leading to either hasty evaluations or delays in publication. Addressing the implications of this ever-growing publication volume is essential for maintaining the integrity of academic publishing. Hence, the question arises: How can we refocus the community on research integrity?

To gain a deeper understanding of the challenges and opportunities within the evolving academic landscape, Cactus Communications (CACTUS – full disclosure, my employer) conducted a comprehensive global survey during Peer Review Week 2022. The survey, encompassing 852 participants from diverse academic backgrounds and career stages, explored their perspectives on research integrity and the efficacy of the peer review process.

[Figure 1: Bar chart showing survey responses]

Figure 1 is a visual representation of participant opinions on the issues that peer review is expected to address. Notably, 78.83% of participants expressed the view that peer review should include identifying plagiarism. But does this really fall within the purview of what a peer reviewer is expected to do? These findings set the stage for a more thorough examination of the expectations and challenges associated with upholding research integrity through the peer review process. What is the responsibility of the peer reviewers, and what should instead be handled by the journal’s editors and staff? Given the surge in academic paper production and the presence of cutting-edge plagiarism detection tools available to editorial offices, we need to determine the most efficient allocation of responsibilities.

Additionally, the survey investigated the frequency of reviewers observing potential breaches of research integrity. Out of the 852 participants, 401 engaged in peer review, with 43.39% encountering breaches of research integrity. This highlights the gravity of the issue, emphasizing that it should not be trivialized. Given its prevalence, the industry needs to strategize ways to enhance the rigor and efficiency of the review process.

[Figure 2: Bar chart showing survey results]

Grasping the nature of these incidents is vital for addressing weaknesses in the existing peer review system and investigating potential avenues for improvement.

The survey revealed that 65.33% of respondents identified cases of plagiarism, and 44.53% detected instances of questionable or unethical research practices.

[Figure 3: Bar chart showing survey results]

Does this suggest a misalignment between the feedback given by reviewers and the criteria that peer reviewers are supposed to adhere to when evaluating manuscripts? Are reviewers concentrating on the appropriate aspects? How can the reviews be adjusted to align with the necessary criteria for manuscript acceptance?

Here are possible solutions and considerations:

  • Optimizing Reviewer Feedback: How can we ensure reviewer comments effectively align with the broader goals of peer review to facilitate a more efficient evaluation process? What role do clear expectations regarding the type and depth of comments play in standardizing the peer review system? Is it crucial to focus specifically on aspects influencing research quality, to avoid providing feedback that does not align with manuscript acceptance criteria? In trying to fix unexpected problems, could reviewers overlook important issues because they are focused on minor details? Ultimately, how does this contribute to enhancing the overall quality and integrity of scholarly publications? Addressing these questions can help optimize the peer review process, promoting consistency, relevance, and constructive feedback.
  • Roles and Responsibilities: In what ways do clear communication and collaborative efforts play a crucial role in ensuring an efficient and transparent review procedure? How does a mutual comprehension of respective roles uphold the integrity of the research and enhance overall efficiency? What can be done to foster a seamless and effective peer review experience, where each stakeholder contributes to the shared goal of maintaining research integrity and advancing scholarly discourse?
  • AI-driven Transformations: How does the transformative potential of AI-driven systems impact different facets of the peer review process? In what ways does the automation of tasks, such as manuscript screening, reviewer selection, and language enhancement, enhance or detract from the efficiency of the review workflow? Can AI’s contribution extend beyond efficiency to encompass improvements in the overall quality and objectivity of the peer review process? In what ways can AI serve as a positive complement to, rather than a substitute for, human expertise?
  • Reviewer Training and Guidelines: What comprehensive training programs can be implemented to cover both technical and ethical considerations of peer review effectively? In what ways do standardized guidelines contribute to maintaining consistency and fairness during the peer review assessment process? How might we incentivize high-quality and ethical peer review to foster a culture of excellence, possibly through recognition and rewards?
  • Data Privacy and AI Integration: Rigorous measures to protect data privacy become imperative with the implementation of AI in the peer review process. What measures can be implemented to ensure the safeguarding of sensitive information throughout peer review, especially when considering more reliance on AI-driven tools? How do we strike a delicate equilibrium between leveraging the benefits of AI and prioritizing the protection of data privacy? In what ways can the confidentiality and security of sensitive information during the peer review process be maintained?

Optimizing Peer Review Participants – both human and machine

In charting the course forward, it is important to maintain a delicate balance between AI capabilities and human expertise in shaping the peer review system. This convergence is the linchpin that fortifies the effectiveness and trustworthiness of the scholarly evaluation process. The continued growth in publication volume makes it essential that peer review time be spent in the most efficient and effective ways possible.

The efforts of peer reviewers can be supported and enhanced through the collaborative integration of AI tools. AI must not be seen as a replacement but as a powerful complement to human judgment, preserving the indispensable nuanced insights provided by human peer reviewers. As AI-driven tools strategically enhance quality control and detect plagiarism, image manipulation, and data falsification, they have the potential to contribute to a more rigorous evaluation of scholarly submissions. However, the ethical considerations surrounding AI, including addressing algorithmic bias and ensuring data privacy, are pivotal.

Establishing comprehensive guidelines and proactive measures becomes the bedrock for the responsible use of AI in peer review. As we embark on this transformative journey, the question remains: How can we further refine and advance this collaborative synergy between AI and human judgment to foster an even more efficient, objective, and ethically robust peer review process?

Roohi Ghosh

Roohi Ghosh is the ambassador for researcher success at Cactus Communications (CACTUS). She is passionate about advocating for researchers and amplifying their voices on a global stage.

12 Thoughts on "Unveiling Perspectives on Peer Review and Research Integrity: Survey Insights"

Thank you for sharing your questionnaire results Roohi. It is very clear that research integrity screening – and indeed arguably data quality control screening – cannot be delegated to academic referees who lack training and access to relevant tools (not to mention time). The answers in the questionnaire are probably partially down to how the question is asked, as ‘peer review’ in fig. 1 is typically conflated with ‘Journal quality control processes’ – it has to be made clear that peer review is just one component of the process. Similarly, I assume the responses on the integrity issues referees came across (fig 3) were not always identified by them during standard peer review in the first instance.

  • By Bernd Pulverer
  • Feb 7, 2024, 9:48 AM

The wording of the original question ( https://cdn.editage.com/insights/editagecom/production/2023-09/CACTUS-Peer-Review-Week-2022-Research-Integrity-Survey-Report.pdf ) is revealing. The participants were asked: “In your view, peer review should be able to identify which of these problems?”. The report authors and the commenters here seem to interpret the results in terms of the duties of a peer reviewer, but the reference to opinion and the ambiguous use of the word “should” in the question may have conflated a swathe of related opinions and understandings relating to both how peer review is expected to work and how it actually works in practice.

We know from experience that by reviewing a paper in detail, flagrant research integrity issues sometimes surface. I expect some respondents reported what can be detected (as in “sure, if you review a paper properly, you should be able to detect some integrity issues”), not what we should rely on peer review to detect. From the reported information, I don’t really see that these two possibilities can be parsed, which leads me to question how the results have been interpreted.

  • By Peter Gorsuch
  • Feb 9, 2024, 4:13 PM

I want to highlight that the ambiguity in the question was intentional, as we aimed to capture not only respondents’ perceptions of what peer review should entail but also their understanding of how it actually functions in practice. By examining the dissonance between these two perspectives, we hoped to gain valuable insights into areas where there may be discrepancies or misunderstandings. Your point about the dual roles of respondents as both authors and reviewers is particularly pertinent. The survey results not only shed light on what individuals believe should be the responsibility of reviewers but also offer insights into authors’ expectations and perceptions of the peer review process. This dual perspective underscores the importance of providing adequate training and education to both authors and reviewers to bridge the gap between expectations and reality.

  • Feb 19, 2024, 2:57 AM

Thank you for your response to my article and for highlighting important considerations regarding the questionnaire results. I’d like to clarify that the survey was designed to capture perspectives from both peer reviewers and authors. Consequently, some of the responses may indeed reflect a sort of merging of “peer review” with “journal quality control processes,” as you noted. This observation underscores a significant aspect of the current landscape: the need for clearer delineation of roles and responsibilities between authors and reviewers. Your point about the confusion surrounding these roles is particularly salient. Authors, who later transition into the role of reviewers, must be adequately educated about the distinctions between peer review and journal quality control processes. By ensuring authors understand these differences, we can promote greater clarity and effectiveness in the peer review process overall.

  • Feb 19, 2024, 2:55 AM

Am I the only one who feels like the respondents do not fully understand the role of peer review? It is not the role of peer reviewers to uncover fabricated data, image manipulation, or conflicts of interest. Certainly they can (and have), and it’s appreciated when they do, but it’s not their primary role. Also, the submission systems journals use include tools that help screen for plagiarism, but again, that is not the main purpose of peer review. The primary purpose of peer review is to assess the research submitted, not to uncover bad actors. Again, it is great when it does, but IMO these are things which fall primarily to the editorial office and/or publishers, not to peer reviewers. Now, if we are considering everything that occurs prior to publication “peer review,” that’s a different story. Curious to hear what others think!

  • By Adam Etkin
  • Feb 7, 2024, 6:12 PM

Absolutely agree – hence also my earlier comment. Referees cannot be expected to – nor can they – systematically assess research integrity issues. It is appreciated if they alert editors to potential issues, of course. The same goes for many academic editors, who lack training, experience, and tools. It is crucial to distinguish between assessment of the ‘scientific plausibility of data’ (typical peer review in the biosciences), ‘technical peer review of source data’ (still very rare but important), and assessment of integrity.

  • Feb 8, 2024, 3:28 AM

Agreed, this survey seems like it was fairly ambiguous, so the respondents were probably a bit confused.

We published a bit more of a systematic piece on this, I would love to know Roohi’s thoughts on this:

Is the future of peer review automated? Schulz R, Barnett A, Bernard R, Brown NJL, Byrne JA, Eckmann P, Gazda MA, Kilicoglu H, Prager EM, Salholz-Hillel M, Ter Riet G, Vines T, Vorland CJ, Zhuang H, Bandrowski A, Weissgerber TL. BMC Res Notes. 2022 Jun 11;15(1):203. doi: 10.1186/s13104-022-06080-6. PMID: 35690782

Of course, during the pandemic the issue was that we were putting a lot of information out without any review, so the bots were the only way to get any review:

Automated screening of COVID-19 preprints: can we help authors to improve transparency and reproducibility? Weissgerber T, Riedel N, Kilicoglu H, Labbé C, Eckmann P, Ter Riet G, Byrne J, Cabanac G, Capes-Davis A, Favier B, Saladi S, Grabitz P, Bannach-Brown A, Schulz R, McCann S, Bernard R, Bandrowski A. Nat Med. 2021 Jan;27(1):6-7. doi: 10.1038/s41591-020-01203-7. PMID: 33432174

  • By Anita E Bandrowski
  • Feb 9, 2024, 7:02 PM

I went through the paper and found the insights very interesting. Thank you for sharing!

  • Feb 19, 2024, 3:00 AM

Exactly! That’s what the survey brought to the fore: that not everyone, especially authors, may understand the role of peer review.

  • Feb 19, 2024, 2:59 AM

I think that there is a lack of clarity, but also that we really can’t offload work to peer reviewers that they are untrained, unpaid, and largely unwilling to do.

Professionals who work for publishers can do this type of work, and have been doing it, though training may also be an issue because these individuals may not be as knowledgeable about the science being described.

The solution that we proposed was to have trained AI bot(s) that can lend a hand in the peer review process where reviewers and editorial staff are not going to look, e.g., in tasks that are usually too tedious for humans, such as following a checklist or making sure that all catalog numbers for reagents are accurate.
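
[Editor’s note: a minimal, purely illustrative sketch of the kind of tedious check such a screening bot might automate. The patterns, function name, and example text below are hypothetical assumptions for illustration, not the tools described in the papers cited above.]

```python
import re

# Toy screening check: flag sentences that mention a reagent (antibody,
# plasmid, cell line) but contain neither an RRID nor a catalog number.
REAGENT = re.compile(r"\b(antibod(?:y|ies)|plasmid|cell line)\b", re.IGNORECASE)
IDENTIFIER = re.compile(
    r"RRID:\s*\w+|\bcat(?:alog)?\.?\s*(?:no\.?|#)\s*[\w-]+", re.IGNORECASE
)

def flag_unidentified_reagents(text: str) -> list[str]:
    """Return sentences that mention a reagent but carry no identifier."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if REAGENT.search(s) and not IDENTIFIER.search(s)]

methods = ("Cells were stained with anti-GFP antibody. "
           "The HeLa cell line (RRID:CVCL_0030) was cultured as described.")
print(flag_unidentified_reagents(methods))
# → ['Cells were stained with anti-GFP antibody.']
```

A real tool would need far more robust sentence segmentation and identifier grammars, but the point stands: this sort of mechanical cross-checking suits software better than volunteer reviewers.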

  • Feb 19, 2024, 4:44 PM

Undoubtedly AI will dramatically alter (indeed, is already altering) how we quality control and peer review papers, but there is still a clear difference between 1) classical peer review (mostly a high-level plausibility assessment; AI helps assess conceptual advance in this context); 2) technical peer review (requires actual data to analyze, rarely found currently at journals; AI helps with data analysis, e.g., stats); 3) quality control/curation (data structure and deposition, protocol/method reporting, etc.; AI assists with checklist compliance); and 4) research integrity screening (data, images; AI tools already assist, currently mostly at the level of duplications). These are all very different activities that require different skill sets and different tools, and they should not be conflated. Needless to say, very few journals currently achieve this.

  • Feb 20, 2024, 3:39 AM


Lemieux Library and McGoldrick Learning Commons

UCOR 1600-01 Politics of the End (Professor Patrick Schoettmer)

Scholarly Journals, Popular Magazines, and Trade Publications

What is a Scholarly Journal?

Scholarly journals are generally published by and for experts. A publication is considered to be peer reviewed if its articles go through an official editorial process that involves review and approval by the author’s peers (people who are experts in the same subject area). Articles in scholarly journals present new, previously unpublished research. Scholarly sources will almost always include:

  • Bibliography and footnotes
  • Author’s name and academic credentials

Use scholarly journals for highly focused original research.

Articles in popular magazines tend to be written by staff writers or freelance journalists and are geared towards a general audience. While most magazines adhere to editorial standards, articles do not go through a peer review process and rarely contain bibliographic citations. Popular magazines are periodicals that one typically finds at grocery stores, airport newsstands, or bookstores. Use popular magazines for a general overview of current news and opinions, or firsthand accounts of an event.

Trade publications focus on a specific profession or trade. Articles in trade magazines cover the interests of skilled laborers, technicians, and artisans. Professional magazines cover the interests of professors, librarians, and members of other fields that require advanced degrees. Subject magazines cover a topic of interest to one or more professions. Use trade magazines for overviews of news and research in a particular field.

What are the types of scholarly articles?

Scholarly articles usually fall into one of five major types: empirical studies, review articles, theoretical articles, methodological articles, or case studies. A typical article will have an abstract summarizing the article that follows. The article will introduce the problem and present a thesis statement, followed by the body/methodology section. If there is raw data, there will be a results section; if not, there may instead be a findings section. A discussion section interprets the results in light of other studies. The last section is the conclusion, which restates the thesis and suggests future research.

An empirical article contains original research, which can be either quantitative or qualitative. In format, it has an introduction (problem statement/purpose) followed by sections covering methods, results, and discussion, usually arranged chronologically.

Review articles evaluate existing published research and show how current research relates to previous research. In the introduction, the article defines the research problem, then summarizes and evaluates previous research. The conclusion usually recommends possible next steps for inquiry.

Theoretical articles either advance a theory or critique a current theory.

Methodological articles either advance or modify a methodological approach, using empirical data.

Case studies use an individual or organization as an illustration of a problem or solution.

  • Last Updated: Apr 12, 2024 9:24 AM
  • URL: https://library.seattleu.edu/guides/politicsend
