• Research article
  • Open access
  • Published: 06 March 2019

Tools used to assess the quality of peer review reports: a methodological systematic review

  • Cecilia Superchi   ORCID: orcid.org/0000-0002-5375-6018 1 , 2 , 3 ,
  • José Antonio González 1 ,
  • Ivan Solà 4 , 5 ,
  • Erik Cobo 1 ,
  • Darko Hren 6 &
  • Isabelle Boutron 7  

BMC Medical Research Methodology, volume 19, Article number: 48 (2019)

Abstract

A strong need exists for a validated tool that clearly defines peer review report quality in biomedical research, as it will allow evaluating interventions aimed at improving the peer review process in well-performed trials. We aim to identify and describe existing tools for assessing the quality of peer review reports in biomedical research.

We conducted a methodological systematic review by searching PubMed, EMBASE (via Ovid) and The Cochrane Methodology Register (via The Cochrane Library) as well as Google® for all reports in English describing a tool for assessing the quality of a peer review report in biomedical research. Data extraction was performed in duplicate using a standardized data extraction form. We extracted information on the structure, development and validation of each tool. We also identified quality components across tools using a systematic multi-step approach and we investigated quality domain similarities among tools by performing hierarchical, complete-linkage clustering analysis.

We identified a total number of 24 tools: 23 scales and 1 checklist. Six tools consisted of a single item and 18 had several items ranging from 4 to 26. None of the tools reported a definition of ‘quality’. Only 1 tool described the scale development and 10 provided measures of validity and reliability. Five tools were used as an outcome in a randomized controlled trial (RCT). Moreover, we classified the quality components of the 18 tools with more than one item into 9 main quality domains and 11 subdomains. The tools contained from two to seven quality domains. Some domains and subdomains were considered in most tools such as the detailed/thorough (11/18) nature of reviewer’s comments. Others were rarely considered, such as whether or not the reviewer made comments on the statistical methods (1/18).

Several tools are available to assess the quality of peer review reports; however, the development and validation process is questionable and the concepts evaluated by these tools vary widely. The results from this study and from further investigations will inform the development of a new tool for assessing the quality of peer review reports in biomedical research.


Background

The use of editorial peer review originates in the eighteenth century [ 1 ]. It is a longstanding and established process that generally aims to provide a fair decision-making mechanism and improve the quality of a submitted manuscript [ 2 ]. Despite the long history and application of the peer review system, its efficacy is still a matter of controversy [ 3 , 4 , 5 , 6 , 7 ]. About 30 years after the first international Peer Review Congress, there are still ‘scarcely any bars to eventual publication. There seems to be no study too fragmented, no hypothesis too trivial [...] for a paper to end up in print’ (Drummond Rennie, chair of the advisory board) [ 8 ].

Recent evidence suggests that many current editors and peer reviewers in biomedical journals still lack the appropriate competencies [ 9 ]. In particular, it has been shown that peer reviewers rarely receive formal training [ 3 ]. Moreover, their capacity to detect errors [ 10 , 11 ], identify deficiencies in reporting [ 12 ] and spin [ 13 ] has been found lacking.

Some systematic reviews have been performed to estimate the effect of interventions aimed at improving the peer review process [ 2 , 14 , 15 ]. These studies showed that there is still a lack of evidence supporting the use of interventions to improve the quality of the peer review process. Furthermore, Bruce and colleagues highlighted the urgent need to clarify outcomes, such as peer review report quality, that should be used in randomized controlled trials evaluating these interventions [ 15 ].

A validated tool that clearly defines peer review report quality in biomedical research is greatly needed. This will allow researchers to have a structured instrument to evaluate the impact of interventions aimed at improving the peer review process in well-performed trials. Such a tool could also be regularly used by editors to evaluate the work of reviewers.

Herein, as a starting point for the development of a new tool, we identify and describe existing tools used to assess the quality of peer review reports in biomedical research.

Methods

Study design

We conducted a methodological systematic review and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 16 ]. The quality of peer review reports is an outcome that, in the long term, is related to clinical relevance and patient care. However, the protocol was not registered in PROSPERO, as this review does not contain direct health-related outcomes [ 17 ].

Information sources and search strategy

We searched PubMed, EMBASE (via Ovid) and The Cochrane Methodology Register (via The Cochrane Library) from their inception to October 27, 2017 as well as Google® (search date: October 20, 2017) for all reports describing a tool to assess the quality of a peer review report in biomedical research. Search strategies were refined in collaboration with an expert methodologist (IS) and are presented in the Additional file  1 . We hand-searched the citation lists of included papers and consulted a senior editor with expertise in editorial policies and peer review processes to further identify relevant reports.

Eligibility criteria

We included all reports describing a tool to assess the quality of a peer review report. Sanderson and colleagues defined a tool as ‘any structured instrument aimed at aiding the user to assess the quality [...]’ [ 18 ]. Building on this definition, we defined a quality tool as any structured or unstructured instrument assisting the user to assess the quality of a peer review report (for definitions see Table  1 ). We restricted inclusion to reports in English.

Study selection

We exported the references retrieved from the search into the reference manager EndNote X7 (Clarivate Analytics, Philadelphia, United States), which was subsequently used to remove duplicates. We reviewed all records manually to verify and remove duplicates that had not been previously detected. A reviewer (CS) screened all titles and abstracts of the retrieved citations. A second reviewer (JAG) carried out quality control on a 25% random sample obtained using the statistical software R 3.3.3 [ 19 ]. We obtained and independently examined the full-text copies of potentially eligible reports for further assessment. In cases of disagreement, consensus was reached through discussion or by involving a third reviewer (DH). We reported the result of this process through a PRISMA flowchart [ 16 ]. When several tools were reported in the same article, they were included as separate tools. When a tool was reported in more than one article, we extracted data from all related reports.
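As an illustration, drawing such a 25% quality-control sample takes a few lines in R (the software cited above); the records data frame below is hypothetical, and the seed is fixed only so that the sample could be reproduced.

    # Minimal sketch: select a 25% random sample of screened records for quality control.
    records <- data.frame(id = 1:4312)  # one row per retrieved citation (hypothetical)
    set.seed(42)                        # fix the seed for reproducibility
    qc <- records[sample(nrow(records), round(0.25 * nrow(records))), , drop = FALSE]
    nrow(qc)  # 1078 records for the second reviewer to screen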

Data extraction

General characteristics of tools.

We designed a data extraction form using Google® Docs and extracted the general characteristics of the tools. We determined whether each tool was a scale or a checklist. We defined a tool as a scale when it included a numeric or nominal overall quality score, and as a checklist when an overall quality score was not present. We recorded the total number of items (for definitions see Table 1 ). For scales with more than one item, we extracted how items were weighted, how the overall score was calculated, and the scoring range. Moreover, we checked whether the scoring instructions were adequately defined, partially defined, or not defined, according to the subjective judgement of two reviewers (CS and JAG) (an example of the definition for scoring instructions is shown in Table  2 ). Finally, we extracted all information related to the development and validation of each tool and the assessment of its reliability, and whether the concept of quality was defined.

Two reviewers (CS and JAG) piloted and refined the data extraction form on a random 5% sample of extracted articles. Full data extraction was conducted by two reviewers (CS and JAG) working independently for all included articles. In the case of disagreement, consensus was obtained by discussion or by involving a third reviewer (DH). Authors of the reports were contacted in cases where we needed further clarification of the tool.

Quality components of the peer review report considered in the tools

We followed the systematic multi-step approach recently described by Gentles [ 20 ], which is based on a constant comparative method of analysis developed within the Grounded Theory approach [ 21 ]. Initially, a researcher (CS) extracted all items included in the tools and for each item identified a ‘key concept’ representing a quality component of peer review reports. Next, two researchers (CS and DH) organized the key concepts into a domain-specific matrix (analogous to the topic-specific matrices described by Gentles). Initially, the matrix consisted of domains for peer review report quality, followed by items representative of each domain and references to literature sources that items were extracted from. As the analysis progressed, subdomains were created and the final version of the matrix included domains, subdomains, items and references.

Furthermore, we calculated the proportions of domains based on the number of items included in each domain for each tool. According to the proportions obtained, we created a domain profile for each tool. Then, we calculated the matrix of Euclidean distances between the domain profiles. These distances were used to perform the hierarchical, complete-linkage clustering analysis, which provided us with a tree structure that we represent in a chart. Through this graphical summary, we were able to identify domain similarities among the different tools, which helped us draw our analytical conclusions. The calculations and graphical representations were obtained using the statistical software R 3.3.3 [ 19 ].
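To illustrate this pipeline, the sketch below, in R (the software used for the analysis), builds domain profiles from invented item counts, computes the Euclidean distance matrix and applies complete-linkage clustering. The tool names and counts are hypothetical.

    # Minimal sketch: domain profiles -> Euclidean distances -> complete linkage.
    # Rows are tools; the nine columns stand for the nine quality domains.
    counts <- rbind(
      ToolA = c(3, 1, 1, 0, 0, 0, 0, 0, 0),
      ToolB = c(0, 0, 0, 2, 2, 0, 0, 1, 1),
      ToolC = c(2, 1, 0, 0, 0, 0, 1, 0, 0),
      ToolD = c(0, 0, 0, 3, 1, 0, 0, 1, 1)
    )
    profiles <- counts / rowSums(counts)       # domain profile: proportion of items per domain
    d <- dist(profiles, method = "euclidean")  # distances between domain profiles
    hc <- hclust(d, method = "complete")       # complete-linkage hierarchical clustering
    plot(hc)                                   # dendrogram: the tree structure of the tools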

Study selection and general characteristics of reports

The screening process is summarized in a flow diagram (Fig. 1 ). Of the 4312 records retrieved, we finally included 46 reports: 39 research articles, 3 editorials, 2 information guides, 1 letter to the editor and 1 study available only as an abstract (excluded studies are listed in Additional file  2 ; included studies are listed in Additional file  3 ).

Figure 1. Study selection flow diagram

General characteristics of the tools

In the 46 reports, we identified 24 tools, including 23 scales and 1 checklist. The tools were developed from 1985 to 2017. Four tools had from 2 to 4 versions [ 22 , 23 , 24 , 25 ]. Five tools were used as an outcome in a randomized controlled trial [ 23 , 25 , 26 , 27 , 28 ]. Table  3 lists the general characteristics of the identified tools. Table  4 presents a more complete descriptive summary of the tools’ characteristics, including types and measures of validity and reliability.

Six scales consisted of a single item enquiring about the overall quality of the peer review report, all of them directly asking users to score the overall quality [ 22 , 25 , 29 , 30 , 31 , 32 ]. These tools assessed the quality of a peer review report using: 1) a 4- or 5-point Likert scale ( n  = 4); 2) a rating of ‘good’, ‘fair’ or ‘poor’ ( n  = 1); or 3) a restricted scale from 80 to 100 ( n  = 1). Seventeen scales and one checklist had several items, ranging in number from 4 to 26. Of these, 10 used the same weight for each item [ 23 , 24 , 27 , 28 , 33 , 34 , 35 , 36 , 37 , 38 ]. The overall quality score was the sum of the item scores ( n  = 3), the mean of the item scores ( n  = 6), or a summary score ( n  = 11) (for definitions see Table 1 ). Three scales reported more than one way to assess the overall quality [ 23 , 24 , 36 ]. The scoring instructions were not defined in 67% of the tools.
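To make the scoring approaches concrete, here is a minimal sketch in R with invented item ratings. Since Table 1 is not reproduced here, we assume that a ‘summary score’ is a single global judgement assigned directly rather than computed from the items.

    # Minimal sketch: two of the three ways of deriving an overall quality score.
    items <- c(4, 3, 5, 2, 4)  # invented ratings for a five-item scale, each on a 1-5 range
    sum(items)                 # overall score as the sum of the item scores: 18
    mean(items)                # overall score as the mean of the item scores: 3.6
    # The third approach, a 'summary score', is assumed here to be assigned directly
    # as a single global judgement, so nothing is computed from the item scores.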

None of the tools reported a definition of peer review report quality, and only one described the tool development process [ 39 ]. The first version of this tool was designed by a development group composed of four researchers and three editors. It was based on a tool used in an earlier study, which had been developed by reviewing the literature and interviewing editors. Subsequently, the tool was modified by rewording some questions after group discussions, and a guideline for using the tool was drawn up.

Only 3 tools assessed and reported a validation process [ 39 , 40 , 41 ]. The assessed types of validity included face validity, content validity, construct validity, and preliminary criterion validity. Assessment of face and content validity involved either a single editor and author or a group of researchers and editors. Construct validity was assessed either with multiple regression analysis using discriminant criteria (reviewer characteristics such as age, sex, and country of residence) and convergent criteria (training in epidemiology and/or statistics), or through the overall assessment of the peer review report by authors together with an assessment of ( n  = 4–8) specific components of the peer review report by editors or authors. Preliminary criterion validity was assessed by comparing the grades given by an editor with those given by an editor-in-chief using an earlier version of the tool. Reliability was assessed for 9 tools [ 24 , 25 , 26 , 27 , 31 , 36 , 39 , 41 , 42 ]; all reported inter-rater reliability and 2 also reported test-retest reliability. One tool reported internal consistency measured with Cronbach’s alpha [ 39 ].
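For reference, Cronbach’s alpha can be computed from the item variances and the variance of the total score. The sketch below does this in base R on invented ratings (rows are rated peer review reports, columns are a tool’s items); it illustrates the statistic itself, not the original study’s calculation.

    # Minimal sketch: Cronbach's alpha for a k-item tool, on invented ratings.
    ratings <- cbind(
      item1 = c(4, 3, 5, 2, 4, 3),
      item2 = c(4, 2, 5, 3, 4, 2),
      item3 = c(5, 3, 4, 2, 5, 3)
    )
    k <- ncol(ratings)
    alpha <- (k / (k - 1)) *
      (1 - sum(apply(ratings, 2, var)) / var(rowSums(ratings)))
    alpha  # values closer to 1 indicate higher internal consistency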

Quality components of the peer review reports considered in the tools with more than one item

We extracted 132 items from the 18 tools with more than one item. One item, asking for the percentage of co-reviews the reviewer had graded, was not included in the classification because it represented a method of measuring the reviewer’s performance rather than a component of peer review report quality.

We organized the key concepts from each item into ‘topic-specific matrices’ (Additional file  4 ), identifying 9 main domains and 11 subdomains: 1) relevance of the study ( n  = 9); 2) originality of the study ( n  = 5); 3) interpretation of study results ( n  = 6); 4) strengths and weaknesses of the study ( n  = 12) (general, methods and statistical methods); 5) presentation and organization of the manuscript ( n  = 8); 6) structure of the reviewer’s comments ( n  = 4); 7) characteristics of the reviewer’s comments ( n  = 14) (clarity, constructiveness, detail/thoroughness, fairness, knowledgeability, tone); 8) timeliness of the review report ( n  = 7); and 9) usefulness of the review report ( n  = 10) (decision making and manuscript improvement). The total number of tools corresponding to each domain and subdomain is shown in Fig.  2 . An explanation and example of all domains and subdomains is provided in Table  5 . Some domains and subdomains were considered in most tools, such as whether the reviewer’s comments were detailed/thorough ( n  = 11) and constructive ( n  = 9), whether the reviewer commented on the relevance of the study ( n  = 9), and whether the peer review report was useful for manuscript improvement ( n  = 9). However, other components were rarely considered, such as whether the reviewer made comments on the statistical methods ( n  = 1).

Figure 2. Frequency of quality domains and subdomains

Clustering analysis among tools

We created a domain profile for each tool. For example, the tool developed by Justice et al. consisted of 5 items [ 35 ]. We classified three items under the domain ‘ Characteristics of the reviewer’s comments ’, one under ‘ Timeliness of the review report ’ and one under ‘ Usefulness of the review report ’. According to this classification, the domain profile (represented by proportions of domains) for this tool was 0.6:0.2:0.2 across these three domains and 0 for the remaining six. The hierarchical clustering used the matrix of Euclidean distances among domain profiles, which led to five main clusters (Fig.  3 ).

Figure 3. Hierarchical clustering of tools based on the nine quality domains. The figure shows which quality domains are present in each tool. A slice of the chart represents a tool, and each slice is divided into sectors, indicating quality domains (in different colours). The area of each sector corresponds to the proportion of each domain within the tool. For instance, the “Review Rating” tool consists of two domains: Timeliness , meaning that 25% of all its items are encompassed in this domain, and Characteristics of reviewer’s comments occupying the remaining 75%. The blue lines starting from the centre of the chart define how the tools are divided into the five clusters. Clusters #1, #2 and #3 are sub-nodes of a major node grouping all three, meaning that the tools in these clusters have a similar domain profile compared to the tools in clusters #4 and #5

The first cluster consisted of 5 tools developed from 1990 to 2016. All of these tools included at least one item in the characteristics of the reviewer’s comments domain, representing at least 50% of each domain profile. The second cluster contained 3 tools developed from 1994 to 2006, characterized by incorporating at least one item in both the usefulness and timeliness domains. The third cluster included 6 tools developed from 1998 to 2010 and exhibited the most heterogeneous mix of domains; these tools were distinct from the rest because they encompassed items related to the interpretation of the study results and the originality of the study . The third cluster also included two tools with different versions and variations. The first, second and third clusters were linked together in the hierarchical tree, grouping tools with at least one quality component in the characteristics of the reviewer’s comments domain. The fourth cluster comprised 2 tools developed from 2011 to 2017, each with at least one component in the strengths and weaknesses domain. Finally, the fifth cluster included 2 tools developed from 2009 to 2012 that consisted of the same 2 domains. The fourth and fifth clusters were separated from the rest in the hierarchical tree, grouping tools with only a few domains.

Discussion

To the best of our knowledge, this is the first comprehensive review to systematically identify tools used in biomedical research for assessing the quality of peer review reports. We identified 24 tools from both the medical literature and an internet search: 23 scales and 1 checklist. One in four tools consisted of a single item that simply asked the evaluator for a direct assessment of the peer review report’s ‘overall quality’. The remaining tools had between 4 and 26 items, with the overall quality assessed as the sum of all items, their mean, or a summary score.

Since a definition of overall quality was not provided, these tools relied exclusively on the evaluators’ subjective quality assessment. Moreover, we found that only one study reported a rigorous development process for its tool, and even that process involved a very limited number of people. This is of concern because it means that the identified tools were, in fact, not suitable for assessing the quality of a peer review report, particularly because they lack a focused theoretical basis. We found 10 tools that were evaluated for validity and/or reliability; notably, criterion validity was not fully assessed for any tool.

Most of the scales with more than one item resulted in a summary score. These scales did not consider that items could carry different weights. Although commonly used, scales are controversial tools for assessing quality, primarily because combining items into a single score with arbitrary summarization weights can bias the estimate of the quality being measured [ 43 ]. It is not clear how weights should be assigned to each item of a scale [ 18 ]; thus, different weightings would produce different scales, which could provide varying quality assessments of an individual study [ 44 ].
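To make this concern concrete, here is a small sketch in R with invented scores: the same two peer review reports swap ranks under two different weighting schemes.

    # Minimal sketch: different weights, different verdicts (all numbers invented).
    report_a <- c(5, 2, 2)               # strong on item 1, weak on the rest
    report_b <- c(2, 4, 4)               # middling but consistent
    w_equal  <- c(1, 1, 1) / 3           # equal weighting
    w_skewed <- c(0.6, 0.2, 0.2)         # weighting that favours item 1
    sum(report_a * w_equal)   # 3.00
    sum(report_b * w_equal)   # 3.33 -> report B ranks higher
    sum(report_a * w_skewed)  # 3.80 -> report A ranks higher
    sum(report_b * w_skewed)  # 2.80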

In our methodological systematic review, we found only one checklist. However, it was neither rigorously developed nor validated, and we therefore could not consider it adequate for assessing peer review report quality. We believe that checklists may be a more appropriate means of assessing quality because they do not produce an overall score, and so do not require weighting of the items.

It is necessary to clearly define what a tool measures. For example, the Risk of Bias (RoB) tool [ 45 ] has a clear aim (to assess trial conduct, not reporting), and it provides a detailed definition of each domain in the tool, including support for judgement. Furthermore, it was developed with transparent procedures, including wide consultation and review of the empirical evidence. Bias and uncertainty can arise when using tools that are not evidence-based, rigorously developed, validated and reliable; this is particularly true for tools used to evaluate interventions aimed at improving the peer review process in RCTs, where it affects how trial results are interpreted.

We found that most of the items included in the different tools covered neither the scientific aspects of a peer review report nor aspects specific to biomedical research. Surprisingly, few tools included an item related to the methods used in the study, and only one inquired about the statistical methods.

In line with a previous study published in 1990 [ 28 ], we believe that the quality components found across all tools could be further organized according to the perspective of either an editor or an author, specifically by taking into account the different yet complementary uses of a peer review report. For instance, while reviewer’s comments on the relevance of the study and the interpretation of its results could assist editors in making an editorial decision, the clarity and detail/thoroughness of reviewer’s comments are attributes that help authors improve manuscript quality. We plan to further investigate the perspectives of biomedical editors and authors on the quality of peer review reports by conducting an international online survey. We will also include patient editors as survey participants, as their involvement in the peer review process can further ensure that research manuscripts are relevant and appropriate to end-users [ 46 ].

The present study has strengths but also some limitations. Although we implemented a comprehensive search strategy and followed the guidance for conducting methodological reviews [ 20 ], we cannot exclude the possibility that some tools were not identified. Moreover, we limited the eligibility criteria to reports published in English. Finally, although the number of eligible records we identified through Google® was very limited, it is possible that we introduced selection bias due to a (re)search bubble effect [ 47 ].

Due to the lack of a standard definition of quality, a variety of tools exist for assessing the quality of a peer review report. Overall, we were able to establish 9 quality domains; each of the 18 multi-item tools covered between two and seven of them. The variety of items and item combinations among tools raises concern about variations in the quality of publications across biomedical journals. Low-quality biomedical research implies a tremendous waste of resources [ 48 ] and directly affects patients’ lives. We strongly believe that a validated tool providing a clear definition of peer review report quality is necessary in order to evaluate interventions aimed at improving the peer review process in well-performed trials.

Conclusions

The findings from this methodological systematic review show that the tools for assessing the quality of a peer review report have various components, which we have grouped into 9 domains. We plan to survey a sample of editors and authors in order to refine our preliminary classification. The results from these further investigations will allow us to develop a new tool for assessing the quality of peer review reports, which in turn could be used to evaluate interventions aimed at improving the peer review process in RCTs. Furthermore, it would help editors: 1) evaluate the work of reviewers; 2) provide specific feedback to reviewers; and 3) identify reviewers who provide outstanding review reports. Finally, it might also be used to score the quality of peer review reports within programs to train new reviewers.

Abbreviations

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT: Randomized controlled trial

RoB: Risk of Bias

References

1. Kronick DA. Peer review in 18th-century scientific journalism. JAMA. 1990;263(10):1321–2.

2. Jefferson T, Alderson P, Wager E, Davidoff F. Effects of editorial peer review. JAMA. 2002;287(21):2784–6.

3. Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99:178–82.

4. Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med. 1998;32(3):310–7.

5. Kravitz RL, Franks P, Feldman MD, Gerrity M, Byrne C, Tierney WM. Editorial peer reviewers’ recommendations at a general medical journal: are they reliable and do editors care? PLoS One. 2010;5(4):2–6.

6. Yaffe MB. Re-reviewing peer review. Sci Signal. 2009;2(85):1–3.

7. Stahel PF, Moore EE. Peer review for biomedical publications: we can improve the system. BMC Med. 2014;12(179):1–4.

8. Rennie D. Make peer review scientific. Nature. 2016;535:31–3.

9. Moher D. Custodians of high-quality science: are editors and peer reviewers good enough? https://www.youtube.com/watch?v=RV2tknDtyDs&t=454s . Accessed 16 Oct 2017.

10. Ghimire S, Kyung E, Kang W, Kim E. Assessment of adherence to the CONSORT statement for quality of reports on randomized controlled trial abstracts from four high-impact general medical journals. Trials. 2012;13:77.

11. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results. JAMA. 2010;303(20):2058–64.

12. Hopewell S, Collins GS, Boutron I, Yu L-M, Cook J, Shanyinde M, et al. Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study. BMJ. 2014;349:g4145.

13. Lazarus C, Haneef R, Ravaud P, Boutron I. Classification and prevalence of spin in abstracts of non-randomized studies evaluating an intervention. BMC Med Res Methodol. 2015;15:85.

14. Jefferson T, Rudin M, Brodney Folse S, et al. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;2:MR000016.

15. Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Med. 2016;14:85.

16. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

17. NHS. PROSPERO: international prospective register of systematic reviews. https://www.crd.york.ac.uk/prospero/ . Accessed 6 Nov 2017.

18. Sanderson S, Tatt ID, Higgins JPT. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol. 2007;36:666–76.

19. R Core Team. R: a language and environment for statistical computing. http://www.r-project.org/ . Accessed 4 Dec 2017.

20. Gentles SJ, Charles C, Nicholas DB, Ploeg J, McKibbon KA. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev. 2016;5:172.

21. Glaser B, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

22. Friedman DP. Manuscript peer review at the AJR: facts, figures, and quality assessment. Am J Roentgenol. 1995;164(4):1007–9.

23. Black N, Van Rooyen S, Godlee F, Smith R, Evans S. What makes a good reviewer and a good review for a general medical journal? JAMA. 1998;280(3):231–3.

24. Henly SJ, Dougherty MC. Quality of manuscript reviews in nursing research. Nurs Outlook. 2009;57(1):18–26.

25. Callaham ML, Baxt WG, Waeckerle JF, Wears RL. Reliability of editors’ subjective quality ratings of peer reviews of manuscripts. JAMA. 1998;280(3):229–31.

26. Callaham ML, Knopp RK, Gallagher EJ. Effect of written feedback by editors on quality of reviews: two randomized trials. JAMA. 2002;287(21):2781–3.

27. Van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ. 1999;318(7175):23–7.

28. McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review. JAMA. 1990;263(10):1371–6.

29. Moore A, Jones R. Supporting and enhancing peer review in the BJGP. Br J Gen Pract. 2014;64(624):e459–61.

30. Stossel TP. Reviewer status and review quality. N Engl J Med. 1985;312(10):658–9.

31. Thompson SR, Agel J, Losina E. The JBJS peer-review scoring scale: a valid, reliable instrument for measuring the quality of peer review reports. Learn Publ. 2016;29:23–5.

32. Rajesh A, Cloud G, Harisinghani MG. Improving the quality of manuscript reviews: impact of introducing a structured electronic template to submit reviews. AJR. 2013;200:20–3.

33. Shattell MM, Chinn P, Thomas SP, Cowling WR. Authors’ and editors’ perspectives on peer review quality in three scholarly nursing journals. J Nurs Scholarsh. 2010;42(1):58–65.

34. Jawaid SA, Jawaid M, Jafary MH. Characteristics of reviewers and quality of reviews: a retrospective study of reviewers at Pakistan Journal of Medical Sciences. Pakistan J Med Sci. 2006;22(2):101–6.

35. Justice AC, Cho MK, Winker MA, Berlin JA. Does masking author identity improve peer review quality? A randomized controlled trial. JAMA. 1998;280(3):240–3.

36. Henly SJ, Bennett JA, Dougherty MC. Scientific and statistical reviews of manuscripts submitted to Nursing Research: comparison of completeness, quality, and usefulness. Nurs Outlook. 2010;58(4):188–99.

37. Hettyey A, Griggio M, Mann M, Raveh S, Schaedelin FC, Thonhauser KE, et al. Peerage of Science: will it work? Trends Ecol Evol. 2012;27(4):189–90.

38. Publons. Publons for editors: overview. https://static1.squarespace.com/static/576fcda2e4fcb5ab5152b4d8/t/58e21609d482e9ebf98163be/1491211787054/Publons_for_Editors_Overview.pdf . Accessed 20 Oct 2017.

39. Van Rooyen S, Black N, Godlee F. Development of the review quality instrument (RQI) for assessing peer reviews of manuscripts. J Clin Epidemiol. 1999;52(7):625–9.

40. Evans AT, McNutt RA, Fletcher SW, Fletcher RH. The characteristics of peer reviewers who produce good-quality reviews. J Gen Intern Med. 1993;8(8):422–8.

41. Feurer I, Becker G, Picus D, Ramirez E, Darcy M, Hicks M. Evaluating peer reviews: pilot testing of a grading instrument. JAMA. 1994;272(2):98–100.

42. Landkroon AP, Euser AM, Veeken H. Quality assessment of reviewers’ reports using a simple instrument. Obstet Gynecol. 2006;108(4):979–85.

43. Greenland S, O’Rourke K. On the bias produced by quality scores in meta-analysis, and a hierarchical view of proposed solutions. Biostatistics. 2001;2(4):463–71.

44. Jüni P, Witschi A, Bloch R. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999;282(11):1054–60.

45. Higgins JPT, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

46. Schroter S, Price A, Flemyng E, et al. Perspectives on involvement in the peer-review process: surveys of patient and public reviewers at two journals. BMJ Open. 2018;8:e023357.

47. Ćurković M, Košec A. Bubble effect: including internet search engines in systematic reviews introduces selection bias and impedes scientific reproducibility. BMC Med Res Methodol. 2018;18(1):130.

48. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.

49. Kliewer MA, Freed KS, DeLong DM, Pickhardt PJ, Provenzale JM. Reviewing the reviewers: comparison of review quality and reviewer characteristics at the American Journal of Roentgenology. AJR. 2005;184(6):1731–5.

50. Berquist T. Improving your reviewer score: it’s not that difficult. AJR. 2017;209:711–2.

51. Callaham ML, McCulloch C. Longitudinal trends in the performance of scientific peer reviewers. Ann Emerg Med. 2011;57(2):141–8.

52. Yang Y. Effects of training reviewers on quality of peer review: a before-and-after study (abstract). https://peerreviewcongress.org/abstracts_2009.html . Accessed 7 Nov 2017.

53. Prechelt L. Review quality collector. https://reviewqualitycollector.org/static/pdf/rqdef-example.pdf . Accessed 20 Oct 2017.

54. Das Sinha S, Sahni P, Nundy S. Does exchanging comments of Indian and non-Indian reviewers improve the quality of manuscript reviews? Natl Med J India. 1999;12(5):210–3.

55. Callaham ML, Schriger DL. Effect of structured workshop training on subsequent performance of journal peer reviewers. Ann Emerg Med. 2002;40(3):323–8.


Acknowledgments

The authors would like to thank the MiRoR consortium for their support, Elizabeth Moylan for helping to identify further relevant reports and Melissa Sharp for providing advice during the writing of this article.

Funding

This project was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 676207. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Availability of data and materials

The datasets supporting the conclusions of the present study will be available in the Zenodo repository in the Methods in Research on Research (MiRoR) community [ https://zenodo.org/communities/miror/?page=1&size=20 ].

Author information

Authors and Affiliations

Department of Statistics and Operations Research, Barcelona-Tech, UPC, c/ Jordi Girona 1-3, 08034, Barcelona, Spain

Cecilia Superchi, José Antonio González & Erik Cobo

INSERM, U1153 Epidemiology and Biostatistics Sorbonne Paris Cité Research Center (CRESS), Methods of therapeutic evaluation of chronic diseases Team (METHODS), F-75014, Paris, France

Cecilia Superchi

Paris Descartes University, Sorbonne Paris Cité, Paris, France

Iberoamerican Cochrane Centre, Hospital de la Santa Creu i Sant Pau, C/ Sant Antoni Maria Claret 167, Pavelló 18 - planta 0, 08025, Barcelona, Spain

CIBER de Epidemiología y Salud Pública (CIBERESP), Madrid, Spain

Department of Psychology, Faculty of Humanities and Social Sciences, University of Split, Split, Croatia

Centre d’épidémiologie Clinique, Hôpital Hôtel-Dieu, 1 place du Paris Notre-Dame, 75004, Paris, France

Isabelle Boutron


Contributions

All authors provided intellectual contributions to the development of this study. CS, EC and IB had the initial idea and, with JAG and DH, designed the study. CS designed the search in collaboration with IS. CS conducted the screening and JAG carried out quality control on a 25% random sample. CS and JAG conducted the data extraction. CS conducted the analysis and, with JAG, designed the figures. CS led the writing of the manuscript. IB supervised the manuscript preparation. All authors provided detailed comments on earlier drafts and approved the final manuscript.

Corresponding author

Correspondence to Cecilia Superchi.

Ethics declarations

Ethics approval and consent to participate

Not required.

Consent for publication

Not applicable.

Competing interests

All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that (1) no authors have support from any company for the submitted work; (2) IB is the deputy director of French EQUATOR that might have an interest in the work submitted; (3) no author’s spouse, partner, or children have any financial relationships that could be relevant to the submitted work; and (4) none of the authors has any non-financial interests that could be relevant to the submitted work.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Search strategies. (PDF 182 kb)

Additional file 2:

Excluded studies. (PDF 332 kb)

Additional file 3:

Included studies. (PDF 244 kb)

Additional file 4:

Classification of peer review report quality components. (PDF 2660 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Superchi, C., González, J.A., Solà, I. et al. Tools used to assess the quality of peer review reports: a methodological systematic review. BMC Med Res Methodol 19 , 48 (2019). https://doi.org/10.1186/s12874-019-0688-x


Received : 11 July 2018

Accepted : 20 February 2019

Published : 06 March 2019

DOI : https://doi.org/10.1186/s12874-019-0688-x


Keywords

  • Peer review
  • Quality control
  • Systematic review



Systematic Reviews: Step 6: Assess Quality of Included Studies

Created by health science librarians.


About Step 6: Assess Quality of Included Studies

In step 6 you will evaluate the articles you included in your review for quality and bias. To do so, you will:

  • Use quality assessment tools to grade each article.
  • Create a summary of the quality of literature included in your review.

This page has links to quality assessment tools you can use to evaluate different study types. Librarians can help you find widely used tools to evaluate the articles in your review.

Reporting your review with PRISMA

If you reach the quality assessment step and choose to exclude articles for any reason, update the number of included and excluded studies in your PRISMA flow diagram.

Managing your review with Covidence

Covidence includes the Cochrane Risk of Bias 2.0 quality assessment template, but you can also create your own custom quality assessment template.

How a librarian can help with Step 6

  • What the quality assessment or risk of bias stage of the review entails
  • How to choose an appropriate quality assessment tool
  • Best practices for reporting quality assessment results in your review

Critically Appraise Included Studies

After the screening process is complete, the systematic review team must assess each article for quality and bias. There are various types of bias, some of which are outlined in the Cochrane Handbook.

Select a Quality Assessment Tool

The most important thing to remember when choosing a quality assessment tool is to pick one that was created and validated to assess the study design(s) of your included articles.

For example, if one item in the inclusion criteria of your systematic review is to only include randomized controlled trials (RCTs), then you need to pick a quality assessment tool specifically designed for RCTs (for example, the Cochrane Risk of Bias tool).

Once you have gathered your included studies, you will need to appraise the evidence for its relevance, reliability, validity, and applicability.

Ask questions like:

Relevance:

  • Is the research method/study design appropriate for answering the research question?
  • Are specific inclusion/exclusion criteria used?

Reliability:

  • Is the effect size practically relevant? How precise is the estimate of the effect? Were confidence intervals given?

Validity:

  • Were there enough subjects in the study to establish that the findings did not occur by chance?
  • Were subjects randomly allocated? Were the groups comparable? If not, could this have introduced bias?
  • Are the measurements/tools validated by other studies?
  • Could there be confounding factors?

Applicability:

  • Can the results be applied to my organization and my patient?

What are Quality Assessment tools?

Quality Assessment tools are questionnaires created to help you assess the quality of a variety of study designs.  Depending on the types of studies you are analyzing, the questionnaire will be tailored to ask specific questions about the methodology of the study.  There are appraisal tools for most kinds of study designs.  You should choose a Quality Assessment tool that matches the types of studies you expect to see in your results.  If you have multiple types of study designs, you may wish to use several tools from one organization, such as the CASP or LEGEND tools, as they have a range of assessment tools for many study designs.

Click on a study design below to see some examples of quality assessment tools for that type of study.

Randomized Controlled Trials (RCTs)

  • Cochrane Risk of Bias (ROB) 2.0 Tool Templates are tailored to randomized parallel-group trials, cluster-randomized parallel-group trials (including stepped-wedge designs), and randomized cross-over trials and other matched designs.
  • CASP- Randomized Controlled Trial Appraisal Tool A checklist for RCTs created by the Critical Appraisal Skills Programme (CASP)
  • The Jadad Scale A scale that assesses the quality of published clinical trials based on methods relevant to random assignment, double blinding, and the flow of patients
  • CEBM-RCT A critical appraisal tool for RCTs from the Centre for Evidence Based Medicine (CEBM)
  • Checklist for Randomized Controlled Trials (JBI) A critical appraisal checklist from the Joanna Briggs Institute (JBI)
  • Scottish Intercollegiate Guidelines Network (SIGN) Checklists for quality assessment
  • LEGEND Evidence Evaluation Tools A series of critical appraisal tools from the Cincinnati Children's Hospital. Contains tools for a wide variety of study designs, including prospective, retrospective, qualitative, and quantitative designs.

Cohort Studies

  • CASP- Cohort Studies A checklist created by the Critical Appraisal Skills Programme (CASP) to assess key criteria relevant to cohort studies
  • Checklist for Cohort Studies (JBI) A checklist for cohort studies from the Joanna Briggs Institute
  • The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses A validated tool for assessing case-control and cohort studies
  • STROBE Checklist A checklist for quality assessment of case-control, cohort, and cross-sectional studies

Case-Control Studies

  • CASP- Case Control Study A checklist created by the Critical Appraisal Skills Programme (CASP) to assess key criteria relevant to case-control studies
  • Tool to Assess Risk of Bias in Case Control Studies by the CLARITY Group at McMaster University A quality assessment tool for case-control studies from the CLARITY Group at McMaster University
  • JBI Checklist for Case-Control Studies A checklist created by the Joanna Briggs Institute

Cross-Sectional Studies

Diagnostic Studies

  • CASP- Diagnostic Studies A checklist for diagnostic studies created by the Critical Appraisal Skills Programme (CASP)
  • QUADAS-2 A quality assessment tool developed by a team at the Bristol Medical School: Population Health Sciences at the University of Bristol
  • Critical Appraisal Checklist for Diagnostic Test Accuracy Studies (JBI) A checklist for quality assessment of diagnostic studies developed by the Joanna Briggs Institute

Economic Studies

  • Consensus Health Economic Criteria (CHEC) List A list of 19 yes-or-no questions, one per category, for assessing economic evaluations
  • CASP- Economic Evaluation A checklist for quality assessment of economic studies by the Critical Appraisal Skills Programme

Mixed Methods

  • McGill Mixed Methods Appraisal Tool (MMAT) 2018 User Guide See full site for additional information, including FAQs, references and resources, earlier versions, and more

Qualitative Studies

  • CASP- Qualitative Studies 10 questions to help assess qualitative research from the Critical Appraisal Skills Programme

Systematic Reviews and Meta-Analyses

  • JBI Critical Appraisal Checklist for Systematic Reviews and Research Syntheses An 11-item checklist for evaluating systematic reviews
  • AMSTAR Checklist A 16-question measurement tool to assess systematic reviews
  • AHRQ Methods Guide for Effectiveness and Comparative Effectiveness Reviews A guide to selecting eligibility criteria, searching the literature, extracting data, assessing quality, and completing other steps in the creation of a systematic review
  • CASP - Systematic Review A checklist for quality assessment of systematic review from the Critical Appraisal Skills Programme

Clinical Practice Guidelines

  • National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards (NEATS) Instrument A 15-item instrument using a scale of 1-5 to evaluate a guideline's adherence to the Institute of Medicine's standards for trustworthy guidelines
  • AGREE-II Appraisal of Guidelines for Research and Evaluation The Appraisal of Guidelines for Research and Evaluation (AGREE) Instrument evaluates the process of practice guideline development and the quality of reporting

Other Study Designs

  • NTACT Quality Checklists Quality indicator checklists for correlational studies, group experimental studies, single case research studies, and qualitative studies developed by the National Technical Assistance Center on Transition (NTACT). (Users must make an account.)

A Closer Look at Popular Tools

Below, you will find a sample of four popular quality assessment tools and some basic information about each. For more quality assessment tools, please see the lists above, organized by study design.

Use Covidence for Quality Assessment

Covidence uses Cochrane Risk of Bias (which is designed for rating RCTs and cannot be used for other study types) as the default tool for quality assessment of included studies. You can opt to manually customize the quality assessment template and use a different tool better suited to your review. More information about quality assessment using Covidence, including how to customize the quality assessment template, can be found below. If you decide to customize the quality assessment template, you cannot switch back to using the Cochrane Risk of Bias template.

More Information

  • Quality Assessment on the Covidence Guide
  • Covidence FAQs on Quality Assessment Commonly asked questions about quality assessment using Covidence
  • Covidence YouTube Channel A collection of Covidence-created videos


Child Care and Early Education Research Connections

Assessing Research Quality

This page presents information and tools to help evaluate the quality of a research study, as well as information on the ethics of research.

The quality of social science and policy research can vary considerably. It is important that consumers of research keep this in mind when reading the findings from a research study or when considering whether or not to use data from a research study for secondary analysis.


Key Questions to Ask

This section outlines key questions to ask in assessing the quality of research.

Research Assessment Tools

This section provides resources related to quantitative and qualitative assessment tools.

Ethics of Research

This section provides an overview of three basic ethical principles.

Enago Academy

How to Assess the Quality of Journals


Authors wishing to publish their research aim to publish in journals with the highest ratings. Publishing in a prestigious journal not only looks good on your CV, but may also give you better career and funding opportunities. Researchers commonly use the journal impact factor (JIF) to assess overall journal quality. However, the JIF has its advantages and disadvantages. Here we describe other factors that you should consider while assessing a journal.

How to Assess a Journal

Your first thought in journal selection is probably to shortlist a journal that reports the best research in the field and will reach your target audience. A good quality journal would be the most read and talked about one. Therefore, you can assess journal quality by three main factors:

  • Citation analysis – sure, we just said the JIF has its drawbacks, but it is still important. It is the average number of times a journal's articles have been cited in a given time period (a worked example follows this list).
  • Peer analysis – have experts in the field reviewed the articles?
  • Circulation and coverage – is the journal listed with an indexing service that will reach your target audience?
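As a worked example of the first factor: the standard two-year impact factor divides the citations a journal received this year to its articles from the previous two years by the number of citable items it published in those two years. The numbers below are invented (shown in R).

    # Minimal sketch: the standard two-year journal impact factor (invented numbers).
    cites_2023_to_2021_22 <- 600  # citations received in 2023 to 2021-22 items
    citable_items_2021_22 <- 200  # articles and reviews published in 2021-22
    cites_2023_to_2021_22 / citable_items_2021_22  # JIF for 2023: 3.0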

Considerations of These Criteria

None of the criteria above should be used to evaluate the quality of a journal in isolation because they all have their advantages and disadvantages which we highlight below.

Citation Analysis

It makes sense to rank a journal by its average citation record since this indicates its popularity. It is:

  • Quantitative
  • Regularly updated
  • Calculated by a neutral party

However, its indication of journal quality is questionable because:

  • Not all citations reflect good research.
  • Authors in the same network tend to cite each other heavily.
  • English-speaking countries are favored, since these countries host the citation analysis databases.
  • The type of article influences it. Review articles tend to get more citations than original research articles.

Peer Analysis

This is a quality-control measure applied to research before publication in a journal. However, you should look at the reviewer selection criteria to assess the caliber of the reviewers. Although reviewers are considered experts in their field, they have varied knowledge and experience, which often results in differing opinions.

Circulation and Coverage

Journals that reach an international audience quickly, with electronic copies available, will be given preference. You want to publish your article as soon as possible, and you also want it to reach your target audience.

Other Factors to Consider

In addition to the criteria mentioned above, there are other factors that one can use to judge the quality of a journal:

Standardized Research Reporting Framework

A standard set by the journal that ensures that all the information you need to assess the research in a paper is present.

Journal Ranking

This is based on the citation record as well as the prestige of the journal. It reflects how researchers perceive the journal.

Discussion Platform

The ability to discuss the merits and shortcomings of a paper adds value to the research community. Therefore, a journal that provides an online discussion platform could be given preference. This, of course, is only valuable if the research community actually contributes.

Acceptance/Rejection Rates

Some journals publish these rates on their websites. A high rejection rate implies that a journal is very particular about the quality of the research that they publish. However, the disadvantage of this rate is that the journal does not give reasons for rejections. Also, a high rejection rate could simply be a matter of limited publication space.

To get an overall journal rank, and decide which journal would be best for you, you can consider all the above factors. However, remember that you cannot compare journals across fields, since the standards for these criteria vary.

How do you normally assess the quality of a journal? Do you consider any other factors than those listed above? Let us know in the comments section below.

  • https://www.enago.com/academy/think-check-submit-a-new-approach-to-journal-selection/
  • https://www.enago.com/academy/selecting-the-right-journal-part-1/
  • https://www.enago.com/academy/selecting-the-right-journal-part-2/
  • https://www.enago.com/academy/how-to-select-the-right-journal-for-publication/

