The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews

  • Matthew J Page, senior research fellow 1,
  • Joanne E McKenzie, associate professor 1,
  • Patrick M Bossuyt, professor 2,
  • Isabelle Boutron, professor 3,
  • Tammy C Hoffmann, professor 4,
  • Cynthia D Mulrow, professor 5,
  • Larissa Shamseer, doctoral student 6,
  • Jennifer M Tetzlaff, research product specialist 7,
  • Elie A Akl, professor 8,
  • Sue E Brennan, senior research fellow 1,
  • Roger Chou, professor 9,
  • Julie Glanville, associate director 10,
  • Jeremy M Grimshaw, professor 11,
  • Asbjørn Hróbjartsson, professor 12,
  • Manoj M Lalu, associate scientist and assistant professor 13,
  • Tianjing Li, associate professor 14,
  • Elizabeth W Loder, professor 15,
  • Evan Mayo-Wilson, associate professor 16,
  • Steve McDonald, senior research fellow 1,
  • Luke A McGuinness, research associate 17,
  • Lesley A Stewart, professor and director 18,
  • James Thomas, professor 19,
  • Andrea C Tricco, scientist and associate professor 20,
  • Vivian A Welch, associate professor 21,
  • Penny Whiting, associate professor 17,
  • David Moher, director and professor 22
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
  • 2 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
  • 3 Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004 Paris, France
  • 4 Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
  • 5 University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Annals of Internal Medicine
  • 6 Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 7 Evidence Partners, Ottawa, Canada
  • 8 Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 9 Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
  • 10 York Health Economics Consortium (YHEC Ltd), University of York, York, UK
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada
  • 12 Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark
  • 13 Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 14 Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  • 15 Division of Headache, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ, London, UK
  • 16 Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA
  • 17 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • 18 Centre for Reviews and Dissemination, University of York, York, UK
  • 19 EPPI-Centre, UCL Social Research Institute, University College London, London, UK
  • 20 Li Ka Shing Knowledge Institute of St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
  • 21 Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 22 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • Correspondence to: M J Page matthew.page@monash.edu
  • Accepted 4 January 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers). 1 2 To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this. 3

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) 4 5 6 7 8 9 10 is a reporting guideline designed to address poor reporting of systematic reviews. 11 The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper 12 13 14 15 16 providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by the statement's co-publication in multiple journals, citation in over 60 000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews, 17 18 19 20 although more could be done to improve adherence to the guideline. 21

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence, 22 23 24 methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate, 25 26 27 and new methods have been developed to assess the risk of bias in results of included studies. 28 29 Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews. 30 31 Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence. 32 In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols, 33 34 disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. Capturing these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Summary points

To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found

The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies

The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere. 35 We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews. 17 21 36 37 We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies). 38 These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted. 39 40 PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper 41 (such as PRISMA-Search 42 in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline 27 in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question 43

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan 25 for a description of each method)

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results 25

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses
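To make the glossary's definition of meta-analysis of effect estimates concrete, the following is a minimal, purely illustrative sketch of one common approach, a fixed-effect (inverse-variance) meta-analysis. The study estimates, standard errors, and the function name are invented for illustration and are not part of PRISMA or any cited tool.

```python
import math

# Illustrative sketch only: fixed-effect (inverse-variance) meta-analysis,
# one of the statistical synthesis methods referred to in the glossary.
# All numbers below are hypothetical.

def fixed_effect_meta(estimates, standard_errors):
    """Pool study effect estimates using inverse-variance weights."""
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    # 95% confidence interval for the pooled effect
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Three hypothetical studies reporting mean differences
pooled, se, ci = fixed_effect_meta([0.4, 0.6, 0.5], [0.10, 0.20, 0.15])
# pooled is about 0.46, with a 95% confidence interval of roughly (0.31, 0.61)
```

In practice a random-effects model, which additionally estimates between-study heterogeneity, is often preferred; see McKenzie and Brennan 25 for the full range of synthesis methods.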

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available. 43 44 45 46 However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose. 30 31 Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement 47 48 ). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses, 49 meta-analyses of individual participant data, 50 systematic reviews of harms, 51 systematic reviews of diagnostic test accuracy studies, 52 and scoping reviews 53 ; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box 2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items ( table 1 ). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement, 54 reflecting new and modified content in PRISMA 2020 ( table 2 ). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated ( fig 1 ).

Noteworthy changes to the PRISMA 2009 statement

Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and table 2 ).

Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

Addition of a new item recommending authors declare any competing interests (see item #26).

Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

PRISMA 2020 item checklist


PRISMA 2020 for Abstracts checklist*

Fig 1

PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers, 55 Mayo-Wilson et al. 56 and Stovold et al. 57 The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report or any other document providing relevant information.


We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website (http://www.prisma-statement.org/) includes fillable templates of the checklists to download and complete (also available in the data supplement on bmj.com). We have also created a web application that allows users to complete the checklist via a user-friendly interface 58 (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app 59). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements). 41 The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance. 60 61 An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in the data supplement on bmj.com. Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste. 36 62 63

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines. 64 We evaluated the reporting completeness of published systematic reviews, 17 21 36 37 reviewed the items included in other documents providing guidance for systematic reviews, 38 surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement, 35 discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists 65 ; journal editors and regulators endorsing use of reporting guidelines 18 ; peer reviewers evaluating adherence to reporting guidelines 61 66 ; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item 67 ; and authors using online writing tools that prompt complete reporting at the writing stage. 60 Multi-pronged interventions, in which more than one of these strategies is combined, may be more effective (such as completion of checklists coupled with editorial checks). 68 However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding. 69 It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers, design interventions that address the identified barriers, and evaluate those interventions using randomised trials. To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies 70 to understand how systematic reviewers interpret the items, and reliability studies to identify items whose interpretation varies across users.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorse its use, advise editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and make changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions 47 49 50 51 52 53 71 72 be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Acknowledgments

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba’ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. 
We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Contributors: JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2-048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for The BMJ; MJP is an editorial board member for PLOS Medicine; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews. None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal of Public Health, for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre, and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement: Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .


Reporting Standards for Literature Reviews

  • First Online: 11 August 2022

  • Rob Dekkers 4 ,
  • Lindsey Carey 5 &
  • Peter Langhorne 6  

Previous chapters have already referred to the reporting of literature reviews. Cases in point are Section 3.5 about evidencing engagement with consulted studies, the assessment of the quality of evidence in Sections 6.4 and 6.5, and combining quantitative and qualitative syntheses in Section 12.4. Considering how to report results, conjectures, findings, conclusions and recommendations is an important aspect of conducting a literature review; this applies to literature reviews for empirical studies as well as stand-alone literature reviews. Depending on the purpose of the literature review, the audience may consist of scholars, researchers, practitioners, policymakers and citizens, or even examiners, for example in the case of doctoral theses. Such a broad variety of readers also requires paying attention to reporting and presenting the literature review, and to making it accessible to the intended readers, which includes writing (for the latter, see Sections 15.6 and 15.7).

Bem DJ (1995) Writing a review article for psychological bulletin. Psychol Bull 118(2):172–177. https://doi.org/10.1037/0033-2909.118.2.172

Bin Ali N, Usman M (2019) A critical appraisal tool for systematic literature reviews in software engineering. Inform Softw Technol 112:48–50. https://doi.org/10.1016/j.infsof.2019.04.006

Boote DN, Beile P (2005) Scholars before researchers: on the centrality of the dissertation literature review in research preparation. Educ Res 34(6):3–15. https://doi.org/10.3102/0013189x034006003

Boote DN, Beile P (2006) On “Literature reviews of, and for, educational research”: a response to the critique by Joseph Maxwell. Educ Res 35(9):32–35. https://doi.org/10.3102/0013189x035009032

Booth A (2006) “Brimful of STARLITE”: toward standards for reporting literature searches. J Med Libr Assoc 94(4):421-e205

Booth A, Clarke M, Dooley G, Ghersi D, Moher D, Petticrew M, Stewart L (2012) The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Systemat Rev 1(1):2. https://doi.org/10.1186/2046-4053-1-2

Brown P, Brunnhuber K, Chalkidou K, Chalmers I, Clarke M, Fenton M, Forbes C, Glanville J, Hicks NJ, Moody J, Twaddle S, Timimi H, Young P (2006) How to formulate research recommendations. BMJ 333(7572):804–806. https://doi.org/10.1136/bmj.38987.492014.94

Budgen D, Brereton P, Drummond S, Williams N (2018) Reporting systematic reviews: some lessons from a tertiary study. Inf Softw Technol 95:62–74. https://doi.org/10.1016/j.infsof.2017.10.017

Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, Hartmann-Boyce J, Ryan R, Shepperd S, Thomas J, Welch V, Thomson H (2020) Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ 368:l6890. https://doi.org/10.1136/bmj.l6890

Classen S, Winter S, Awadzi KD, Garvan CW, Lopez EDS, Sundaram S (2008) Psychometric testing of SPIDER: data capture tool for systematic literature reviews. Am J Occupat Therapy 62(3):335–348. https://doi.org/10.5014/ajot.62.3.335

Clemen RT (1989) Combining forecasts: a review and annotated bibliography. Int J Forecast 5(4):559–583. https://doi.org/10.1016/0169-2070(89)90012-5

Dekkers R, Barlow A, Chaudhuri A, Saranga H (2020) Theory informing decision-making on outsourcing: a review of four ‘Five-Year’ snapshots spanning 47 years. University of Glasgow, Glasgow

Elamin MB, Flynn DN, Bassler D, Briel M, Alonso-Coello P, Karanicolas PJ, Guyatt GH, Malaga G, Furukawa TA, Kunz R, Schünemann H, Murad MH, Barbui C, Cipriani A, Montori VM (2009) Choice of data extraction tools for systematic reviews depends on resources and review complexity. J Clin Epidemiol 62(5):506–510. https://doi.org/10.1016/j.jclinepi.2008.10.016

Felizardo KR, Salleh N, Martins RM, Mendes E, MacDonell SG, Maldonado JC (2011) Using visual text mining to support the study selection activity in systematic literature reviews. Paper presented at the International Symposium on Empirical Software Engineering and Measurement, Banff, AB, 22–23 September 2011

Ferrari R (2015) Writing narrative style literature reviews. Med Writ 24(4):230–235. https://doi.org/10.1179/2047480615Z.000000000329

Free C, Phillips G, Felix L, Galli L, Patel V, Edwards P (2010) The effectiveness of M-health technologies for improving health and health services: a systematic review protocol. BMC Res Notes 3(1):250. https://doi.org/10.1186/1756-0500-3-250

Garside R (2014) Should we appraise the quality of qualitative research reports for systematic reviews, and if so, how? Innov Eur J Soc Sci Res 27(1):67–79. https://doi.org/10.1080/13511610.2013.777270

Goldman KD, Schmalz KJ (2004) The matrix method of literature reviews. Health Promot Pract 5(1):5–7

Grosso G, Godos J, Galvano F, Giovannucci EL (2017) Coffee, caffeine, and health outcomes: an umbrella review. Ann Rev Nutr 37(1):131–156. https://doi.org/10.1146/annurev-nutr-071816-064941

Haddaway NR, Macura B (2018) The role of reporting standards in producing robust literature reviews. Nat Clim Chang 8(6):444–447. https://doi.org/10.1038/s41558-018-0180-3

Haddaway NR, Macura B, Whaley P, Pullin AS (2018) ROSES reporting standards for systematic evidence syntheses: pro forma, flow-diagram and descriptive summary of the plan and conduct of environmental systematic reviews and systematic maps. Environ Evid 7(1):7. https://doi.org/10.1186/s13750-018-0121-7

Houghton C, Murphy K, Meehan B, Thomas J, Brooker D, Casey D (2017) From screening to synthesis: using NVivo to enhance transparency in qualitative evidence synthesis. J Clin Nurs 26(5–6):873–881. https://doi.org/10.1111/jocn.13443

Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, Ioannidis JPA, Straus S, Thorlund K, Jansen JP, Mulrow C, Catalá-López F, Gøtzsche PC, Dickersin K, Boutron I, Altman DG, Moher D (2015) The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Internal Med 162(11):777–784. PMID 26030634. https://doi.org/10.7326/m14-2385

Kohl C, McIntosh EJ, Unger S, Haddaway NR, Kecke S, Schiemann J, Wilhelm R (2018) Online tools supporting the conduct and reporting of systematic reviews and systematic maps: a case study on CADIMA and review of existing tools. Environ Evid 7(1):8. https://doi.org/10.1186/s13750-018-0115-5

Lawal AK, Rotter T, Kinsman L, Sari N, Harrison L, Jeffery C, Kutz M, Khan MF, Flynn R (2014) Lean management in health care: definition, concepts, methodology and effects reported (systematic review protocol). Systemat Rev 3(1):103. https://doi.org/10.1186/2046-4053-3-103

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D (2009) The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol 62(10):e1–e34. https://doi.org/10.1016/j.jclinepi.2009.06.006

Macdonald S, Kam J (2007a) Aardvark et al.: quality journals and gamesmanship in management studies. J Inform Sci 33(6):702–717. https://doi.org/10.1177/0165551507077419

Macdonald S, Kam J (2007b) Ring a Ring o’ Roses: quality journals and gamesmanship in management studies. J Manag Stud 44(4):640–655. https://doi.org/10.1111/j.1467-6486.2007.00704.x

MacLure M (2005) ‘Clarity bordering on stupidity’: where’s the quality in systematic review? J Educ Policy 20(4):393–416. https://doi.org/10.1080/02680930500131801

Maggio LA, Tannery NH, Kanter SL (2011) Reproducibility of literature search reporting in medical education reviews. Acad Med 86(8):1049–1054. https://doi.org/10.1097/ACM.0b013e31822221e7

Marangunić N, Granić A (2015) Technology acceptance model: a literature review from 1986 to 2013. Univ Access Inf Soc 14(1):81–95. https://doi.org/10.1007/s10209-014-0348-1

Maxwell JA (2006) Literature reviews of, and for, educational research: a commentary on Boote and Beile’s “Scholars before researchers.” Educ Res 35(9):28–31. https://doi.org/10.3102/0013189x035009028

Moher D, Liberati A, Tetzlaff J, Altman DG (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339:b2535. https://doi.org/10.1136/bmj.b2535

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA & PRISMA-P Group (2015) Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systemat Rev 4(1):1. https://doi.org/10.1186/2046-4053-4-1

Moja LP, Telaro E, D’Amico R, Moschetti I, Coe L, Liberati A (2005) Assessment of methodological quality of primary studies by systematic reviews: results of the metaquality cross sectional study. BMJ 330(7499):1053. https://doi.org/10.1136/bmj.38414.515938.8F

Newman MEJ (2003) The structure and function of complex networks. SIAM Rev 45(2):167–256. https://doi.org/10.1137/S003614450342480

Neyeloff JL, Fuchs SC, Moreira LB (2012) Meta-analyses and forest plots using a Microsoft Excel spreadsheet: step-by-step guide focusing on descriptive data analysis. BMC Res Notes 5(1):52. https://doi.org/10.1186/1756-0500-5-52

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA (2014) Standards for reporting qualitative research: a synthesis of recommendations. Acad Med 89(9):1245–1251. https://doi.org/10.1097/acm.0000000000000388

O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S (2015) Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev 4(1):5. https://doi.org/10.1186/2046-4053-4-5

Olorisade BK, Brereton P, Andras P (2017) Reproducibility of studies on text mining for citation screening in systematic reviews: evaluation and checklist. J Biomed Inform 73:1–13. https://doi.org/10.1016/j.jbi.2017.07.010

Otero-Cerdeira L, Rodríguez-Martínez FJ, Gómez-Rodríguez A (2015) Ontology matching: a literature review. Expert Syst Appl 42(2):949–971. https://doi.org/10.1016/j.eswa.2014.08.032

Oxman AD, Guyatt GH (1988) Guidelines for reading literature reviews. Can Med Assoc J 138(8):697–703

Pati D, Lorusso LN (2018) How to write a systematic review of the literature. HERD: Health Environ Res Des J 11(1):15–30. https://doi.org/10.1177/1937586717747384

Pidgeon TE, Wellstead G, Sagoo H, Jafree DJ, Fowler AJ, Agha RA (2016) An assessment of the compliance of systematic review articles published in craniofacial surgery with the PRISMA statement guidelines: a systematic review. J Cranio-Maxillofacial Surg 44(10):1522–1530. https://doi.org/10.1016/j.jcms.2016.07.018

Poole R, Kennedy OJ, Roderick P, Fallowfield JA, Hayes PC, Parkes J (2017) Coffee consumption and health: umbrella review of meta-analyses of multiple health outcomes. BMJ 359:j5024. https://doi.org/10.1136/bmj.j5024

Pullin AS, Stewart GB (2006) Guidelines for systematic review in conservation and environmental management. Conserv Biol 20(6):1647–1656. https://doi.org/10.1111/j.1523-1739.2006.00485.x

Salgado EG, Dekkers R (2018) Lean product development: nothing new under the sun? Int J Manag Rev 20(4):903–933. https://doi.org/10.1111/ijmr.12169

Siddaway AP, Wood AM, Hedges LV (2019) How to do a systematic review: a best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Ann Rev Psychol 70(1):747–770. https://doi.org/10.1146/annurev-psych-010418-102803

Silagy CA, Middleton P, Hopewell S (2002) Publishing protocols of systematic reviews comparing what was done to what was planned. JAMA 287(21):2831–2834. https://doi.org/10.1001/jama.287.21.2831

Simera I, Altman DG, Moher D, Schulz KF, Hoey J (2008) Guidelines for reporting health research: the EQUATOR Network's survey of guideline authors. PLoS Med 5(6):e139. https://doi.org/10.1371/journal.pmed.0050139

Singh G, Haddad KM, Chow CW (2007) Are articles in “top” management journals necessarily of higher quality? J Manag Inq 16(4):319–331. https://doi.org/10.1177/1056492607305894

Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, Tierney JF (2015) Preferred reporting items for a systematic review and meta-analysis of individual participant data: the PRISMA-IPD statement. JAMA 313(16):1657–1665. https://doi.org/10.1001/jama.2015.3656

Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, Moher D, Becker BJ, Sipe TA, Thacker SB (2000) Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA 283(15):2008–2012. https://doi.org/10.1001/jama.283.15.2008

Templier M, Paré G (2018) Transparency in literature reviews: an assessment of reporting practices across review types and genres in top IS journals. Eur J Inf Syst 27(5):503–550. https://doi.org/10.1080/0960085X.2017.1398880

Tong A, Flemming K, McInnes E, Oliver S, Craig J (2012) Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol 12(1):181. https://doi.org/10.1186/1471-2288-12-181

Tornquist EM, Funk SG, Champagne MT (1989) Writing research reports for clinical audiences. West J Nurs Res 11(5):576–582. https://doi.org/10.1177/019394598901100507

Torraco RJ (2005) Writing integrative literature reviews: guidelines and examples. Hum Resour Dev Rev 4(3):356–367. https://doi.org/10.1177/1534484305278283

Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, Poole C, Schlesselman JJ, Egger M, STROBE Initiative (2007) Strengthening the reporting of observational studies in epidemiology (STROBE): explanation and elaboration. PLoS Med 4(10):e297. https://doi.org/10.1371/journal.pmed.0040297

Webster J, Watson RT (2002) Analyzing the past to prepare for the future: writing a literature review. MIS Quart 26(2):xiii–xxiii

Welch V, Petticrew M, Tugwell P, Moher D, O’Neill J, Waters E, White H (2012) PRISMA-equity 2012 extension: reporting guidelines for systematic reviews with a focus on health equity. PLoS Med 9(10):e1001333. https://doi.org/10.1371/journal.pmed.1001333

Wong G, Greenhalgh T, Westhorp G, Buckingham J, Pawson R (2013a) RAMESES publication standards: meta-narrative reviews. BMC Med 11(1):20. https://doi.org/10.1186/1741-7015-11-20

Wong G, Greenhalgh T, Westhorp G, Buckingham J, Pawson R (2013b) RAMESES publication standards: realist syntheses. BMC Med 11(1):21. https://doi.org/10.1186/1741-7015-11-21

Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE (2009) Analysis of the reporting of search strategies in Cochrane systematic reviews. J Med Libr Assoc 97(1):21–29. https://doi.org/10.3163/1536-5050.97.1.004

Zhang J, Han L, Shields L, Tian J, Wang J (2019) A PRISMA assessment of the reporting quality of systematic reviews of nursing published in the Cochrane library and paper-based journals. Medicine 98(49):e18099. https://doi.org/10.1097/MD.0000000000018099

Zumsteg JM, Cooper JS, Noon MS (2012) Systematic review checklist. J Ind Ecol 16(s1):S12–S21. https://doi.org/10.1111/j.1530-9290.2012.00476.x

Author information

Authors and affiliations

University of Glasgow, Glasgow, UK

Rob Dekkers

Glasgow Caledonian University, Glasgow, UK

Lindsey Carey

Peter Langhorne

Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter

Dekkers, R., Carey, L., Langhorne, P. (2022). Reporting Standards for Literature Reviews. In: Making Literature Reviews Work: A Multidisciplinary Guide to Systematic Approaches. Springer, Cham. https://doi.org/10.1007/978-3-030-90025-0_13

Print ISBN: 978-3-030-90024-3

Online ISBN: 978-3-030-90025-0

Chapter III: Reporting the review

Miranda Cumpston, Toby Lasserson, Ella Flemyng, Matthew J Page

Key Points:

  • Clear reporting of a systematic review allows readers to evaluate the rigour of the methods applied, and to interpret the findings appropriately. Transparency can facilitate attempts to verify or reproduce the results, and make the review more usable for health care decision makers.
  • The target audience for Cochrane Reviews is people making decisions about health care, including healthcare professionals, consumers and policy makers. Cochrane Reviews should be written so that they are easy to read and understand by someone with a basic sense of the topic who may not necessarily be an expert in the area.
  • Cochrane Protocols and Reviews should comply with the PRISMA 2020 and PRISMA for Protocols reporting guidelines.
  • Guidance on the composition of plain language summaries of Cochrane Reviews is also available to help review authors specify the key messages in terms that are accessible to consumers and non-expert readers.
  • Review authors should ensure that reporting of objectives, important outcomes, results, caveats and conclusions is consistent across the main text, the abstract, and any other summary versions of the review (e.g. plain language summary).

This chapter should be cited as: Cumpston M, Lasserson T, Flemyng E, Page MJ. Chapter III: Reporting the review. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

III.1 Introduction

The effort of undertaking a systematic review is wasted if review authors do not report clearly what they did and what they found ( Glasziou et al 2014 ). Clear reporting enables others to replicate the methods used in the review, which can facilitate attempts to verify or reproduce the results ( Page et al 2018 ). Transparency can also make the review more usable for healthcare decision makers. For example, clearly describing the interventions assigned in the included studies can help users determine how best to deliver effective interventions in practice ( Hoffmann et al 2017 ). Also, comprehensively describing the eligibility criteria applied, sources consulted, analyses conducted, and post-hoc decisions made, can reduce uncertainties in assessments of risk of bias in the review findings ( Whiting et al 2016 ). For these reasons, transparent reporting is an essential component of all systematic reviews.

Surveys of the transparency of published systematic reviews suggest that many elements of systematic reviews could be reported better. For example, Nguyen and colleagues evaluated a random sample of 300 systematic reviews of interventions indexed in bibliographic databases in November 2020 ( Nguyen et al 2022 ). They found that in at least 20% of the reviews there was no information about the years of coverage of the search, the methods used to collect data and appraise studies, or the funding source of the review. Less than half of the reviews provided information on a protocol or registration record for the review. However, Cochrane Reviews, which accounted for 3% of the sample, had more complete reporting than all other types of systematic reviews.

Possible reasons why more complete reporting of Cochrane Reviews has been observed include the use of software (RevMan, https://training.cochrane.org/online-learning/core-software-cochrane-reviews/revman ) and strategies in the editorial process that promote good reporting. RevMan includes many standard headings and subheadings which are designed to prompt Cochrane Review authors to document their methods and results clearly. 

Cochrane Reviews of interventions should adhere to the PRISMA 2020 (Preferred Reporting Items for Systematic reviews and Meta-Analyses) reporting guideline; see http://www.prisma-statement.org/ . PRISMA is an evidence-based minimum set of items for reporting systematic reviews and meta-analyses, designed to ensure the highest possible standard of reporting. Extensions to PRISMA and additional reporting guidelines for specific areas of methods are cited in the relevant sections below.

Cochrane’s Methodological Expectations of Cochrane Intervention Reviews (MECIR) detail standards for the conduct of Cochrane Reviews of interventions. They provide expectations for the general methodological approach to be followed, from designing the review through to interpreting the findings. There is good reason to distinguish between conduct (MECIR) and reporting (PRISMA): good conduct does not necessarily lead to good reporting, good reporting cannot improve poor conduct, and poor reporting can obscure good or poor conduct of a review. The MECIR expectations of conduct are embedded in the relevant chapters of this Handbook, and authors should adhere to MECIR throughout the development of their systematic review. MECIR conduct guidance for updates of Cochrane Reviews of interventions is presented in Chapter IV . For the latest version of all MECIR conduct guidance, readers should consult the MECIR web pages, available at https://methods.cochrane.org/mecir .

This chapter is built on reporting guidance from PRISMA 2020 ( Page et al 2021a , Page et al 2021b ) and is divided into sections for Cochrane Review protocols ( Section III.2) and new Cochrane Reviews ( Section III.3 ). Many of the standard headings recommended for use in Cochrane Reviews are referred to in this chapter, although the precise headings available in RevMan may be amended as new versions are released. New headings can be added and some standard headings can be deactivated; if the latter is done, review authors should ensure that all information expected (as outlined in PRISMA 2020) is still reported somewhere in the review.

III.2 Reporting of protocols of new Cochrane Reviews

Preparing a well-written review protocol is important for many reasons (see Chapter 1 ). The protocol is a public record of the question of interest and the intended methods before results of the studies are fully known. This helps readers to judge how the eligibility criteria of the review, stated outcomes and planned methods will address the intended question of interest. It also helps anyone who evaluates the completed review to judge how far it fulfilled its original objectives ( Lasserson et al 2016 ). Investing effort in the development of the review question and planning of methods also stimulates review authors to anticipate methodological challenges that may arise, and helps minimize potential for non-reporting biases by encouraging review authors to publish their review and report results for all pre-specified outcomes ( Shamseer et al 2015 ).

See the Introduction and Methods sections of PRISMA 2020 for the reporting items relevant to protocols for new Cochrane Reviews. All these items are also covered in PRISMA for Protocols, an extension to the PRISMA guidelines for the reporting of systematic review protocols ( Moher et al 2015 , Shamseer et al 2015 ). They include guidance for reporting of the:

  • Background;
  • Objectives;
  • Criteria for considering studies for inclusion in the review;
  • Search methods for identification of studies (e.g. a list of all sources that will be searched, a complete search strategy to be implemented for at least one database);
  • Data collection and analysis (e.g. types of information that will be sought from reports of included studies and methods for obtaining such information, how risk of bias in included studies will be assessed, and any intended statistical methods for combining results across studies); and
  • Other information (e.g. acknowledgements, contributions of authors, declarations of interest, and sources of support).

These sections correspond to the same sections in a completed review, and further details are outlined in Section III.3 .

The required reporting items have been incorporated into a template for protocols for Cochrane Reviews, which is available in Cochrane’s review production tool, RevMan (see the RevMan Knowledge Base ). If using the template, authors should carefully consider the methods that are appropriate for their specific review and adapt the template where required.

One key difference between a review protocol and a completed review is that the Methods section in a protocol should be written in the future tense. Because Cochrane Reviews are updated as new evidence accumulates, methods outlined in the protocol should generally be written as if a suitably large number of studies will be identified to allow the objectives to be met (even if this is considered unlikely at the time of writing).

PRISMA 2020 reflects the minimum expectations for good reporting of a review protocol. Further guidance on the level of planning required for each aspect of the review methods and the detailed information recommended for inclusion in the protocol is given in the relevant chapters of this Handbook.

III.3 Reporting of new Cochrane Reviews

The main text of a Cochrane Review should be succinct and readable. Although there is no formal word limit for Cochrane Reviews, review authors should consider 10,000 words a maximum for the main text of the review unless there is a special reason to write a longer review, such as when the question is unusually broad or complex.

People making decisions about health care are the target audience for Cochrane Reviews. This includes healthcare professionals, consumers and policy makers, and reviews should be accessible to these audiences. Cochrane Reviews should be written so that they are easy to read and understand by someone with a basic sense of the topic who is not necessarily an expert in the area. Some explanation of terms and concepts is likely to be helpful, and perhaps even essential, but too much explanation can detract from the readability of a review. Simplicity and clarity are also vital to readability. The readability of Cochrane Reviews should be comparable to that of a well-written article in a general medical journal.

Review authors should ensure that reporting of objectives, outcomes, results, caveats and conclusions is consistent across the main text, the tables and figures, the abstract, and any other summary versions of the review (e.g. ‘Summary of findings’ table and plain language summary). Although this sounds simple, it can be challenging in practice; authors should review their text carefully to ensure that readers of a summary version are likely to come away with the same overall understanding of the conclusions of the review as readers accessing the full text.

Plagiarism is not acceptable and all sources of information should be cited (for more information see the Cochrane Library editorial policy on plagiarism ). Also, the unattributed reproduction of text from other sources should be avoided. Quotes from other published or unpublished sources should be indicated and attributed clearly, and permission may be required to reproduce any published figures.

PRISMA 2020 provides the main reporting items for new Cochrane Reviews. A template for Cochrane Reviews of interventions is available that incorporates the relevant reporting guidance from PRISMA 2020. The template is available in RevMan to facilitate author adherence to the reporting guidance via the RevMan Knowledge Base . If using the template, authors should consider carefully the methods that are appropriate for their specific review and adapt the template where required. In the remainder of this section we summarize the reporting guidance relating to different sections of a Cochrane Review.

III.3.1 Abstract

All reviews should include an abstract of not more than 1000 words, although in the interests of brevity, authors should aim to include no more than 700 words, without sacrificing important content. Abstracts should be targeted primarily at healthcare decision makers (clinicians, consumers and policy makers) rather than just at researchers.

Terminology should be reasonably easy to understand for a general rather than a specialist healthcare audience. Abbreviations should be avoided, except where they are widely understood (e.g. HIV). Where essential, other abbreviations should be spelt out (with the abbreviations in brackets) on first use. Names of drugs and interventions that can be understood internationally should be used wherever possible. Trade or brand names should not be used and generic names are preferred.

Abstracts of Cochrane Reviews are made freely available on the internet and published in bibliographic databases that index the Cochrane Database of Systematic Reviews (e.g. MEDLINE, Embase). Some readers may be unable to access the full review, or the full text may not have been translated into their language, so abstracts may be the only source they have to understand the review results ( Beller et al 2013 ). It is important therefore that they can be read as stand-alone documents. The abstract should summarize the key methods, results and conclusions of the review. An abstract should not contain any information that is not in the main body of the review, and the overall messages should be consistent with the conclusions of the review.

Abstracts for Cochrane Reviews of interventions should follow the PRISMA 2020 for Abstracts checklist ( Page et al 2021b ). Each abstract should include:

  • Rationale (a concise summary of the rationale for and context of the review);
  • Objectives (of the review);
  • Search methods (including an indication of databases searched, and the date of the last search for which studies were fully incorporated);
  • Eligibility criteria (including a summary of eligibility criteria for study designs, participants, interventions and comparators);
  • Risk of bias (methods used to assess risk of bias);
  • Synthesis methods (methods used to synthesize results, especially any variations on standard approaches);
  • Included studies (total number of studies and participants and a brief summary of key characteristics);
  • Results of syntheses (including the number of studies and participants for each outcome, a clear statement of the direction and magnitude of the effect, the effect estimate and 95% confidence interval if meta-analysis was used, and the GRADE assessment of the certainty of the evidence. The results should contain the same outcomes as found in other summary formats such as the plain language summary and ‘Summary of findings’ table, including those for which no studies reported the outcome and those that are not statistically significant. This section should also provide a brief summary of the limitations of the evidence included in the review);
  • Authors’ conclusions (including implications both for practice and for research);
  • Funding (primary source of funding for the review); and
  • Registration (registration name and number and/or DOIs of previously published protocols and versions of the review, if applicable).

III.3.2 Plain language summary

A Cochrane Plain language summary is a stand-alone summary of the systematic review. Like the Abstract, the Plain language summary may be read alone, and its overall messages should be consistent with the conclusions in the full review. The Plain language summary should convey clearly the questions and key findings of the review, using language that can be understood by a wide range of non-expert readers. The summary should use words and sentence structures that are easy to understand, and should avoid technical terms and jargon where possible. Any technical terms used should be explained. The audience for Plain language summaries may include people with a health condition, carers, healthcare workers or policy makers. Readers may not have English as their first language. Cochrane Plain language summaries are frequently translated, and using plain language is also helpful for translators.

Writing in plain language is a skill that is different from writing for a scientific audience. Full guidance and a template are available as online supplementary material to this chapter. Authors are strongly encouraged to use this guidance to ensure good practice and consistency with other summaries in the Cochrane Library. It may also be helpful to seek assistance for this task, such as asking someone with experience in writing in plain language for a general audience for help, or seeking feedback on the draft summary from a consumer or someone with little knowledge of the topic area.

III.3.3 Background and Objectives

Well-formulated review questions occur in the context of an already-formed body of knowledge. The Background section should address this context, including a description of the condition or problem of interest. It should help clarify the rationale for the review, and explain why the questions being addressed are important. It should be concise (generally under 1000 words) and be understandable to the users of the intervention(s) under investigation.

It is important that the eligibility criteria and other aspects of the methods, such as the comparisons used in the synthesis, build on ideas that have been developed in the Background section. For example, if there are uncertainties to be explored in how variation in setting, different populations or type of intervention influence the intervention effect, then it would be important to acknowledge these as objectives of the review, and ensure the concepts and rationale are explained.

The following three standard subheadings in the Background section of a Cochrane Review are intended to facilitate a structured approach to the context and overall rationale for the review.

  • Description of the condition:  A brief description of the condition being addressed, who is affected, and its significance, is a useful way to begin the review. It may include information about the biology, diagnosis, prognosis, prevalence, incidence and burden of the condition, and may consider equity or variation in how different populations are affected.
  • Description of the intervention and how it might work:  A description of the experimental intervention(s) should place it in the context of any standard or alternative interventions, remembering that standard practice may vary widely according to context. The role of the comparator intervention(s) in standard practice should also be made clear. For drugs, basic information on clinical pharmacology should be presented where available, such as dose range, metabolism, selective effects, half-life, duration and any known interactions with other drugs. For more complex interventions, such as behavioural or service-level interventions, a description of the main components should be provided (see Chapter 17 ). This section should also provide theoretical reasoning as to why the intervention(s) under review may have an impact on potential recipients, for example, by relating a drug intervention to the biology of the condition. Authors may refer to a body of empirical evidence such as similar interventions having an impact on the target recipients or identical interventions having an impact on other populations. Authors may also refer to a body of literature that justifies the possibility of effectiveness. Authors may find it helpful to use a logic model ( Kneale et al 2015 ) or conceptual framework to illustrate the proposed mechanism of action of the intervention and its components. This will also provide review authors with a framework for the methods and analyses undertaken throughout the review to ensure that the review question is clearly and appropriately addressed. More guidance on considering the conceptual framework for a particular review question is presented in Chapter 2 and Chapter 17 .
  • Why it is important to do this review: Review authors should explain clearly why the questions being asked are important. Rather than justifying the review on the grounds that there are known eligible studies, it is more helpful to emphasize what aspects of, or uncertainties in, the accumulating evidence base now justify a systematic review. For example, it might be the case that studies have reached conflicting conclusions, that there is debate about the evidence to date, or that there are competing approaches to implementing the intervention.

Immediately following the Background section of the review, review authors should declare the review objectives. They should begin with a precise statement of the primary objective of the review, ideally in a single sentence. Where possible the style should be of the form “To assess the effects of [intervention or comparison] for [health problem] for/in [types of people, disease or problem and setting if specified] ”. This might be followed by a series of secondary objectives relating to different participant groups, different comparisons of interventions or different outcome measures. If relevant, any objectives relating to the evaluation of economic or qualitative evidence should be stated. It is not necessary to state specific hypotheses.

III.3.4 Methods

The Methods section in a completed review should be written in the past tense, and should describe what was done to obtain the results and conclusions of the current review.

Review authors are expected to cite their protocol to make it clear that there was one. Often a review is unable to implement all the methods outlined in the protocol. For example, planned investigations of heterogeneity (e.g. subgroup analyses) and small-study effects may not have been conducted because of an insufficient number of studies. Authors should describe and explain all amendments to the prespecified methods in the main Methods section.

The Methods section of a Cochrane Review includes five main subsections, within which are a series of standard headings to guide authors in reporting all the relevant information. See Sections III.3.4.1 , III.3.4.2 ,  III.3.4.3 , III.3.4.4, and III.3.4.5 for a summary of content recommended for inclusion under each subheading.

III.3.4.1 Criteria for considering studies for this review

Review authors should declare all criteria used to decide which studies are included in the review. Doing so will help readers understand the scope of the review and recognize why particular studies they are aware of were not included. Eligible study designs should be described, with a focus on specific features of a study’s design rather than design labels (e.g. how groups were formed, whether the intervention was assigned to individuals or clusters of individuals) ( Reeves et al 2017 ). Review authors should describe eligibility criteria for participants, including any restrictions based on age, diagnostic criteria, location and setting. If relevant, it is useful to describe how studies including a subset of relevant participants were addressed (e.g. when children up to the age of 16 years only were eligible but a study included children up to the age of 18 years). Eligibility criteria for interventions and comparators should also be stated, including any criteria around delivery, dose, duration, intensity, co-interventions and characteristics of complex interventions. The rationale for all criteria should be clear, including the eligible study designs.

Typically, studies should not be excluded from a review solely because no outcomes of interest were reported, because failure to report an outcome does not mean it was not assessed ( Dwan et al 2017 ). However, on occasion it will be appropriate to include only studies that measured particular outcomes. For example, a review of a multi-component public health intervention promoting healthy lifestyle choices, focusing on reduction in smoking prevalence, might legitimately exclude studies that do not measure any smoking outcomes. Review authors should specify if measurement of a particular outcome was used as an eligibility criterion for the review, and justify why this was done.

Further guidance on planning eligibility criteria is presented in Chapter 3 .

III.3.4.2 Outcome measures

Review authors should specify the critical and important outcomes of interest to the review, and define acceptable ways of measuring them. The review’s important outcomes should normally reflect at least one potential benefit and at least one potential harm.

For each listed outcome or outcome domain, it should be clear which specific outcomes, measures or tools will be considered together and combined for the purposes of synthesis. For example, for the outcome of depression, a series of measurement tools for depression symptoms may be listed. It should be explicitly stated whether they will be synthesized together as a single outcome (depression), or presented as a series of separate syntheses for each tool. Any categories of time that will be used to group outcomes for synthesis should also be defined, e.g. short term (up to 1 month), medium term (> 1 month to 12 months), and long term (> 12 months). Additional guidance on grouping of outcomes for synthesis is included in Chapter 3 , and in the InSynQ (Intervention Synthesis Questions) reporting guideline ( https://InSynQ.info ).  

III.3.4.3 Search methods for identification of studies

It is essential that users of systematic reviews are given an opportunity to evaluate the methods used to identify studies for inclusion. Such an evaluation is possible when review authors report their search methods comprehensively. This involves specifying all sources consulted, including databases, trials registers, websites, and a list of individuals or organizations contacted. If particular journals were handsearched, this should be noted. Any specific methods used to develop the search strategy, such as automated text analysis or peer review, should also be noted, including methods used to translate the search strategy for use in different databases. Specifying the dates of coverage of all databases searched and the date of the last search for which studies were fully incorporated can help users determine how up to date the review is. Review authors should also declare any limits placed on the search (e.g. by language, publication date or publication format).

To facilitate replication of a search, review authors should include in the supplementary material the exact search strategy (or strategies) used for each database, including any limits and filters used. Search strategies can be exported from bibliographic databases, and these should be copied and pasted instead of re-typing each line, which can introduce errors.

See  Chapter 4 for guidance on search methods. An extension to the PRISMA statement for reporting of literature searches is also available ( Rethlefsen et al 2021 ).

III.3.4.4 Data collection and analysis

Cochrane Reviews include several standard subheadings to enable a structured, detailed description of the methods used for data collection and analysis. Additional headings should be included where appropriate to describe additional methods implemented in the review, e.g. those specific to the analysis of qualitative or economic evidence.

Selection of studies: There should be a description of how the eligibility criteria were applied, from screening of search results through to the final selection of studies for inclusion in the review. The number of people involved at each stage of the process should be stated, such as two authors working independently, along with an indication of how any disagreements were resolved. Any automated processes, software tools or crowdsourcing used to support selection should be noted. See Chapter 4 for guidance on the study selection process.

Data collection and management:  Review authors should specify how data were collected for the included studies. This includes describing the number of people involved in data collection, whether they worked independently, how any disagreements were resolved, and whether standardized data collection forms were used (and if so, whether they were piloted in advance). Any software tools used in data collection should be cited, as well as any checklists such as TIDieR for the description of interventions ( Hoffmann et al 2017 ), TIDieR PHP for population health and policy interventions ( Campbell et al 2018 ), or TACIT for identifying conflicts of interest ( https://tacit.one/ ). If study authors or sponsors were contacted to obtain missing information or to clarify the information available, this should be stated.

RevMan allows authors to directly import some types of data (including study results and risk of bias assessments). To facilitate the import, it is recommended that Cochrane authors consider the required format of data import files to inform their data extraction forms. See documentation in the RevMan Knowledge Base .

A brief description of the data items (e.g. participant characteristics, intervention details) extracted from each report is recommended. If methods for transforming or processing data in preparation for analysis were necessary (e.g. converting standard errors to standard deviations, extracting numeric data from graphs), these methods should be described.
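As an illustration only (this is not part of Cochrane tooling, and the function name is ours), one of the transformations mentioned above, converting a reported standard error of a mean back to a standard deviation, can be sketched as:

```python
import math

def se_to_sd(se: float, n: int) -> float:
    """Convert a standard error of a mean to a standard deviation.

    Uses the standard relationship SD = SE * sqrt(n), where n is the
    number of participants contributing to the mean.
    """
    return se * math.sqrt(n)

# Example: a trial reports a mean with SE = 0.5 for n = 100 participants.
sd = se_to_sd(0.5, 100)  # 0.5 * 10 = 5.0
```

Whatever transformations are applied, the point of this section of the Methods is that a reader could reproduce each derived value from the data reported in the primary studies.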

Additional information about the outcomes to be collected is helpful to include, including a description of how authors handled multiplicity, such as where a single study reports more than one similar outcome measure or measurement time point eligible for inclusion in the same review, requiring a method or decision rule to select between eligible results. See Chapter 3 for guidance on selecting outcomes, and Chapter 5 for guidance on data collection.

Risk of bias assessment in included studies: There should be a description of the approach used to assess risk of bias in the included studies. This involves specifying the risk-of-bias tool(s) used, how many authors were involved in the assessment, how disagreements were resolved, and how the assessments were incorporated into the analysis or interpretation of the results. The preferred bias assessment tools for Cochrane review authors are RoB 2 for RCTs and ROBINS-I for non-randomized studies (described in Chapter 8 and Chapter 25 ). When using either of these tools, some specific information is needed in this section of the Methods. Authors should specify the outcome measures and timepoints assessed (often the same prespecified outcomes that were considered in the GRADE assessment and included in summary versions of the review; see Chapter 3, Section 3.2.4.2 ), and the effect of interest the author team assessed (either the effect of assignment to the intervention, or the effect of adhering to the intervention). Authors should also specify how overall judgements were reached, both across domains for an individual result and across multiple studies included in a synthesis. Cochrane has developed checklists for reporting risk of bias methods in protocols and completed reviews for authors using the RoB 2 tool ( https://methods.cochrane.org/risk-bias-2 ) and the ROBINS-I tool ( https://methods.cochrane.org/robins-i ). See Chapter 7 for further guidance on study risk-of-bias assessment. Authors who have used the original version of the RoB tool (from 2008 or 2011) should refer to guidance for reporting the risk of bias in version 5.2 of the Cochrane Handbook for Systematic Reviews of Interventions (available at https://training.cochrane.org/handbook/archive/v5.2 ).

Measures of the treatment effect:  The effect measures used by the review authors to describe results in any included studies or meta-analyses (or both) should be stated. Examples of effect measures include the odds ratio (OR), risk ratio (RR) and risk difference (RD) for dichotomous data; the mean difference (MD) and standardized mean difference (SMD) for continuous data; and hazard ratio for time-to-event data. Note that some non-randomized study designs require different effect estimates, and these should be specified if such designs are included in the review (e.g. interrupted time series commonly measure the change in level and change in slope). See Chapter 6 for more guidance on effect measures.
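Purely as an illustration of the dichotomous effect measures named above (the function below is a sketch of ours, not Cochrane software), the RR, OR and RD can all be computed from a 2×2 table:

```python
def dichotomous_effects(events_exp: int, n_exp: int,
                        events_ctl: int, n_ctl: int):
    """Compute common effect measures from a 2x2 table.

    Returns the risk ratio (RR), odds ratio (OR) and risk difference (RD)
    comparing an experimental group with a control group.
    """
    risk_exp = events_exp / n_exp
    risk_ctl = events_ctl / n_ctl
    rr = risk_exp / risk_ctl                      # ratio of risks
    odds_exp = events_exp / (n_exp - events_exp)  # events / non-events
    odds_ctl = events_ctl / (n_ctl - events_ctl)
    odds_ratio = odds_exp / odds_ctl
    rd = risk_exp - risk_ctl                      # absolute difference
    return rr, odds_ratio, rd

# Example: 10/100 events in the experimental arm vs 20/100 in control.
rr, odds_ratio, rd = dichotomous_effects(10, 100, 20, 100)
# RR = 0.10/0.20 = 0.5; OR = (10/90)/(20/80) ≈ 0.44; RD = -0.10
```

Stating which of these measures was used matters because, as this example shows, the same data yield different numerical summaries with different interpretations.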

Unit of analysis issues: If the review includes study designs that can give rise to a unit-of-analysis error (when the number of observations in an analysis does not match the number of units randomized), the approaches taken to address these issues should be described. Studies that can give rise to unit-of-analysis errors include crossover trials, cluster-randomized trials, studies where interventions are assigned to multiple parts of the body of the same participant, and studies with multiple intervention groups where more than two groups are included in the same meta-analysis. See Chapter 23 for guidance on handling unit-of-analysis issues.
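For cluster-randomized trials, one widely used approximate adjustment (described in Chapter 23) is to divide the sample size by a design effect based on the intracluster correlation coefficient (ICC). A minimal sketch, with a function name of our own choosing:

```python
def effective_sample_size(n: int, avg_cluster_size: float,
                          icc: float) -> float:
    """Approximate effective sample size for a cluster-randomized trial.

    design effect = 1 + (m - 1) * ICC, where m is the average cluster
    size and ICC is the intracluster correlation coefficient. Dividing
    the number of participants by the design effect gives an approximate
    'effective' sample size, as if the trial had randomized individuals.
    """
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n / design_effect

# Example: 300 participants in 10 clusters of 30, ICC = 0.05.
ess = effective_sample_size(300, 30, 0.05)  # 300 / 2.45 ≈ 122.4
```

If such an adjustment was used, the Methods should report the ICC value assumed and its source, since the effective sample size is sensitive to it.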

Dealing with missing data:  Review authors may encounter various types of missing data in their review. For example, there may be missing information that has not been reported by the included studies, such as information about the methods of the included studies (e.g. when the method of randomization is not reported, which may be addressed in the risk of bias assessment); missing statistics (e.g. when standard deviations of mean scores are not reported, where missing statistics may be calculated from the available information or imputed); or non-reporting of outcomes (which may represent a risk of bias due to missing results). Missing data may also refer to cases where participants in the included primary studies have withdrawn or been lost to follow-up, or have missing measurements for some outcomes, which may be considered and addressed through risk of bias assessment. Any strategies used to deal with missing data should be reported, including any attempts to obtain the missing data. See Chapter 10 for guidance on dealing with missing data.

Reporting bias assessment: Any methods used to assess the risk of bias due to missing results should be described. Such methods may include consideration of the number of studies missing from a synthesis due to selective non-reporting of results, or investigations to assess small-study effects (e.g. funnel plots), which can arise from the suppression of small studies with ‘negative’ results (also called publication bias). If relevant, any tools or checklists used (such as ROB-ME, https://www.riskofbias.info/welcome/rob-me-tool ) should be cited. See Chapter 13 for a description of methods for assessing risk of bias due to missing results in a synthesis.

Synthesis methods:  Reviews may address multiple research questions (‘synthesis questions’). For example, a review may be interested in the effects of an intervention in children or adults, or may wish to investigate the effects of different types of exercise interventions. Each comparison to be made in the synthesis should be specified in enough detail to allow a reader to replicate decisions about which studies belong in each synthesis, and the rationale for the comparisons should be clear. Comparisons for synthesis can be defined using the same PICO characteristics that are used to define the eligibility criteria for including studies in the review. See Chapter 3 for guidance on defining the ‘PICO for each synthesis’. Further guidance is available in the InSynQ (Intervention Synthesis Questions) tool for planning and reporting synthesis questions ( https://InSynQ.info ).

Review authors should then describe the methods used for synthesizing results across studies in each comparison (e.g. meta-analysis, network meta-analysis or other methods). Where data have been combined in statistical software external to RevMan, authors should reference the software, commands and settings used to run the analysis. See Chapter 10 for guidance on undertaking meta-analysis, Chapter 11 for guidance on undertaking network meta-analysis, and Chapter 12 for a description of other synthesis methods. An extension to the PRISMA statement for reporting network meta-analyses is available for reviews using these methods ( Hutton et al 2015 ).

Where meta-analysis is planned, details should be specified of the meta-analysis model (e.g. fixed-effect or random-effects), the specific method used (e.g. Mantel-Haenszel, inverse variance, Peto), and a rationale presented for the options selected. Review authors should also describe their approach to identifying or quantifying statistical heterogeneity (e.g. visual inspection of results, a formal statistical test for heterogeneity, I², Tau², or prediction interval). See Chapter 10 for guidance on assessment of heterogeneity.
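To make the connection between these reporting items concrete, the sketch below (an illustration of ours, not RevMan's implementation) shows an inverse-variance random-effects meta-analysis using the DerSimonian-Laird estimator of Tau², with I² derived from Cochran's Q:

```python
import math

def random_effects_meta(estimates, variances):
    """Inverse-variance random-effects meta-analysis (DerSimonian-Laird).

    estimates: per-study effect estimates (e.g. log risk ratios)
    variances: per-study variances of those estimates
    Returns (pooled estimate, its SE, Tau^2, I^2 as a percentage).
    """
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # DL estimator, floored at 0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % of variability beyond chance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2, i2
```

Each choice visible in this sketch, the model (random-effects), the weighting (inverse variance), the Tau² estimator (DerSimonian-Laird) and the heterogeneity statistics reported, is precisely the kind of detail the Methods section should state with a rationale.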

Where meta-analysis is not possible, any other synthesis methods used should be described explicitly, including the rationale for the methods selected. It is common for these methods to be insufficiently described in published reviews ( Campbell et al 2019 , Cumpston et al 2023 ), and general terms such as ‘narrative synthesis’ do not provide appropriate detail about the specific methods used. In addition to detailed guidance in Chapter 12 , a reporting guideline for Synthesis Without Meta-analysis (SWiM) has been developed and should be considered in addition to MECIR for reporting these methods ( Campbell et al 2020 ).

For whichever synthesis methods are used, the structure of tables and plots used to visually display results should also be specified, including a rationale for the options selected (see Section III.3.5.4).

Investigations of heterogeneity and subgroup analysis:  If subgroup analyses or meta-regression were performed, review authors should specify the potential effect modifiers explored, the rationale for each, whether they were identified before or after the results were known, whether they were based on between-study or within-study subgroups, and how they were compared (e.g. using a statistical test for interaction). See Chapter 10 for more information on investigating heterogeneity. If applicable, review authors should specify which equity-related characteristics were explored.

Sensitivity analysis: If any sensitivity analyses were performed to explore the robustness of meta-analysis results, review authors should specify the basis of each analysis (e.g. removal of studies at high risk of bias, imputing alternative estimates of missing standard deviations). See Chapter 10 for more information on sensitivity analyses.

Certainty of the evidence assessment:  Review authors should describe methods for summarizing the findings of the review, and assessing the certainty of the body of evidence (e.g. using the GRADE approach). The domains to be assessed should be stated, including any thresholds used to downgrade the certainty of the evidence, such as risk of bias assessment, levels of unexplained heterogeneity, or key factors for assessing directness. Who conducted the GRADE assessment should be stated, including whether two authors assessed GRADE independently and how disagreements were resolved. Review authors should also indicate which populations, interventions, comparisons and outcomes are addressed in ‘Summary of findings’ tables, specifying up to seven prioritized critical or important outcomes to be included. Authors should note what they considered to be a minimally important difference for each outcome. Any specific language used to describe results in the context of the GRADE assessment should be explained, such as using the word “probably” in relation to moderate-certainty evidence, and “may” in relation to low-certainty evidence (see Chapter 15, Section 15.6.4 ). For more details on completing ‘Summary of findings’ tables and using the GRADE approach, see Chapter 14 .

III.3.4.5 Consumer Involvement

Cochrane follows the ACTIVE (Authors and Consumers Together Impacting on eVidencE) framework to help review authors involve consumers meaningfully in their systematic reviews ( Pollock et al 2017 ). Review authors should report on their methods for involving consumers in their review, including the authors’ general approach to involvement; the level of involvement and the roles of the consumers involved; the stage in the review process when involvement occurs; and any formal research methods or techniques used.

Other stakeholders may also be involved in systematic reviews, such as healthcare providers, policy makers and other decision makers. Where other stakeholders are involved, this should also be described.

If review authors did not involve consumers or other stakeholders, this should be stated.

III.3.5 Results

A narrative summary of the results of a Cochrane Review should be provided under the three standard subheadings in the Results section (see Sections III.3.5.1 , III.3.5.2 and III.3.5.3 for a summary of content recommended for inclusion under each subheading). Details about the effects of interventions (including summary statistics and effect estimates for each included study and for synthesis) can be presented in various tables and figures (see Section III.3.5.4 ).

III.3.5.1 Description of studies

The results section should start with a summary of the results of the search (for example, how many references were retrieved by the electronic searches, how many were evaluated after duplicates were removed, how many were considered as potentially eligible after screening, and how many were included). Review authors are expected to include a PRISMA-type flow diagram demonstrating the flow of studies throughout the selection process ( Page et al 2021b ). Such flow diagrams can be created within RevMan.

To help readers determine the completeness and applicability of the review findings in relation to the review question, as well as how studies are grouped for synthesis within the review, authors should describe the characteristics of the included studies. In the Results section, a brief narrative summary of the included studies should be presented. The summary should not describe each included study individually, but instead should summarize how the included studies vary in terms of design, number of participants, and important effect modifiers outlined in the protocol (e.g. populations and settings, interventions, comparators, outcomes or funding sources). An ‘Overview of synthesis and included studies’ (OSIS) table should be used to summarize key characteristics, and assist readers in matching studies to comparisons for synthesis (guidance on this is available in the RevMan Knowledge Base ). See Chapter 9 for further guidance on summarizing study characteristics.

More details about each included study should be presented in the ‘Characteristics of included studies’ supplementary material. These are organized in tables and should include (at a minimum) the following information about each included study:

  • basic study design or design features;
  • baseline demographics of the study sample (e.g. age, sex/gender, key equity characteristics);
  • sample size;
  • details of all interventions (including what was delivered, by whom, in which setting, and how often; for more guidance see the TIDieR ( Hoffmann et al 2017 ) and TIDieR-PHP ( Campbell et al 2018 ) reporting guidelines);
  • outcomes measured (with details on how and when they were measured);
  • funding source; and
  • declarations of interest among the primary researchers.

Studies that may appear to some readers to meet the eligibility criteria, but which were excluded, should be listed in the ‘Characteristics of excluded studies’ supplementary material, and an explicit reason for exclusion should be provided (one reason is usually sufficient, and all reasons should be consistent with the stated eligibility criteria). It is not necessary to include every study excluded at the full text screening stage in the table; rather, authors should use their judgement to identify those studies most likely to be considered eligible by readers, and hence most useful to include here. A succinct summary of the reasons why studies were excluded from the review should be provided in the Results section.

It is helpful to make readers aware of any completed studies that have been identified as potentially eligible but have not been incorporated into the review. This may occur when there is insufficient information to determine whether the study meets the eligibility criteria of the review, or when a top-up search is run immediately prior to publication and the review authors consider it unlikely that inclusion of the study would change the review conclusions substantially. A description of such studies can be provided in the ‘Characteristics of studies awaiting classification’ supplementary material.

Readers should also be made aware of any studies that meet the eligibility criteria for the review, but which are still in progress and hence have no results available. This serves several purposes. It will help readers assess the stability of the review findings, alert research funders about ongoing research activity, help inform research implications, and can serve as a useful basis for deciding when an update of the review may be needed. A description of such studies can be provided in the ‘Characteristics of ongoing studies’ supplementary material.

III.3.5.2 Risk of bias in included studies

To help readers determine the credibility of the results of included studies, review authors should provide an overview of their risk-of-bias assessments in this section of the Results. For example, this might include overall comments on key domains that influenced the overall risk of bias judgement (e.g. the extent to which blinding was implemented across all included trials), and an indication of whether important differences in overall risk of bias were observed across outcomes. It is not necessary to describe the individual domain assessments of each included study or each result here. If risk of bias assessments were very similar (or identical) for all outcomes in the review, a summary of the assessments across studies should be presented here. If risk of bias assessments are very different for different outcomes, this section should be very brief, and summaries of the assessments across studies should be provided within the ‘Synthesis of results’ section alongside the relevant results.

If RoB 2 or ROBINS-I has been used, result-level ‘risk of bias’ tables should be included to summarize the risk of bias judgements for each domain for each study included in the synthesis. For RoB 2, these tables can be generated in RevMan, and summaries of risk of bias assessments can also be added to forest plots presenting the results of meta-analysis. ROBINS-I risk of bias tables should be presented in an additional supplementary material. More detailed assessments, including the consensus responses to each signalling question and comments to support each response, can be made available as an additional file in a publicly available data repository.

Cochrane guidance specific to the presentation and reporting of risk of bias assessments using the RoB 2 tool is available at https://methods.cochrane.org/risk-bias-2 , and for ROBINS-I at https://methods.cochrane.org/robins-i . Chapter 7 , Chapter 8 and Chapter 25 present further guidance on risk of bias assessment.

III.3.5.3 Synthesis of results

Review authors should summarize in text form the results for all pre-specified review outcomes, regardless of the statistical significance, magnitude or direction of the effects, or whether evidence was found for those outcomes. The text should present the results in a logical and systematic way. This can be done by organizing results by population or comparison (e.g. by first describing results for the comparison of drug versus placebo, then describing results for the comparison of drug A versus drug B).

If meta-analysis was possible, synthesized results should always be accompanied by a measure of statistical uncertainty, such as a 95% confidence interval. If other synthesis methods were used, authors should take care to state specifically which methods were used. In particular, unless vote counting based on the direction of effect is used explicitly, authors should avoid the inadvertent use of vote counting in text (e.g. “the majority of studies found a positive effect”) ( Cumpston et al 2023 ). It is also helpful to indicate the amount of information (numbers of studies and participants) contributing to each synthesis. If additional studies reported results that could not be included in a synthesis (e.g. because results were incompletely reported or were in an incompatible format), these results should still be reported in the review. If no data were available for particular review outcomes of interest, review authors should state this, so that all pre-specified outcomes are accounted for. Guidance on summarizing results from meta-analysis is provided in Chapter 10 , from network meta-analysis in Chapter 11 , and for methods other than meta-analysis in Chapter 12 .
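To illustrate why a synthesized estimate is always paired with a confidence interval, the following sketch pools hypothetical study results using inverse-variance weighting. It is a simplified fixed-effect calculation for illustration only, not the implementation used by RevMan, and all numbers are invented.

```python
# Minimal sketch of an inverse-variance fixed-effect meta-analysis on the
# log risk ratio scale; study estimates and standard errors are hypothetical.
import math

log_rr = [-0.22, -0.10, -0.35]   # log risk ratio from each study
se = [0.12, 0.15, 0.20]          # standard error of each estimate

weights = [1 / s**2 for s in se]                 # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval on the log scale, then back-transformed
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled RR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Reporting the back-transformed interval alongside the point estimate conveys both the direction and the precision of the synthesized effect.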

It is important that the results of the review are presented in a manner that ensures the reader can interpret the findings accurately. The direction of effect (increase or decrease, benefit or harm), should always be clear to the reader, and the minimal important difference in the outcome (if known) should be specified. Review authors should consider presenting results in formats that are easy to interpret. For example, standardized mean differences are difficult to interpret because they are in units of standard deviation, but can be re-expressed in more accessible formats (see Chapter 15 ). 
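One common re-expression of a standardized mean difference, described in Chapter 15, multiplies the SMD by a representative standard deviation of a familiar measurement scale. The sketch below uses hypothetical numbers (the SMD, its interval, and the chosen SD are all illustrative assumptions).

```python
# Hedged sketch: re-expressing a standardized mean difference (SMD) in the
# units of a familiar scale by multiplying by a representative standard
# deviation. All values are hypothetical.
smd, smd_lo, smd_hi = -0.45, -0.70, -0.20   # SMD with 95% CI
sd_familiar = 8.0                           # representative SD of a familiar scale

md = smd * sd_familiar
md_lo, md_hi = smd_lo * sd_familiar, smd_hi * sd_familiar
print(f"Approximate mean difference {md:.1f} points "
      f"(95% CI {md_lo:.1f} to {md_hi:.1f})")
```

The result reads as a difference in scale points rather than in standard deviation units, which is usually easier for readers to interpret.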

In addition to summarizing the effects of interventions, review authors should also summarize the results of any subgroup analyses (or meta-regression), sensitivity analyses, and assessments of the risk of bias due to missing results (if performed) that are relevant to each synthesis. A common issue in reporting the results of subgroup analyses that should be avoided is the misleading emphasis placed on the intervention effects within subgroups (e.g. noting that one group has a statistically significant effect) without reference to a test for between-subgroup difference (see Chapter 10 ).
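The between-subgroup test referred to above can be computed from the subgroup estimates themselves. The sketch below is a simplified fixed-effect version for two subgroups (one degree of freedom), with hypothetical estimates; Chapter 10 describes the test in full.

```python
# Minimal sketch of a test for differences between two subgroups; subgroup
# estimates and standard errors are hypothetical. With two subgroups the
# statistic has one degree of freedom.
import math

est = [-0.30, -0.05]   # subgroup effect estimates (e.g. log risk ratios)
se = [0.10, 0.12]      # their standard errors

w = [1 / s**2 for s in se]
overall = sum(wi * ei for wi, ei in zip(w, est)) / sum(w)
q_between = sum(wi * (ei - overall)**2 for wi, ei in zip(w, est))

# chi-squared (1 df) survival function via the complementary error function
p_value = math.erfc(math.sqrt(q_between / 2))
print(f"Q_between = {q_between:.2f}, p = {p_value:.3f}")
```

A subgroup claim supported only by significance within one subgroup, without a statistic like this for the difference between subgroups, is exactly the misleading emphasis the text warns against.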

A ‘Summary of findings’ table is a useful means of presenting findings for the most important comparisons and outcomes, whether or not evidence is available for them. In a published Cochrane Review, all ‘Summary of findings’ tables are included before the Background section. A ‘Summary of findings’ table typically:

  • includes results for one clearly defined population group;
  • indicates the intervention and the comparator;
  • includes seven or fewer patient-important outcomes;
  • describes the characteristics of the outcomes (e.g. scale, scores, follow-up);
  • indicates the number of participants and studies for each outcome;
  • presents at least one estimate of the typical risk or score for participants receiving the comparator intervention for each outcome;
  • summarizes the intervention effect (if appropriate); and
  • includes an assessment of the certainty of the body of evidence for each outcome.
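The comparator risk and the summary effect in such a table can be linked arithmetically: the corresponding risk with the intervention is derived from the assumed comparator risk and the relative effect. The sketch below uses entirely hypothetical numbers to show that derivation.

```python
# Hedged illustration (hypothetical numbers) of how a 'Summary of findings'
# table pairs a typical comparator risk with a corresponding intervention
# risk derived from the summary relative effect.
assumed_risk = 0.20   # typical risk with the comparator (200 per 1000)
risk_ratio = 0.75     # summary relative effect from the synthesis

corresponding_risk = assumed_risk * risk_ratio
per_1000_comparator = round(assumed_risk * 1000)
per_1000_intervention = round(corresponding_risk * 1000)
print(f"{per_1000_comparator} per 1000 with comparator vs "
      f"{per_1000_intervention} per 1000 with intervention "
      f"({per_1000_comparator - per_1000_intervention} fewer per 1000)")
```

Presenting absolute risks per 1000 in this way lets readers judge the practical importance of a relative effect at a given baseline risk.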

The assessment of the certainty of the body of evidence should follow the GRADE approach, which includes considerations of risk of bias, indirectness, inconsistency, imprecision and publication bias (see Chapter 14 ). The GRADE assessment should be presented alongside each result wherever that result appears (for example, in the Results, Discussion or Abstract).

A common mistake to avoid is confusing ‘no evidence of an effect’ with ‘evidence of no effect’. When a confidence interval includes the possibility of no effect, it is wrong to claim that it shows the intervention has no effect or is no different from the control intervention, unless the confidence interval is narrow enough to exclude a meaningful difference in either direction. Where a confidence interval is compatible with both a positive and a negative effect, or with both a positive and a negligible effect, this is factored into the assessment of the imprecision of the result through GRADE. Authors can therefore report the size and direction of the central effect estimate as observed, alongside an assessment of its uncertainty.
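The distinction can be made mechanical: a confidence interval that crosses the null only supports a claim of ‘no important effect’ when it also lies entirely within the minimal important difference in both directions. The sketch below (hypothetical numbers, not a Cochrane tool) encodes that three-way reading for a difference measure with a null value of zero.

```python
# Illustrative check (hypothetical thresholds) of the distinction between
# 'no evidence of an effect' and 'evidence of no effect'.
def interpret(ci_lo, ci_hi, mid, null=0.0):
    """Classify a 95% CI given a minimal important difference (MID)."""
    if ci_lo > null or ci_hi < null:      # CI excludes the null value
        return "evidence of an effect"
    if -mid < ci_lo and ci_hi < mid:      # CI excludes any important effect
        return "evidence of no important effect"
    return "no evidence of an effect (estimate imprecise)"

print(interpret(-0.8, 0.9, mid=2.0))   # narrow CI: within the MID both ways
print(interpret(-0.5, 3.5, mid=2.0))   # wide CI: cannot rule out a benefit
```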

III.3.5.4 Presenting results of studies and syntheses in tables and figures

Simple summary data for each intervention group (such as means and standard deviations), as well as estimates of effect (such as mean differences), should be presented for each study, for each outcome of interest to the review, in the Analyses supplementary material. The Analyses supplementary material has a hierarchical structure, presenting results in forest plots or other table formats, grouped first by comparison, and then for each outcome assessed within the comparison. Authors can also record in each table the source of all results presented, in particular, whether results were obtained from published literature, by correspondence, from a trials register, or from another source (e.g. clinical study report). Presenting such information facilitates attempts by others to verify or reproduce the results ( Page et al 2018 ).

In addition to the Analyses supplementary material, review authors should include the main forest plots and tables that help the review address its objectives and support its conclusions as Figures and Tables within the main body of the review.

Forest plots display effect estimates and confidence intervals for each individual study and the meta-analysis ( Lewis and Clarke 2001 ). Forest plots created in RevMan typically illustrate:

1. the summary statistics (e.g. number of events and sample size of each group for dichotomous outcomes) for each study;

2. point estimates and confidence intervals for each study, both in numeric and graphic format;

3. a point estimate and confidence interval for the meta-analytic effect, both in numeric and graphic format;

4. the total number of participants in the experimental and control groups;

5. labels indicating the interventions being compared and the direction of effect;

6. percentage weights assigned to each study;

7. the risk of bias in each point estimate, including the overall judgement and judgements for each domain;

8. estimates of heterogeneity (e.g. Tau²) and inconsistency (I²);

9. a statistical test for the meta-analytic effect.
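The heterogeneity statistics in item 8 can be derived from the study-level estimates. The sketch below (hypothetical data; a simplified calculation for illustration, not RevMan's implementation) computes the DerSimonian-Laird Tau² and the I² statistic from the heterogeneity statistic Q.

```python
# Sketch of how the heterogeneity statistics shown on a forest plot can be
# derived: Q from inverse-variance weights, then DerSimonian-Laird Tau² and
# I² = max(0, (Q - df)/Q). Study data are hypothetical.
import math

est = [0.10, 0.40, 0.25, 0.55]   # study effect estimates
se = [0.15, 0.18, 0.12, 0.20]    # their standard errors

w = [1 / s**2 for s in se]
fixed = sum(wi * ei for wi, ei in zip(w, est)) / sum(w)
q = sum(wi * (ei - fixed)**2 for wi, ei in zip(w, est))
df = len(est) - 1

tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
i2 = max(0.0, (q - df) / q) * 100
print(f"Q = {q:.2f}, Tau² = {tau2:.3f}, I² = {i2:.0f}%")
```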

For reviews using network meta-analysis, a range of figures and table formats may be appropriate to present both the network of evidence and the results of the analysis. These may include a network diagram, contribution matrix, forest plot or rankogram (see Chapter 11 for more details).

If meta-analysis was not possible or appropriate, or if the results of some studies could not be included in a meta-analysis, the results of each included study should still be presented in the review. Wherever possible, results should be presented in a consistent format (e.g. an estimate of effect such as a risk ratio or mean difference with a confidence interval, which may be calculable from the available data even if not presented in the primary study). Where meta-analysis is not used, review authors may find it useful to present the results of studies in a forest plot without calculating a meta-analytic effect.
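Where a primary study reports only raw counts, an effect estimate in a consistent format can often still be calculated. The sketch below (hypothetical 2×2 counts) derives a risk ratio and its 95% confidence interval from the raw data, using the standard large-sample standard error on the log scale.

```python
# Sketch (hypothetical counts) of computing a risk ratio and 95% CI from
# raw 2x2 data when the primary study did not report an effect estimate.
import math

events_int, n_int = 12, 100   # events / total, intervention group
events_ctl, n_ctl = 20, 100   # events / total, control group

rr = (events_int / n_int) / (events_ctl / n_ctl)
# standard error of log(RR) for independent binomial samples
se_log_rr = math.sqrt(1/events_int - 1/n_int + 1/events_ctl - 1/n_ctl)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Estimates calculated this way can be presented in the same format as those extracted directly, including on a forest plot without a meta-analytic total.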

Where appropriate, authors might consider presenting alternative figures to present the results of included studies. These may include a harvest plot, effect direction plot or albatross plot (see Chapter 12 for more details).

Figures other than forest plots and funnel plots may be produced in software other than RevMan and included as Figures in a Cochrane Review.

Review authors should ensure that all statistical results presented in the main review text are consistent between the text and tables or figures, and across all sections of the review where results are reported (e.g. the Abstract, Plain language summary, ‘Summary of findings’ tables, Results and Analyses supplementary material).

If authors wish to make additional data available, such as completed data collection forms or full datasets and code used in statistical analysis, these may be provided as additional files through a publicly available repository (such as the Open Science Framework) and cited in the review.

Authors should avoid presenting tables or forest plots for comparisons or outcomes for which there are no data (i.e. no included studies reported that outcome or comparison). Instead, authors should note in the text of the review that no data are available for the comparisons. However, if the review has a ‘Summary of findings’ table, the main outcomes should be included in this irrespective of whether data are available from the included studies.

III.3.6 Discussion

A structured discussion can help readers consider the implications of the review findings. Standard Discussion subheadings in Cochrane Reviews provide the structure for this section.

Summary of main results:  It is useful to provide a concise description of results for the main outcomes of the review, but this should not simply repeat text provided elsewhere. If the review has a number of comparisons this section should focus on those that are most prominent in the review, and that address the main review objectives. Review authors should avoid repeating all the results of the synthesis, but be careful to ensure that all summary statements made in the Discussion are supported by and consistent with the results presented elsewhere in the review.

Limitations of the evidence included in the review:  This section should present an assessment of how well the evidence identified in the review addressed the review question. It should indicate whether the studies identified were sufficient to address all of the objectives of the review, and whether all relevant types of participants, interventions and outcomes have been investigated. Information presented under ‘Description of studies’ will be useful to draw on in writing this part of the discussion. This section should also summarize the considerations that led to downgrading or upgrading the certainty of the evidence in the GRADE assessment. This information can draw on the explanations for downgrading decisions provided alongside the ‘Summary of findings’ tables in the review.

Limitations of the review process: It is important for review authors to reflect on and report any decisions they made that might have introduced bias into the review findings. For example, rather than emphasizing the comprehensiveness of the search for studies, review authors should consider which aspects of the design or execution of the search could have led to studies being missed. This might occur because of the complexity and low specificity of the search, because the indexing of studies in the area is poor, or because searches beyond bibliographic databases did not occur. If attempts to obtain relevant data were not successful, this should be stated. Additional limitations to consider include contestable decisions relating to the inclusion or exclusion of studies, synthesis of study results, or grouping of studies for the purposes of subgroup analysis. For example, review authors may have decided to exclude particular studies from a synthesis because of uncertainty about the precise details of the interventions delivered, measurement instrument used, or where it has not been possible to retrieve subgroup level data. If data were imputed and alternative approaches to achieve this could have been undertaken, this might also be acknowledged. It may be helpful to consider tools that have been designed to assess the risk of bias in systematic reviews (such as the ROBIS tool (Whiting et al 2016)) when writing this section.

Agreements and disagreements with other studies or reviews: Review authors should also discuss the extent to which the findings of the current review agree or disagree with those of other reviews. Authors could briefly summarize the conclusions of previous reviews addressing the same question, and if the conclusions contrast with their own, discuss why this may have occurred (e.g. because of differences in eligibility criteria, search methods or synthesis approach).

Further guidance on issues for consideration in the Discussion section is presented in Chapter 14 and Chapter 15 .

III.3.7 Conclusions

There are two standard sections in Cochrane Reviews devoted to the authors’ conclusions.

Implications for practice: In this section, review authors should provide a general interpretation of the evidence so that it can inform healthcare or policy decisions. The implications for practice should be as practical and unambiguous as possible, should be supported by the data presented in the review, and should not be based on additional data that were not systematically compiled and evaluated as part of the review. Recommendations for how interventions should be implemented and used in practice should not be given in Cochrane Reviews, as they may be inappropriate depending on the different settings and individual circumstances of readers. Authors can help readers by identifying factors that are likely to be relevant to their decision making, such as the relative value of the likely benefits and harms of the intervention, participants at different levels of risk, or resource issues. If the review considered equity, discuss the equity-related implications for practice and policy.

Implications for research:  This section of a Cochrane Review is often used by people making decisions about future research, and review authors should try to write something that will be useful for this purpose. Implications for how research might be done and reported (e.g. the need for randomized trials rather than other types of study, for better descriptions of interventions, or for the routine collection of patient-important outcomes) should be distinguished from recommendations about what future research should be done (e.g. research in particular subgroups of people, or an as-yet-untested experimental intervention). In addition to important gaps in the completeness and applicability of the evidence noted in the Discussion, any factors that led to downgrading the evidence as part of a GRADE assessment may suggest issues to be addressed by future research, such as avoidable sources of bias or the need for larger studies. This section should also draw on what is known about any ongoing studies identified from trials register searches, and any information about ongoing or recently completed studies can be used to guide recommendations on whether new studies should be initiated. If the review considered equity, discuss the equity-related implications for research. It is important that this section is as clear and explicit as possible. General statements that contain little or no specific information, such as “Future research should be better conducted” or “More research is needed”, are of little use to people making decisions and should be avoided.

III.3.8 Additional information

A Cochrane Review should include several pieces of additional, administrative information, many of which are standard in other journals. These include acknowledgements, contributions of authors, declarations of interest, sources of support, registration and protocol details, and availability of data, code and other materials.

Acknowledgements: Review authors should acknowledge the contributions of people not listed as authors of the review, including any searching, data collection, study appraisal or statistical analysis they performed. Written permission is required from those listed in this section.

Contributions of authors:  The contributions of each author to the review should be described. It is helpful to specify which authors were involved in each of the following tasks: conception of the review; design of the review; co-ordination of the review; search and selection of studies for inclusion in the review; collection of data for the review; assessment of the risk of bias in the included studies; analysis of data; assessment of the certainty in the body of evidence; interpretation of data; and writing of the review. Refer to the Cochrane Library editorial policy on authorship for the criteria that must be met to be listed as an author.

Declarations of interest:  All authors should report any present or recent affiliations or other involvement in any organization or entity with an interest in the review’s topic that might lead to a real or perceived conflict of interest. The dates of the involvement should be reported. For reviews whose titles were registered prior to 14 October 2020, and for updates which were underway before that date, the relevant time frame for interests begins three years before the original registration of the review with Cochrane, before the beginning of an individual author’s first involvement with the review, or before the decision to commence work on a review update. For all other reviews and updates, the relevant time frame for interests begins three years before the submission of the initial draft article, or three years before the beginning of an individual author’s first involvement. If there are no known conflicts of interest, this should be stated explicitly, for example, by writing “None known”. Authors should make themselves aware of the restrictions in place on authorship of Cochrane Reviews where conflicts of interest arise. Refer to the Cochrane Library editorial policy on conflicts of interest for full details.

Sources of support:  Authors should acknowledge grants that supported the review, and other forms of support, such as support from their university or institution in the form of a salary. Sources of support are divided into ‘internal’ (provided by the institutions at which the review was produced) and ‘external’ (provided by other institutions or funding agencies). Each source, its country of origin and what it supported should be provided. Authors should make themselves aware of the restrictions in place on funding of Cochrane Reviews by commercial sources where conflicts of interest may arise. Refer to the Cochrane Library editorial policy on conflicts of interest for full details.

Registration and protocol: Authors should provide the DOIs of protocols or previous versions of the review. If the systematic review is registered, authors should cite the review’s registration record number.

Data, code and other materials: Cochrane requires, as a condition for publication, that the data supporting the results in systematic reviews published in the Cochrane Database of Systematic Reviews be made available for users, and that authors provide a data availability statement.

Analyses and data management are preferably conducted within Cochrane’s authoring tool, RevMan, for which computational methods are publicly available. Data entered into RevMan, such as study data, analysis data, and additional information including search results, citations of included and excluded studies, and risk of bias assessments are automatically made available for download from Cochrane Reviews published on the Cochrane Library. Scripts and artefacts used to generate analyses outside of RevMan which are presented in the review should be publicly archived and cited within the review’s data availability statement. External files, such as template data extraction forms or other data sets, can be added to a disciplinary or general repository and cited within the review. Refer to the Cochrane Library editorial policy on data sharing for full details.

III.4 Chapter information

Authors: Miranda Cumpston, Toby Lasserson, Ella Flemyng, Matthew J Page

Acknowledgements: We thank previous chapter author Jacqueline Chandler, on whose text this version is based. This chapter builds on an earlier version of the Handbook ( Version 5, Chapter 4 : Guide to the contents of a Cochrane protocol and review), edited by Julian Higgins and Sally Green. We thank them for their contributions to the earlier chapter. We thank Sue Brennan, Rachel Churchill, Robin Featherstone, Ruth Foxlee, Kayleigh Kew, Nuala Livingstone and Denise Mitchell for their feedback on this chapter.

Declarations of interest:  Toby Lasserson and Ella Flemyng are employees of Cochrane. Matthew Page co-led the development of the PRISMA 2020 statement.

III.5 References

Beller EM, Glasziou PP, Altman DG, Hopewell S, Bastian H, Chalmers I, Gøtzsche PC, Lasserson T, Tovey D, for the PRISMA for Abstracts Group. PRISMA for Abstracts: Reporting Systematic Reviews in Journal and Conference Abstracts. PLoS Medicine 2013; 10 : e1001419.

Campbell M, Katikireddi SV, Hoffmann T, Armstrong R, Waters E, Craig P. TIDieR-PHP: a reporting guideline for population health and policy interventions. BMJ 2018; 361 : k1079.

Campbell M, Katikireddi SV, Sowden A, Thomson H. Lack of transparency in reporting narrative synthesis of quantitative data: a methodological assessment of systematic reviews. Journal of Clinical Epidemiology 2019; 105 : 1-9.

Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, Hartmann-Boyce J, Ryan R, Shepperd S, Thomas J, Welch V, Thomson H. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ 2020; 368 : l6890.

Cumpston MS, Brennan SE, Ryan R, McKenzie JE. Synthesis methods other than meta-analysis were commonly used but seldom specified: survey of systematic reviews. Journal of Clinical Epidemiology 2023; 156 : 42-52.

Dwan KM, Williamson PR, Kirkham JJ. Do systematic reviews still exclude studies with "no relevant outcome data"? BMJ 2017; 358 : j3919.

Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, Michie S, Moher D, Wager E. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014; 383 : 267-276.

Hoffmann TC, Oxman AD, Ioannidis JP, Moher D, Lasserson TJ, Tovey DI, Stein K, Sutcliffe K, Ravaud P, Altman DG, Perera R, Glasziou P. Enhancing the usability of systematic reviews by improving the consideration and description of interventions. BMJ 2017; 358 : j2998.

Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, Ioannidis JP, Straus S, Thorlund K, Jansen JP, Mulrow C, Catala-Lopez F, Gotzsche PC, Dickersin K, Boutron I, Altman DG, Moher D. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Annals of Internal Medicine 2015; 162 : 777-784.

Kneale D, Thomas J, Harris K. Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews. PLoS One 2015; 10 : e0142187.

Lasserson T, Churchill R, Chandler J, Tovey D, Higgins JPT. Standards for the reporting of protocols of new Cochrane Intervention Reviews. In: Higgins JPT, Lasserson T, Chandler J, Tovey D, Churchill R, editors. Methodological Expectations of Cochrane Intervention Reviews . London: Cochrane; 2016.

Lewis S, Clarke M. Forest plots: trying to see the wood and the trees. BMJ 2001; 322 : 1479-1480.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews 2015; 4 : 1.

Nguyen PY, Kanukula R, McKenzie JE, Alqaidoom Z, Brennan SE, Haddaway NR, Hamilton DG, Karunananthan S, McDonald S, Moher D, Nakagawa S, Nunan D, Tugwell P, Welch VA, Page MJ. Changing patterns in reporting and sharing of review data in systematic reviews with meta-analysis of the effects of interventions: cross sectional meta-research study. BMJ 2022; 379 : e072428.

Page MJ, Altman DG, Shamseer L, McKenzie JE, Ahmadzai N, Wolfe D, Yazdi F, Catalá-López F, Tricco AC, Moher D. Reproducible research practices are underused in systematic reviews of biomedical interventions. Journal of Clinical Epidemiology 2018; 94 : 8-18.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hrobjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, Moher D. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021a; 372 : n71.

Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hrobjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, McKenzie JE. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ 2021b; 372 : n160.

Pollock A, Campbell P, Struthers C, Synnot A, Nunn J, Hill S, Goodare H, Watts C, Morley R. Stakeholder involvement in systematic reviews: a protocol for a systematic review of methods, outcomes and effects. Research Involvement and Engagement 2017; 3 : 9.

Reeves BC, Wells GA, Waddington H. Quasi-experimental study designs series-paper 5: a checklist for classifying studies evaluating the effects on health interventions-a taxonomy without labels. Journal of Clinical Epidemiology 2017; 89 : 30-42.

Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, Koffel JB, Blunt H, Brigham T, Chang S, Clark J, Conway A, Couban R, de Kock S, Farrah K, Fehrmann P, Foster M, Fowler SA, Glanville J, Harris E, Hoffecker L, Isojarvi J, Kaunelis D, Ket H, Levay P, Lyon J, McGowan J, Murad MH, Nicholson J, Pannabecker V, Paynter R, Pinotti R, Ross-White A, Sampson M, Shields T, Stevens A, Sutton A, Weinfurter E, Wright K, Young S, Group P-S. PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Systematic Reviews 2021; 10 : 39.

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA, Group P-P. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 2015; 349 : g7647.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, Davies P, Kleijnen J, Churchill R. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. Journal of Clinical Epidemiology 2016; 69 : 225-234.

For permission to re-use material from the Handbook (either academic or commercial), please see here for full details.

  • Open access
  • Published: 29 March 2021

The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

  • Matthew J. Page   ORCID: orcid.org/0000-0002-4242-7526 1 ,
  • Joanne E. McKenzie 1 ,
  • Patrick M. Bossuyt 2 ,
  • Isabelle Boutron 3 ,
  • Tammy C. Hoffmann 4 ,
  • Cynthia D. Mulrow 5 ,
  • Larissa Shamseer 6 ,
  • Jennifer M. Tetzlaff 7 ,
  • Elie A. Akl 8 ,
  • Sue E. Brennan 1 ,
  • Roger Chou 9 ,
  • Julie Glanville 10 ,
  • Jeremy M. Grimshaw 11 ,
  • Asbjørn Hróbjartsson 12 ,
  • Manoj M. Lalu 13 ,
  • Tianjing Li 14 ,
  • Elizabeth W. Loder 15 ,
  • Evan Mayo-Wilson 16 ,
  • Steve McDonald 1 ,
  • Luke A. McGuinness 17 ,
  • Lesley A. Stewart 18 ,
  • James Thomas 19 ,
  • Andrea C. Tricco 20 ,
  • Vivian A. Welch 21 ,
  • Penny Whiting 17 &
  • David Moher 22  

Systematic Reviews volume  10 , Article number:  89 ( 2021 )


An Editorial to this article was published on 19 April 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews. To encourage wide dissemination, this article is freely accessible on the BMJ, PLOS Medicine, Journal of Clinical Epidemiology, and International Journal of Surgery journal websites.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers) [ 1 , 2 ]. To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this [ 3 ].

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) [ 4 , 5 , 6 , 7 , 8 , 9 , 10 ] is a reporting guideline designed to address poor reporting of systematic reviews [ 11 ]. The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper [ 12 , 13 , 14 , 15 , 16 ] providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by the statement’s co-publication in multiple journals, citation in over 60,000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews [ 17 , 18 , 19 , 20 ], although more could be done to improve adherence to the guideline [ 21 ].

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence [ 22 , 23 , 24 ], methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate [ 25 , 26 , 27 ], and new methods have been developed to assess the risk of bias in results of included studies [ 28 , 29 ]. Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews [ 30 , 31 ]. Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence [ 32 ]. In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols [ 33 , 34 ], disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. Capturing these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere [ 35 ]. We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews [ 17 , 21 , 36 , 37 ]. We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies) [ 38 ]. These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline.

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted [ 39 , 40 ]. PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper [ 41 ] (such as PRISMA-Search [ 42 ] in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline [ 27 ] in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available [ 43 , 44 , 45 , 46 ]. However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose [ 30 , 31 ]. Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement [ 47 , 48 ]). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses [ 49 ], meta-analyses of individual participant data [ 50 ], systematic reviews of harms [ 51 ], systematic reviews of diagnostic test accuracy studies [ 52 ], and scoping reviews [ 53 ]; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box  2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items (Table  1 ). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement [ 54 ], reflecting new and modified content in PRISMA 2020 (Table  2 ). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated (Fig.  1 ).

Fig. 1 PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers [ 55 ], Mayo-Wilson et al. [ 56 ], and Stovold et al. [ 57 ]. The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information.

We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website ( http://www.prisma-statement.org/ ) includes fillable templates of the checklists to download and complete (also available in Additional file 1 ). We have also created a web application that allows users to complete the checklist via a user-friendly interface [ 58 ] (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app [ 59 ]). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements) [ 41 ]. The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance [ 60 , 61 ]. An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in Additional file 2 . Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste [ 36 , 62 , 63 ].

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines [ 64 ]. We evaluated the reporting completeness of published systematic reviews [ 17 , 21 , 36 , 37 ], reviewed the items included in other documents providing guidance for systematic reviews [ 38 ], surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement [ 35 ], discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists [ 65 ]; journal editors and regulators endorsing use of reporting guidelines [ 18 ]; peer reviewers evaluating adherence to reporting guidelines [ 61 , 66 ]; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item [ 67 ]; and authors using online writing tools that prompt complete reporting at the writing stage [ 60 ]. Multi-pronged interventions, in which more than one of these strategies is combined (such as completion of checklists coupled with editorial checks), may be more effective [ 68 ]. However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding [ 69 ]. It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers; design interventions that address the identified barriers; and evaluate those interventions using randomised trials. To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies [ 70 ] to understand how systematic reviewers interpret the items, and reliability studies to identify items that reviewers interpret inconsistently.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/ ). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorsing its use, advising editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and making changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions [ 47 , 49 , 50 , 51 , 52 , 53 , 71 , 72 ] be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Box 1 Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question [ 43 ]

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan [ 25 ] for a description of each method)
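Two of the non-meta-analytic methods named in this entry, combining P values and vote counting based on direction of effect, can be illustrated in a few lines of standard-library Python. This is a minimal sketch, not material from the PRISMA statement; the function names are ours, and the P-value combination shown is Fisher's method, one common choice among several:

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method: combine k independent P values into one joint test.

    Under the joint null hypothesis, -2 * sum(ln p_i) follows a
    chi-squared distribution with 2k degrees of freedom.
    """
    k = len(p_values)
    stat = -2 * sum(math.log(p) for p in p_values)
    # The chi-squared survival function has a closed form for even df (2k):
    half = stat / 2
    return math.exp(-half) * sum(half ** j / math.factorial(j) for j in range(k))

def vote_count(effect_estimates):
    """Vote counting: tally studies whose effect favours the intervention."""
    positive = sum(1 for e in effect_estimates if e > 0)
    return positive, len(effect_estimates)

print(fisher_combined_p([0.04, 0.20, 0.12]))  # joint P value across three studies
print(vote_count([0.4, -0.1, 0.8, 0.3]))      # 3 of 4 studies favour intervention
```

Note that both methods discard the magnitude and precision of the study effects, which is why McKenzie and Brennan [ 25 ] reserve them for situations where meta-analysis of effect estimates is not possible.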

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results [ 25 ]
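Under this definition, the simplest case is a fixed-effect (inverse-variance) meta-analysis: a weighted average of the study estimates in which each study is weighted by the inverse of its variance. A minimal illustrative sketch follows; the function name and the 95% z-based interval are our assumptions, not part of PRISMA:

```python
import math

def fixed_effect_meta(estimates, variances):
    """Inverse-variance fixed-effect meta-analysis of study effect estimates.

    Each study is weighted by 1/variance, so more precise studies
    contribute more to the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))             # standard error of pooled estimate
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, se, ci

# Three hypothetical studies: mean differences with their variances
pooled, se, ci = fixed_effect_meta([0.5, 0.3, 0.8], [0.04, 0.09, 0.25])
```

A random-effects model would additionally estimate between-study heterogeneity (for example, τ²) and incorporate it into the weights before pooling.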

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses

Box 2 Noteworthy changes to the PRISMA 2009 statement

• Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and Table  2 ).

• Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

• Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

• Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

• Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

• Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

• Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

• Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

• Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

• Addition of a new item recommending authors declare any competing interests (see item #26).

• Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

Gurevitch J, Koricheva J, Nakagawa S, Stewart G. Meta-analysis and the science of research synthesis. Nature. 2018;555:175–82. https://doi.org/10.1038/nature25753 .

Gough D, Thomas J, Oliver S. Clarifying differences between reviews within evidence ecosystems. Syst Rev. 2019;8:170. https://doi.org/10.1186/s13643-019-1089-2 .

Moher D. Reporting guidelines: doing better for readers. BMC Med. 2018;16:233. https://doi.org/10.1186/s12916-018-1226-0 .

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151:264–9, W64. https://doi.org/10.7326/0003-4819-151-4-200908180-00135 .

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535. https://doi.org/10.1136/bmj.b2535 .

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6:e1000097. https://doi.org/10.1371/journal.pmed.1000097 .

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 2009;62:1006–12. https://doi.org/10.1016/j.jclinepi.2009.06.005 .

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg. 2010;8:336–41. https://doi.org/10.1016/j.ijsu.2010.02.007 .

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Open Med. 2009;3:e123–30.

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Reprint--preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Phys Ther. 2009;89:873–80. https://doi.org/10.1093/ptj/89.9.873 .

Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4:e78. https://doi.org/10.1371/journal.pmed.0040078 .

Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62:e1–34. https://doi.org/10.1016/j.jclinepi.2009.06.006 .

Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700. https://doi.org/10.1136/bmj.b2700 .

Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med. 2009;151:W65–94. https://doi.org/10.7326/0003-4819-151-4-200908180-00136 .

Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6:e1000100. https://doi.org/10.1371/journal.pmed.1000100 .

Page MJ, Shamseer L, Altman DG, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016;13:e1002028. https://doi.org/10.1371/journal.pmed.1002028 .

Panic N, Leoncini E, de Belvis G, Ricciardi W, Boccia S. Evaluation of the endorsement of the preferred reporting items for systematic reviews and meta-analysis (PRISMA) statement on the quality of published systematic review and meta-analyses. PLoS One. 2013;8:e83138. https://doi.org/10.1371/journal.pone.0083138 .

Agha RA, Fowler AJ, Limb C, et al. Impact of the mandatory implementation of reporting guidelines on reporting quality in a surgical journal: a before and after study. Int J Surg. 2016;30:169–72. https://doi.org/10.1016/j.ijsu.2016.04.032 .

Leclercq V, Beaudart C, Ajamieh S, Rabenda V, Tirelli E, Bruyère O. Meta-analyses indexed in PsycINFO had a better completeness of reporting when they mention PRISMA. J Clin Epidemiol. 2019;115:46–54. https://doi.org/10.1016/j.jclinepi.2019.06.014 .

Page MJ, Moher D. Evaluations of the uptake and impact of the preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement and extensions: a scoping review. Syst Rev. 2017;6:263. https://doi.org/10.1186/s13643-017-0663-8 .

O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev. 2015;4:5. https://doi.org/10.1186/2046-4053-4-5 .

Marshall IJ, Noel-Storr A, Kuiper J, Thomas J, Wallace BC. Machine learning for identifying randomized controlled trials: an evaluation and practitioner’s guide. Res Synth Methods. 2018;9:602–14. https://doi.org/10.1002/jrsm.1287 .

Marshall IJ, Wallace BC. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst Rev. 2019;8:163. https://doi.org/10.1186/s13643-019-1074-9 .

McKenzie JE, Brennan SE. Synthesizing and presenting findings using other methods. In: Higgins JPT, Thomas J, Chandler J, et al., editors. Cochrane handbook for systematic reviews of interventions. London: Cochrane; 2019. https://doi.org/10.1002/9781119536604.ch12 .

Higgins JPT, López-López JA, Becker BJ, et al. Synthesising quantitative evidence in systematic reviews of complex health interventions. BMJ Glob Health. 2019;4(Suppl 1):e000858. https://doi.org/10.1136/bmjgh-2018-000858 .

Campbell M, McKenzie JE, Sowden A, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368:l6890. https://doi.org/10.1136/bmj.l6890 .

Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898. https://doi.org/10.1136/bmj.l4898 .

Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919. https://doi.org/10.1136/bmj.i4919 .

Whiting P, Savović J, Higgins JP, ROBIS group, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34. https://doi.org/10.1016/j.jclinepi.2015.06.005 .

Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008. https://doi.org/10.1136/bmj.j4008 .

Hultcrantz M, Rind D, Akl EA, et al. The GRADE working group clarifies the construct of certainty of evidence. J Clin Epidemiol. 2017;87:4–13. https://doi.org/10.1016/j.jclinepi.2017.05.006 .

Booth A, Clarke M, Dooley G, et al. The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Syst Rev. 2012;1:2. https://doi.org/10.1186/2046-4053-1-2 .

Moher D, Stewart L, Shekelle P. Establishing a new journal for systematic review products. Syst Rev. 2012;1:1. https://doi.org/10.1186/2046-4053-1-1 .

Page MJ, McKenzie JE, Bossuyt PM, et al. Updating guidance for reporting systematic reviews: development of the PRISMA 2020 statement. J Clin Epidemiol 2021;134:103–112. https://doi.org/10.1016/j.jclinepi.2021.02.003 .

Page MJ, Altman DG, Shamseer L, et al. Reproducible research practices are underused in systematic reviews of biomedical interventions. J Clin Epidemiol. 2018;94:8–18. https://doi.org/10.1016/j.jclinepi.2017.10.017 .

Page MJ, Altman DG, McKenzie JE, et al. Flaws in the application and interpretation of statistical analyses in systematic reviews of therapeutic interventions were common: a cross-sectional analysis. J Clin Epidemiol. 2018;95:7–18. https://doi.org/10.1016/j.jclinepi.2017.11.022 .

Page MJ, McKenzie JE, Bossuyt PM, et al. Mapping of reporting guidance for systematic reviews and meta-analyses generated a comprehensive item bank for future reporting guidelines. J Clin Epidemiol. 2020;118:60–8. https://doi.org/10.1016/j.jclinepi.2019.11.010 .

Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12:181. https://doi.org/10.1186/1471-2288-12-181 .

France EF, Cunningham M, Ring N, et al. Improving reporting of meta-ethnography: the eMERGe reporting guidance. BMC Med Res Methodol. 2019;19:25. https://doi.org/10.1186/s12874-018-0600-0 .

Page MJ, Moher D, Bossuyt PM, et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ. 2021;372:n160. https://doi.org/10.1136/bmj.n160 .

Rethlefsen ML, Kirtley S, Waffenschmidt S, PRISMA-S Group, et al. PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews. Syst Rev. 2021;10:39. https://doi.org/10.1186/s13643-020-01542-z .

Higgins JPT, Thomas J, Chandler J, et al. Cochrane handbook for systematic reviews of interventions: version 6.0. London: Cochrane; 2019. Available from https://training.cochrane.org/handbook


Dekkers OM, Vandenbroucke JP, Cevallos M, Renehan AG, Altman DG, Egger M. COSMOS-E: guidance on conducting systematic reviews and meta-analyses of observational studies of etiology. PLoS Med. 2019;16:e1002742. https://doi.org/10.1371/journal.pmed.1002742 .

Cooper H, Hedges LV, Valentine JC. The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation; 2019.

IOM (Institute of Medicine). Finding what works in health care: standards for systematic reviews. Washington, D.C.: The National Academies Press; 2011.


Moher D, Shamseer L, Clarke M, PRISMA-P Group, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1. https://doi.org/10.1186/2046-4053-4-1 .

Shamseer L, Moher D, Clarke M, PRISMA-P Group, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;350:g7647. https://doi.org/10.1136/bmj.g7647 .

Hutton B, Salanti G, Caldwell DM, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162:777–84. https://doi.org/10.7326/M14-2385 .

Stewart LA, Clarke M, Rovers M, PRISMA-IPD Development Group, et al. Preferred reporting items for systematic review and meta-analyses of individual participant data: the PRISMA-IPD statement. JAMA. 2015;313:1657–65. https://doi.org/10.1001/jama.2015.3656 .

Zorzela L, Loke YK, Ioannidis JP, PRISMA Harms Group, et al. PRISMA harms checklist: improving harms reporting in systematic reviews. BMJ. 2016;352:i157. https://doi.org/10.1136/bmj.i157 .

McInnes MDF, Moher D, Thombs BD, the PRISMA-DTA Group, et al. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA. 2018;319:388–96. https://doi.org/10.1001/jama.2017.19163 .

Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-SCR): checklist and explanation. Ann Intern Med. 2018;169:467–73. https://doi.org/10.7326/M18-0850 .

Beller EM, Glasziou PP, Altman DG, PRISMA for Abstracts Group, et al. PRISMA for Abstracts: reporting systematic reviews in journal and conference abstracts. PLoS Med. 2013;10:e1001419. https://doi.org/10.1371/journal.pmed.1001419 .

Boers M. Graphics and statistics for cardiology: designing effective tables for presentation and publication. Heart. 2018;104:192–200. https://doi.org/10.1136/heartjnl-2017-311581 .

Mayo-Wilson E, Li T, Fusco N, Dickersin K, MUDS investigators. Practical guidance for using multiple data sources in systematic reviews and meta-analyses (with examples from the MUDS study). Res Synth Methods. 2018;9:2–12. https://doi.org/10.1002/jrsm.1277 .

Stovold E, Beecher D, Foxlee R, Noel-Storr A. Study flow diagrams in Cochrane systematic review updates: an adapted PRISMA flow diagram. Syst Rev. 2014;3:54. https://doi.org/10.1186/2046-4053-3-54 .

McGuinness LA. mcguinlu/PRISMA-Checklist: Initial release for manuscript submission (Version v1.0.0). Geneva: Zenodo; 2020. https://doi.org/10.5281/zenodo.3994319 .

Aczel B, Szaszi B, Sarafoglou A, et al. A consensus-based transparency checklist. Nat Hum Behav. 2020;4:4–6. https://doi.org/10.1038/s41562-019-0772-6 .

Barnes C, Boutron I, Giraudeau B, Porcher R, Altman DG, Ravaud P. Impact of an online writing aid tool for writing a randomized trial report: the COBWEB (Consort-based WEB tool) randomized controlled trial. BMC Med. 2015;13:221. https://doi.org/10.1186/s12916-015-0460-y .

Chauvin A, Ravaud P, Moher D, et al. Accuracy in detecting inadequate research reporting by early career peer reviewers using an online CONSORT-based peer-review tool (COBPeer) versus the usual peer-review process: a cross-sectional diagnostic study. BMC Med. 2019;17:205. https://doi.org/10.1186/s12916-019-1436-0 .

Wayant C, Page MJ, Vassar M. Evaluation of reproducible research practices in oncology systematic reviews with meta-analyses referenced by national comprehensive cancer network guidelines. JAMA Oncol. 2019;5:1550–5. https://doi.org/10.1001/jamaoncol.2019.2564 .


McKenzie JE, Brennan SE. Overviews of systematic reviews: great promise, greater challenge. Syst Rev. 2017;6:185. https://doi.org/10.1186/s13643-017-0582-8 .

Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7:e1000217. https://doi.org/10.1371/journal.pmed.1000217 .

Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med. 2010;8:24. https://doi.org/10.1186/1741-7015-8-24 .

Speich B, Schroter S, Briel M, et al. Impact of a short version of the CONSORT checklist for peer reviewers to improve the reporting of randomised controlled trials published in biomedical journals: study protocol for a randomised controlled trial. BMJ Open. 2020;10:e035114. https://doi.org/10.1136/bmjopen-2019-035114 .

Stevens A, Shamseer L, Weinstein E, et al. Relation of completeness of reporting of health research to journals’ endorsement of reporting guidelines: systematic review. BMJ. 2014;348:g3804. https://doi.org/10.1136/bmj.g3804 .

Hair K, Macleod MR, Sena ES, IICARus Collaboration. A randomised controlled trial of an Intervention to Improve Compliance with the ARRIVE guidelines (IICARus). Res Integr Peer Rev. 2019;4:12. https://doi.org/10.1186/s41073-019-0069-3 .

Blanco D, Altman D, Moher D, Boutron I, Kirkham JJ, Cobo E. Scoping review on interventions to improve adherence to reporting guidelines in health research. BMJ Open. 2019;9:e026589. https://doi.org/10.1136/bmjopen-2018-026589 .

Charters E. The use of think-aloud methods in qualitative research: an introduction to think-aloud methods. Brock Educ J. 2003;12:68–82. https://doi.org/10.26522/brocked.v12i2.38 .


Welch V, Petticrew M, Tugwell P, PRISMA-Equity Bellagio group, et al. PRISMA-equity 2012 extension: reporting guidelines for systematic reviews with a focus on health equity. PLoS Med. 2012;9:e1001333. https://doi.org/10.1371/journal.pmed.1001333 .

Wang X, Chen Y, Liu Y, et al. Reporting items for systematic reviews and meta-analyses of acupuncture: the PRISMA for acupuncture checklist. BMC Complement Altern Med. 2019;19:208. https://doi.org/10.1186/s12906-019-2624-3 .


Acknowledgements

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba′ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. 
We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Provenance and peer review

Not commissioned; externally peer reviewed.

Patient and public involvement

Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

Funding

There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2–048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Author information

Authors and affiliations

School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia

Matthew J. Page, Joanne E. McKenzie, Sue E. Brennan & Steve McDonald

Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands

Patrick M. Bossuyt

Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004, Paris, France

Isabelle Boutron

Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia

Tammy C. Hoffmann

Annals of Internal Medicine, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA

Cynthia D. Mulrow

Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada

Larissa Shamseer

Evidence Partners, Ottawa, Canada

Jennifer M. Tetzlaff

Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada

Elie A. Akl

Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA

York Health Economics Consortium (YHEC Ltd), University of York, York, UK

Julie Glanville

Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada

Jeremy M. Grimshaw

Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, JB Winsløwsvej 9b, 3rd Floor, 5000 Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark

Asbjørn Hróbjartsson

Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada

Manoj M. Lalu

Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA

Tianjing Li

Division of Headache, Department of Neurology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ, London, UK

Elizabeth W. Loder

Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA

Evan Mayo-Wilson

Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK

Luke A. McGuinness & Penny Whiting

Centre for Reviews and Dissemination, University of York, York, UK

Lesley A. Stewart

EPPI-Centre, UCL Social Research Institute, University College London, London, UK

James Thomas

Li Ka Shing Knowledge Institute of St. Michael’s Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen’s Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen’s University, Kingston, Canada

Andrea C. Tricco

Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada

Vivian A. Welch

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada

David Moher


Contributions

JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Corresponding author

Correspondence to Matthew J. Page .

Ethics declarations

Competing interests

All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for the BMJ; MJP is an editorial board member for PLOS Medicine; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews. None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal for Public Health, for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

PRISMA 2020 checklist.

Additional file 2.

PRISMA 2020 expanded checklist.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Page, M.J., McKenzie, J.E., Bossuyt, P.M. et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev 10, 89 (2021). https://doi.org/10.1186/s13643-021-01626-4

Accepted: 04 January 2021

Published: 29 March 2021

DOI: https://doi.org/10.1186/s13643-021-01626-4

Systematic Reviews

ISSN: 2046-4053



Review article: reporting guidelines in the biomedical literature

Affiliation.

  • 1 Department of Anesthesia and Pain Medicine, Hospital for Sick Children, University of Toronto, 555 University Avenue, Toronto, ON, M5G 1X8, Canada. [email protected]
  • PMID: 23760791
  • DOI: 10.1007/s12630-013-9973-z

Purpose: Complete and accurate reporting of original research in the biomedical literature is essential for healthcare professionals to translate research outcomes appropriately into clinical practice. Use of reporting guidelines has become commonplace among journals, peer reviewers, and authors. This narrative review aims 1) to inform investigators, peer reviewers, and authors of original research in anesthesia on reporting guidelines for frequently reported study designs; 2) to describe the evidence supporting the use of reporting guidelines and checklists; and 3) to discuss the implications of widespread adoption of reporting guidelines by biomedical journals and peer reviewers.

Principal findings: Inadequate reporting can influence the interpretation, translation, and application of published research. As a result, reporting guidelines have been developed in order to improve the quality, completeness, and accuracy of original research reports. Biomedical journals increasingly endorse the use of reporting guidelines for authors and peer reviewers. To date, there is encouraging evidence that reporting guidelines improve the quality of reporting of published research, but the rates of both adoption of reporting guidelines and improvement in reporting are far from ideal.

Conclusions: Use of reporting guidelines improves the quality of published research in biomedical journals. Nevertheless, the quality of research in the biomedical literature remains suboptimal despite increased adherence to reporting guidelines.



Writing a Literature Review


Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research (scholarship) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis are
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)

Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers; add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from different disciplines or fields that use a variety of research methods, you can compare the results and conclusions that emerge from different approaches, for example:
  • Qualitative versus quantitative research
  • Empirical versus theoretical scholarship
  • Divide the research by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources.

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll be not only partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.

EQUATOR Network

Enhancing the QUAlity and Transparency Of health Research


Literature review reporting guidelines


622 reporting guidelines found; the most recently added records are displayed first.

  • Consensus reporting guidelines to address gaps in descriptions of ultra-rare genetic conditions
  • Biofield therapies: Guidelines for reporting clinical trials
  • The PICOTS-ComTeC Framework for Defining Digital Health Interventions: An ISPOR Special Interest Group Report
  • REPORT-SCS: minimum reporting standards for spinal cord stimulation studies in spinal cord injury
  • CARE-radiology statement explanation and elaboration: reporting guideline for radiological case reports
  • Trial Forge Guidance 4: a guideline for reporting the results of randomised Studies Within A Trial (SWATs)
  • The RETRIEVE Checklist for Studies Reporting the Elicitation of Stated Preferences for Child Health-Related Quality of Life
  • We don’t know what you did last summer. On the importance of transparent reporting of reaction time data pre-processing
  • Introducing SoNHR - Reporting guidelines for Social Networks In Health Research
  • Reporting standard for describing first responder systems, smartphone alerting systems, and AED networks
  • The reporting checklist for Chinese patent medicine guidelines: RIGHT for CPM
  • The SHARE: SHam Acupuncture REporting guidelines and a checklist in clinical trials
  • REPCAN: Guideline for REporting Population-based CANcer Registry Data
  • The Test Adaptation Reporting Standards (TARES): reporting test adaptations
  • ACCORD (ACcurate COnsensus Reporting Document): A reporting guideline for consensus methods in biomedicine developed via a modified Delphi
  • Consolidated Reporting Guidelines for Prognostic and Diagnostic Machine Learning Modeling Studies: Development and Validation
  • Development of the Reporting Infographics and Visual Abstracts of Comparative studies (RIVA-C) checklist and guide
  • Preliminary guideline for reporting bibliometric reviews of the biomedical literature (BIBLIO): a minimum requirements
  • Appropriate design and reporting of superiority, equivalence and non-inferiority clinical trials incorporating a benefit-risk assessment: the BRAINS study including expert workshop
  • Preferred Reporting Items for Resistance Exercise Studies (PRIRES): A Checklist Developed Using an Umbrella Review of Systematic Reviews
  • ENLIGHT: A consensus checklist for reporting laboratory-based studies on the non-visual effects of light in humans
  • Consensus Statement for Protocols of Factorial Randomized Trials: Extension of the SPIRIT 2013 Statement
  • Reporting of Factorial Randomized Trials: Extension of the CONSORT 2010 Statement
  • Adjusting for Treatment Switching in Oncology Trials: A Systematic Review and Recommendations for Reporting
  • Modeling Infectious Diseases in Healthcare Network (MInD-Healthcare) Framework for Describing and Reporting Multidrug-resistant Organism and Healthcare-Associated Infections Agent-based Modeling Methods
  • LEVEL (Logical Explanations & Visualizations of Estimates in Linear mixed models): recommendations for reporting multilevel data and analyses
  • Data linkage in pharmacoepidemiology: A call for rigorous evaluation and reporting
  • Expert consensus document: Reporting checklist for quantification of pulmonary congestion by lung ultrasound in heart failure
  • Commentary: minimum reporting standards should be expected for preclinical radiobiology irradiators and dosimetry in the published literature
  • Best practice guidelines for citizen science in mental health research: systematic review and evidence synthesis
  • Evaluating the quality of studies reporting on clinical applications of stromal vascular fraction: A systematic review and proposed reporting guidelines (CLINIC-STRA-SVF)
  • Enhancing reporting quality and impact of early phase dose-finding clinical trials: CONSORT Dose-finding Extension (CONSORT-DEFINE) guidance
  • Enhancing quality and impact of early phase dose-finding clinical trial protocols: SPIRIT Dose-finding Extension (SPIRIT-DEFINE) guidance
  • ESMO Guidance for Reporting Oncology real-World evidence (GROW)
  • A systematic review and cluster analysis approach of 103 studies of high-intensity interval training on cardiorespiratory fitness
  • Generate Analysis-Ready Data for Real-world Evidence: Tutorial for Harnessing Electronic Health Records With Advanced Informatic Technologies
  • Developing Consensus-Based Guidelines for Case Reporting in Aesthetic Medicine: Enhancing Transparency and Standardization
  • MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care
  • Presenting artificial intelligence, deep learning, and machine learning studies to clinicians and healthcare stakeholders: an introductory reference with a guideline and a Clinical AI Research (CAIR) checklist proposal
  • Data Processing Strategies to Determine Maximum Oxygen Uptake: A Systematic Scoping Review and Experimental Comparison with Guidelines for Reporting
  • Improving the Rigor of Mechanistic Behavioral Science: The Introduction of the Checklist for Investigating Mechanisms in Behavior-Change Research (CLIMBR)
  • An analysis of reporting practices in the top 100 cited health and medicine-related bibliometric studies from 2019 to 2021 based on a proposed guideline
  • Improving the Reporting of Primary Care Research: Consensus Reporting Items for Studies in Primary Care-the CRISP Statement
  • Checklist for studies of HIV drug resistance prevalence or incidence: rationale and recommended use
  • Community-developed checklists for publishing images and image analyses
  • Systematic Development of Standards for Mixed Methods Reporting in Rehabilitation Health Sciences Research
  • Minimal reporting guideline for research involving eye tracking (2023 edition)
  • CHEERS Value of Information (CHEERS-VOI) Reporting Standards – Explanation and Elaboration
  • Initial Standardized Framework for Reporting Social Media Analytics in Emergency Care Research
  • The adapted Autobiographical interview: A systematic review and proposal for conduct and reporting
  • Paediatric Ureteroscopy (P-URS) reporting checklist: a new tool to aid studies report the essential items on paediatric ureteroscopy for stone disease
  • Adult Ureteroscopy (A-URS) Checklist: A New Tool To Standardise Reporting in Endourology
  • Recommendations for the development, implementation, and reporting of control interventions in efficacy and mechanistic trials of physical, psychological, and self-management therapies: the CoPPS Statement
  • Reporting Eye-tracking Studies In DEntistry (RESIDE) checklist
  • AdViSHE: A Validation-Assessment Tool of Health-Economic Models for Decision Makers and Model Users
  • iCHECK-DH: Guidelines and Checklist for the Reporting on Digital Health Implementations
  • New reporting items and recommendations for randomized trials impacted by COVID-19 and force majeure events: a targeted approach
  • Development, explanation, and presentation of the Physical Literacy Interventions Reporting Template (PLIRT)
  • Reporting guidelines for allergy and immunology survey research
  • CheckList for EvaluAtion of Radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII
  • Transparent reporting of multivariable prediction models for individual prognosis or diagnosis: checklist for systematic reviews and meta-analyses (TRIPOD-SRMA)
  • Preferred Reporting Items for Complex Sample Survey Analysis (PRICSSA)
  • Defining measures of kidney function in observational studies using routine health care data: methodological and reporting considerations
  • CORE-CERT Items as a Minimal Requirement for Replicability of Exercise Interventions: Results From Application to Exercise Studies for Breast Cancer Patients
  • Recommendations for Reporting Machine Learning Analyses in Clinical Research
  • ACURATE: A guide for reporting sham controls in trials using acupuncture
  • Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers
  • STandards for Reporting Interventions in Clinical Trials Of Tuina/Massage (STRICTOTM): Extending the CONSORT statement
  • Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist
  • The SUPER reporting guideline suggested for reporting of surgical technique
  • Guidelines for Reporting Outcomes in Trial Reports: The CONSORT-Outcomes 2022 Extension
  • Guidelines for Reporting Outcomes in Trial Protocols: The SPIRIT-Outcomes 2022 Extension
  • PROBE 2023 guidelines for reporting observational studies in Endodontics: A consensus-based development study
  • CONFERD-HP: recommendations for reporting COmpeteNcy FramEwoRk Development in health professions
  • Development of the ASSESS tool: a comprehenSive tool to Support rEporting and critical appraiSal of qualitative, quantitative, and mixed methods implementation reSearch outcomes
  • Guiding document analyses in health professions education research
  • Evidence-based statistical analysis and methods in biomedical research (SAMBR) checklists according to design features
  • Social Accountability Reporting for Research (SAR4Research): checklist to strengthen reporting on studies on social accountability in the literature
  • How to Report Data on Bilateral Procedures and Other Issues with Clustered Data: The CLUDA Reporting Guidelines
  • Best practice guidance and reporting items for the development of scoping review protocols
  • Establishing reporting standards for participant characteristics in post-stroke aphasia research: An international e-Delphi exercise and consensus meeting
  • STARTER Checklist for Antimalarial Therapeutic Efficacy Reporting
  • Best Practice in the chemical characterisation of extracts used in pharmacological and toxicological research - The ConPhyMP-Guidelines
  • Advising on Preferred Reporting Items for patient-reported outcome instrument development: the PRIPROID
  • Recommendations for reporting the results of studies of instrument and scale development and testing
  • Methodical approaches to determine the rate of radial muscle displacement using tensiomyography: A scoping review and new reporting guideline
  • Methods for developing and reporting living evidence synthesis
  • Bayesian Analysis Reporting Guidelines
  • Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement
  • The DoCTRINE Guidelines: Defined Criteria To Report INnovations in Education
  • Development of guidelines to reduce, handle and report missing data in palliative care trials: A multi-stakeholder modified nominal group technique
  • The RIPI-f (Reporting Integrity of Psychological Interventions delivered face-to-face) checklist was developed to guide reporting of treatment integrity in face-to-face psychological interventions
  • The Intraoperative Complications Assessment and Reporting with Universal Standards (ICARUS) Global Surgical Collaboration Project: Development of Criteria for Reporting Adverse Events During Surgical Procedures and Evaluating Their Impact on the Postoperative Course
  • CODE-EHR best-practice framework for the use of structured electronic health-care records in clinical research
  • A Reporting Tool for Adapted Guidelines in Health Care: The RIGHT-Ad@pt Checklist
  • Development of a reporting guideline for systematic reviews of animal experiments in the field of traditional Chinese medicine
  • TIDieR-telehealth: precision in reporting of telehealth interventions used in clinical trials – unique considerations for the Template for the Intervention Description and Replication (TIDieR) checklist
  • Towards better reporting of the proportion of days covered method in cardiovascular medication adherence: A scoping review and new tool TEN-SPIDERS
  • Murine models of radiation cardiotoxicity: a systematic review and recommendations for future studies
  • Systematic Review and Meta-Analysis of Outcomes After Operative Treatment of Aberrant Subclavian Artery Pathologies and Suggested Reporting Items
  • Reporting standards for psychological network analyses in cross-sectional data
  • Reporting ChAracteristics of cadaver training and sUrgical studies: The CACTUS guidelines
  • EULAR points to consider for minimal reporting requirements in synovial tissue research in rheumatology
  • Methods and Applications of Social Media Monitoring of Mental Health During Disasters: Scoping Review
  • Position Statement on Exercise Dosage in Rheumatic and Musculoskeletal Diseases: The Role of the IMPACT-RMD Toolkit
  • A checklist for assessing the methodological quality of concurrent tES-fMRI studies (ContES checklist): a consensus study and statement
  • Application of Mixed Methods in Health Services Management Research: A Systematic Review
  • Methodological standards for conducting and reporting meta-analyses: Ensuring the replicability of meta-analyses of pharmacist-led medication review
  • A scoping review of the use of ethnographic approaches in implementation research and recommendations for reporting
  • The Chest Wall Injury Society Recommendations for Reporting Studies of Surgical Stabilization of Rib Fractures
  • How to Report Light Exposure in Human Chronobiology and Sleep Research Experiments
  • Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI
  • Use of actigraphy for assessment in pediatric sleep research
  • EULAR points to consider when analysing and reporting comparative effectiveness research using observational data in rheumatology
  • International Consensus Based Review and Recommendations for Minimum Reporting Standards in Research on Transcutaneous Vagus Nerve Stimulation (Version 2020)
  • Developing a checklist for reporting research using simulated patient methodology (CRiSP): a consensus study
  • Conceptual Ambiguity Surrounding Gamification and Serious Games in Health Care: Literature Review and Development of Game-Based Intervention Reporting Guidelines (GAMING)
  • Improving Reporting of Clinical Studies Using the POSEIDON Criteria: POSORT Guidelines
  • Six practical recommendations for improved implementation outcomes reporting
  • Recommendations and publication guidelines for studies using frequency domain and time-frequency domain analyses of neural time series
  • EVIDENCE Publication Checklist for Studies Evaluating Connected Sensor Technologies: Explanation and Elaboration
  • An extension of the RIGHT statement for introductions and interpretations of clinical practice guidelines: RIGHT for INT
  • Rasch Reporting Guideline for Rehabilitation Research (RULER): The RULER Statement
  • Using qualitative research to develop an elaboration of the TIDieR checklist for interventions to enhance vaccination communication: short report
  • Preliminary Minimum Reporting Requirements for In-Vivo Neural Interface Research: I. Implantable Neural Interfaces
  • Standardized Reporting of Machine Learning Applications in Urology: The STREAM-URO Framework
  • Guidance for publishing qualitative research in informatics
  • Guidelines for reporting on animal fecal transplantation (GRAFT) studies: recommendations from a systematic review of murine transplantation protocols
  • A Scoping Review of Four Decades of Outcomes in Nonsurgical Root Canal Treatment, Nonsurgical Retreatment, and Apexification Studies: Part 3 - A Proposed Framework for Standardized Data Collection and Reporting of Endodontic Outcome Studies
  • Health-Economic Analyses of Diagnostics: Guidance on Design and Reporting
  • Reporting guidelines for human microbiome research: the STORMS checklist
  • Smartphone-Delivered Ecological Momentary Interventions Based on Ecological Momentary Assessments to Promote Health Behaviors: Systematic Review and Adapted Checklist for Reporting Ecological Momentary Assessment and Intervention Studies
  • A Systematic Review of Methods and Procedures Used in Ecological Momentary Assessments of Diet and Physical Activity Research in Youth: An Adapted STROBE Checklist for Reporting EMA Studies (CREMAS)
  • Reporting Data on Auditory Brainstem Responses (ABR) in Rats: Recommendations Based on Review of Experimental Protocols and Literature
  • Heterogeneity in the Identification of Potential Drug-Drug Interactions in the Intensive Care Unit: A Systematic Review, Critical Appraisal, and Reporting Recommendations
  • Development of the CLARIFY (CheckList stAndardising the Reporting of Interventions For Yoga) guidelines: a Delphi study
  • Early phase clinical trials extension to guidelines for the content of statistical analysis plans
  • PRESENT 2020: Text Expanding on the Checklist for Proper Reporting of Evidence in Sport and Exercise Nutrition Trials
  • Intraoperative fluorescence diagnosis in the brain: a systematic review and suggestions for future standards on reporting diagnostic accuracy and clinical utility
  • Checklist for Theoretical Report in Epidemiological Studies (CRT-EE): explanation and elaboration
  • STAndard Reporting of CAries Detection and Diagnostic Studies (STARCARDDS)
  • Implementing the 27 PRISMA 2020 Statement items for systematic reviews in the sport and exercise medicine, musculoskeletal rehabilitation and sports science fields: the PERSiST (implementing Prisma in Exercise, Rehabilitation, Sport medicine and SporTs science) guidance
  • Strengthening the Reporting of Observational Studies in Epidemiology Using Mendelian Randomization: The STROBE-MR Statement
  • Recommended reporting items for epidemic forecasting and prediction research: The EPIFORGE 2020 guidelines
  • Extending the CONSORT Statement to moxibustion
  • Consensus-based recommendations for case report in Chinese medicine (CARC)
  • Reporting Guidelines for Whole-Body Vibration Studies in Humans, Animals and Cell Cultures: A Consensus Statement from an International Group of Experts
  • Guidelines for cellular and molecular pathology content in clinical trial protocols: the SPIRIT-Path extension
  • A Guideline for Reporting Mediation Analyses of Randomized Trials and Observational Studies: The AGReMA Statement
  • Social Innovation For Health Research (SIFHR): Development of the SIFHR Checklist
  • How to write a guideline: a proposal for a manuscript template that supports the creation of trustworthy guidelines
  • REPORT-PFP: a consensus from the International Patellofemoral Research Network to improve REPORTing of quantitative PatelloFemoral Pain studies
  • Reporting stAndards for research in PedIatric Dentistry (RAPID): an expert consensus-based statement
  • Improving the reporting quality of reliability generalization meta-analyses: The REGEMA checklist
  • Evaluation of post-introduction COVID-19 vaccine effectiveness: Summary of interim guidance of the World Health Organization
  • Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report
  • Guidelines for Reporting Trial Protocols and Completed Trials Modified Due to the COVID-19 Pandemic and Other Extenuating Circumstances: The CONSERVE 2021 Statement
  • Describing deprescribing trials better: an elaboration of the CONSORT statement
  • Comprehensive reporting of pelvic floor muscle training for urinary incontinence: CERT-PFMT
  • International Olympic Committee Consensus Statement: Methods for Recording and Reporting of Epidemiological Data on Injury and Illness in Sports 2020 (Including the STROBE Extension for Sports Injury and Illness Surveillance (STROBE-SIIS))
  • RIGHT for Acupuncture: An Extension of the RIGHT Statement for Clinical Practice Guidelines on Acupuncture
  • The APOSTEL 2.0 Recommendations for Reporting Quantitative Optical Coherence Tomography Studies
  • Room Indirect Calorimetry Operating and Reporting Standards (RICORS 1.0): A Guide to Conducting and Reporting Human Whole-Room Calorimeter Studies
  • COSMIN reporting guideline for studies on measurement properties of patient-reported outcome measures
  • Ensuring best practice in genomics education and evaluation: reporting item standards for education and its evaluation in genomics (RISE2 Genomics)
  • A Consensus-Based Checklist for Reporting of Survey Studies (CROSS)
  • CONSORT extension for the reporting of randomised controlled trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE): checklist with explanation and elaboration
  • Preferred reporting items for journal and conference abstracts of systematic reviews and meta-analyses of diagnostic test accuracy studies (PRISMA-DTA for Abstracts): checklist, explanation, and elaboration
  • An analysis of preclinical efficacy testing of antivenoms for sub-Saharan Africa: Inadequate independent scrutiny and poor-quality reporting are barriers to improving snakebite treatment and management
  • Artificial intelligence in dental research: Checklist for authors, reviewers, readers
  • EULAR recommendations for the reporting of ultrasound studies in rheumatic and musculoskeletal diseases (RMDs)
  • PRIASE 2021 guidelines for reporting animal studies in Endodontology: a consensus-based development
  • The reporting checklist for public versions of guidelines: RIGHT-PVG
  • SQUIRE-EDU (Standards for QUality Improvement Reporting Excellence in Education): Publication Guidelines for Educational Improvement
  • PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews
  • Anaesthesia Case Report (ACRE) checklist: a tool to promote high-quality reporting of cases in peri-operative practice
  • Strengthening tRansparent reporting of reseArch on uNfinished nursing CARE: The RANCARE guideline
  • Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report
  • Defining Group Care Programs: An Index of Reporting Standards
  • Stakeholder analysis in health innovation planning processes: A systematic scoping review
  • PRISMA extension for moxibustion 2020: recommendations, explanation, and elaboration
  • PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Extension for Chinese Herbal Medicines 2020 (PRISMA-CHM 2020)
  • Reporting gaps in immunization costing studies: Recommendations for improving the practice
  • The RIGHT Extension Statement for Traditional Chinese Medicine: Development, Recommendations, and Explanation
  • Proposed Requirements for Cardiovascular Imaging-Related Machine Learning Evaluation (PRIME): A Checklist: Reviewed by the American College of Cardiology Healthcare Innovation Council
  • Benefit-Risk Assessment of Vaccines. Part II: Proposal Towards Consolidated Standards of Reporting Quantitative Benefit-Risk Models Applied to Vaccines (BRIVAC)
  • BIAS: Transparent reporting of biomedical image analysis challenges
  • Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist
  • STrengthening the Reporting Of Pharmacogenetic Studies: Development of the STROPS guideline
  • TIDieR-Placebo: a guide and checklist for reporting placebo and sham controls
  • Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI Extension
  • Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension
  • The IDEAL Reporting Guidelines: A Delphi Consensus Statement: Stage-specific recommendations for reporting the evaluation of surgical innovation
  • The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design
  • Reporting Guideline for Priority Setting of Health Research (REPRISE)
  • Consensus on the reporting and experimental design of clinical and cognitive-behavioural neurofeedback studies (CRED-nf checklist)
  • PRICE 2020 guidelines for reporting case reports in Endodontics: a consensus-based development
  • PRIRATE 2020 guidelines for reporting randomized trials in Endodontics: a consensus-based development
  • Guidance for reporting intervention development studies in health research (GUIDED): an evidence-based consensus study
  • Standard Protocol Items for Clinical Trials with Traditional Chinese Medicine 2018: Recommendations, Explanation and Elaboration (SPIRIT-TCM Extension 2018)
  • SPIRIT extension and elaboration for n-of-1 trials: SPENT 2019 checklist
  • Standards for reporting interventions in clinical trials of cupping (STRICTOC): extending the CONSORT statement
  • PaCIR: A tool to enhance pharmacist patient care intervention reporting
  • Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline
  • Guidelines for reporting case studies and series on drug-induced QT interval prolongation and its complications following acute overdose
  • Criteria for describing and evaluating training interventions in healthcare professions – CRe-DEPTH
  • CONSORT extension for reporting N-of-1 trials for traditional Chinese medicine (CENT for TCM): Recommendations, explanation and elaboration
  • Reporting items for systematic reviews and meta-analyses of acupuncture: the PRISMA for acupuncture checklist
  • Consolidated criteria for strengthening reporting of health research involving indigenous peoples: the CONSIDER statement
  • Checklist for the preparation and review of pain clinical trial publications: a pain-specific supplement to CONSORT
  • Reporting guidelines on remotely collected electronic mood data in mood disorder (eMOOD): recommendations
  • CONSORT 2010 statement: extension to randomised crossover trials
  • Reporting of Multi-Arm Parallel-Group Randomized Trials: Extension of the CONSORT 2010 Statement
  • Microbiology Investigation Criteria for Reporting Objectively (MICRO): a framework for the reporting and interpretation of clinical microbiology data
  • Core Outcome Set-STAndardised Protocol Items: the COS-STAP Statement
  • The Reporting on ERAS Compliance, Outcomes, and Elements Research (RECOvER) Checklist: A Joint Statement by the ERAS® and ERAS® USA Societies
  • Reporting of stepped wedge cluster randomised trials: extension of the CONSORT 2010 statement with explanation and elaboration
  • Improving reporting of Meta-Ethnography: The eMERGe Reporting Guidance
  • The Reporting Items for Patent Landscapes statement
  • Reporting guidelines on how to write a complete and transparent abstract for overviews of systematic reviews of health care interventions
  • The reporting of studies conducted using observational routinely collected health data statement for pharmacoepidemiology (RECORD-PE)
  • PRISMA Extension for Scoping Reviews (PRISMA-ScR) : Checklist and Explanation
  • Systems Perspective of Amazon Mechanical Turk for Organizational Research: Review and Recommendations
  • ESPACOMP Medication Adherence Reporting Guideline (EMERGE)
  • Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension
  • Reporting guidelines for implementation research on nurturing care interventions designed to promote early childhood development
  • Strengthening the reporting of empirical simulation studies: Introducing the STRESS guidelines
  • TIDieR-PHP: a reporting guideline for population health and policy interventions
  • Guidelines for Inclusion of Patient-Reported Outcomes in Clinical Trial Protocols: The SPIRIT-PRO Extension
  • Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement
  • Structural brain development: A review of methodological approaches and best practices
  • Improving the Development, Monitoring and Reporting of Stroke Rehabilitation Research: Consensus-Based Core Recommendations from the Stroke Recovery and Rehabilitation Roundtable
  • Variability in the Reporting of Serum Urate and Flares in Gout Clinical Trials: Need for Minimum Reporting Requirements
  • Methodology of assessment and reporting of safety in anti-malarial treatment efficacy studies of uncomplicated falciparum malaria in pregnancy: a systematic literature review
  • Standards for UNiversal reporting of patient Decision Aid Evaluation studies: the development of SUNDAE Checklist
  • Consideration of Sex Differences in Design and Reporting of Experimental Arterial Pathology Studies - Statement From ATVB Council
  • Guidelines for the Content of Statistical Analysis Plans in Clinical Trials
  • RECORDS: Improved reporting of Monte Carlo Radiation transport studies: Report of the AAPM Research Committee Task Group 268
  • Ten simple rules for neuroimaging meta-analysis
  • CONSORT-Equity 2017 extension and elaboration for better reporting of health equity in randomised trials
  • Preferred Reporting Items for Overviews of systematic reviews including harms checklist: A pilot tool to be used for balanced reporting of benefits and harms
  • Reporting Guidelines for the Use of Expert Judgement in Model-Based Economic Evaluations
  • Standards for reporting chronic periodontitis prevalence and severity in epidemiologic studies: Proposed standards from the Joint EU/USA Periodontal Epidemiology Working Group
  • Graphics and statistics for cardiology: designing effective tables for presentation and publication
  • Guidelines for reporting meta-epidemiological methodology research
  • Reporting to Improve Reproducibility and Facilitate Validity Assessment for Healthcare Database Studies V1.0
  • Characteristics of funding of clinical trials: cross-sectional survey and proposed guidance
  • Guidelines for Reporting on Latent Trajectory Studies (GRoLTS)
  • AMWA-EMWA-ISMPP Joint Position Statement on the Role of Professional Medical Writers
  • STROCSS 2021: Strengthening the reporting of cohort, cross-sectional and case-control studies in surgery
  • Unique identification of research resources in the biomedical literature: the Resource Identification Initiative (RRID)
  • Checklist for One Health Epidemiological Reporting of Evidence (COHERE)
  • STARD for Abstracts: essential items for reporting diagnostic accuracy studies in journal or conference abstracts
  • GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research
  • Single organ cutaneous vasculitis: Case definition & guidelines for data collection, analysis, and presentation of immunization safety data
  • Improving the reporting of clinical trials of infertility treatments (IMPRINT): modifying the CONSORT statement
  • Improving the reporting of therapeutic exercise interventions in rehabilitation research
  • Guidance to develop individual dose recommendations for patients on chronic hemodialysis
  • A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines
  • AHRQ Series on Complex Intervention Systematic Reviews – Paper 6: PRISMA-CI Extension Statement & Checklist
  • Adaptation of the CARE Guidelines for Therapeutic Massage and Bodywork Publications: Efforts To Improve the Impact of Case Reports
  • A review of published analyses of case-cohort studies and recommendations for future reporting
  • Guidelines for reporting evaluations based on observational methodology
  • Reporting and Guidelines in Propensity Score Analysis: A Systematic Review of Cancer and Cancer Surgical Studies
  • Minimum Information for Studies Evaluating Biologics in Orthopaedics (MIBO): Platelet-Rich Plasma and Mesenchymal Stem Cells
  • CONSORT 2010 statement: extension checklist for reporting within person randomised trials
  • CONSORT Extension for Chinese Herbal Medicine Formulas 2017: Recommendations, Explanation, and Elaboration
  • Methods and processes of developing the strengthening the reporting of observational studies in epidemiology – veterinary (STROBE-Vet) statement
  • CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials
  • STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies
  • SPIRIT 2013 Statement: Defining standard protocol items for clinical trials
  • Best Practices in Data Analysis and Sharing in Neuroimaging using MRI
  • Latent Class Analysis: An example for reporting results
  • Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View
  • Clarity in Reporting Terminology and Definitions of Set End Points in Resistance Training
  • An introduction to using Bayesian linear regression with clinical data
  • Making economic evaluations more helpful for treatment choices in haemophilia
  • Guideline for Reporting Interventions on Spinal Manipulative Therapy: Consensus on Interventions Reporting Criteria List for Spinal Manipulative Therapy (CIRCLe SMT)
  • Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: Recommendations based on a methodological systematic review
  • STARD-BLCM: Standards for the Reporting of Diagnostic accuracy studies that use Bayesian Latent Class Models
  • A Reporting Tool for Practice Guidelines in Health Care: The RIGHT Statement
  • The AGREE Reporting Checklist: a tool to improve reporting of clinical practice guidelines
  • The REFLECT statement: methods and processes of creating reporting guidelines for randomized controlled trials for livestock and food safety by modifying the CONSORT statement
  • Reporting Items for Updated Clinical Guidelines: Checklist for the Reporting of Updated Guidelines (CheckUp)
  • Preferred Reporting Items for the Development of Evidence-based Clinical Practice Guidelines in Traditional Medicine (PRIDE-CPG-TM): Explanation and elaboration
  • Standards for Reporting Implementation Studies (StaRI) Statement
  • Preferred Reporting Of Case Series in Surgery (PROCESS) 2023 guidelines
  • Consensus on Exercise Reporting Template (CERT): Modified Delphi Study
  • Development of the Anatomical Quality Assurance (AQUA) Checklist: guidelines for reporting original anatomical studies
  • CONSORT 2010 statement: extension to randomised pilot and feasibility trials
  • Core Outcome Set-STAndards for Reporting: The COS-STAR Statement
  • Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement
  • Evaluation of response after pre-operative radiotherapy in soft tissue sarcomas; the European Organisation for Research and Treatment of Cancer-Soft Tissue and Bone Sarcoma Group (EORTC-STBSG) and Imaging Group recommendations for radiological examination and reporting with an emphasis on magnetic resonance imaging
  • Standardization of pathologic evaluation and reporting of postneoadjuvant specimens in clinical trials of breast cancer: recommendations from an international working group
  • Image-guided Tumor Ablation: Standardization of Terminology and Reporting Criteria—A 10-Year Update
  • Irreversible Electroporation (IRE): Standardization of Terminology and Reporting Criteria for Analysis and Comparison
  • Recommendations for improving the quality of reporting clinical electrochemotherapy studies based on qualitative systematic review
  • METastasis Reporting and Data System for Prostate Cancer: Practical Guidelines for Acquisition, Interpretation, and Reporting of Whole-body Magnetic Resonance Imaging-based Evaluations of Multiorgan Involvement in Advanced Prostate Cancer
  • Reporting Magnetic Resonance Imaging in Men on Active Surveillance for Prostate Cancer: The PRECISE Recommendations—A Report of a European School of Oncology Task Force
  • Transcription factor HIF1A: downstream targets, associated pathways, polymorphic hypoxia response element (HRE) sites, and initiative for standardization of reporting in scientific literature
  • Eliciting the child’s voice in adverse event reporting in oncology trials: Cognitive interview findings from the Pediatric Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events initiative
  • CONSISE statement on the reporting of Seroepidemiologic Studies for influenza (ROSES-I statement): an extension of the STROBE statement
  • REporting recommendations for tumour MARKer prognostic studies (REMARK)
  • Recommendations to improve adverse event reporting in clinical trial publications: a joint pharmaceutical industry/journal editor perspective
  • Reporting studies on time to diagnosis: proposal of a guideline by an international panel (REST)
  • A Checklist for Reporting Valuation Studies of Multi-Attribute Utility-Based Instruments (CREATE)
  • Strengthening the Reporting of Observational Studies in Epidemiology for Newborn Infection (STROBE-NI): an extension of the STROBE statement for neonatal infection research
  • The SCARE 2020 Guideline: Updating Consensus Surgical CAse REport (SCARE) Guidelines
  • Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET)
  • Using theory of change to design and evaluate public health interventions: a systematic review
  • Homeopathic clinical case reports: Development of a supplement (HOM-CASE) to the CARE clinical case reporting guideline
  • Reporting Guidelines for Health Care Simulation Research: Extensions to the CONSORT and STROBE Statements
  • Guidelines for Reporting Articles on Psychiatry and Heart rate variability (GRAPH): recommendations to advance research communication
  • RAMESES II reporting standards for realist evaluations
  • Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER statement
  • Medical abortion reporting of efficacy: the MARE guidelines
  • Strengthening the Reporting of Observational Studies in Epidemiology—Nutritional Epidemiology (STROBE-nut): An Extension of the STROBE Statement
  • Sex and Gender Equity in Research: rationale for the SAGER guidelines and recommended use
  • The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016 Statement
  • Consensus on Recording Deep Endometriosis Surgery: the CORDES statement
  • Developing the Clarity and Openness in Reporting: E3-based (CORE) reference user manual for creation of clinical study reports in the era of clinical trial transparency
  • SCCT guidelines for the interpretation and reporting of coronary CT angiography: a report of the Society of Cardiovascular Computed Tomography Guidelines Committee
  • Definition and classification of intraoperative complications (CLASSIC): Delphi study and pilot evaluation
  • Improving research practice in rat orthotopic and partial orthotopic liver transplantation: a review, recommendation, and publication guide
  • Methodology used in studies reporting chronic kidney disease prevalence: a systematic literature review
  • Standardized outcomes reporting in metabolic and bariatric surgery
  • Consensus guidelines on plasma cell myeloma minimal residual disease analysis and reporting
  • DELTA2 guidance on choosing the target difference and undertaking and reporting the sample size calculation for a randomised controlled trial
  • Transparent reporting of data quality in distributed data networks
  • Quality of methods reporting in animal models of colitis
  • Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist
  • Quality of pain intensity assessment reporting: ACTTION systematic review and recommendations
  • An extension of STARD statements for reporting diagnostic accuracy studies on liver fibrosis tests: the Liver-FibroSTARD standards
  • STROBE-AMS: recommendations to optimise reporting of epidemiological studies on antimicrobial resistance and informing improvement in antimicrobial stewardship
  • Reporting guidelines for population pharmacokinetic analyses
  • Recommendations for the improved effectiveness and reporting of telemedicine programs in developing countries: results of a systematic literature review
  • Guidelines for the reporting of treatment trials for alcohol use disorders
  • Development of the Standards of Reporting of Neurological Disorders (STROND) checklist: A guideline for the reporting of incidence and prevalence studies in neuroepidemiology
  • A review of 40 years of enteric antimicrobial resistance research in Eastern Africa: what can be done better?
  • Ensuring consistent reporting of clinical pharmacy services to enhance reproducibility in practice: an improved version of DEPICT
  • RiGoR: reporting guidelines to address common sources of bias in risk model development
  • Reporting standards for guideline-based performance measures
  • Utstein-style guidelines on uniform reporting of in-hospital cardiopulmonary resuscitation in dogs and cats
  • Guidelines for reporting embedded recruitment trials
  • PRISMA harms checklist: improving harms reporting in systematic reviews
  • Developing a methodological framework for organisational case studies: a rapid review and consensus development process
  • The PRISMA 2020 statement: An updated guideline for reporting systematic reviews
  • The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies
  • A checklist to improve reporting of group-based behaviour-change interventions
  • The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) Statement
  • Preferred reporting items for studies mapping onto preference-based outcome measures: The MAPS statement
  • A call for transparent reporting to optimize the predictive value of preclinical research
  • Guidelines for uniform reporting of body fluid biomarker studies in neurologic disorders
  • The PRISMA Extension Statement for Reporting of Systematic Reviews Incorporating Network Meta-analyses of Health Care Interventions: Checklist and Explanations
  • Setting number of decimal places for reporting risk ratios: rule of four
  • Too many digits: the presentation of numerical data
  • The CONSORT Statement: Application within and adaptations for orthodontic trials
  • A structured approach to documenting a search strategy for publication: a 12 step guideline for authors
  • Standards of reporting for MRI-targeted biopsy studies (START) of the prostate: recommendations from an International Working Group
  • Standardized reporting guidelines for emergency department syncope risk-stratification research
  • Designing and reporting case series in plastic surgery
  • Disaster medicine reporting: the need for new guidelines and the CONFIDE statement
  • Development and validation of reporting guidelines for studies involving data linkage
  • A reporting guide for studies on individual differences in traffic safety
  • Translating trial-based molecular monitoring into clinical practice: importance of international standards and practical considerations for community practitioners
  • Canadian Association of Gastroenterology consensus guidelines on safety and quality indicators in endoscopy
  • A tool to analyze the transferability of health promotion interventions
  • A new manner of reporting pressure results after glaucoma surgery
  • Do the media provide transparent health information? A cross-cultural comparison of public information about the HPV vaccine
  • Recommendations for the reporting of foot and ankle models
  • An introduction to standardized clinical nomenclature for dysmorphic features: the Elements of Morphology project
  • Strengthening the Reporting of Observational Studies in Epidemiology for Respondent-Driven Sampling Studies: ‘STROBE-RDS’ Statement
  • CONSORT extension for reporting N-of-1 trials (CENT) 2015 Statement
  • A protocol format for the preparation, registration and publication of systematic reviews of animal intervention studies
  • Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate
  • Preferred Reporting Items for Systematic Review and Meta-Analyses of individual participant data: the PRISMA-IPD Statement
  • Evaluating complex interventions in end of life care: the MORECare statement on good practice generated by a synthesis of transparent expert consultations and systematic reviews
  • Biospecimen reporting for improved study quality (BRISQ)
  • Developing a guideline to standardize the citation of bioresources in journal articles (CoBRA)
  • Reporting Guidelines for Clinical Pharmacokinetic Studies: The ClinPK Statement
  • Using the spinal cord injury common data elements
  • Guidelines for assessment of bone microstructure in rodents using micro-computed tomography
  • Instrumental variable methods in comparative safety and effectiveness research
  • Protecting the power of interventions through proper reporting
  • Reporting of data from out-of-hospital cardiac arrest has to involve emergency medical dispatching – taking the recommendations on reporting OHCA the Utstein style a step further
  • A position paper on standardizing the nonneoplastic kidney biopsy report
  • Using qualitative methods for attribute development for discrete choice experiments: issues and recommendations
  • Reporting of interaction
  • Head, neck, and brain tumor embolization guidelines
  • Reporting outcomes of back pain trials: a modified Delphi study
  • A common language in neoadjuvant breast cancer clinical trials: proposals for standard definitions and endpoints
  • Viscerotropic disease: case definition and guidelines for collection, analysis, and presentation of immunization safety data
  • Diarrhea: case definition and guidelines for collection, analysis, and presentation of immunization safety data
  • Immunization site pain: case definition and guidelines for collection, analysis, and presentation of immunization safety data
  • Can the Brighton Collaboration case definitions be used to improve the quality of Adverse Event Following Immunization (AEFI) reporting? Anaphylaxis as a case study
  • Definitions, methodological and statistical issues for phase 3 clinical trials in chronic myeloid leukemia: a proposal by the European LeukemiaNet
  • A new standardized format for reporting hearing outcome in clinical trials
  • A consensus approach toward the standardization of back pain definitions for use in prevalence studies
  • A proposed taxonomy of terms to guide the clinical trial recruitment process
  • Minimum data elements for research reports on CFS
  • Reporting standards for angiographic evaluation and endovascular treatment of cerebral arteriovenous malformations
  • How to report low-level laser therapy (LLLT)/photomedicine dose and beam parameters in clinical and laboratory studies
  • Common data elements for posttraumatic stress disorder research
  • American College of Medical Genetics standards and guidelines for interpretation and reporting of postnatal constitutional copy number variants
  • Completeness of reporting of radiation therapy planning, dose, and delivery in veterinary radiation oncology manuscripts from 2005 to 2010
  • Criteria for Reporting the Development and Evaluation of Complex Interventions in healthcare: revised guideline (CReDECI 2)
  • TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods
  • Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement
  • Reporting guidance for violence risk assessment predictive validity studies: the RAGEE Statement
  • Standards for reporting qualitative research: a synthesis of recommendations
  • A systematic review of systematic reviews and meta-analyses of animal experiments with guidelines for reporting
  • Systematic reviews and meta-analysis of preclinical studies: why perform them and how to appraise them critically
  • Guidelines for reporting case studies on extracorporeal treatments in poisonings: methodology
  • Reporting standards for studies of diagnostic test accuracy in dementia: The STARDdem Initiative
  • Strengthening the reporting of molecular epidemiology for infectious diseases (STROME-ID) : an extension of the STROBE statement
  • Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide
  • Launch of a checklist for reporting longitudinal observational drug studies in rheumatology: a EULAR extension of STROBE guidelines based on experience from biologics registries
  • CONSORT Harms 2022 statement, explanation, and elaboration: updated guideline for the reporting of harms in randomized trials
  • The CARE Guidelines: Consensus-based Clinical Case Reporting Guideline Development
  • Documenting Clinical and Laboratory Images in Publications: The CLIP Principles
  • ICMJE: Uniform Format for Disclosure of Competing Interests in ICMJE Journals
  • Selection and presentation of imaging figures in the medical literature
  • Neuroimaging standards for research into small vessel disease and its contribution to ageing and neurodegeneration
  • Finding What Works in Health Care: Standards for Systematic Reviews. Chapter 5 – Standards for Reporting Systematic Reviews
  • Systematic Reviews. CRD’s guidance for undertaking reviews in health care
  • The HuGENet™ HuGE Review Handbook, version 1.0. Guidelines for systematic review and meta-analysis of gene-disease association studies
  • Cochrane Handbook for Systematic Reviews of Interventions Version 6.1
  • Writing for Publication in Veterinary Medicine. A Practical Guide for Researchers and Clinicians
  • Publication of population data for forensic purposes
  • Publication of population data of linearly inherited DNA markers in the International Journal of Legal Medicine
  • Proposed definitions and criteria for reporting time frame, outcome, and complications for clinical orthopedic studies in veterinary medicine
  • Recommended guidelines for the conduct and evaluation of prognostic studies in veterinary oncology
  • Update of the stroke therapy academic industry roundtable preclinical recommendations
  • Good laboratory practice: preventing introduction of bias at the bench
  • Consensus-based reporting standards for diagnostic test accuracy studies for paratuberculosis in ruminants
  • A gold standard publication checklist to improve the quality of animal studies, to fully integrate the Three Rs, and to make systematic reviews more feasible
  • The ARRIVE Guidelines 2.0: updated guidelines for reporting animal research
  • International Society for Medical Publication Professionals Code of Ethics
  • Proposed best practice for statisticians in the reporting and publication of pharmaceutical industry-sponsored clinical trials
  • What should be done to tackle ghostwriting in the medical literature?
  • Electrodermal activity at acupoints: literature review and recommendations for reporting clinical trials
  • Systematic review to determine best practice reporting guidelines for AFO interventions in studies involving children with cerebral palsy
  • EANM Dosimetry Committee guidance document: good practice of clinical dosimetry reporting
  • ASAS recommendations for collecting, analysing and reporting NSAID intake in clinical trials/epidemiological studies in axial spondyloarthritis
  • International Spinal Cord Injury Core Data Set (version 3.0) – including standardization of reporting
  • Risk of recurrent venous thromboembolism after stopping treatment in cohort studies: recommendation for acceptable rates and standardized reporting
  • The importance of uniform venous terminology in reports on varicose veins
  • Recommendations for reporting perioperative transoesophageal echo studies
  • Guidelines for reporting an fMRI study
  • Society for Cardiovascular Magnetic Resonance guidelines for reporting cardiovascular magnetic resonance examinations
  • Criteria for evaluation of novel markers of cardiovascular risk
  • American Society of Transplantation recommendations for screening, monitoring and reporting of infectious complications in immunosuppression trials in recipients of organ transplantation
  • Research reporting standards for radioembolization of hepatic malignancies
  • Research reporting standards for image-guided ablation of bone and soft tissue tumors
  • EANM procedure guidelines for brain neurotransmission SPECT using I-labelled dopamine transporter ligands, version 2
  • Transcatheter Therapy for Hepatic Malignancy: Standardization of Terminology and Reporting Criteria
  • Reporting standards for percutaneous thermal ablation of renal cell carcinoma
  • Research reporting standards for percutaneous vertebral augmentation
  • Reporting standards for percutaneous interventions in dialysis access
  • Reporting standards for clinical evaluation of new peripheral arterial revascularization devices
  • Guidelines for the reporting of renal artery revascularization in clinical trials
  • Reporting standards for carotid interventions from the Society for Vascular Surgery
  • Reporting standards for carotid artery angioplasty and stent placement
  • Standardized definitions and clinical endpoints in carotid artery and supra-aortic trunk revascularization trials
  • Setting the standards for reporting ruptured abdominal aortic aneurysm
  • Endovascular repair compared with operative repair of traumatic rupture of the thoracic aorta: a nonsystematic review and a plea for trauma-specific reporting guidelines
  • Reporting standards for thoracic endovascular aortic repair (TEVAR)
  • Research reporting standards for endovascular treatment of pelvic venous insufficiency
  • Reporting standards for endovascular treatment of pulmonary embolism
  • Reporting Standards for Endovascular Repair of Saccular Intracranial Cerebral Aneurysms
  • Trial design and reporting standards for intra-arterial cerebral thrombolysis for acute ischemic stroke
  • Target registration and target positioning errors in computer-assisted neurosurgery: proposal for a standardized reporting of error assessment
  • Recommended guidelines for reporting on emergency medical dispatch when conducting research in emergency medicine: the Utstein style
  • Recommended guidelines for reviewing, reporting, and conducting research on post-resuscitation care: the Utstein style
  • Utstein-style guidelines for uniform reporting of laboratory CPR research
  • Recommendations for uniform reporting of data following major trauma–the Utstein style
  • Recommended guidelines for reviewing, reporting, and conducting research on in-hospital resuscitation: the in-hospital ‘Utstein style’
  • Standardization of uveitis nomenclature for reporting clinical data. Results of the First International Workshop
  • EACTS/ESCVS best practice guidelines for reporting treatment results in the thoracic aorta
  • Consensus statement: Defining minimal criteria for reporting the systemic inflammatory response to cardiopulmonary bypass
  • Guidelines for reporting data and outcomes for the surgical treatment of atrial fibrillation
  • Recommendations for reporting morbid events after heart valve surgery
  • Standards for reporting results of refractive surgery
  • Quality of study methods in individual- and group-level HIV intervention research: critical reporting elements
  • Quality of reporting in evaluations of surgical treatment of trigeminal neuralgia: recommendations for future reports
  • Guidelines for reporting case series of tumours of the colon and rectum
  • Guidance on reporting ultrasound exposure conditions for bio-effects studies
  • American College of Cardiology Clinical Expert Consensus Document on Standards for Acquisition, Measurement and Reporting of Intravascular Ultrasound Studies (IVUS)
  • Standardized reporting of bleeding complications for clinical investigations in acute coronary syndromes: a proposal from the academic bleeding consensus (ABC) multidisciplinary working group
  • Standardized reporting guidelines for studies evaluating risk stratification of ED patients with potential acute coronary syndromes
  • Calibration methods used in cancer simulation models and suggested reporting guidelines
  • Preschool vision screening: what should we be detecting and how should we report it? Uniform guidelines for reporting results of preschool vision screening studies
  • The lessons of QUANTEC: recommendations for reporting and gathering data on dose-volume dependencies of treatment outcome
  • A reporting guideline for clinical platelet transfusion studies from the BEST Collaborative
  • Clinical trials focusing on cancer pain educational interventions: core components to include during planning and reporting
  • Reporting disease activity in clinical trials of patients with rheumatoid arthritis: EULAR/ACR collaborative recommendations
  • Exercise therapy and low back pain: insights and proposals to improve the design, conduct, and reporting of clinical trials
  • Eligibility and outcomes reporting guidelines for clinical trials for patients in the state of a rising prostate-specific antigen: recommendations from the Prostate-Specific Antigen Working Group
  • Consensus guidelines for the conduct and reporting of clinical trials in systemic light-chain amyloidosis
  • Diagnosis and management of acute myeloid leukemia in adults: recommendations from an international expert panel, on behalf of the European LeukemiaNet
  • Revised recommendations of the International Working Group for Diagnosis, Standardization of Response Criteria, Treatment Outcomes, and Reporting Standards for Therapeutic Trials in Acute Myeloid Leukemia
  • Methodological challenges when using actigraphy in research
  • Draft STROBE checklist for conference abstracts
  • Conflict of Interest in Peer-Reviewed Medical Journals
  • Financial Conflicts of Interest Checklist 2010 for clinical research studies
  • Professional medical associations and their relationships with industry: a proposal for controlling conflict of interest
  • How to formulate research recommendations
  • Suggestions for improving the reporting of clinical research: the role of narrative
  • The case for structuring the discussion of scientific papers
  • More medical journals should inform their contributors about three key principles of graph construction
  • Figures in clinical trial reports: current practice & scope for improvement
  • Recommendations for the assessment and reporting of multivariable logistic regression in transplantation literature
  • Reporting results of latent growth modeling and multilevel modeling analyses: some recommendations for rehabilitation psychology
  • Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls
  • Assessing and reporting heterogeneity in treatment effects in clinical trials: a proposal
  • Statistics in medicine–reporting of subgroup analyses in clinical trials
  • Seven items were identified for inclusion when reporting a Bayesian analysis of a clinical study
  • Bayesian methods in health technology assessment: a review
  • Basic Statistical Reporting for Articles Published in Biomedical Journals: The “Statistical Analyses and Methods in the Published Literature” or The SAMPL Guidelines
  • Establishing a knowledge trail from molecular experiments to clinical trials
  • Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers
  • Best practices in the reporting of participatory action research: Embracing both the forest and the trees
  • A comprehensive checklist for reporting the use of OSCEs
  • Quality of standardised patient research reports in the medical education literature: review and recommendations
  • Development and use of reporting guidelines for assessing the quality of validation studies of health administrative data
  • Perspective: Guidelines for reporting team-based learning activities in the medical and health sciences education literature
  • Authors’ Submission Toolkit: a practical guide to getting your research published
  • Good publication practice for communicating company sponsored medical research: GPP3
  • Standardized reporting of clinical practice guidelines: a proposal from the Conference on Guideline Standardization
  • A new structure for quality improvement reports
  • Guidelines for conducting and reporting economic evaluation of fall prevention strategies
  • Design, execution, interpretation, and reporting of economic evaluation studies in obstetrics
  • Economic evaluation using decision analytical modelling: design, conduct, analysis, and reporting
  • Reporting format for economic evaluation. Part II : Focus on modelling studies
  • Increasing the generalizability of economic evaluations: recommendations for the design, analysis, and reporting of studies
  • Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA Task Force report
  • Recommendations for Conduct, Methodological Practices, and Reporting of Cost-effectiveness Analyses: Second Panel on Cost-Effectiveness in Health and Medicine
  • The quality of mixed methods studies in health services research
  • Qualitative research review guidelines – RATS
  • Evolving guidelines for publication of qualitative research studies in psychology and related fields
  • Revealing the wood and the trees: reporting qualitative research
  • Qualitative research: standards, challenges, and guidelines
  • RAMESES publication standards: meta-narrative reviews
  • RAMESES publication standards: realist syntheses
  • Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group
  • Meta-analysis of individual participant data: rationale, conduct, and reporting
  • PRISMA-Equity 2012 Extension: Reporting Guidelines for Systematic Reviews with a Focus on Health Equity
  • PRISMA 2020 for Abstracts: Reporting Systematic Reviews in Journal and Conference Abstracts
  • The STARD statement for reporting diagnostic accuracy studies: application to the history and physical examination
  • Capturing momentary, self-report data: a proposal for reporting guidelines
  • Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
  • Guidelines for field surveys of the quality of medicines: a proposal
  • A guide for the design and conduct of self-administered surveys of clinicians
  • Good practice in the conduct and reporting of survey research
  • Reporting genetic results in research studies: summary and recommendations of an NHLBI working group
  • Recommendations for biomarker identification and qualification in clinical proteomics
  • Missing covariate data within cancer prognostic studies: a review of current reporting and proposed guidelines
  • Gene expression-based prognostic signatures in lung cancer: ready for clinical use?
  • Anecdotes as evidence
  • Recommendations for reporting adverse drug reactions and adverse events of traditional Chinese medicine
  • Guidelines for submitting adverse event reports for publication
  • Guidelines for clinical case reports in behavioral clinical psychology
  • Instructions to authors for case reporting are limited: a review of a core journal list
  • Reporting participation in case-control studies
  • Conducting and reporting case series and audits – author guidelines for acupuncture in medicine
  • Appropriate use and reporting of uncontrolled case series in the medical literature
  • Improving the reporting of clinical case series
  • EULAR points to consider when establishing, analysing and reporting safety data of biologics registers in rheumatology
  • Preliminary core set of domains and reporting requirements for longitudinal observational studies in rheumatology
  • A community standard for immunogenomic data reporting and analysis: proposal for a STrengthening the REporting of Immunogenomic Studies statement
  • STrengthening the Reporting of OBservational studies in Epidemiology – Molecular Epidemiology (STROBE-ME): an extension of the STROBE statement
  • STrengthening the REporting of Genetic Association Studies (STREGA): an extension of the STROBE Statement
  • Guidelines for conducting and reporting mixed research in the field of counseling and beyond
  • Reporting experiments in homeopathic basic research (REHBaR) – a detailed guideline for authors
  • Guidelines for the design, conduct and reporting of human intervention studies to evaluate the health benefits of foods
  • Good research practices for comparative effectiveness research: Defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources
  • Guidelines for reporting non-randomised studies
  • Setting the bar in phase II trials: the use of historical data for determining “go/no go” decision for definitive phase III testing
  • GNOSIS: Guidelines for Neuro-Oncology: Standards for Investigational Studies – reporting of surgically based therapeutic clinical trials
  • The standard of reporting of health-related quality of life in clinical cancer trials
  • A systematic review of the reporting of Data Monitoring Committees’ roles, interim analysis and early termination in pediatric clinical trials
  • “Brimful of STARLITE”: toward standards for reporting literature searches
  • Systematic prioritization of the STARE-HI reporting items. An application to short conference papers on health informatics evaluation
  • CONSORT-EHEALTH: improving and standardizing evaluation reports of Web-based and mobile health interventions
  • Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension
  • Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) Statement: Updated Reporting Guidance for Health Economic Evaluations
  • Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ
  • Inadequate planning and reporting of adjudication committees in clinical trials: recommendation proposal
  • Relevance of CONSORT reporting criteria for research on eHealth interventions
  • Reporting guidelines for music-based interventions
  • Reporting whole-body vibration intervention studies: recommendations of the International Society of Musculoskeletal and Neuronal Interactions
  • Reporting standards for studies of tailored interventions
  • Reporting data on homeopathic treatments (RedHot): a supplement to CONSORT
  • Evaluating the quality of reporting occupational therapy randomized controlled trials by expanding the CONSORT criteria
  • WIDER recommendations for reporting of behaviour change interventions
  • Evidence-based behavioral medicine: what is it and how do we achieve it?
  • CONSORT 2010: CONSORT-C (children)
  • The CONSORT statement checklist in allergen-specific immunotherapy: a GA²LEN paper
  • Improving the reporting of pragmatic trials: an extension of the CONSORT statement
  • CONSORT for reporting randomised trials in journal and conference abstracts
  • CONSORT Statement for Randomized Trials of Nonpharmacologic Treatments: A 2017 Update and a CONSORT Extension for Nonpharmacologic Trial Abstracts
  • Reporting randomized, controlled trials of herbal interventions: an elaborated CONSORT Statement
  • Consort 2010 statement: extension to cluster randomised trials
  • Reporting of noninferiority and equivalence randomized trials: extension of the CONSORT 2010 statement
  • Revised STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA): extending the CONSORT statement
  • STARE-HI – Statement on reporting of evaluation studies in Health Informatics
  • The ORION statement: guidelines for transparent reporting of Outbreak Reports and Intervention studies Of Nosocomial infection
  • Consensus recommendations for the uniform reporting of clinical trials: report of the International Myeloma Workshop Consensus Panel 1
  • Economic evaluation alongside randomised controlled trials: design, conduct, analysis, and reporting
  • Reporting and presenting information retrieval processes: the need for optimizing common practice in health technology assessment
  • Guidelines for reporting reliability and agreement studies (GRRAS) were proposed
  • SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process
  • Refining a checklist for reporting patient populations and service characteristics in hospice and palliative care research
  • Point and interval estimates of effect sizes for the case-controls design in neuropsychology: rationale, methods, implementations, and proposed reporting standards
  • Recommended guidelines for uniform reporting of pediatric advanced life support: the pediatric Utstein style
  • Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups
  • Guidelines for reporting results of quality of life assessments in clinical trials
  • Recommendations for reporting economic evaluations of haemophilia prophylaxis
  • Overview of methods used in cross-cultural comparisons of menopausal symptoms and their determinants: Guidelines for Strengthening the Reporting of Menopause and Aging (STROMA) studies
  • Strengthening the reporting of Genetic RIsk Prediction Studies: the GRIPS Statement
  • GNOSIS: guidelines for neuro-oncology: standards for investigational studies – reporting of phase 1 and phase 2 clinical trials
  • Standard guidelines for publication of deep brain stimulation studies in Parkinson’s disease (Guide4DBS-PD)

Reporting guidelines for main study types

  • Contemp Clin Trials Commun
  • v.14; 2019 Jun

Study reporting guidelines: How valid are they?


Reporting guidelines help improve the reporting of specific study designs, and clear guidance on the best approaches to developing such guidelines is available. The methodological strength, or validation, of these guidelines is, however, unclear. This article explores what validation of reporting guidelines might involve and whether it has been conducted for key reporting guidelines.

1. Introduction

Comprehensive reporting can reduce reporting bias, enable informed decision making in clinical practice, limit duplication of effort and inform subsequent research [ 1 ]. The quality of reporting of research activity continues to be inadequate, presenting readers with difficulties in judging the reliability of research findings, or how best to interpret results for individual settings [ 2 , 3 ].

Reporting guidelines have been developed to help improve the reporting of specific study designs. If followed by authors, they should enable users to understand the design, conduct and analysis of the research, to critically appraise and review the findings, and to interpret the conclusions appropriately [ 4 ].

A guideline is a checklist, diagram or explicit text which guides authors in reporting research, and should be developed using explicit methodology [ 2 ]. Many already exist, mostly as checklists, and clear guidance, including a checklist of recommended steps, for developing such tools is available [ 2 ].

2. Use of guidelines

A search of the websites of five leading medical and health research journals (BMJ, Journal of the American Medical Association (JAMA), Lancet, New England Journal of Medicine (NEJM) and BMC Trials) identified the reporting guidelines included in the journals’ instructions to authors. These journals were purposively sampled because they are prominent in the publication of a wide range of research topics and study designs, each publish a substantial volume of research (RCT publication rates ranging from 1–10 per month, or 4–31 per quarter), and represent a range of impact factors (range 2.067 to 79.258).

All five journals require the use of the CONSORT reporting guidelines for randomised controlled trial (RCT) manuscripts. For RCTs, the BMJ also specifically recommend use of the TIDieR checklist to ensure accurate and complete reporting of a trial intervention [ 5 ]. BMC Trials, BMJ, JAMA and Lancet promote the use of reporting guidelines for other study designs and refer authors to the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network database of reporting guidelines [ 6 ].

The EQUATOR Network is an international multidisciplinary group that promotes transparent and accurate health research reporting through use of reporting guidelines. The network provides access to a comprehensive range of such guidelines in a searchable database ( http://www.equator-network.org ). However, there is evidence that simply having reporting guidelines, even with journal endorsement of their use, is insufficient [ 7 ].

The EQUATOR Network takes an inclusive approach and is clear that there is no indication of the methodological strength, or validation, of the guidelines listed. The reporting guidelines for the main study types, such as CONSORT for RCTs, STROBE for observational studies, and STARD for diagnostic/prognostic studies, are highlighted on the front page of the EQUATOR Network website. We therefore decided to explore what validation of reporting guidelines might involve.

3. Validation

Validation is ‘the action of checking or proving the validity or accuracy of something’ [ 8 ], a principle already well established in the development of health care and research documentation.

In the UK the National Institute for Health and Care Excellence (NICE) requires validation of guidelines designed to inform decisions in health, public health and social care ( https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance ). As a minimum, validation comprises stakeholder review, with fieldwork to trial implementation and discussions with service users, with external review also included if warranted, for example for guidelines in complex or sensitive areas ( https://www.nice.org.uk/process/pmg20/chapter/the-validation-process-for-draft-guidelines-and-dealing-with-stakeholder-comments ).

A number of initiatives provide standards for the development and validation of patient reported outcome measures (PROMs) for research [ [9] , [10] , [11] ]. These include identifying the scope and focus of the measure, reviewing the literature, engaging relevant stakeholders, and consensus assessment. Guidance for the development of health research reporting guidelines suggests similar features to those used for PROMs: 1) literature review; 2) Delphi process; 3) identification of key items; 4) meeting with collaborators; 5) iterative revision and review; and 6) pilot testing, all derived from the authors’ extensive experience in developing reporting guidelines [ 2 ]. While there are many similarities in the proposed activities for developing PROMs and reporting guidelines, unlike for PROMs, methods of validating reporting guidelines are not explicitly described.

In 2016 we conducted a systematic literature search of MEDLINE to identify validation methods commonly used for PROMs. Search strategies are provided in Supplementary Document 1. Our pre-defined inclusion criteria were studies that focused on PROMs, detailed a validation method, or referenced other publications regarding validation. Two authors independently screened the search results against the inclusion criteria and identified 73 relevant papers. Details of the included papers are provided in Supplementary Document 2. Data on PROM type, validation method, and whether this was noted as a strength or limitation were extracted; a summary of the validation methods identified is given in Tables 1 and 2.
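Dual independent screening of this kind is often accompanied by a chance-corrected agreement statistic; the article does not report one, but as a minimal, purely illustrative sketch (the screening decisions below are hypothetical, not the authors' data), Cohen's kappa can be computed as:

```python
def cohens_kappa(rater_a, rater_b):
    # Observed agreement: proportion of records the two screeners
    # classified identically (1 = include, 0 = exclude).
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, estimated from each screener's marginal rates.
    categories = set(rater_a) | set(rater_b)
    pe = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (po - pe) / (1 - pe)

# Hypothetical decisions for ten records by two independent screeners.
screener_1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
screener_2 = [1, 0, 0, 0, 1, 0, 1, 0, 0, 1]
print(round(cohens_kappa(screener_1, screener_2), 3))  # → 0.583
```

Values near 1 indicate near-perfect agreement beyond chance; disagreements (here, two of ten records) would then be resolved by discussion, as described above.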

Types of validation method used for PROM development.

Combinations of validation methods used for PROM development.

By far the most common method of validation was statistical testing, either as a single validation method or in combination with other methods. The most common combination was statistical testing in conjunction with comparison against similar measures. This corresponds to guidance published in 2011 indicating that comparison and correlation with similar, existing measures is critical in the development of PROMs [ 10 ].
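The article does not give a worked example, but as a minimal illustrative sketch (the scores below are hypothetical, not data from any included study), comparison with an existing measure is often quantified as a Pearson correlation between scores on the new and established instruments:

```python
from math import sqrt

def pearson_r(x, y):
    # Convergent validity: correlation between scores on a new
    # measure and an established instrument measuring the same construct.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores: new PROM vs an established instrument.
new_measure = [10, 12, 14, 16, 18]
established = [11, 13, 13, 17, 19]
print(round(pearson_r(new_measure, established), 3))  # → 0.962
```

A high correlation with the existing measure would support validity; in practice, statistical validation of PROMs also involves reliability and factor-structure testing beyond this simple sketch.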

4. Are reporting guidelines validated?

Having established the methods of validation, we went on to see which had been used in the reporting guidelines highlighted on the EQUATOR Network homepage.

We conducted a literature search in 2018 to identify papers reporting the development of guidelines for the main study types highlighted on the EQUATOR Network website. Two researchers independently extracted information about the development and validation methods reported; disagreements were resolved through discussion. We excluded papers where content analysis was the sole measure used, as this was not explicitly identified as a validation activity. The results are summarised in Table 3.

Validation methods used in reporting guidelines for main study types.

The methods described in the papers matched the principles outlined in the guidance for the development of health research reporting guidelines [ 2 ]. While some guideline developers used multiple components and others were more selective, we believe the overarching principles remained. In the absence of clear statements, a pragmatic interpretation would be that, for example, evidence synthesis requires a literature review, and that having conducted a literature review or convened a stakeholder meeting, validation methods can be supposed to have been used. Of note is that, although the key principles were followed, this activity is not described as ‘validation’, which could account for why the term is not used in the context of promoting the use of reporting guidelines.

5. Discussion

Reporting guidelines are available for a wide range of health care research methodologies [ 6 ]. Many journals request their use to increase transparency in research reporting; however, mandated use is rare, despite evidence that reporting guidelines can have a positive impact on completeness of reporting [ 7 , 21 ].

Validation is important to ensure the validity and accuracy of tools used within the conduct and reporting of research. While the validation of PROMs is frequently reported, we have identified that, although validation activities for reporting guidelines do occur, they are not always explicitly reported as such. This may be because some validation activities are also part of the development process, for example a consensus exercise. Reports of the development of future reporting guidelines may benefit from clearly identifying the work undertaken to ensure the accuracy of the guidelines proposed. This could be within a ‘Validation’ section of a guideline publication or through simple use of the words ‘validated/validation’ in the context of the activity being reported. For completeness of reporting, it may also be appropriate to request use of a development and validation checklist, for example that provided by Moher et al. [ 2 ], where guideline development is reported.

The EQUATOR Network database currently contains 406 reporting guidelines covering a variety of research methodologies, many either specialised or narrow in scope. It is unclear how many of these included validation activities in their development, and we were unable to identify any post-development or post-publication validation work in our literature search. Ensuring that validation activities are not only undertaken but also clearly reported could add weight to the value of reporting guidelines, both for those promoting their use and for those authoring papers. By understanding what validation looks like, journals and peer reviewers could be encouraged to mandate the use of validated checklists.

Despite pilot testing being included as one of the elements for the development of health research reporting guidelines, surprisingly few of the guideline development papers reported pilot testing prior to publication. Given that reporting guidelines are intended to ensure transparent reporting across similar research methodologies, pilot testing would seem particularly applicable to their development.

Although we used systematic review methods to identify and select papers, we acknowledge that some may have been missed; however, any impact from missed papers is likely to be limited.

6. Conclusion

Reports of guideline development, while including details of the development process, frequently fail to explicitly identify validation activities even when these have clearly been undertaken. While this may appear a semantic or even pedantic issue, emphasising that reporting guidelines have been validated could encourage authors to use the guidelines, publishers and journals to mandate checklist submission with manuscripts, and peer reviewers to monitor accuracy of completion. An improvement in any, and ideally all, of these areas would help promote high quality research and reduce research waste.

Conception and design: CA, AB.

Analysis and interpretation of data: CA, MN, SJ.

Drafting of the article: CA.

Critical revision for important intellectual content: AB, MN, SJ.

Conflicts of interest

CA, MN, SJ declare they have no conflicts of interest. AB is a member of the PRISMA-P group.

Appendix A Supplementary data to this article can be found online at https://doi.org/10.1016/j.conctc.2019.100343 .

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Appendix A. Supplementary data

The following are the Supplementary data to this article:

  • Open access
  • Published: 24 April 2024

Breast cancer screening motivation and behaviours of women aged over 75 years: a scoping review

  • Virginia Dickson-Swift 1 ,
  • Joanne Adams 1 ,
  • Evelien Spelten 1 ,
  • Irene Blackberry 2 ,
  • Carlene Wilson 3 , 4 , 5 &
  • Eva Yuen 3 , 6 , 7 , 8  

BMC Women's Health volume  24 , Article number:  256 ( 2024 ) Cite this article


This scoping review aimed to identify and present the evidence describing key motivations for breast cancer screening among women aged ≥ 75 years. Few of the internationally available guidelines recommend continued biennial screening for this age group. Some suggest ongoing screening is unnecessary or should be determined on individual health status and life expectancy. Recent research has shown that despite recommendations regarding screening, older women continue to hold positive attitudes to breast screening and participate when the opportunity is available.

All original research articles that address motivation, intention and/or participation in screening for breast cancer among women aged ≥ 75 years were considered for inclusion. These included articles reporting on women who use public and private breast cancer screening services and those who do not use screening services (i.e., non-screeners).

The Joanna Briggs Institute (JBI) methodology for scoping reviews was used to guide this review. A comprehensive search strategy was developed with the assistance of a specialist librarian to access selected databases including: the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medline, Web of Science and PsychInfo. The review was restricted to original research studies published since 2009, available in English and focusing on high-income countries (as defined by the World Bank). Title and abstract screening, followed by an assessment of full-text studies against the inclusion criteria was completed by at least two reviewers. Data relating to key motivations, screening intention and behaviour were extracted, and a thematic analysis of study findings undertaken.

A total of fourteen (14) studies were included in the review. Thematic analysis resulted in identification of three themes from included studies highlighting that decisions about screening were influenced by: knowledge of the benefits and harms of screening and their relationship to age; underlying attitudes to the importance of cancer screening in women's lives; and use of decision aids to improve knowledge and guide decision-making.

The results of this review provide a comprehensive overview of current knowledge regarding the motivations and screening behaviour of older women about breast cancer screening which may inform policy development.


Introduction

Breast cancer is now the most commonly diagnosed cancer in the world overtaking lung cancer in 2021 [ 1 ]. Across the globe, breast cancer contributed to 25.8% of the total number of new cases of cancer diagnosed in 2020 [ 2 ] and accounts for a high disease burden for women [ 3 ]. Screening for breast cancer is an effective means of detecting early-stage cancer and has been shown to significantly improve survival rates [ 4 ]. A recent systematic review of international screening guidelines found that most countries recommend that women have biennial mammograms between the ages of 40–70 years [ 5 ] with some recommending that there should be no upper age limit [ 6 , 7 , 8 , 9 , 10 , 11 , 12 ] and others suggesting that benefits of continued screening for women over 75 are not clear [ 13 , 14 , 15 ].

Some guidelines suggest that the decision to end screening should be determined based on the individual health status of the woman, her life expectancy and current health issues [ 5 , 16 , 17 ]. This is because the benefits of mammography screening may be limited after 7 years due to existing comorbidities and limited life expectancy [ 18 , 19 , 20 , 21 ], with some jurisdictions recommending breast cancer screening for women ≥ 75 years only when life expectancy is estimated to be at least 7–10 years [ 22 ]. Others have argued that decisions about continuing with screening mammography should depend on individual patient risk and health management preferences [ 23 ]. This decision is likely facilitated by a discussion between a health care provider and patient about the harms and benefits of screening outside the recommended ages [ 24 , 25 ]. While mammography may enable early detection of breast cancer, false-positive results and overdiagnosis may occur. Studies have estimated that up to 25% of breast cancer cases in the general population may be overdiagnosed [ 26 , 27 , 28 ].

The risk of being diagnosed with breast cancer increases with age and approximately 80% of new cases of breast cancer in high-income countries are in women over the age of 50 [ 29 ]. The average age of first diagnosis of breast cancer in high income countries is comparable to that of Australian women which is now 61 years [ 2 , 4 , 29 ]. Studies show that women aged ≥ 75 years generally have positive attitudes to mammography screening and report high levels of perceived benefits including early detection of breast cancer and a desire to stay healthy as they age [ 21 , 30 , 31 , 32 ]. Some women aged over 74 participate, or plan to participate, in screening despite recommendations from health professionals and government guidelines advising against it [ 33 ]. Results of a recent review found that knowledge of the recommended guidelines and the potential harms of screening are limited and many older women believed that the benefits of continued screening outweighed the risks [ 30 ].

Very few studies have been undertaken to understand the motivations of women aged ≥ 75 years to screen, or to establish screening participation rates in this group. This is surprising given that increasing age is recognised as a key risk factor for the development of breast cancer, and that screening is offered in many locations around the world every two years up to the age of 74. The topic is important given the ambiguity around best practice for participation beyond 74 years. A preliminary search of Open Science Framework, PROSPERO, the Cochrane Database of Systematic Reviews and JBI Evidence Synthesis in May 2022 did not locate any reviews on this topic.

This scoping review has allowed for the mapping of a broad range of research to explore the breadth and depth of the literature, summarize the evidence and identify knowledge gaps [ 34 , 35 ]. This information has supported the development of a comprehensive overview of current knowledge of motivations of women to screen and screening participation rates among women outside the targeted age of many international screening programs.

Materials and methods

Research question.

The research question for this scoping review was developed by applying the Population—Concept—Context (PCC) framework [ 36 ]. The current review addresses the research question “What research has been undertaken in high-income countries (context) exploring the key motivations to screen for breast cancer and screening participation (concepts) among women ≥ 75 years of age (population)?”

Eligibility criteria

Participants.

Women aged ≥ 75 years were the key population. Motivations to screen were used as the key predictors, and screening intention and behaviour, including the variables that discriminate those who screen from those who do not (non-screeners), as the key outcomes.

From a conceptual perspective it was considered that motivation led to behaviour, therefore articles that described motivation and corresponding behaviour were considered. These included articles reporting on women who use public (government funded) and private (fee for service) breast cancer screening services and those who do not use screening services (i.e., non-screeners).

The scope included high-income countries using the World Bank definition [ 37 ]. These countries have broadly similar health systems and opportunities for breast cancer screening in both public and private settings.

Types of sources

All studies reporting original research in peer-reviewed journals from January 2009 were eligible for inclusion, regardless of design. This date was selected due to an evaluation undertaken for BreastScreen Australia recommending expansion of the age group to include 70–74-year-old women [ 38 ]. This date was also indicative of international debate regarding breast cancer screening effectiveness at this time [ 39 , 40 ]. Reviews were also included, regardless of type—scoping, systematic, or narrative. Only sources published in English and available through the University’s extensive research holdings were eligible for inclusion. Ineligible materials were conference abstracts, letters to the editor, editorials, opinion pieces, commentaries, newspaper articles, dissertations and theses.

This scoping review was registered with the Open Science Framework database ( https://osf.io/fd3eh ) and followed Joanna Briggs Institute (JBI) methodology for scoping reviews [ 35 , 36 ]. Although ethics approval is not required for scoping reviews, the broader study was approved by the University Ethics Committee (approval number HEC 21249).

Search strategy

A pilot search strategy was developed in consultation with an expert health librarian, tested in MEDLINE (Ovid), and conducted on 3 June 2022. Articles from this pilot search were compared with seminal articles previously identified by members of the team and used to refine the search terms. The terms were then searched as both keywords and subject headings (e.g., MeSH) in titles and abstracts, and Boolean operators were employed. A full MEDLINE search was then carried out by the librarian (see Table  1 ). This strategy was adapted for each of the following databases: Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medical Literature Analysis and Retrieval System Online (MEDLINE), Web of Science and PsychInfo. The references of included studies were hand-searched to identify any additional evidence sources.
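The review's actual MEDLINE strategy appears in its Table 1. Purely as a hypothetical illustration of the structure described above (subject headings, title/abstract keywords, Boolean combination), an Ovid-style fragment might look like the following; the terms and line structure are illustrative, not the authors' strategy:

```text
1. exp Breast Neoplasms/
2. (breast adj3 (cancer* or neoplasm* or carcinoma*)).ti,ab.
3. 1 or 2
4. exp Mass Screening/ or mammograph*.ti,ab.
5. ("aged 80 and over" or elderly or "older women" or "75 and over").ti,ab.
6. 3 and 4 and 5
```

Here `exp …/` explodes a MeSH heading, `.ti,ab.` restricts keywords to titles and abstracts, and `adj3` is an Ovid proximity operator, matching the keyword/subject-heading approach the authors describe.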

Study/source of evidence selection

Following the search, all identified citations were collated and uploaded into EndNote v.X20 (Clarivate Analytics, PA, USA) and duplicates removed. The resulting articles were then imported into Covidence, Cochrane’s systematic review management software [ 41 ], and any remaining duplicates removed once importation was complete. Title and abstract screening was then undertaken against the eligibility criteria. A sample of 25 articles was assessed by all reviewers to ensure reliability in the application of the inclusion and exclusion criteria, with team discussion used to ensure consistent application. The Covidence software supports blind reviewing, with two reviewers required at each screening phase. Potentially relevant sources were retrieved in full text and assessed against the inclusion criteria by two independent reviewers. Conflicts were flagged within the software and discussed by the team until consensus was reached. Reasons for exclusion of studies at full text were recorded and reported in the scoping review. The Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist was used to guide the reporting of the review [ 42 ] and all stages were documented using the PRISMA-ScR flow chart [ 42 ].
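EndNote and Covidence perform the deduplication step automatically; purely as an illustrative sketch of what such matching involves (the records below are hypothetical, not part of the review), duplicates can be detected by comparing normalised titles and publication years:

```python
import re

def normalize(title):
    # Lowercase and collapse to alphanumeric words so trivial
    # punctuation or capitalisation differences do not hide duplicates.
    return " ".join(re.findall(r"[a-z0-9]+", title.lower()))

def deduplicate(records):
    # records: list of dicts with "title" and "year" keys.
    # Keep the first occurrence of each (normalised title, year) pair.
    seen, unique = set(), []
    for rec in records:
        key = (normalize(rec["title"]), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Breast screening in older women.", "year": 2015},
    {"title": "Breast Screening in Older Women", "year": 2015},
    {"title": "Decision aids for mammography", "year": 2018},
]
print(len(deduplicate(records)))  # → 2
```

Reference managers use fuzzier matching (authors, DOIs, page ranges), but the principle, normalising fields and collapsing on a composite key, is the same.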

Data extraction

A data extraction form was created in Covidence and used to extract study characteristics and to confirm the study’s relevance. This included specific details such as article author/s, title, year of publication, country, aim, population, setting, data collection methods and key findings relevant to the review question. The draft extraction form was modified as needed during the data extraction process.

Data analysis and presentation

Extracted data were summarised in tabular format (see Table  2 ). Consistent with the guidelines for the effective reporting of scoping reviews [ 43 ] and the JBI framework [ 35 ], the final stage of the review comprised a thematic analysis of the key findings of the included studies. Study findings were imported into QSR NVivo and coded line by line. Descriptive codes reflected key aspects of the included studies related to the motivations and behaviours of women aged over 75 years regarding breast cancer screening.

In line with the reporting requirements for scoping reviews, the search results for this review are presented in Fig.  1 [ 44 ].

Fig. 1 PRISMA flow chart. From: Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. https://doi.org/10.1136/bmj.n71

A total of 14 studies were included in the review: 12 from the US [ 33 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 ], one from the UK [ 23 ] and one from France [ 56 ]. Sample sizes varied, with most studies containing fewer than 50 women ( n  = 8) [ 33 , 45 , 46 , 48 , 51 , 52 , 55 ]. Two had larger samples: a French study with 136 women (a subset of a larger sample) [ 56 ] and a mixed methods study in the UK in which 26 women undertook interviews and 479 completed surveys [ 23 ]. One study did not report exact numbers [ 50 ]. Three studies [ 47 , 53 , 54 ] were undertaken by a group of US-based researchers utilising the same sample of women; however, each paper focused on different primary outcomes. The samples in the included studies were recruited from a range of locations, including primary medical care clinics, specialist medical clinics, university-affiliated medical clinics, community-based health centres and community outreach clinics [ 47 , 53 , 54 ].

Data collection methods varied: quantitative ( n  = 8), qualitative ( n  = 5) and mixed methods ( n  = 1). A range of data collection tools and research designs were utilised, including pre/post, pilot and cross-sectional surveys, interviews, and secondary analysis of existing datasets. Seven studies focused on the use of a Decision Aid (DA), in original or modified form, developed by Schonberg et al. [ 55 ] as a tool to increase knowledge about the harms and benefits of screening for older women [ 45 , 47 , 48 , 49 , 52 , 54 , 55 ]. Three studies focused on intention to screen [ 33 , 53 , 56 ], two on knowledge of, and attitudes to, screening [ 23 , 46 ], one on information needs relating to the risks and benefits of screening discontinuation [ 51 ], and one on perceptions about discontinuation of screening and the impact of social interactions on screening [ 50 ].

The three themes developed from the analysis of the included studies highlighted that decisions about screening were primarily influenced by: (1) knowledge of the benefits and harms of screening and their relationship to age; (2) underlying attitudes to the importance of cancer screening in women's lives; and (3) exposure to decision aids designed to facilitate informed decision-making. Each theme is presented below, drawing on the key findings of the relevant studies. The full set of extracted data can be found in Table  2 .

Knowledge of the benefits and harms of screening at ≥ 75 years

The decision to participate in routine mammography is influenced by individual differences in cognition and affect, interpersonal relationships, provider characteristics, and healthcare system variables. Women typically perceive mammograms as a positive, beneficial and routine component of care [ 46 ] and an important aspect of taking care of themselves [ 23 , 46 , 49 ]. One qualitative study undertaken in the US showed that few women had discussed mammography cessation or the potential harms of screening with their health care providers and some women reported they would insist on receiving mammography even without a provider recommendation to continue screening [ 46 ].

Studies suggested that ageing itself, and even poor health, were not seen as sufficient reasons for screening cessation. For many women, guidance from a health care provider was the most important influence on decision-making [ 46 ]. Preferences for communication about risks and benefits varied: one study reported that women wanted to learn more about harms and risks and recommended that this information be communicated by physicians or other healthcare providers, included in brochures and pamphlets, and presented outside clinical settings (e.g., in community-based seniors' groups) [ 51 ]. Others reported that women were sometimes sceptical of expert and government recommendations [ 33 ], although some were happy to participate in discussions with health educators or care providers about the harms and benefits of breast cancer screening and its potential cessation [ 52 ].

Underlying attitudes to the importance of cancer screening at and beyond 75 years

Included studies varied in how they described the importance of screening, with some attitudes based on past attendance and some on future intentions to screen. Three studies reported that some women intended to continue screening after 75 years of age [ 23 , 45 , 46 ], with one UK study reporting that women supported extending automatic recall indefinitely, regardless of age or health status; in this study, failure to invite older women to screen was interpreted as age discrimination [ 23 ]. The desire to continue screening beyond 75 was also highlighted in a French study, which found that 60% of women ( n  = 136, aged ≥ 75) intended to pursue screening in the future, and that 36% of those who had never undergone mammography ( n  = 27) intended to do so [ 56 ]. Intentions to screen varied significantly within this sample [ 56 ]. There were no sociodemographic differences between screened and unscreened women with regard to level of education, income, health risk behaviour (smoking, alcohol consumption), knowledge about the importance and process of screening, or psychological features (fear of the test, fear of the results, fear of the disease, trust in screening impact) [ 56 ]. Further analysis showed that three items were associated with higher attendance at screening: (1) screening was initiated by a physician; (2) the woman had consulted a gynaecologist during the past 12 months; and (3) the woman had already undergone at least five screening mammograms. In other words, while income, education, psychological features and other health risk behaviours did not affect screening intention, having previously undergone mammography increased the likelihood of ongoing screening. No information was provided to explain why women who had not previously undergone screening might do so in the future.

A mixed methods study in the UK reported similar findings [ 23 ]. Utilising interviews ( n  = 26) and questionnaires ( n  = 479) with women ≥ 70 years (median age 75 years), the overwhelming result (90.1%) was that breast screening should be offered to all women indefinitely, regardless of age, health status or fitness [ 23 ], and many older women were keen to continue screening. Both the interview and survey data confirmed that women were uncertain about their eligibility for breast screening. The survey data showed that just over half of the women (52.9%) did not know that they could request mammography or how to access it. Key reasons for screening discontinuation were not being invited for screening (52.1%) and not knowing about self-referral (35.1%).

Women reported that not being invited to continue screening sent the message that screening was no longer important or required for this age group [ 23 ]. Almost two thirds of the women completing the survey (61.6%) said they would forget to attend screening without an invitation. Other reasons for screening discontinuation included transport difficulties (25%) and not wishing to burden family members (24.7%). By contrast, other studies reported that women did not endorse discontinuing screening mammography because of advancing age or poor health, although some might be receptive to reducing screening frequency on the recommendation of their health care provider [ 46 , 51 ].

Use of Decision Aids (DAs) to improve knowledge and guide screening decision-making

Many women reported poor knowledge about the harms and benefits of screening, and studies identified an important role for DAs. These aids have been shown to be effective in improving knowledge of the harms and benefits of screening [ 45 , 54 , 55 ], including for women with low educational attainment compared with women with high educational attainment [ 47 ]. DAs can increase knowledge about screening [ 47 , 49 ] and may decrease the intention to continue screening after the recommended age [ 45 , 52 , 54 ]. They can be used by primary care providers to support conversations about breast screening intentions and reasons for discontinuing screening. In one US pilot study using a DA, five of the eight women (62.5%) indicated they intended to continue to receive mammography, although three planned to do so less often [ 45 ]. When asked whether they thought their physician would want them to get a mammogram, 80% said "yes" at pre-test; this figure decreased to 62.5% after exposure to the DA. This pilot study suggests that the use of a DA may result in fewer women ≥ 75 years continuing to screen for breast cancer [ 45 ].

Similar findings were evident in two US studies drawing on the same data [ 48 , 53 ]. Using a larger sample ( n  = 283), women's intentions to screen before a visit with their primary care provider were compared with their intentions after exposure to the DA. Results showed that 21.7% of women decreased their intention to be screened, 7.9% increased it, and 70.4% did not change. Compared with those whose intentions were unchanged or increased, women whose screening intention decreased were significantly less likely to receive screening within 18 months. Generally, studies showed that women aged 75 and older find DAs acceptable and helpful [ 47 , 48 , 49 , 55 ] and that using them can affect a woman's intention to screen [ 55 ].
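
The pre/post comparison described above amounts to classifying each woman's paired intention ratings as decreased, increased or unchanged. A minimal sketch with toy data (not the study's dataset) is:

```python
# Toy paired intention scores (e.g. a 1-5 Likert scale), one pair per
# woman; these are hypothetical values, not the study's data.
pre  = [5, 4, 4, 3, 5]
post = [3, 4, 5, 2, 5]

def classify(pre, post):
    """Label each woman's change in screening intention after the DA."""
    counts = {"decreased": 0, "increased": 0, "unchanged": 0}
    for before, after in zip(pre, post):
        if after < before:
            counts["decreased"] += 1
        elif after > before:
            counts["increased"] += 1
        else:
            counts["unchanged"] += 1
    return counts

counts = classify(pre, post)
proportions = {k: v / len(pre) for k, v in counts.items()}
print(proportions)  # -> {'decreased': 0.4, 'increased': 0.2, 'unchanged': 0.4}
```

Reported percentages such as 21.7% / 7.9% / 70.4% are exactly these proportions computed over the full sample.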

Cadet and colleagues [ 49 ] explored the impact of educational attainment on the use of DAs. Their results highlight that education moderates the utility of these aids: women with lower educational attainment were less likely to understand all of the DA's content (46.3% vs 67.5%; p < 0.001), had less knowledge of the benefits and harms of mammography (adjusted mean ± standard error knowledge score, 7.1 ± 0.3 vs 8.1 ± 0.3; p < 0.001), and were less likely to have their screening intentions affected (adjusted percentage, 11.4% vs 19.4%; p  = 0.01).

Discussion

This scoping review summarises current knowledge regarding the motivations and screening behaviours of women over 75 years. The findings suggest that awareness of the importance of breast cancer screening among women aged ≥ 75 years is high [ 23 , 46 , 49 ] and that many women wish to continue screening regardless of perceived health status or age. This highlights the importance of focusing on motivations and screening behaviours and the multiple factors that influence ongoing participation in breast screening programs.

The generally high regard for screening among women aged ≥ 75 years presents a complex challenge for health professionals who, guided by available national and international guidelines, are focused on the potential harms of ongoing screening for women beyond age 75 [ 18 , 20 , 57 ]. Included studies highlight that many women relied on the advice of health care providers regarding the benefits and harms when deciding whether to continue breast screening [ 46 , 51 , 52 ], although some did not [ 33 ]. A previous pattern of screening was more significant to ongoing intention than any other identified sociodemographic feature [ 56 ], perhaps because women will not readily forgo health care practices that they have always considered important and that retain ongoing importance for the broader population.

For women who had discontinued screening after the age of 74, the rationale was often based not on choice or receipt of information but on other factors affecting decision-making, including no longer receiving an invitation to attend, transport difficulties and not wanting to burden relatives or friends [ 23 , 46 , 51 ]. Ongoing receipt of invitations to screen was an important aspect of maintaining the capacity to choose [ 23 ], particularly for women who had been regular screeners.

Women over 75 require more information to make decisions regarding screening [ 23 , 52 , 54 , 55 ]; however, health care providers must also be aware that the element of choice is important for older women. Having the capacity to choose avoids any notion of discrimination based on age, health status, gender or sociodemographic difference and acknowledges the importance of women retaining control over their health [ 23 ]. It was apparent that some women would choose to continue screening at a reduced frequency if this option were available, and that women should have access to information facilitating self-referral [ 23 , 45 , 46 , 51 , 56 ].

Decision-making regarding ongoing breast cancer screening has been facilitated by the use of DAs within clinical settings [ 54 , 55 ]. While some studies suggest that women will make a decision regardless of health status, the use of DAs has influenced women's decisions to screen. Although DAs may have limited benefit for women of lower educational attainment [ 48 ], they have been effective in improving knowledge of the harms and benefits of screening, particularly where they have been used to support a conversation with women about the value of screening [ 54 , 55 , 56 ].

Women have identified challenges in engaging in conversations with health care providers regarding ongoing screening, because providers frequently draw on projections of life expectancy and overdiagnosis [ 17 , 51 ]. As a result, conversations about screening after age 75 often do not occur [ 46 ]. Health providers may need more support and guidance in leading these conversations, for example through the use of DAs or standardised checklists, which could be incorporated within existing preventive health measures for this age group. Making advice about ongoing breast cancer screening available outside clinical settings may provide important pathways for conversations with women about their health choices. Providing information and advice in settings such as community-based seniors' groups [ 51 ] offers a platform to broaden conversations and align sources of information, not only with health professionals but among women themselves. This may help to address misconceptions regarding eligibility and access to services [ 23 ] and could be aligned with other health promotion and lifestyle messages provided to this age group.

Limitations of the review

The searches that formed the basis of this review were carried out in June 2022. Although the search was comprehensive, we captured only those studies published in the included databases from 2009 onwards; other studies may have been published outside this period. We also limited the search to studies published in English with full-text availability.

The emphasis of a scoping review is on comprehensive coverage and synthesis of key findings rather than on a particular standard of evidence; consequently, a quality assessment of the included studies was not undertaken. This resulted in the inclusion of a wide range of study designs and data collection methods. It is also important to note that three of the included studies drew on the same sample of women ( n  = 283, aged over 75) [ 49 , 53 , 54 ]. The results of this review provide valuable insights into the motivations and behaviours of older women regarding breast cancer screening; however, they should be interpreted with caution given these methodological and geographical limitations.

Conclusion and recommendations

This scoping review highlighted a range of key motivations and behaviours in relation to breast cancer screening for women ≥ 75 years of age. The results provide some insight into how decisions about screening continuation after 74 are made and how informed decision-making can be supported. Specifically, this review supports the following suggestions for further research and policy direction:

Further research regarding breast cancer screening motivations and behaviours for women over 75 would provide valuable insight for health providers delivering services to women in this age group.

Health providers may benefit from the broader use of decision aids or structured checklists to guide conversations with women over 75 regarding ongoing health promotion/preventive measures.

Providing health-based information in non-clinical settings frequented by women in this age group may provide a broader reach of information and facilitate choices. This may help to reduce any perception of discrimination based on age, health status or socio-demographic factors.

Availability of data and materials

All data generated or analysed during this study are included in this published article (see Table  2 above).

Cancer Australia, in its 2014 position statement, defines "overdiagnosis" as follows: "'Overdiagnosis' from breast screening does not refer to error or misdiagnosis, but rather refers to breast cancer diagnosed by screening that would not otherwise have been diagnosed during a woman's lifetime. 'Overdiagnosis' includes all instances where cancers detected through screening (ductal carcinoma in situ or invasive breast cancer) might never have progressed to become symptomatic during a woman's life, i.e., cancer that would not have been detected in the absence of screening. It is not possible to precisely predict at diagnosis, to which cancers overdiagnosis would apply." (accessed 22 August 2022; https://www.canceraustralia.gov.au/resources/position-statements/overdiagnosis-mammographic-screening ).

World Health Organization. Breast cancer. Geneva: WHO; 2021. Available from: https://www.who.int/news-room/fact-sheets/detail/breast-cancer#:~:text=Reducing%20global%20breast%20cancer%20mortality,and%20comprehensive%20breast%20cancer%20management .

International Agency for Research on Cancer (IARC). IARC Handbooks on Cancer Screening: Volume 15. Breast cancer. Lyon: IARC; 2016. Available from: https://publications.iarc.fr/Book-And-Report-Series/Iarc-Handbooks-Of-Cancer-Prevention/Breast-Cancer-Screening-2016 .

Australian Institute of Health and Welfare. Cancer in Australia. 2021. Available from: https://www.canceraustralia.gov.au/cancer-types/breast-cancer/statistics .

Breast Cancer Network Australia. Current breast cancer statistics in Australia. 2020. Available from: https://www.bcna.org.au/media/7111/bcna-2019-current-breast-cancer-statistics-in-australia-11jan2019.pdf .

Ren W, Chen M, Qiao Y, Zhao F. Global guidelines for breast cancer screening: A systematic review. The Breast. 2022;64:85–99.

Cardoso F, Kyriakides S, Ohno S, Penault-Llorca F, Poortmans P, Rubio IT, et al. Early breast cancer: ESMO Clinical Practice Guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2019;30(8):1194–220.

Hamashima C, Hattori M, Honjo S, Kasahara Y, Katayama T, Nakai M, et al. The Japanese guidelines for breast cancer screening. Jpn J Clin Oncol. 2016;46(5):482–92.

Bevers TB, Helvie M, Bonaccio E, Calhoun KE, Daly MB, Farrar WB, et al. Breast cancer screening and diagnosis, version 3.2018, NCCN clinical practice guidelines in oncology. J Natl Compr Canc Net. 2018;16(11):1362–89.

He J, Chen W, Li N, Shen H, Li J, Wang Y, et al. China guideline for the screening and early detection of female breast cancer (2021, Beijing). Zhonghua Zhong liu za zhi [Chinese Journal of Oncology]. 2021;43(4):357–82.

Cancer Australia. Early detection of breast cancer. 2021 [cited 2022 Jul 25]. Available from: https://www.canceraustralia.gov.au/resources/position-statements/early-detection-breast-cancer .

Schünemann HJ, Lerda D, Quinn C, Follmann M, Alonso-Coello P, Rossi PG, et al. Breast Cancer Screening and Diagnosis: A Synopsis of the European Breast Guidelines. Ann Intern Med. 2019;172(1):46–56.

World Health Organization. WHO position paper on mammography screening. Geneva: WHO; 2016.

Lansdorp-Vogelaar I, Gulati R, Mariotto AB. Personalizing age of cancer screening cessation based on comorbid conditions: model estimates of harms and benefits. Ann Intern Med. 2014;161:104.

Lee CS, Moy L, Joe BN, Sickles EA, Niell BL. Screening for Breast Cancer in Women Age 75 Years and Older. Am J Roentgenol. 2017;210(2):256–63.

Broeders M, Moss S, Nystrom L. The impact of mammographic screening on breast cancer mortality in Europe: a review of observational studies. J Med Screen. 2012;19(suppl 1):14.

Oeffinger KC, Fontham ETH, Etzioni R, Herzig A, Michaelson JS, Shih YCT, et al. Breast cancer screening for women at average risk: 2015 guideline update from the American Cancer Society. JAMA. 2015;314(15):1599–614.

Walter LC, Schonberg MA. Screening mammography in older women: a review. JAMA. 2014;311:1336.

Braithwaite D, Walter LC, Izano M, Kerlikowske K. Benefits and harms of screening mammography by comorbidity and age: a qualitative synthesis of observational studies and decision analyses. J Gen Intern Med. 2016;31:561.

Braithwaite D, Mandelblatt JS, Kerlikowske K. To screen or not to screen older women for breast cancer: a conundrum. Future Oncol. 2013;9(6):763–6.

Demb J, Abraham L, Miglioretti DL, Sprague BL, O’Meara ES, Advani S, et al. Screening mammography outcomes: risk of breast cancer and mortality by comorbidity score and age. J Natl Cancer Inst. 2020;112(6):599–606.

Demb J, Akinyemiju T, Allen I, Onega T, Hiatt RA, Braithwaite D. Screening mammography use in older women according to health status: a systematic review and meta-analysis. Clin Interv Aging. 2018;13:1987–97.

Qaseem A, Lin JS, Mustafa RA, Horwitch CA, Wilt TJ. Screening for Breast Cancer in Average-Risk Women: A Guidance Statement From the American College of Physicians. Ann Intern Med. 2019;170(8):547–60.

Collins K, Winslow M, Reed MW, Walters SJ, Robinson T, Madan J, et al. The views of older women towards mammographic screening: a qualitative and quantitative study. Br J Cancer. 2010;102(10):1461–7.

Welch HG, Black WC. Overdiagnosis in cancer. J Natl Cancer Inst. 2010;102(9):605–13.

Hersch J, Jansen J, Barratt A, Irwig L, Houssami N, Howard K, et al. Women’s views on overdiagnosis in breast cancer screening: a qualitative study. BMJ. 2013;346:f158.

De Gelder R, Heijnsdijk EAM, Van Ravesteyn NT, Fracheboud J, Draisma G, De Koning HJ. Interpreting overdiagnosis estimates in population-based mammography screening. Epidemiol Rev. 2011;33(1):111–21.

Monticciolo DL, Helvie MA, Edward HR. Current issues in the overdiagnosis and overtreatment of breast cancer. Am J Roentgenol. 2018;210(2):285–91.

Shepardson LB, Dean L. Current controversies in breast cancer screening. Semin Oncol. 2020;47(4):177–81.

National Cancer Control Centre. Cancer incidence in Australia. 2022. Available from: https://ncci.canceraustralia.gov.au/diagnosis/cancer-incidence/cancer-incidence .

Austin JD, Shelton RC, Lee Argov EJ, Tehranifar P. Older Women’s Perspectives Driving Mammography Screening Use and Overuse: a Narrative Review of Mixed-Methods Studies. Current Epidemiology Reports. 2020;7(4):274–89.

Austin JD, Tehranifar P, Rodriguez CB, Brotzman L, Agovino M, Ziazadeh D, et al. A mixed-methods study of multi-level factors influencing mammography overuse among an older ethnically diverse screening population: implications for de-implementation. Implementation Science Communications. 2021;2(1):110.

Demb J, Allen I, Braithwaite D. Utilization of screening mammography in older women according to comorbidity and age: protocol for a systematic review. Syst Rev. 2016;5(1):168.

Housten AJ, Pappadis MR, Krishnan S, Weller SC, Giordano SH, Bevers TB, et al. Resistance to discontinuing breast cancer screening in older women: A qualitative study. Psychooncology. 2018;27(6):1635–41.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Peters M, Godfrey C, McInerney P, Munn Z, Tricco A, Khalil HAE, et al. Chapter 11: Scoping reviews. In: JBI Manual for Evidence Synthesis. 2020. Available from: https://jbi-global-wiki.refined.site/space/MANUAL .

Peters MD, Godfrey C, McInerney P, Khalil H, Larsen P, Marnie C, et al. Best practice guidance and reporting items for the development of scoping review protocols. JBI evidence synthesis. 2022;20(4):953–68.

Fantom NJ, Serajuddin U. The World Bank’s classification of countries by income. World Bank Policy Research Working Paper; 2016.

BreastScreen Australia Evaluation Taskforce. BreastScreen Australia Evaluation. Evaluation final report: Screening Monograph No 1/2009. Canberra; Australia Australian Government Department of Health and Ageing; 2009.

Nelson HD, Cantor A, Humphrey L. Screening for breast cancer: a systematic review to update the 2009 U.S. Preventive Services Task Force recommendation. 2016.

Woolf SH. The 2009 breast cancer screening recommendations of the US Preventive Services Task Force. JAMA. 2010;303(2):162–3.

Covidence systematic review software [Internet]. Veritas Health Innovation; 2020. Available from: https://www.covidence.org/ .

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018;169(7):467–73.

Tricco AC, Lillie E, Zarin W, O’Brien K, Colquhoun H, Kastner M, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16(1):15.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

Beckmeyer A, Smith RM, Miles L, Schonberg MA, Toland AE, Hirsch H. Pilot Evaluation of Patient-centered Survey Tools for Breast Cancer Screening Decision-making in Women 75 and Older. Health Behavior and Policy Review. 2020;7(1):13–8.

Brotzman LE, Shelton RC, Austin JD, Rodriguez CB, Agovino M, Moise N, et al. “It’s something I’ll do until I die”: A qualitative examination into why older women in the U.S. continue screening mammography. Cancer Med. 2022;11(20):3854–62.

Cadet T, Pinheiro A, Karamourtopoulos M, Jacobson AR, Aliberti GM, Kistler CE, et al. Effects by educational attainment of a mammography screening patient decision aid for women aged 75 years and older. Cancer. 2021;127(23):4455–63.

Cadet T, Aliberti G, Karamourtopoulos M, Jacobson A, Gilliam EA, Primeau S, et al. Evaluation of a mammography decision aid for women 75 and older at risk for lower health literacy in a pretest-posttest trial. Patient Educ Couns. 2021;104(9):2344–50.

Cadet T, Aliberti G, Karamourtopoulos M, Jacobson A, Siska M, Schonberg MA. Modifying a mammography decision aid for older adult women with risk factors for low health literacy.  Health Lit Res Prac. 2021;5(2):e78–90.

Gray N, Picone G. Evidence of Large-Scale Social Interactions in Mammography in the United States. Atl Econ J. 2018;46(4):441–57.

Hoover DS, Pappadis MR, Housten AJ, Krishnan S, Weller SC, Giordano SH, et al. Preferences for Communicating about Breast Cancer Screening Among Racially/Ethnically Diverse Older Women. Health Commun. 2019;34(7):702–6.

Salzman B, Bistline A, Cunningham A, Silverio A, Sifri R. Breast Cancer Screening Shared Decision-Making in Older African-American Women. J Natl Med Assoc. 2020;112(5):556–60.

Schoenborn NL, Pinheiro A, Kistler CE, Schonberg MA. Association between Breast Cancer Screening Intention and Behavior in the Context of Screening Cessation in Older Women. Med Decis Making. 2021;41(2):240–4.

Schonberg MA, Kistler CE, Pinheiro A, Jacobson AR, Aliberti GM, Karamourtopoulos M, et al. Effect of a Mammography Screening Decision Aid for Women 75 Years and Older: A Cluster Randomized Clinical Trial. JAMA Intern Med. 2020;180(6):831–42.

Schonberg MA, Hamel MB, Davis RB. Development and evaluation of a decision aid on mammography screening for women 75 years and older. JAMA Intern Med. 2014;174:417.

Eisinger F, Viguier J, Blay J-Y, Morère J-F, Coscas Y, Roussel C, et al. Uptake of breast cancer screening in women aged over 75 years: a controversy to come? Eur J Cancer Prev. 2011;20(Suppl 1):S13-5.

Schonberg MA, Breslau ES, McCarthy EP. Targeting of Mammography Screening According to Life Expectancy in Women Aged 75 and Older. J Am Geriatr Soc. 2013;61(3):388–95.

Acknowledgements

We would like to acknowledge Ange Hayden-Johns (expert librarian), who assisted with the development of the search criteria and undertook the relevant searches, and Tejashree Kangutkar, who assisted with some of the Covidence work.

This work was supported by funding from the Australian Government Department of Health and Aged Care (ID: Health/20–21/E21-10463).

Author information

Authors and affiliations.

Violet Vines Centre for Rural Health Research, La Trobe Rural Health School, La Trobe University, P.O. Box 199, Bendigo, VIC, 3552, Australia

Virginia Dickson-Swift, Joanne Adams & Evelien Spelten

Care Economy Research Institute, La Trobe University, Wodonga, Australia

Irene Blackberry

Olivia Newton-John Cancer Wellness and Research Centre, Austin Health, Melbourne, Australia

Carlene Wilson & Eva Yuen

Melbourne School of Population and Global Health, Melbourne University, Melbourne, Australia

Carlene Wilson

School of Psychology and Public Health, La Trobe University, Bundoora, Australia

Institute for Health Transformation, Deakin University, Burwood, Australia

Centre for Quality and Patient Safety, Monash Health Partnership, Monash Health, Clayton, Australia

Contributions

VDS conceived and designed the scoping review. VDS & JA developed the search strategy with librarian support, and all authors (VDS, JA, ES, IB, CW, EY) participated in the screening and data extraction stages and assisted with writing the review. All authors provided editorial support and read and approved the final manuscript prior to submission.

Corresponding author

Correspondence to Joanne Adams .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethics approval and consent to participate

Ethics approval and consent to participate were not required for this study.

Consent for publication

Consent for publication was not required for this study.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Dickson-Swift, V., Adams, J., Spelten, E. et al. Breast cancer screening motivation and behaviours of women aged over 75 years: a scoping review. BMC Women's Health 24 , 256 (2024). https://doi.org/10.1186/s12905-024-03094-z


Received : 06 September 2023

Accepted : 15 April 2024

Published : 24 April 2024

DOI : https://doi.org/10.1186/s12905-024-03094-z


Keywords

  • Breast cancer
  • Mammography
  • Older women
  • Scoping review

BMC Women's Health

ISSN: 1472-6874
