
  • Review Article
  • Published: 08 March 2018

Meta-analysis and the science of research synthesis

  • Jessica Gurevitch 1,
  • Julia Koricheva 2,
  • Shinichi Nakagawa 3,4 &
  • Gavin Stewart 5

Nature volume 555, pages 175–182 (2018)


  • Biodiversity
  • Outcomes research

Meta-analysis is the quantitative, scientific synthesis of research results. Since the term and modern approaches to research synthesis were first introduced in the 1970s, meta-analysis has had a revolutionary effect in many scientific fields, helping to establish evidence-based practice and to resolve seemingly contradictory research outcomes. At the same time, its implementation has engendered criticism and controversy, some of it general and some specific to particular disciplines. Here we take the opportunity provided by the recent fortieth anniversary of meta-analysis to reflect on the accomplishments, limitations, recent advances and directions for future developments in the field of research synthesis.




Acknowledgements

We dedicate this Review to the memory of Ingram Olkin and William Shadish, founding members of the Society for Research Synthesis Methodology, who made tremendous contributions to the development of meta-analysis and research synthesis and to the supervision of generations of students. We thank L. Lagisz for help in preparing the figures. We are grateful to the Center for Open Science and the Laura and John Arnold Foundation for hosting and funding the workshop at which this article originated. S.N. is supported by an Australian Research Council Future Fellowship (FT130100268). J.G. acknowledges funding from the US National Science Foundation (ABI 1262402).

Author information

Authors and Affiliations

Department of Ecology and Evolution, Stony Brook University, Stony Brook, 11794-5245, New York, USA

Jessica Gurevitch

School of Biological Sciences, Royal Holloway University of London, Egham, TW20 0EX, Surrey, UK

Julia Koricheva

Evolution and Ecology Research Centre and School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, 2052, New South Wales, Australia

Shinichi Nakagawa

Diabetes and Metabolism Division, Garvan Institute of Medical Research, 384 Victoria Street, Darlinghurst, Sydney, 2010, New South Wales, Australia

School of Natural and Environmental Sciences, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK

Gavin Stewart


Contributions

All authors contributed equally in designing the study and writing the manuscript, and so are listed alphabetically.

Corresponding authors

Correspondence to Jessica Gurevitch, Julia Koricheva, Shinichi Nakagawa or Gavin Stewart.

Ethics declarations

Competing interests.

The authors declare no competing financial interests.

Additional information

Reviewer Information Nature thanks D. Altman, M. Lajeunesse, D. Moher and G. Romero for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article.

Gurevitch, J., Koricheva, J., Nakagawa, S. et al. Meta-analysis and the science of research synthesis. Nature 555, 175–182 (2018). https://doi.org/10.1038/nature25753


Received: 04 March 2017
Accepted: 12 January 2018
Published: 08 March 2018
Issue Date: 08 March 2018
DOI: https://doi.org/10.1038/nature25753


How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

Affiliations.

  • 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected].
  • 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom.
  • 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected].
  • PMID: 30089228
  • DOI: 10.1146/annurev-psych-010418-102803

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.

Keywords: evidence; guide; meta-analysis; meta-synthesis; narrative; systematic review; theory.

  • Guidelines as Topic
  • Meta-Analysis as Topic*
  • Publication Bias
  • Review Literature as Topic
  • Systematic Reviews as Topic*


UCL Library Services


Synthesis and systematic maps


Types of synthesis


Synthesis is the process of combining the findings of research studies. A synthesis is also the product and output of the combined studies. This output may be a written narrative, a table, or graphical plots, including statistical meta-analysis. The process of combining studies and the way the output is reported varies according to the research question of the review.

In primary research there are many research questions and many different methods to address them. The same is true of systematic reviews. Two common and distinct types of review are those asking about the evidence of impact (effectiveness) of an intervention and those asking about ways of understanding a social phenomenon.

If a systematic review question is about the effectiveness of an intervention, then the included studies are likely to be experimental studies that test whether an intervention is effective or not. These studies report evidence of the relative effect of an intervention compared to control conditions.

A synthesis of these types of studies aggregates the findings of the studies together. This produces an overall measure of effect of the intervention (after taking into account the sample sizes of the studies). This is a type of quantitative synthesis that is testing a hypothesis (that an intervention is effective) and the review methods are described in advance (using a deductive a priori paradigm).
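The aggregation step described above can be sketched in code. The following is a minimal illustration of inverse-variance (fixed-effect) pooling, in which each study is weighted by its precision; the effect sizes and variances are hypothetical, not taken from any real review:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooling of per-study effect sizes.

    Each study is weighted by 1/variance, so larger, more precise
    studies contribute more to the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    # 95% confidence interval under a normal approximation
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical studies: effect sizes and their sampling variances
effects = [0.30, 0.45, 0.12]
variances = [0.04, 0.09, 0.02]
pooled, ci = fixed_effect_pool(effects, variances)
print(round(pooled, 3), [round(x, 3) for x in ci])  # prints: 0.215 [0.004, 0.426]
```

In practice this calculation is usually done with dedicated software, and a random-effects model is often preferred when studies are heterogeneous; the fixed-effect version above is shown only because it is the simplest form of the weighting idea.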

  • Ongoing developments in meta-analytic and quantitative synthesis methods: Broadening the types of research questions that can be addressed O'Mara-Eves, A. and Thomas, J. (2016). This paper discusses different types of quantitative synthesis in education research.

If a systematic review question is about ways of understanding a social phenomenon, the review iteratively analyses the findings of studies to develop overarching concepts, theories or themes. The included studies are likely to provide theories, concepts or insights about the phenomenon. This might, for example, be studies trying to explain why patients do not always take the medicines provided to them by doctors.

A synthesis of these types of studies is an arrangement or configuration of the concepts from individual studies. It provides overall ‘meta’ concepts to help understand the phenomenon under study. This type of qualitative or conceptual synthesis is more exploratory, and some of the detailed methods may develop during the process of the review (using an inductive, iterative paradigm).

  • Methods for the synthesis of qualitative research: a critical review ​Barnett-Page and Thomas, (2009). This paper summarises some of the different approaches to qualitative synthesis.

There are also multi-component reviews that ask a broad question with sub-questions, using different review methods for each.

  • Teenage pregnancy and social disadvantage: systematic review integrating controlled trials and qualitative studies. Harden et al (2009). An example of a review that combines two types of synthesis. It develops: 1) a statistical meta-analysis of controlled trials on interventions for early parenthood; and 2) a thematic synthesis of qualitative studies of young people views of early parenthood.

Systematic evidence maps

Systematic evidence maps are a product that describes the nature of research in an area. This is in contrast to a synthesis, which uses research findings to make a statement about an evidence base. A 'systematic map' can both explain what has been studied and indicate what has not been studied and where there are gaps in the research (gap maps). Maps can be useful for comparing trends and differences across sets of studies.

Systematic maps can be a standalone finished product of research, without a synthesis, or a component of a systematic review that will synthesise studies.

A systematic map can help to plan a synthesis. The map may show that the studies to be synthesised are very different from each other, in which case it may be more appropriate to synthesise only a subset of them. Where a subset of studies is used in the synthesis, the review question and the boundaries of the review will need to be narrowed in order to provide a rigorous approach for selecting the subset of studies from the map. The studies in the map that are not synthesised can still help with interpreting the synthesis and drawing conclusions. Note that, confusingly, the term 'scoping review' is sometimes used to describe systematic evidence maps and at other times to refer to quick, selective scopes of the nature and size of the literature in an area.
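At its simplest, an evidence map is a cross-tabulation of coded studies, where empty cells point to research gaps. A minimal sketch, using entirely hypothetical study records and coding dimensions:

```python
from collections import Counter

# Hypothetical coded records from a screening/coding stage: each study
# is tagged with the population and outcome it addresses.
studies = [
    {"id": "S1", "population": "adults", "outcome": "employment"},
    {"id": "S2", "population": "adults", "outcome": "income"},
    {"id": "S3", "population": "youth",  "outcome": "employment"},
    {"id": "S4", "population": "adults", "outcome": "employment"},
]

# Count studies in each population x outcome cell of the map;
# cells with a count of zero indicate gaps in the research.
cell = Counter((s["population"], s["outcome"]) for s in studies)
for pop in ["adults", "youth"]:
    for out in ["employment", "income"]:
        print(pop, out, cell.get((pop, out), 0))
```

Real systematic maps typically code many more dimensions (study design, setting, date) and are published as interactive databases, but the underlying structure is this kind of study-by-category tally.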

A systematic map may be published in different formats, such as a written report or database. Increasingly, maps are published as databases with interactive visualisations to enable the user to investigate and visualise different parts of the map. Living systematic maps are regularly updated so the evidence stays current.

Some examples of different maps are shown here:

  • Women in Wage Labour: An evidence map of what works to increase female wage labour market participation in LMICs Filters Example of a systematic evidence map from the Africa Centre for Evidence.
  • Acceptability and uptake of vaccines: Rapid map of systematic reviews Example of a map of systematic reviews.
  • COVID-19: a living systematic map of the evidence Example of a living map of health research on COVID-19.

Meta-analysis

  • What is a meta-analysis? Helpful resource from the University of Nottingham.
  • MetaLight: software for teaching and learning meta-analysis Software tool that can help in learning about meta-analysis.
  • KTDRR Research Evidence Training: An Overview of Effect Sizes and Meta-analysis Webcast video (56 mins). Overview of effect sizes and meta-analysis.
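To make the effect-size resources above concrete, here is a small sketch of one widely used effect size, the standardized mean difference with Hedges' small-sample correction (Hedges' g). The group means, SDs and sample sizes are invented for illustration, and the variance formula is one common approximation rather than the only option:

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) with Hedges'
    small-sample bias correction, plus an approximate sampling
    variance that can serve as a meta-analytic weight."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * df - 1)  # Hedges' correction factor
    g = j * d
    # one common approximation to the sampling variance of g
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

# Treatment mean 10 (SD 2, n=20) vs control mean 8 (SD 2, n=20)
g, var_g = hedges_g(10, 8, 2, 2, 20, 20)
print(round(g, 3), round(var_g, 4))  # prints: 0.98 0.1081
```

Each study's (g, var_g) pair is what then feeds into the weighted pooling performed by a meta-analysis.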

Systematic Reviews and Meta-Analyses: Synthesis & Discussion


One of the final steps in a systematic review is the synthesis of evidence and the writing of the discussion.

Your team began working toward this stage in the protocol, when you clearly identified the comparisons of interest. The work you've done in the data extraction and critical appraisal phases will feed directly into the synthesis.

Qualitative Synthesis

Qualitative synthesis in systematic reviews and/or meta-analyses.

Selecting the best approach for synthesis will depend on your scope, included material, field of research, etc. It is therefore important to follow methodological guidance that matches your scope and field (e.g., a health-focused review guided by the Cochrane Handbook). It can also be helpful to check the synthesis and discussion sections of systematic reviews published by journals to which you plan to submit your review.

In almost all cases, a qualitative synthesis of some kind will be part of your systematic review. A quantitative synthesis (e.g., meta-analysis) should only be pursued where appropriate.

Meta-synthesis and qualitative evidence synthesis are terms sometimes used to describe a systematic review with only a qualitative synthesis.

Guidance for Qualitative Synthesis

In some methodological guidance, this stage may effectively be described as a separate methodology altogether.

For example, the Cochrane Handbook, Part 2: Core Methods, covers synthesis through the lens of conducting a meta-analysis and/or quantitative synthesis. In Part 3: Specific perspectives in reviews, Cochrane goes into more detail about qualitative evidence synthesis in Chapter 21: Qualitative Evidence. Similarly, the JBI Manual for Evidence Synthesis contains a stand-alone chapter, Chapter 2: Systematic Reviews of Qualitative Evidence.

Considerations and Decisions

  • How you will group data for your synthesis and how grouping decisions are made, whether you're pursuing just a qualitative synthesis or both a qualitative synthesis and a meta-analysis, is an important consideration before starting the synthesis.
  • Assess heterogeneity between studies, even if you don't plan to pursue a meta-analysis. Consider variability in the participants studied and in the definitions, measurements, and frequency of the interventions, exposures, and outcomes. This is part of the process of determining which studies are reasonable to synthesize.
  • Selection of a formal qualitative synthesis approach (optional)

Qualitative Data and Analysis Tools

Check out this Library Guide for more information about tools for qualitative data analysis at Virginia Tech.

Qualitative Synthesis Approaches

This is not a comprehensive list of approaches. However, it can be a jumping-off point for your team as you plan. The selection of approaches listed here is partially informed by Barnett-Page & Thomas (2009).

Note: Many of these approaches are also  stand-alone qualitative research methods. 

Content Analysis

"In the case of qualitative systematic reviews, raw data consist of qualitative research findings (i.e. text) that have been systematically extracted from existing research reports...The manner in which these findings are coded is largely guided by the research topic and questions and the data that are available for analysis." ( Finfgeld-Connett, 2014 )

  • Identification of data segments
  • Memoing & diagramming

Resources for Content Analysis

  • Finfgeld-Connett D. Use of content analysis to conduct knowledge-building and theory-generating qualitative systematic reviews .  Qualitative Research . 2014;14(3):341-352. doi:10.1177/1468794113481790
  • Elo, S., & Kyngäs, H. (2008). The qualitative content analysis process . Journal of Advanced Nursing, 62(1), 107–115. https://doi.org/10.1111/j.1365-2648.2007.04569.x 
  • Mayring, P. (2015). Qualitative content analysis: Theoretical background and procedures . In A. Bikner-Ahsbahs, C. Knipping, & N. Presmeg (Eds.), Approaches to Qualitative Research in Mathematics Education: Examples of Methodology and Methods (pp. 365–380). Springer Netherlands. https://doi.org/10.1007/978-94-017-9181-6_13

Thematic Synthesis 

"Developed out of a need to conduct reviews that addressed questions relating to intervention need, appropriateness, acceptability, [and effectiveness] without compromising on key principles developed in systematic reviews"( Barnett-Paige & Thomas 2009 )

According to Thomas & Harden (2008) :

  • Code text (line-by-line) 
  • Develop descriptive themes
  • Generate analytic themes

Resources for Thematic Synthesis 

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews . BMC Med Res Methodol. 2008 Jul 10;8:45. doi: 10.1186/1471-2288-8-45. PMID: 18616818; PMCID: PMC2478656.

Framework Synthesis

The "rationale [behind framework synthesis] is that qualitative research produces large amounts of textual data in the form of transcripts, observational fieldnotes etc. The sheer wealth of information poses a challenge for rigorous analysis. Framework synthesis offers a highly structured approach to organising and analysing data (e.g. indexing using numerical codes, rearranging data into charts etc)." ( Barnett-Page & Thomas, 2009 )

According to Brunton et al. (2020) :

  • Familiarization (with existing literature)
  • Framework selection 
  • Indexing & charting
  • Mapping & interpretation

Resources for Framework Synthesis 

  • Brunton, G., Oliver, S., & Thomas, J. (2020). Innovations in framework synthesis as a systematic review method . Research Synthesis Methods, 11(3), 316–330. https://doi.org/10.1002/jrsm.1399
  • Dixon-Woods, M. Using framework-based synthesis for conducting reviews of qualitative studies .   BMC Med   9,  39 (2011). https://doi.org/10.1186/1741-7015-9-39

Grounded Theory

Grounded theory is defined as "a specific methodology developed by Glaser and Strauss (1967) for the purpose of building theory from data. In this book the term grounded theory is used in a more generic sense to denote theoretical constructs derived from qualitative analysis of data." ( Corbin & Strauss, 2008 )

According to Barnett-Page & Thomas (2009) , "key methods and assumptions...include":

  • " simultaneous phases of data collection and analysis;
  • inductive approach to analysis, allowing the theory to emerge from the data ;
  • the use of constant comparison method ;
  • the use of theoretical sampling to reach theoretical saturation; and the generation of new theory"

Resources for Grounded Theory

  • Glaser, B. G., & Strauss, A. L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research . Aldine.
  • Corbin, J., & Strauss, A. (2008). Basics of qualitative research (3rd ed.): Techniques and procedures for developing grounded theory . SAGE Publications, Inc. https://dx.doi.org/10.4135/9781452230153

Meta-Ethnography 

This is proposed as an alternative to "meta-analysis" (Noblit & Hare, 1988; Barnett-Page & Thomas, 2009 ) and "should be interpretive rather than aggregative . We make the case that it should take the form of reciprocal translations of studies into one another" (Noblit & Hare, 1988)

  • Reciprocal translational analysis (RTA) - translate concepts; evolve overarching concepts
  • Refutational synthesis - explore and explain contradictions between studies 
  • Lines-of-argument (LOA) synthesis - building up a picture of a whole from the parts (the individual studies) 

Reporting Guideline

Improving reporting of meta-ethnography: The eMERGe reporting guidance (documents the development of eMERGe)

Resources for Meta-Ethnography

  • Sattar, R., Lawton, R., Panagioti, M. et al. Meta-ethnography in healthcare research: a guide to using a meta-ethnographic approach for literature synthesis . BMC Health Serv Res 21, 50 (2021). https://doi.org/10.1186/s12913-020-06049-w
  • France, E.F., Wells, M., Lang, H. et al. Why, when and how to update a meta-ethnography qualitative synthesis . Syst Rev 5, 44 (2016). https://doi.org/10.1186/s13643-016-0218-4
  • Noblit, G. W., & Hare, R. D. (1988). Meta-ethnography . SAGE Publications, Inc. https://dx.doi.org/10.4135/9781412985000
  • Barnett-Page, E., Thomas, J. Methods for the synthesis of qualitative research: a critical review . BMC Med Res Methodol 9, 59 (2009). https://doi.org/10.1186/1471-2288-9-59
  • Flemming, K., & Noyes, J. (2021). Qualitative Evidence Synthesis: Where Are We at?   International Journal of Qualitative Methods .  https://doi.org/10.1177/1609406921993276

Meta-Analysis

  • Presenting Results
  • Alternative Quantitative Synthesis

Meta-analysis

“The statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings .” ( Glass, 1976 )

“A statistical analysis which combines the results of several independent studies considered by the analyst to be ‘combinable’. ” (Huque, 1988)

“Meta-analysis is the statistical combination of results from two or more separate studies .” (Cochrane Handbook for Systematic Reviews of Interventions version 6.3, Chapter 10 )

The Cochrane Handbook ( Chapter 10.1 ) states:

"Do not start here!" ...results of meta-analyses can be very misleading if suitable attention has not been given to formulating the review question; specifying eligibility criteria; identifying and selecting studies; collecting appropriate data; considering risk of bias; planning intervention comparisons; and deciding what data would be meaningful to analyse. 

Choosing to pursue a Meta-Analysis

Reasons to pursue a meta-analysis.

Meta-analyses are a desirable end goal, as this kind of synthesis can:

  • Increase statistical power / improve precision
  • Result in a summary estimate of the direction and size of the effect or association 
  • Determine consistency across studies and explore why studies found different results
  • Address questions that can’t be addressed by the individual studies (related to factors that differ across studies)
  • Potentially resolve uncertainties when the literature disagrees, and identify areas where evidence is insufficient

Reasons  not  to pursue a Meta-Analysis

Despite the appeal of the meta-analytic approach, it is vital that the studies in a meta-analysis measure the same thing in the same way, i.e., that the studies themselves are reasonable to combine statistically .

According to Cochrane Chapter 12.1 , "Legitimate reasons [for not conducting a meta-analysis] include limited evidence ; incompletely reported outcome/effect estimates, or different effect measures used across studies; and bias in the evidence." Table 12.1.a describes scenarios that may preclude meta-analyses, with possible solutions.

Likewise, a synthesis is only as good as the studies included . In other words, a meta-analysis cannot improve poor-quality studies.

This is not a comprehensive list; as with any analysis, you'll need to select specific approaches based on the kind of data you have.

  • How you will group data for your synthesis and how grouping decisions are made is an important consideration prior to starting the synthesis.
  • Effect size measures must be comparable across included studies and/or computable given the information available in the primary studies. For example, in a review of weight-loss studies, you might convert all effects to pounds of weight lost.
  • Fixed-Effect: "assumes (1) all studies are measuring the same common (true) effect size (why we call it fixed), [and] (2) the observed results would be identical except for random sampling error" ( Borenstein, 2009 )
  • Random-Effects:  "assumes (1) there are multiple population effects that the studies are estimating - different effect sizes underlying different studies, [and] (2) variability between effect sizes is due to sampling error + variability in population of effects" ( Borenstein, 2009 )
  • There are some additional analyses you'll need to run to determine heterogeneity (how different the studies are from each other). A sensitivity analysis or meta-regression is used to evaluate the effects of including or excluding certain groups of studies in your analysis, for example studies rated as low quality or at high risk of bias during critical appraisal. You can also consider publication bias in your sample using a funnel plot (although there are valid critiques of the reliability of this practice).
  • Glass, Gene V. “ Primary, Secondary, and Meta-Analysis of Research .”   Educational Researcher , vol. 5, no. 10, 1976, pp. 3–8, https://doi.org/10.2307/1174772.
  • Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to Meta-Analysis . John Wiley & Sons, Ltd. https://doi.org/10.1002/9780470743386
  • Pigott, T. D., & Polanin, J. R. (2020). Methodological Guidance Paper: High-Quality Meta-Analysis in a Systematic Review .  Review of Educational Research ,  90 (1), 24–46.  https://doi.org/10.3102/0034654319877153
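The fixed-effect and random-effects models quoted above both reduce to inverse-variance weighting. Below is a minimal Python sketch (standard library only, with made-up effect sizes and standard errors; `pool` is a hypothetical helper, not part of any package) computing the fixed-effect estimate, Cochran's Q, I², the DerSimonian-Laird τ², and the random-effects estimate:

```python
import math

def pool(effects, ses):
    """Inverse-variance pooling: fixed-effect estimate, Cochran's Q,
    I-squared, DerSimonian-Laird tau-squared, and random-effects estimate."""
    w = [1.0 / se**2 for se in ses]                     # fixed-effect weights
    mu_fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - mu_fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0       # fraction of variation
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # DerSimonian-Laird
    w_re = [1.0 / (se**2 + tau2) for se in ses]         # random-effects weights
    mu_random = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se_random = math.sqrt(1.0 / sum(w_re))
    return {"mu_fixed": mu_fixed, "Q": q, "I2": i2,
            "tau2": tau2, "mu_random": mu_random, "se_random": se_random}

# Hypothetical mean differences and standard errors from three studies
result = pool([0.1, 0.5, 0.9], [0.15, 0.2, 0.25])
```

With these invented inputs the studies are quite heterogeneous (I² works out to about 75%), so the random-effects estimate (≈0.47) differs noticeably from the fixed-effect one (≈0.37). A dedicated package such as metafor or RevMan should be preferred for real analyses.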

Tools for Meta-Analyses

Several tools exist for running your own meta-analyses. If you need further support, check out the help tab in this box.

Graphical User Interface (no programming required) 

  • RevMan | Developed by the Cochrane Collaboration; good for beginners
  • PyMeta | Graphical interface built on the PythonMeta Python package
  • Comprehensive Meta-Analysis  |  fee-based
  • MedCalc   |  fee-based

Command Line Interface (programming required)

  • Metafor | R package; introduction from creator, Wolfgang Viechtbauer
  • xmeta | R package; toolbox for multivariate meta-analyses
  • PythonMeta | Python package; graphical interface available as PyMeta
  • Polanin, J. R., Hennessy, E. A., & Tanner-Smith, E. E. (2017). A Review of Meta-Analysis Packages in R . Journal of Educational and Behavioral Statistics , 42 (2), 206–242. https://doi.org/10.3102/1076998616674315
  • Video for using R for Meta package

Present Meta-Analysis Results

A meta-analysis is most commonly presented as a Forest Plot.

Forest Plot

If you are new to the concept of forest plots, check out Dr. Terry Shaneyfelt's (UAB School of Medicine) video How to interpret a forest plot .
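To make the anatomy of a forest plot concrete (one row per study: point estimate, 95% confidence interval, and the line of no effect), here is a toy text-mode renderer in Python with invented data. It is only a sketch of the idea; real reviews would use RevMan, metafor's forest(), or similar:

```python
def forest_rows(studies, lo=-1.0, hi=2.0, width=41):
    """Render a minimal text forest plot, one row per (label, effect, se).
    The axis spans [lo, hi]; '|' marks no effect, 'x' the point estimate,
    and dashes the 95% confidence interval."""
    def col(value):
        value = min(max(value, lo), hi)            # clip to the axis
        return round((value - lo) / (hi - lo) * (width - 1))
    rows = []
    for label, effect, se in studies:
        ci_lo, ci_hi = effect - 1.96 * se, effect + 1.96 * se
        axis = [" "] * width
        for c in range(col(ci_lo), col(ci_hi) + 1):
            axis[c] = "-"                          # confidence interval
        axis[col(0.0)] = "|"                       # line of no effect
        axis[col(effect)] = "x"                    # point estimate
        rows.append(f"{label:<10}{''.join(axis)}  "
                    f"{effect:5.2f} [{ci_lo:5.2f}, {ci_hi:5.2f}]")
    return rows

plot = forest_rows([("Study A", 0.1, 0.15),
                    ("Study B", 0.5, 0.20),
                    ("Study C", 0.9, 0.25)])
```

Whether a study's dashes cross the "|" line shows at a glance if its result is compatible with no effect, which is exactly what readers scan for in a published forest plot.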

Alternative Quantitative Synthesis Methods

According to Cochrane Chapter 9.5 , "There are circumstances under which a meta-analysis is not possible, however, and other statistical synthesis methods might be considered, so as to make best use of the available data."

Table 9.5.a from the Cochrane Handbook outlines some alternative synthesis methods (and one summary method in its first row).

While the Evidence Synthesis Services (ESS) team at the University Libraries is available to support the other stages of a systematic review and/or meta-analysis, we recommend reaching out to the Statistical Applications and Innovations Group (SAIG) for support with the statistical synthesis / meta-analysis.


Methodological Guidance

  • Health Sciences
  • Animal, Food Sciences
  • Social Sciences
  • Environmental Sciences

Cochrane Handbook  - Part 1: About Cochrane Reviews

Chapter III : Reporting the Review  (specifically part  III.III );  Note: if you are not conducting a Cochrane Review, use this resource as a guidepost

Cochrane Handbook  -  Part 2: Core Methods

Chapter 9 : Summarizing study characteristics and preparing for synthesis

  • 9.2 A general framework for synthesis 
  • 9.3 Preliminary steps of a synthesis 
  • 9.4 Checking data before synthesis
  • 9.5  Types of synthesis

Chapter 10 : Analyzing data and undertaking meta-analyses  

  • 10.1 Do not start here!
  • 10.2 Introduction to meta-analysis
  • 10.3 A generic inverse-variance approach to meta-analysis 
  • 10.4 Meta-analysis of dichotomous outcomes
  • 10.5 Meta-analysis of continuous outcomes 
  • 10.6 Combining dichotomous and continuous outcomes
  • 10.7 Meta-analysis of ordinal outcomes and measurement scales
  • 10.8 Meta-analysis of counts and rates
  • 10.9 Meta-analysis of time-to-event outcomes
  • 10.10 Heterogeneity
  • 10.11 Investigating heterogeneity
  • 10.12 Missing data
  • 10.13 Bayesian approaches to meta-analysis 
  • 10.14 Sensitivity analyses  
  • 10.S1 Supplementary material: Statistical algorithms in Review Manager 5.1

Chapter 12 : Synthesizing and presenting findings using other methods

  • 12.1 Why a meta-analysis of effect estimates may not be possible 
  • 12.2 Statistical synthesis when meta-analysis of effect estimates is not possible
  • 12.3 Visual display and presentation of the data

Chapter 13 : Assessing risk of bias due to missing results in a synthesis

  • 13.2 Minimizing risk of bias due to missing results
  • 13.3 A framework for assessing risk of bias due to missing results in a synthesis

Chapter 15: Interpreting results and drawing conclusions 

  • 15.2 Issues of indirectness and applicability 
  • 15.3 Interpreting results of statistical analyses 
  • 15.4 Interpreting results from dichotomous outcomes
  • 15.5 Interpreting results from continuous outcomes (including standardized mean differences)
  • 15.6 Drawing conclusions 

Cochrane Handbook  - Part 3: Specific Perspectives in Reviews

Chapter 21 : Qualitative Evidence 

  • 21.2 Designs for synthesizing and integrating qualitative evidence with intervention reviews
  • 21.3 Defining qualitative evidence and studies
  • 21.4 Planning qualitative evidence synthesis linked to an intervention review
  • 21.5 Question development 
  • 21.13 Methods for integrating the qualitative evidence synthesis with an intervention review

SYREAF Tutorials

Step 5. Data synthesis

Conducting systematic reviews of intervention questions III: Synthesizing data from intervention studies using meta-analysis.  O’Connor AM, Sargeant JM, Wang C. Zoonoses Public Health. 2014 Jun;61 Suppl 1:52-63. doi: 10.1111/zph.12123. PMID: 24905996

Meta-analyses  including data from observational studies.  O’Connor AM, Sargeant JM. Prev Vet Med. 2014 Feb 15;113(3):313-22. doi: 10.1016/j.prevetmed.2013.10.017. Epub 2013 Oct 31. PMID: 24268538

Step 6. Presenting the results &  Step 7. Reaching a conclusion

Conducting systematic reviews of intervention questions II: Relevance screening, data extraction, assessing risk of bias, presenting the results and interpreting the findings.  Sargeant JM, O’Connor AM. Zoonoses Public Health. 2014 Jun;61 Suppl 1:39-51. doi: 10.1111/zph.12124. PMID: 24905995

Campbell -  MECCIR

C59. Addressing risk of bias / study quality in the synthesis ( review / final manuscript )

C60 . Incorporating assessments of risk of bias ( review / final manuscript )

C61. Combining different scales  ( review / final manuscript )

C62. Ensuring meta-analyses are meaningful  ( review / final manuscript )

C63. Assessing statistical heterogeneity  ( protocol &   review / final manuscript )

C64. Addressing missing outcome data  ( review / final manuscript )

C65. Addressing skewed data  ( review / final manuscript )

C66. Addressing studies with more than two groups  ( protocol &   review / final manuscript )

C67. Comparing subgroups  ( protocol &   review / final manuscript )

C68. Interpreting subgroup analyses  ( protocol &   review / final manuscript )

C69. Considering statistical heterogeneity when interpreting the results ( review / final manuscript )

C70. Addressing non-standard designs  ( protocol &   review / final manuscript )

C71. Conducting sensitivity analysis  ( protocol &   review / final manuscript )

C72. Interpreting results  ( review / final manuscript )

C73. Investigating reporting biases  ( review / final manuscript )

C77. Formulating implications for practice  ( review / final manuscript )

C78. Avoiding recommendations  ( review / final manuscript )

C79. Formulating implications for research  ( review / final manuscript )

CEE  -  Guidelines and Standards for Evidence synthesis in Environmental Management

Section 9. Data synthesis

CEE Standards for conduct and reporting

9.1 Systematic Reviews

9.1.1 Narrative Synthesis

9.1.2 Quantitative Data Synthesis 

9.1.3 Qualitative Data Synthesis

Section 10. Interpreting findings and reporting conduct

10.1 The interpretation of evidence syntheses

10.2 Reporting conduct of evidence synthesis

10.3 Reporting findings of evidence syntheses

Reporting in Protocol and Final Manuscript

  • Final Manuscript

In the Protocol |  PRISMA-P

Data synthesis (Item 15)

If only a qualitative synthesis is planned:

  • If quantitative synthesis is not appropriate, describe the type of summary planned (Item 15d)

If a quantitative synthesis is planned, all of the above plus:

  • Describe criteria under which study data will be quantitatively synthesised (Item 15a)
  • ...describe planned summary measures , methods of handling data, and methods of combining data from studies , including any planned exploration of consistency (such as I², Kendall's τ) (Item 15b)
  • ...describe any proposed additional analyses (such as sensitivity or subgroup analyses, meta-regression) (Item 15c)

In the Final Manuscript |  PRISMA

Synthesis methods (item 13; report in  methods ), essential items.

  • Describe the processes used to decide which studies were eligible for each synthesis .  (Item 13a)
  • Report any methods required to prepare the data collected from studies for presentation or synthesis, such as handling of missing summary statistics or data conversions  (Item 13b)
  • Report chosen tabular structure(s) used to display results of individual studies and syntheses, along with details of the data presented  (Item 13c)
  • Report chosen graphical methods used to visually display results of individual studies and syntheses  (Item 13c)
  • If it was not possible to conduct a meta-analysis, describe and justify the synthesis methods ...or summary approach used   (Item 13d)
  • If a planned synthesis was not considered possible or appropriate, report this and the reason for that decision   (Item 13d)

Additional Items

  • If studies are ordered or grouped within tables or graphs based on study characteristics (such as by size of the study effect, year of publication), consider reporting the basis for the chosen ordering/grouping   (Item 13c)
  • If non-standard graphs were used, consider reporting the rationale for selecting the chosen graph   (Item 13c)

Meta-Analysis (or other quantitative methods used)

  • ...reference the software, packages, and version numbers used to implement synthesis methods (such as metan in Stata, or metafor (version 2.1-0) in R)  (Item 13d)
  • the meta-analysis model (fixed-effect, fixed-effects, or random-effects) and provide rationale for the selected model.
  • the method used (such as Mantel-Haenszel, inverse-variance).
  • any methods used to identify or quantify statistical heterogeneity (such as visual inspection of results, a formal statistical test for heterogeneity, heterogeneity variance (τ2), inconsistency (such as I2), and prediction intervals) 
  • the between-study (heterogeneity) variance estimator used (such as DerSimonian and Laird, restricted maximum likelihood (REML)).
  • the method used to calculate the confidence interval for the summary effect (such as Wald-type confidence interval, Hartung-Knapp-Sidik-Jonkman) 
  • If a Bayesian approach to meta-analysis was used, describe the prior distributions about quantities of interest (such as intervention effect being analysed, amount of heterogeneity in results across studies)  (Item 13d)
  • If multiple effect estimates from a study were included in a meta-analysis...describe the method(s) used to model or account for the statistical dependency. .. (Item 13d)
  • If methods were used to explore possible causes of statistical heterogeneity , specify the method used (such as subgroup analysis, meta-regression)  (Item 13e)
  • which factors were explored, levels of those factors, and which direction of effect modification was expected and why (where possible)  (Item 13e)
  • whether analyses were conducted using study-level variables (where each study is included in one subgroup only), within-study contrasts (where data on subsets of participants within a study are available, allowing the study to be included in more than one subgroup), or some combination of the above ( Item 13e)
  • how subgroup effects were compared (such as statistical test for interaction for subgroup analyses)  (Item 13e)
  • If other methods were used to explore heterogeneity because data were not amenable to meta-analysis of effect estimates, describe the methods used (such as structuring tables to examine variation in results across studies based on subpopulation, key intervention components, or contextual factors) along with the factors and levels  (Item 13e)
  • If any analyses used to explore heterogeneity were not pre-specified, identify them as such  (Item 13e)
  • If sensitivity analyses were performed, provide d etails of each analysis (such as removal of studies at high risk of bias, use of an alternative meta-analysis model)  (Item 13f)
  • If any sensitivity analyses were not pre-specified , identify them as such  (Item 13f)

If a random-effects meta-analysis model was used, consider specifying other details about the methods used, such as the method for calculating confidence limits for the heterogeneity variance  (Item 13d)
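To illustrate one of the methods the checklist names, the Mantel-Haenszel pooled odds ratio combines 2×2 tables by weighting each study's contribution by its size. A bare-bones sketch with invented counts (no confidence interval, continuity correction, or zero-cell handling, all of which a real package provides):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio.
    Each table is (a, b, c, d): events/non-events in the treatment arm,
    then events/non-events in the control arm."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical trials, each with a within-study odds ratio of about 0.44
pooled = mantel_haenszel_or([(10, 90, 20, 80), (5, 45, 10, 40)])
```

Here both invented studies happen to share the same odds ratio, so the pooled value is also about 0.44; with discordant tables, larger studies pull the estimate harder.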

Reporting Bias Assessment (Item 14; report in methods )

  • Specify the methods ... used to assess the risk of bias due to missing results in a synthesis (arising from reporting biases).
  • If risk of bias due to missing results was assessed using an existing tool, specify the methodological components/domains/items of the tool, and the process used to reach a judgment of overall risk of bias .
  • If any adaptations to an existing tool to assess risk of bias due to missing results were made (such as omitting or modifying items), specify the adaptations.
  • If a new tool to assess risk of bias due to missing results was developed for use in the review, describe the content of the tool and make it publicly accessible.
  • Report how many reviewers assessed risk of bias due to missing results in a synthesis, whether multiple reviewers worked independently, and any processes used to resolve disagreements between assessors.
  • Report any processes used to obtain or confirm relevant information from study investigators.
  • If an automation tool was used to assess risk of bias due to missing results, report how the tool was used , how the tool was trained , and details on the tool’s performance and internal validation

Results of Synthesis (Item 20; report in results )

  • Provide a brief summary of the characteristics and risk of bias among studies contributing to each synthesis (meta-analysis or other). The summary should focus only on study characteristics that help in interpreting the results (especially those that suggest the evidence addresses only a restricted part of the review question, or indirectly addresses the question). If the same set of studies contribute to more than one synthesis, or if the same risk of bias issues are relevant across studies for different syntheses, such a summary need be provided once only  (Item 20a)
  • Indicate which studies were included in each synthesis (such as by listing each study in a forest plot or table or citing studies in the text)  (Item 20a)
  • Report results of all statistical syntheses described in the protocol and all syntheses conducted that were not pre-specified  (Item 20b)

Meta-Analysis (or other quantitative methods used)

  • the summary estimate and its precision (such as standard error or 95% confidence/credible interval).
  • measures of statistical heterogeneity (such as τ2, I2, prediction interval).
  • If other statistical synthesis methods were used (such as summarising effect estimates, combining P values), report the synthesised result and a measure of precision (or equivalent information, for example, the number of studies and total sample size)  (Item 20b)
  • If the statistical synthesis method does not yield an estimate of effect (such as when P values are combined), report the relevant statistics (such as P value from the statistical test), along with an interpretation of the result that is consistent with the question addressed by the synthesis method (for example, “There was strong evidence of benefit of the intervention in at least one study (P < 0.001, 10 studies)” when P values have been combined)  (Item 20b)
  • If comparing groups , describe the direction of effect (such as fewer events in the intervention group, or higher pain in the comparator group)  (Item 20b)
  • If synthesising mean differences , specify for each synthesis, where applicable, the unit of measurement (such as kilograms or pounds for weight), the upper and lower limits of the measurement scale (for example, anchors range from 0 to 10), direction of benefit (for example, higher scores denote higher severity of pain), and the minimally important difference , if known. If synthesising standardised mean differences and the effect estimate is being re-expressed to a particular instrument, details of the instrument, as per the mean difference, should be reported  (Item 20b)
  • present results regardless of the statistical significance, magnitude, or direction of effect modification  (Item 20c)
  • identify the studies contributing to each subgroup   (Item 20c)
  • report results with due consideration to the observational nature of the analysis and risk of confounding due to other factors  (Item 20c)
  • If subgroup analysis was conducted, report for each analysis the exact P value for a test for interaction as well as, within each subgroup, the summary estimates , their precision (such as standard error or 95% confidence/credible interval) and measures of heterogeneity . Results from subgroup analyses might usefully be presented graphically  (Item 20c)
  • If meta-regression was conducted, report for each analysis the exact P value for the regression coefficient and its precision  (Item 20c)
  • If informal methods (that is, those that do not involve a formal statistical test) were used to investigate heterogeneity —which may arise particularly when the data are not amenable to meta-analysis— describe the results observed . For example, present a table that groups study results by dose or overall risk of bias and comment on any patterns observed  (Item 20c)
  • report the results for each sensitivity analysis  (Item 20d)
  • comment on how robust the main analysis was given the results of all corresponding sensitivity analyses   (Item 20d)
  • If subgroup analysis was conducted, consider presenting the estimate for the difference between subgroups and its precision  (Item 20c)
  • If meta-regression was conducted, consider presenting a meta-regression scatterplot with the study effect estimates plotted against the potential effect modifier   (Item 20c)
  • the summary effect estimate , a measure of precision (and potentially other relevant statistics, for example, I2 statistic) and contributing studies for the original meta-analysis;
  • the same information for the  sensitivity analysis ; and
  • details of the original and sensitivity analysis assumptions   (Item 20d)
  • presenting results of sensitivity analyses visually using forest plots   (Item 20d)
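Item 20b above mentions combining P values as a fallback synthesis; Fisher's method is the classic choice. A small standard-library sketch (the closed-form chi-square survival function below is exact here because the degrees of freedom, 2k, are always even):

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: X = -2 * sum(ln p) ~ chi-square with 2k df under H0.
    Returns the test statistic and the combined P value."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    # chi-square survival function for even df = 2k:
    # sf(x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    p_combined = math.exp(-half) * sum(half**i / math.factorial(i)
                                       for i in range(k))
    return x, p_combined

stat, p = fisher_combine([0.05, 0.05])   # two studies, each reporting P = 0.05
```

Two studies each reporting P = 0.05 combine to roughly P = 0.017; as the PRISMA wording quoted above notes, such a result supports only the weaker claim that there is an effect in at least one study.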

Reporting Biases (Item 21; report in results )

  • Present assessments of risk of bias due to missing results (arising from reporting biases) for each synthesis assessed.
  • If a tool was used to assess risk of bias due to missing results in a synthesis, present responses to questions in the tool, judgments about risk of bias, and any information used to support such judgments to help readers understand why particular judgments were made.
  • If a funnel plot was generated to evaluate small-study effects (one cause of which is reporting biases), present the plot and specify the effect estimate and measure of precision used in the plot (presented typically on the horizontal axis and vertical axis respectively). If a contour-enhanced funnel plot was generated, specify the “milestones” of statistical significance that the plotted contour lines represent (P=0.01, 0.05, 0.1, etc).
  • If a test for funnel plot asymmetry was used, report the exact P value observed for the test and potentially other relevant statistics, such as the standardised normal deviate, from which the P value is derived.
  • If any sensitivity analyses seeking to explore the potential impact of missing results on the synthesis were conducted, present results of each analysis (see item #20d), compare them with results of the primary analysis, and report results with due consideration of the limitations of the statistical method.
  • If studies were assessed for selective non-reporting of results by comparing outcomes and analyses pre-specified in study registers, protocols, and statistical analysis plans with results that were available in study reports, consider presenting a matrix (with rows as studies and columns as syntheses) to present the availability of study results.
  • If an assessment of selective non-reporting of results reveals that some studies are missing from the synthesis, consider displaying the studies with missing results underneath a forest plot or including a table with the available study results (for example, see forest plot in Page et al)
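For the funnel plot asymmetry test mentioned above, the most commonly reported choice is Egger's regression: regress each study's standardized effect (effect/SE) on its precision (1/SE) and test whether the intercept differs from zero. A standard-library sketch with invented data (it returns the intercept and its t statistic; the exact P value the guidance asks for would come from a t distribution with n − 2 df, omitted here to stay dependency-free):

```python
import math

def egger_intercept(effects, ses):
    """Egger's regression test for funnel plot asymmetry (sketch).
    Returns (intercept, t statistic); an intercept far from zero
    suggests small-study effects such as publication bias."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    rss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se_int = math.sqrt(rss / (n - 2) * (1.0 / n + xbar**2 / sxx))
    return intercept, intercept / se_int

# Four hypothetical studies
a, t = egger_intercept([0.1, 0.5, 0.9, 0.4], [0.15, 0.2, 0.25, 0.3])
```

With only four studies such a test has very little power, which is one reason funnel plot methods are commonly discouraged for syntheses of fewer than about ten studies.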

Discussion (Item 23)

  • Provide a general interpretation of the results in the context of other evidence (Item 23a)
  • Discuss any limitations of the evidence included in the review (Item 23b)
  • Discuss any limitations of the review processes used and comment on the potential impact of each limitation (Item 23c)
  • Discuss implications of the results for practice and policy (Item 23d)
  • Make explicit recommendations for future research (Item 23d)
  • Last Updated: Apr 12, 2024 12:41 PM
  • URL: https://guides.lib.vt.edu/SRMA


Systematic Reviews & Evidence Synthesis Methods


Requirements for the Systematic Review Process

Systematic reviews are a huge endeavor, so here are a few requirements if you are thinking of employing this methodology:

  • Systematic reviews require time. 12-24 months is usual from conception to submission.
  • Systematic reviews require a team. Four or more team members are recommended: a principal investigator, a second investigator, a librarian, and someone well-versed in statistics form the basic team. Ideally the team also has another investigator and someone to coordinate all the moving pieces. Smaller teams are possible; three is the realistic minimum: two investigators, each wearing more than one hat, and one librarian. Sometimes an investigator has the time and energy to coordinate, and occasionally one of the investigators is also a statistical expert.
  • * An exception to this rule is an "empty review," which retrieves zero studies that meet the inclusion criteria. Empty reviews are relatively uncommon, but may be used to demonstrate a need for future research in an area. However, an empty review may instead indicate that the research question was defined too narrowly. 

Why do a systematic review? A well-done systematic review is a major contribution to the literature, but the requirements in time and effort are massive. Cochrane estimates one year from conception to completion, and this does not include time for review, revision and publication. You need to assemble a team, and they need to commit for the duration.

A good place to start is with a consultation with a librarian. Visit the " Schedule a Consultation " page to learn why.

  • Last Updated: Apr 22, 2024 7:04 PM
  • URL: https://guides.lib.uci.edu/evidence-synthesis


Systematic Reviews and Meta-Analysis


Systematic review Q & A

What is a systematic review?

A systematic review is a guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduces the risk of bias in identifying, selecting and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis and presentation of the findings of the included studies. A systematic review may include a meta-analysis.

For details about carrying out systematic reviews, see the Guides and Standards section of this guide.

Is my research topic appropriate for systematic review methods?

A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single or small set of related interventions, exposures, or outcomes will simplify the assessment of studies and the synthesis of the findings.

Systematic reviews are poor tools for hypothesis generation: for instance, to determine what interventions have been used to increase the awareness and acceptability of a vaccine, or to investigate the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for and so have to screen all the articles about awareness and acceptability. In the second, there is no agreed-upon set of methods that make up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague and very large all at the same time. In most cases, reviews without clearly and exactly specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.

If not a systematic review, then what?

You might consider performing a scoping review . This framework allows iterative searching over a reduced number of data sources and imposes no requirement to assess individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't reduce the number of records you'll need to screen (broad questions lead to large results sets), but it may give you a means of dealing with a large set of results.

This tool can help you decide what kind of review is right for your question.

Can my student complete a systematic review during her summer project?

Probably not. Systematic reviews are a lot of work. Between creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work spanning several months. Moreover, a systematic review requires subject expertise, statistical support and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. All guidelines for carrying out systematic reviews also recommend that at least two subject experts screen the studies identified in the search. The first round of screening can consume 1 hour per screener for every 100-200 records. A systematic review is a labor-intensive team effort.
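The screening arithmetic above can be made concrete. A rough workload sketch for the first screening pass, assuming the 100-200 records per screener-hour figure quoted above and dual screening; the 5,000-record result set is invented for illustration:

```python
def first_pass_hours(n_records, records_per_hour=(100, 200), screeners=2):
    """Total screener-hours for the first screening pass.

    Returns a (slow, fast) pair of bounds, using the slower and faster
    screening rates respectively, multiplied across all screeners.
    """
    slow_rate, fast_rate = records_per_hour
    return (screeners * n_records / slow_rate, screeners * n_records / fast_rate)

slow_est, fast_est = first_pass_hours(5000)  # e.g. a 5,000-record result set
print(f"first-pass screening: {fast_est:.0f}-{slow_est:.0f} screener-hours")
```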

How can I know if my topic has been reviewed already?

Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or you can append AND systematic[sb] to your search. For example:

"neoadjuvant chemotherapy" AND systematic[sb]

The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:

"neoadjuvant chemotherapy" AND systematic[ti]

Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
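The same filters can also be applied programmatically through NCBI's E-utilities `esearch` endpoint. A minimal sketch that only constructs the request URL (the topic string is illustrative; an actual request requires network access and is subject to NCBI's rate limits):

```python
from urllib.parse import urlencode

# Illustrative topic; the filters are the PubMed tags discussed above.
topic = '"neoadjuvant chemotherapy"'

# Broad but noisy: the systematic-review subset filter
broad = f"{topic} AND systematic[sb]"
# Narrower: require the word "systematic" in the title
narrow = f"{topic} AND systematic[ti]"

base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {"db": "pubmed", "term": narrow, "retmode": "json", "retmax": 20}
url = f"{base}?{urlencode(params)}"
print(url)  # fetch with any HTTP client to get matching PMIDs
```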

You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators will register their protocols in PROSPERO , a registry of review protocols. Other published protocols as well as Cochrane Review protocols appear in the Cochrane Methodology Register, a part of the Cochrane Library .

  • Last Updated: Feb 26, 2024 3:17 PM
  • URL: https://guides.library.harvard.edu/meta-analysis

Systematic Review and Evidence Synthesis

Acknowledgements

This guide is directly informed by and selectively reuses, with permission, content from: 

  • Systematic Reviews, Scoping Reviews, and other Knowledge Syntheses by Genevieve Gore and Jill Boruff, McGill University (CC-BY-NC-SA)
  • A Guide to Evidence Synthesis , Cornell University Library Evidence Synthesis Service

Primary University of Minnesota Libraries authors are: Meghan Lafferty, Scott Marsalis, & Erin Reardon

Last updated: September 2022

Guidance by Methodology

  • PRISMA reporting guidelines
  • Systematic review guidance
  • Meta-analysis guidance
  • Scoping review guidance
  • Evidence and gap map guidance
  • Rapid review guidance
  • Qualitative meta-synthesis guidance
  • Umbrella review guidance

PRISMA Statement introduction

The PRISMA statement is the main reporting standard for evidence synthesis. The acronym refers to Preferred Reporting Items for Systematic Reviews and Meta-Analyses. In addition to the main PRISMA statement there are many extensions for other methodologies; these are notated by a suffix. Please refer to  www.prisma-statement.org  for the most recent information and updates and a complete list of extensions. We list only the most frequently used on this page.

When using PRISMA or its extensions it is important to carefully read the key documents. Typically there will be a paper introducing the standard and its development,  an "E&E" or "Explanation & Elaboration" paper which explains the components of the statement and gives examples, and a checklist, which helps the authors make sure they fully comply in reporting their study.

One of the most familiar aspects of PRISMA is the flow diagram, which summarizes the flow of information through the process, from record identification through screening and synthesis. Including only the flow diagram is not enough to comply with PRISMA or its extensions.

PRISMA Key Documents

  • PRISMA 2020 Checklist
  • PRISMA 2020 flow diagram
  • PRISMA 2020 Statement
  • PRISMA 2020 Explanation and Elaboration

PRISMA Extensions

  • PRISMA for Abstracts
  • PRISMA for Acupuncture
  • PRISMA for Diagnostic Test Accuracy
  • PRISMA for EcoEvo
  • PRISMA Equity
  • PRISMA Harms (for reviews including Harm outcomes)
  • PRISMA Individual Patient Data
  • PRISMA for Network Meta-Analyses
  • PRISMA for Protocols
  • PRISMA for Scoping Reviews
  • PRISMA for Searching
  • Extensions in development

Systematic Review Guidance

Cochrane Handbook for Systematic Reviews of Interventions

Finding What Works in Health Care: Standards for Systematic Reviews

An Introduction to Systematic Reviews

JBI Manual for Evidence Synthesis

Meta-Analysis Guidance

Cooper, H. (2017). Research synthesis and meta-analysis: A step-by-step approach (5th ed.). SAGE Publications, Inc.

Scoping Review Guidance

  • JBI Manual for Evidence Synthesis - Chapter 11: Scoping Reviews
  • Tricco, A., Lillie, E., Zarin, W., O'Brien, K., Colquhoun, H., Levac, D., . . . Straus, S. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467-473. https://doi.org/10.7326/M18-0850.  
  • Arksey, H. & O'Malley, L. (2005) Scoping studies: towards a methodological framework. International Journal of Social Research Methodology, 8:1, 19-32, DOI: 10.1080/1364557032000119616
  • Peters, M.D., Marnie, C., Tricco, A.C., Pollock, D., Munn, Z., Alexander, L., McInerney, P., Godfrey, C.M. and Khalil, H., (2020). Updated methodological guidance for the conduct of scoping reviews. JBI Evidence Synthesis, 18(10), pp.2119-2126.
  • Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143. doi: 10.1186/s12874-018-0611-x (Open access)

Evidence & Gap Map Guidance

  • White, H., Albers, B., Gaarder, M., Kornør, H., Little, J., Marshall, Z., Mathew, C., Pigott, T., Snilsveit, B., Waddington, H., & Welch, V. (2020). Guidance for producing a Campbell evidence and gap map. Campbell Systematic Review, 16(4).
  • Ashrita Saran. (2020). Evidence and gap maps. Campbell Systematic Review, 16(1).
  • Ashrita Saran, & Howard White. (2018). Evidence and gap maps: A comparison of different approaches. Campbell Systematic Review, 14(1), 1-38.

Rapid Review Guidance

  • Tricco, A., Antony, J., Zarin, W., Strifler, L., Ghassemi, M., Ivory, J., Perrier, L., Hutton, B., Moher, D., & Straus, S. (2015). A scoping review of rapid review methods . BMC Medicine, 13(1), 224.

Qualitative Meta-Synthesis Guidance

  • Aguirre, R. T., & Bolton, K. W. (2014). Qualitative interpretive meta-synthesis in social work research: Uncharted territory. Journal of Social Work, 14(3), 279-294. https://doi.org/10.1177/1468017313476797
  • Hannes, K., & Lockwood, C. (2012). Synthesizing Qualitative Research: Choosing the Right Approach. John Wiley & Sons. 
  • Sandelowski, M., & Barroso, J. (2002). Reading Qualitative Studies. International Journal of Qualitative Methods, 1(1), 74-108.

Umbrella Review Guidance

  • JBI Manual for Evidence Synthesis - Chapter 10: Umbrella Reviews
  • Aromataris, E., Fernandez, R.S., Godfrey, C., Holly, C., Khalil, H., & Tungpunkom, P. (2014). Methodology for JBI umbrella reviews.


Evidence Syntheses and Systematic Reviews: Overview


What is evidence synthesis?

Evidence Synthesis: general term used to refer to any method of identifying, selecting, and combining results from multiple studies. There are several types of reviews which fall under this term; the main ones are in the table below: 

Types of Reviews

General steps for conducting systematic reviews

The number of steps for conducting Evidence Synthesis varies a little, depending on the source that one consults. However, the following steps are generally accepted in how Systematic Reviews are done:

  • Identify a gap in the literature and form a well-developed, answerable research question, which will form the basis of your search.
  • Select a framework that will help guide the type of study you're undertaking.
  • Draft a protocol. A protocol is a detailed plan for the project; different guidelines are used for documenting and reporting it, and the protocol is created following whichever guideline you select. After it is written, it should be registered with an appropriate registry.
  • Select databases and grey literature sources. It is advisable to consult a librarian before embarking on this phase of the review process; they can recommend databases and other sources to use and even help design complex searches.
  • Search databases and other sources. Not all databases use the same search syntax, so when searching multiple databases, adapt your search strategy to each individual database. Use a citation management tool to store and organize your citations during the review process; it is a great help when de-duplicating your citation results.
  • Screen the results, using the inclusion and exclusion criteria you developed to remove articles that are not relevant to your topic.
  • Assess the quality of your findings to identify bias in either the design of the studies or in their results/conclusions (generally not done outside of systematic reviews).
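Citation managers automate the de-duplication step mentioned above, but the underlying idea can be sketched in a few lines: normalise each record into a key (here, lower-cased whitespace-collapsed title plus year; both the records and the normalisation rule are illustrative) and keep the first occurrence of each key:

```python
def dedupe(records):
    """Keep the first record for each (normalised title, year) key."""
    seen, unique = set(), []
    for rec in records:
        key = (" ".join(rec["title"].lower().split()), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Toy records as they might arrive from two databases
records = [
    {"title": "Vitamin D and fracture risk", "year": 2020, "source": "PubMed"},
    {"title": "Vitamin D  and Fracture Risk", "year": 2020, "source": "Embase"},
    {"title": "Exercise and bone density", "year": 2019, "source": "PubMed"},
]
print([r["source"] for r in dedupe(records)])  # the Embase duplicate is dropped
```

Real de-duplication also compares authors, DOIs and journal fields with fuzzy matching, but the keep-first-per-key structure is the same.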

Extract and Synthesize

  • Extract the data from the studies that remain after screening and appraisal. Extraction tools are used to get data from individual studies that will be analyzed or summarized.
  • Synthesize the main findings of your research.

Report Findings

Report the results using a statistical approach or in a narrative form.

Need More Help?

Librarians can:

  • Provide guidance on which methodology best suits your goals
  • Recommend databases and other information sources for searching
  • Design and implement comprehensive and reproducible database-specific search strategies 
  • Recommend software for article screening
  • Assist with the use of citation management
  • Offer best practices on documentation of searches

Related Guides

  • Literature Reviews
  • Choose a Citation Manager
  • Project Management


  • Last Updated: Apr 5, 2024 10:58 AM
  • URL: https://guides.smu.edu/evidencesyntheses

University of Texas Libraries

Systematic Reviews & Evidence Synthesis Methods


What is a Systematic Review?

A systematic review gathers, assesses, and synthesizes all available empirical research on a specific question using a comprehensive search method with an aim to minimize bias.

Or, put another way:

A systematic review begins with a specific research question. Authors of the review gather and evaluate all experimental studies that address the question. Bringing together the findings of these separate studies allows the review authors to make new conclusions from what has been learned.

*The key characteristics of a systematic review are:

  • A clearly stated set of objectives with pre-defined eligibility criteria for studies;
  • An explicit, reproducible methodology;
  • A systematic search that attempts to identify all relevant research;
  • A critical appraisal of the included studies;
  • A clear and objective synthesis and presentation of the characteristics and findings of the included studies.

*Lasserson T, Thomas J, Higgins JPT. Chapter 1: Starting a review. In Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors).  Cochrane Handbook for Systematic Reviews of Interventions  version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

What is the difference between an evidence synthesis and a systematic review? A systematic review is a type of evidence synthesis.  Any literature review is a type of evidence synthesis.  For the various types of evidence syntheses/literature reviews, see the page on this guide Types of Reviews .

Systematic reviews are usually done as a team project , requiring cooperation and a commitment of (lots of) time and effort over an extended period. You will need at least 3 people and, depending on the scope of the project and the size of the database result sets, you should plan for 6-24 months from start to completion.

Things to Know Before You Begin . . .

Run exploratory searches on the topic to get a sense of the plausibility of your project.

A systematic review requires a research question that is already well-covered in the primary literature. That is, if there has been little previous work on the topic, there will be little to analyze and conclusions will be hard to draw.

A narrowly-focused research question may add little to the knowledge of the field of study.

Make sure someone else has not already 1) written a recent systematic review on your topic, or 2) begun a similar systematic review project. Instructions on how to check .

Team members will need to use research databases for searching the literature.  If these databases are not available through library subscriptions or freely available, their use may require payment or travel. Look here for database recommendations .

It is extremely important to develop a protocol for your project.  Guidance is provided here .

Tools such as a reference manager and a screening tool will save time.  


  • Last Updated: Apr 9, 2024 8:57 PM
  • URL: https://guides.lib.utexas.edu/systematicreviews


RMIT University

Teaching and Research guides

Systematic reviews


What is synthesis?


Synthesis is a stage in the systematic review process where extracted data (findings of individual studies) are combined and evaluated. The synthesis part of a systematic review will determine the outcomes of the review.

There are two commonly accepted methods of synthesis in systematic reviews:

  • Quantitative data synthesis
  • Qualitative data synthesis

The way the data is extracted from your studies and synthesised and presented depends on the type of data being handled.

If you have quantitative information, some of the more common tools used to summarise data include:

  • grouping of similar data, i.e. presenting the results in tables
  • charts, e.g. pie-charts
  • graphical displays such as forest plots

If you have qualitative information, some of the more common tools used to summarise data include:

  • textual descriptions, i.e. written words
  • thematic or content analysis

Whatever tool/s you use, the general purpose of extracting and synthesising data is to show the outcomes and effects of various studies and identify issues with methodology and quality. This means that your synthesis might reveal a number of elements, including:

  • overall level of evidence
  • the degree of consistency in the findings
  • what the positive effects of a drug or treatment are, and what these effects are based on
  • how many studies found a relationship or association between two things

Quantitative synthesis (meta-analysis)

In a quantitative systematic review, data is presented statistically. Typically, this is referred to as a meta-analysis .

The usual method is to combine and evaluate data from multiple studies. This is normally done in order to draw conclusions about outcomes, effects, shortcomings of studies and/or applicability of findings.

Remember, the data you synthesise should relate to your research question and protocol (plan). In the case of quantitative analysis, the data extracted and synthesised will relate to whatever method was used to generate the research question (e.g. PICO method), and whatever quality appraisals were undertaken in the analysis stage.

One way of accurately representing all of your data is in the form of a forest plot . A forest plot is a way of combining results of multiple clinical trials in order to show point estimates arising from different studies of the same condition or treatment.

It comprises a graphical representation and often also a table. The graphical display shows the mean value for each trial, often with a confidence interval (the horizontal bars). Each mean is plotted relative to the vertical line of no difference.
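The quantities a forest plot displays (per-study estimates with confidence intervals plus a pooled summary) can be computed with inverse-variance weighting. A minimal fixed-effect sketch; the three trial results are invented for illustration:

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect pooled estimate with a 95% CI.

    Studies with smaller standard errors receive larger weights, which is
    why precise trials dominate the summary diamond on a forest plot.
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical mean differences and standard errors from three trials
effects = [0.30, 0.10, 0.25]
ses = [0.15, 0.10, 0.20]
pooled, (lo, hi) = fixed_effect_pool(effects, ses)
print(f"pooled = {pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

On the plot, each study's interval would be drawn as a horizontal bar against the vertical line of no difference (zero for mean differences), with the pooled estimate shown as the summary diamond. Random-effects models add a between-study variance term but follow the same weighting logic.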

  • Forest Plots - Understanding a Meta-Analysis in 5 Minutes or Less (5:38 min) In this video, Dr. Maureen Dobbins, Scientific Director of the National Collaborating Centre for Methods and Tools, uses an example from social health to explain how to construct a forest plot graphic.
  • How to interpret a forest plot (5:32 min) In this video, Terry Shaneyfelt, Clinician-educator at UAB School of Medicine, talks about how to interpret information contained in a typical forest plot, including table data.
  • An introduction to meta-analysis (13 mins) Dr Christopher J. Carpenter introduces the concept of meta-analysis, a statistical approach to finding patterns and trends among research studies on the same topic. Meta-analysis allows the researcher to weight study results based on size, moderating variables, and other factors.

Journal articles

  • Neyeloff, J. L., Fuchs, S. C., & Moreira, L. B. (2012). Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis. BMC Research Notes, 5(1), 52-57. https://doi.org/10.1186/1756-0500-5-52 Provides a step-by-step guide on how to use Excel to perform a meta-analysis and generate forest plots.
  • Ried, K. (2006). Interpreting and understanding meta-analysis graphs: a practical guide. Australian Family Physician, 35(8), 635- 638. This article provides a practical guide to appraisal of meta-analysis graphs, and has been developed as part of the Primary Health Care Research Evaluation Development (PHCRED) capacity building program for training general practitioners and other primary health care professionals in research methodology.

Qualitative synthesis

In a qualitative systematic review, data can be presented in a number of different ways. A typical procedure in the health sciences is  thematic analysis .

As explained by James Thomas and Angela Harden (2008) in an article for  BMC Medical Research Methodology : 

"Thematic synthesis has three stages:

  • the coding of text 'line-by-line'
  • the development of 'descriptive themes'
  • and the generation of 'analytical themes'

While the development of descriptive themes remains 'close' to the primary studies, the analytical themes represent a stage of interpretation whereby the reviewers 'go beyond' the primary studies and generate new interpretive constructs, explanations or hypotheses" (p. 45).

A good example of how to conduct a thematic analysis in a systematic review is the following journal article by Jørgensen et al. (2018) on cancer patients. In it, the authors go through the process of:

(a) identifying and coding information about the selected studies' methodologies and findings on patient care

(b) organising these codes into subheadings and descriptive categories

(c) developing these categories into analytical themes

Jørgensen, C. R., Thomsen, T. G., Ross, L., Dietz, S. M., Therkildsen, S., Groenvold, M., Rasmussen, C. L., & Johnsen, A. T. (2018). What facilitates “patient empowerment” in cancer patients during follow-up: A qualitative systematic review of the literature. Qualitative Health Research, 28(2), 292-304. https://doi.org/10.1177/1049732317721477

Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology, 8(1), 45-54. https://doi.org/10.1186/1471-2288-8-45


Creative Commons license: CC-BY-NC.

  • Last Updated: Apr 12, 2024 1:34 PM
  • URL: https://rmit.libguides.com/systematicreviews
  • Research article
  • Open access
  • Published: 10 July 2008

Methods for the thematic synthesis of qualitative research in systematic reviews

  • James Thomas 1 &
  • Angela Harden 1  

BMC Medical Research Methodology volume  8 , Article number:  45 ( 2008 ) Cite this article

376k Accesses

3840 Citations

102 Altmetric


There is a growing recognition of the value of synthesising qualitative research in the evidence base in order to facilitate effective and appropriate health care. In response to this, methods for undertaking these syntheses are currently being developed. Thematic analysis is a method that is often used to analyse data in primary qualitative research. This paper reports on the use of this type of analysis in systematic reviews to bring together and integrate the findings of multiple qualitative studies.

We describe thematic synthesis, outline several steps for its conduct and illustrate the process and outcome of this approach using a completed review of health promotion research. Thematic synthesis has three stages: the coding of text 'line-by-line'; the development of 'descriptive themes'; and the generation of 'analytical themes'. While the development of descriptive themes remains 'close' to the primary studies, the analytical themes represent a stage of interpretation whereby the reviewers 'go beyond' the primary studies and generate new interpretive constructs, explanations or hypotheses. The use of computer software can facilitate this method of synthesis; detailed guidance is given on how this can be achieved.

We used thematic synthesis to combine the studies of children's views and identified key themes to explore in the intervention studies. Most interventions were based in school and often combined learning about health benefits with 'hands-on' experience. The studies of children's views suggested that fruit and vegetables should be treated in different ways, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective. Thematic synthesis enabled us to stay 'close' to the results of the primary studies, synthesising them in a transparent way, and facilitating the explicit production of new concepts and hypotheses.

We compare thematic synthesis to other methods for the synthesis of qualitative research, discussing issues of context and rigour. Thematic synthesis is presented as a tried and tested method that preserves an explicit and transparent link between conclusions and the text of primary studies; as such it preserves principles that have traditionally been important to systematic reviewing.


The systematic review is an important technology for the evidence-informed policy and practice movement, which aims to bring research closer to decision-making [ 1 , 2 ]. This type of review uses rigorous and explicit methods to bring together the results of primary research in order to provide reliable answers to particular questions [ 3 – 6 ]. The picture that is presented aims to be distorted neither by biases in the review process nor by biases in the primary research which the review contains [ 7 – 10 ]. Systematic review methods are well-developed for certain types of research, such as randomised controlled trials (RCTs). Methods for reviewing qualitative research in a systematic way are still emerging, and there is much ongoing development and debate [ 11 – 14 ].

In this paper we present one approach to the synthesis of findings of qualitative research, which we have called 'thematic synthesis'. We have developed and applied these methods within several systematic reviews that address questions about people's perspectives and experiences [ 15 – 18 ]. The context for this methodological development is a programme of work in health promotion and public health (HP & PH), mostly funded by the English Department of Health, at the EPPI-Centre, in the Social Science Research Unit at the Institute of Education, University of London in the UK. Early systematic reviews at the EPPI-Centre addressed the question 'what works?' and contained research testing the effects of interventions. However, policy makers and other review users also posed questions about intervention need, appropriateness and acceptability, and factors influencing intervention implementation. To address these questions, our reviews began to include a wider range of research, including research often described as 'qualitative'. We began to focus, in particular, on research that aimed to understand the health issue in question from the experiences and point of view of the groups of people targeted by HP&PH interventions (We use the term 'qualitative' research cautiously because it encompasses a multitude of research methods at the same time as an assumed range of epistemological positions. In practice it is often difficult to classify research as being either 'qualitative' or 'quantitative' as much research contains aspects of both [ 19 – 22 ]. Because the term is in common use, however, we will employ it in this paper).

When we started the work for our first series of reviews which included qualitative research in 1999 [ 23 – 26 ], there was very little published material that described methods for synthesising this type of research. We therefore experimented with a variety of techniques borrowed from standard systematic review methods and methods for analysing primary qualitative research [ 15 ]. In later reviews, we were able to refine these methods and began to apply thematic analysis in a more explicit way. The methods for thematic synthesis described in this paper have so far been used explicitly in three systematic reviews [ 16 – 18 ].

The review used as an example in this paper

To illustrate the steps involved in a thematic synthesis we draw on a review of the barriers to, and facilitators of, healthy eating amongst children aged four to ten years [ 17 ]. The review was commissioned by the Department of Health, England, to inform policy about how to encourage children to eat healthily in the light of recent surveys highlighting that British children are eating less than half the recommended five portions of fruit and vegetables per day. While we focus on the aspects of the review that relate to qualitative studies, the review was broader than this and combined answering traditional questions of effectiveness, through reviewing controlled trials, with questions relating to children's views of healthy eating, which were answered using qualitative studies. The qualitative studies were synthesised using 'thematic synthesis' – the subject of this paper. We compared the effectiveness of interventions which appeared to be in line with recommendations from the thematic synthesis with those that did not. This enabled us to see whether the understandings we had gained from the children's views helped us to explain differences in the effectiveness of different interventions: the thematic synthesis had enabled us to generate hypotheses which could be tested against the findings of the quantitative studies – hypotheses that we could not have generated without the thematic synthesis. The methods of this part of the review are published in Thomas et al. [ 27 ] and are discussed further in Harden and Thomas [ 21 ].

Qualitative research and systematic reviews

The act of seeking to synthesise qualitative research means stepping into more complex and contested territory than is the case when only RCTs are included in a review. First, methods are much less developed in this area, with fewer completed reviews available from which to learn, and second, the whole enterprise of synthesising qualitative research is itself hotly debated. Qualitative research, it is often proposed, is not generalisable and is specific to a particular context, time and group of participants. Thus, in bringing such research together, reviewers are open to the charge that they de-contextualise findings and wrongly assume that these are commensurable [ 11 , 13 ]. These are serious concerns which it is not the purpose of this paper to contest. We note, however, that a strong case has been made for qualitative research to be valued for the potential it has to inform policy and practice [ 11 , 28 – 30 ]. In our experience, users of reviews are interested in the answers that only qualitative research can provide, but are not able to handle the deluge of data that would result if they tried to locate, read and interpret all the relevant research themselves. Thus, if we acknowledge the unique importance of qualitative research, we need also to recognise that methods are required to bring its findings together for a wide audience – at the same time as preserving and respecting its essential context and complexity.

The earliest published work that we know of that deals with methods for synthesising qualitative research was written in 1988 by Noblit and Hare [ 31 ]. This book describes the way that ethnographic research might be synthesised, but the method has been shown to be applicable to qualitative research beyond ethnography [ 32 , 11 ]. As well as meta-ethnography, other methods have been developed more recently, including 'meta-study' [ 33 ], 'critical interpretive synthesis' [ 34 ] and 'metasynthesis' [ 13 ].

Many of the newer methods being developed have much in common with meta-ethnography, as originally described by Noblit and Hare, and often state explicitly that they are drawing on this work. In essence, this method involves identifying key concepts from studies and translating them into one another. The term 'translating' in this context refers to the process of taking concepts from one study and recognising the same concepts in another study, though they may not be expressed using identical words. Explanations or theories associated with these concepts are also extracted and a 'line of argument' may be developed, pulling corroborating concepts together and, crucially, going beyond the content of the original studies (though 'refutational' concepts might not be amenable to this process). Some have claimed that this notion of 'going beyond' the primary studies is a critical component of synthesis, and is what distinguishes it from the types of summaries of findings that typify traditional literature reviews [e.g. [ 32 ], p209]. In the words of Margarete Sandelowski, "metasyntheses are integrations that are more than the sum of parts, in that they offer novel interpretations of findings. These interpretations will not be found in any one research report but, rather, are inferences derived from taking all of the reports in a sample as a whole" [[ 14 ], p1358].

Thematic analysis has been identified as one of a range of potential methods for research synthesis alongside meta-ethnography and 'metasynthesis', though precisely what the method involves is unclear, and there are few examples of it being used for synthesising research [ 35 ]. We have adopted the term 'thematic synthesis', as we translated methods for the analysis of primary research – often termed 'thematic' – for use in systematic reviews [ 36 – 38 ]. As Boyatzis [[ 36 ], p4] has observed, thematic analysis is "not another qualitative method but a process that can be used with most, if not all, qualitative methods..." . Our approach concurs with this conceptualisation of thematic analysis, since the method we employed draws on other established methods but uses techniques commonly described as 'thematic analysis' in order to formalise the identification and development of themes.

We now move to a description of the methods we used in our example systematic review. While this paper has the traditional structure for reporting the results of a research project, the detailed methods (e.g. the precise terms we used for searching) and results are available online. It identifies the particular issues that relate especially to reviewing qualitative research systematically, and then describes the activity of thematic synthesis in detail.

When searching for studies for inclusion in a 'traditional' statistical meta-analysis, the aim is to locate all relevant studies. Failing to do this can undermine the statistical models that underpin the analysis and bias the results. However, Doyle [[ 39 ], p326] states that, "like meta-analysis, meta-ethnography utilizes multiple empirical studies but, unlike meta-analysis, the sample is purposive rather than exhaustive because the purpose is interpretive explanation and not prediction" . This suggests that it may not be necessary to locate every available study because, for example, the results of a conceptual synthesis will not change if ten rather than five studies contain the same concept, but will depend on the range of concepts found in the studies, their context, and whether they are in agreement or not. Thus, principles such as aiming for 'conceptual saturation' might be more appropriate when planning a search strategy for qualitative research, although it is not yet clear how these principles can be applied in practice. Similarly, other principles from primary qualitative research methods may also be 'borrowed' such as deliberately seeking studies which might act as negative cases, aiming for maximum variability and, in essence, designing the resulting set of studies to be heterogeneous, in some ways, instead of achieving the homogeneity that is often the aim in statistical meta-analyses.

However one looks at it, qualitative research is difficult to find [ 40 – 42 ]. In our review, it was not possible to rely on simple electronic searches of databases. We needed to search extensively in 'grey' literature, ask authors of relevant papers if they knew of more studies, and look especially for book chapters, and we spent considerable effort screening titles and abstracts and hand-searching journals. In this sense, while we were not driven by the statistical imperative of locating every relevant study, when it actually came down to searching, we found that there was very little difference between the methods we had to use to find qualitative studies and the methods we use when searching for studies for inclusion in a meta-analysis.

Quality assessment

Assessing the quality of qualitative research has attracted much debate and there is little consensus regarding how quality should be assessed, who should assess quality, and, indeed, whether quality can or should be assessed in relation to 'qualitative' research at all [ 43 , 22 , 44 , 45 ]. We take the view that the quality of qualitative research should be assessed to avoid drawing unreliable conclusions. However, since there is little empirical evidence on which to base decisions for excluding studies based on quality assessment, we took the approach in this review to use 'sensitivity analyses' (described below) to assess the possible impact of study quality on the review's findings.

In our example review we assessed our studies according to 12 criteria, which were derived from existing sets of criteria proposed for assessing the quality of qualitative research [ 46 – 49 ], principles of good practice for conducting social research with children [ 50 ], and whether studies employed appropriate methods for addressing our review questions. The 12 criteria covered three main quality issues. Five related to the quality of the reporting of a study's aims, context, rationale, methods and findings (e.g. was there an adequate description of the sample used and the methods for how the sample was selected and recruited?). A further four criteria related to the sufficiency of the strategies employed to establish the reliability and validity of data collection tools and methods of analysis, and hence the validity of the findings. The final three criteria related to the assessment of the appropriateness of the study methods for ensuring that findings about the barriers to, and facilitators of, healthy eating were rooted in children's own perspectives (e.g. were data collection methods appropriate for helping children to express their views?).
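As a rough illustration only (not part of the review's actual tooling), the 12 criteria can be modelled as a checklist grouped by the three quality issues described above. The criterion wordings below are paraphrased, and the tallying function is our own assumption about how such a checklist might be applied:

```python
# Illustrative sketch of the 12 quality criteria: 5 on reporting, 4 on
# reliability/validity, 3 on rooting findings in children's perspectives.
# Wordings are paraphrased; the tallying scheme is an assumption, not the
# review's documented procedure.
CRITERIA = {
    "reporting": [
        "aims clearly stated", "context described", "rationale given",
        "methods described", "sample and recruitment described",
    ],
    "reliability_validity": [
        "data collection tools validated", "analysis methods justified",
        "reliability strategies reported", "validity strategies reported",
    ],
    "child_perspective": [
        "data collection appropriate for children",
        "findings grounded in children's views",
        "children able to express views freely",
    ],
}

def tally(met: set) -> dict:
    """Count, per quality issue, how many criteria a study report satisfies."""
    return {group: sum(c in met for c in items) for group, items in CRITERIA.items()}

# A hypothetical study report meeting three of the twelve criteria:
study_a = {"aims clearly stated", "methods described",
           "findings grounded in children's views"}
print(tally(study_a))
```

A per-group tally like this supports the sensitivity analyses described later, where the contribution of weaker studies is examined rather than used as grounds for exclusion.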

Extracting data from studies

One issue which is difficult to deal with when synthesising 'qualitative' studies is 'what counts as data' or 'findings'? This problem is easily addressed when a statistical meta-analysis is being conducted: the numeric results of RCTs – for example, the mean difference in outcome between the intervention and control – are taken from published reports and are entered into the software package being used to calculate the pooled effect size [ 3 , 51 ].

Deciding what to abstract from the published report of a 'qualitative' study is much more difficult. Campbell et al . [ 11 ] extracted what they called the 'key concepts' from the qualitative studies they found about patients' experiences of diabetes and diabetes care. However, finding the key concepts in 'qualitative' research is not always straightforward either. As Sandelowski and Barroso [ 52 ] discovered, identifying the findings in qualitative research can be complicated by varied reporting styles or the misrepresentation of data as findings (as for example when data are used to 'let participants speak for themselves'). Sandelowski and Barroso [ 53 ] have argued that the findings of qualitative (and, indeed, all empirical) research are distinct from the data upon which they are based, the methods used to derive them, externally sourced data, and researchers' conclusions and implications.

In our example review, while it was relatively easy to identify 'data' in the studies – usually in the form of quotations from the children themselves – it was often difficult to identify key concepts or succinct summaries of findings, especially for studies that had undertaken relatively simple analyses and had not gone much further than describing and summarising what the children had said. To resolve this problem we took study findings to be all of the text labelled as 'results' or 'findings' in study reports – though we also found 'findings' in the abstracts which were not always reported in the same way in the text. Study reports ranged in size from a few pages to full final project reports. We entered all the results of the studies verbatim into QSR's NVivo software for qualitative data analysis. Where we had the documents in electronic form this process was straightforward even for large amounts of text. When electronic versions were not available, the results sections were either re-typed or scanned in using a flat-bed or pen scanner. (We have since adapted our own reviewing system, 'EPPI-Reviewer' [ 54 ], to handle this type of synthesis and the screenshots below show this software.)

Detailed methods for thematic synthesis

The synthesis took the form of three stages which overlapped to some degree: the free line-by-line coding of the findings of primary studies; the organisation of these 'free codes' into related areas to construct 'descriptive' themes; and the development of 'analytical' themes.

Stages one and two: coding text and developing descriptive themes

In our children and healthy eating review, we originally planned to extract and synthesise study findings according to our review questions regarding the barriers to, and facilitators of, healthy eating amongst children. It soon became apparent, however, that few study findings addressed these questions directly and it appeared that we were in danger of ending up with an empty synthesis. We were also concerned about imposing the a priori framework implied by our review questions onto study findings without allowing for the possibility that a different or modified framework may be a better fit. We therefore temporarily put our review questions to one side and started from the study findings themselves to conduct a thematic analysis.

There were eight relevant qualitative studies examining children's views of healthy eating. We entered the verbatim findings of these studies into our database. Three reviewers then independently coded each line of text according to its meaning and content. Figure 1 illustrates this line-by-line coding using our specialist reviewing software, EPPI-Reviewer, which includes a component designed to support thematic synthesis. The text which was taken from the report of the primary study is on the left and codes were created inductively to capture the meaning and content of each sentence. Codes could be structured, either in a tree form (as shown in the figure) or as 'free' codes – without a hierarchical structure.

Figure 1. Line-by-line coding in EPPI-Reviewer.

The use of line-by-line coding enabled us to undertake what has been described as one of the key tasks in the synthesis of qualitative research: the translation of concepts from one study to another [ 32 , 55 ]. However, this process may not be regarded as a simple one of translation. As we coded each new study we added to our 'bank' of codes and developed new ones when necessary. As well as translating concepts between studies, we had already begun the process of synthesis (For another account of this process, see Doyle [[ 39 ], p331]). Every sentence had at least one code applied, and most were categorised using several codes (e.g. 'children prefer fruit to vegetables' or 'why eat healthily?'). Before completing this stage of the synthesis, we also examined all the text which had a given code applied to check consistency of interpretation and to see whether additional levels of coding were needed. (In grounded theory this is termed 'axial' coding; see Fisher [ 55 ] for further discussion of the application of axial coding in research synthesis.) This process created a total of 36 initial codes. For example, some of the text we coded as "bad food = nice, good food = awful" from one study [ 56 ] was:

'All the things that are bad for you are nice and all the things that are good for you are awful.' (Boys, year 6) [[ 56 ], p74]

'All adverts for healthy stuff go on about healthy things. The adverts for unhealthy things tell you how nice they taste.' [[ 56 ], p75]

Some children reported throwing away foods they knew had been put in because they were 'good for you' and only ate the crisps and chocolate . [[ 56 ], p75]

Reviewers looked for similarities and differences between the codes in order to start grouping them into a hierarchical tree structure. New codes were created to capture the meaning of groups of initial codes. This process resulted in a tree structure with several layers to organise a total of 12 descriptive themes (Figure 2 ). For example, the first layer divided the 12 themes into whether they were concerned with children's understandings of healthy eating or influences on children's food choice. The above example, about children's preferences for food, was placed in both areas, since the findings related both to children's reactions to the foods they were given, and to how they behaved when given the choice over what foods they might eat. A draft summary of the findings across the studies organised by the 12 descriptive themes was then written by one of the review authors. Two other review authors commented on this draft and a final version was agreed.
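The mechanics of stages one and two can be sketched in a few lines of Python. The coded lines and theme groupings below are invented examples for illustration, not the review's actual 36 codes or 12 descriptive themes:

```python
from collections import defaultdict

# Stage one (sketch): apply one or more free codes to each line of findings
# text. Codes here are invented examples; the review developed 36 inductively.
coded_lines = [
    ("All the things that are bad for you are nice...",
     {"bad food = nice, good food = awful"}),
    ("The adverts for unhealthy things tell you how nice they taste.",
     {"bad food = nice, good food = awful", "advertising"}),
    ("Some children threw away food that was 'good for you'.",
     {"children's food choices"}),
]

# Stage two (sketch): group related codes under descriptive themes,
# forming a shallow tree.
theme_tree = {
    "understandings of healthy eating": {"bad food = nice, good food = awful"},
    "influences on food choice": {"advertising", "children's food choices"},
}

# Invert the tree so each theme collects the lines of evidence behind it,
# preserving the link from descriptive theme back to primary-study text.
evidence = defaultdict(list)
for line, codes in coded_lines:
    for theme, theme_codes in theme_tree.items():
        if codes & theme_codes:
            evidence[theme].append(line)

for theme, lines in evidence.items():
    print(theme, "->", len(lines), "line(s)")
```

Note that, as in the review, a single coded line can contribute evidence to more than one theme, so the structure is a many-to-many mapping rather than a strict partition.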

Figure 2. Relationships between descriptive themes.

Stage three: generating analytical themes

Up until this point, we had produced a synthesis which kept very close to the original findings of the included studies. The findings of each study had been combined into a whole via a listing of themes which described children's perspectives on healthy eating. However, we did not yet have a synthesis product that addressed directly the concerns of our review – regarding how to promote healthy eating, in particular fruit and vegetable intake, amongst children. Neither had we 'gone beyond' the findings of the primary studies and generated additional concepts, understandings or hypotheses. As noted earlier, the idea or step of 'going beyond' the content of the original studies has been identified by some as the defining characteristic of synthesis [ 32 , 14 ].

This stage of a qualitative synthesis is the most difficult to describe and is, potentially, the most controversial, since it is dependent on the judgement and insights of the reviewers. The equivalent stage in meta-ethnography is the development of 'third order interpretations' which go beyond the content of original studies [ 32 , 11 ]. In our example, the step of 'going beyond' the content of the original studies was achieved by using the descriptive themes that emerged from our inductive analysis of study findings to answer the review questions we had temporarily put to one side. Reviewers inferred barriers and facilitators from the views children were expressing about healthy eating or food in general, captured by the descriptive themes, and then considered the implications of children's views for intervention development. Each reviewer first did this independently and then as a group. Through this discussion more abstract or analytical themes began to emerge. The barriers and facilitators and implications for intervention development were examined again in light of these themes and changes made as necessary. This cyclical process was repeated until the new themes were sufficiently abstract to describe and/or explain all of our initial descriptive themes, our inferred barriers and facilitators and implications for intervention development.

For example, five of the 12 descriptive themes concerned the influences on children's choice of foods (food preferences, perceptions of health benefits, knowledge behaviour gap, roles and responsibilities, non-influencing factors). From these, reviewers inferred several barriers and implications for intervention development. Children identified readily that taste was the major concern for them when selecting food and that health was either a secondary factor or, in some cases, a reason for rejecting food. Children also felt that buying healthy food was not a legitimate use of their pocket money, which they would use to buy sweets that could be enjoyed with friends. These perspectives indicated to us that branding fruit and vegetables as 'tasty' rather than 'healthy' might be more effective in increasing consumption. As one child noted astutely, 'All adverts for healthy stuff go on about healthy things. The adverts for unhealthy things tell you how nice they taste.' [[ 56 ], p75]. We captured this line of argument in the analytical theme entitled 'Children do not see it as their role to be interested in health'. Altogether, this process resulted in the generation of six analytical themes which were associated with ten recommendations for interventions.

Six main issues emerged from the studies of children's views: (1) children do not see it as their role to be interested in health; (2) children do not see messages about future health as personally relevant or credible; (3) fruit, vegetables and confectionery have very different meanings for children; (4) children actively seek ways to exercise their own choices with regard to food; (5) children value eating as a social occasion; and (6) children see the contradiction between what is promoted in theory and what adults provide in practice. The review found that most interventions were based in school (though frequently with parental involvement) and often combined learning about the health benefits of fruit and vegetables with 'hands-on' experience in the form of food preparation and taste-testing. Interventions targeted at people with particular risk factors worked better than others, and multi-component interventions that combined the promotion of physical activity with healthy eating did not work as well as those that only concentrated on healthy eating. The studies of children's views suggested that fruit and vegetables should be treated in different ways in interventions, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective than those which were not.

Context and rigour in thematic synthesis

The process of translation, through the development of descriptive and analytical themes, can be carried out in a rigorous way that facilitates transparency of reporting. Since we aim to produce a synthesis that both generates 'abstract and formal theories' that are nevertheless 'empirically faithful to the cases from which they were developed' [[ 53 ], p1371], we see the explicit recording of the development of themes as being central to the method. The use of software as described can facilitate this by allowing reviewers to examine the contribution made to their findings by individual studies, groups of studies, or sub-populations within studies.

Some may argue against the synthesis of qualitative research on the grounds that the findings of individual studies are de-contextualised and that concepts identified in one setting are not applicable to others [ 32 ]. However, the act of synthesis could be viewed as similar to the role of a research user when reading a piece of qualitative research and deciding how useful it is to their own situation. In the case of synthesis, reviewers translate themes and concepts from one situation to another and can check that each transfer is valid, considering whether there are any reasons that understandings gained in one context might not be transferred to another. We attempted to preserve context by providing structured summaries of each study detailing aims, methods and methodological quality, and setting and sample. This meant that readers of our review were able to judge for themselves whether or not the contexts of the studies the review contained were similar to their own. In the synthesis we also checked whether the emerging findings really were transferable across different study contexts. For example, we tried throughout the synthesis to distinguish between participants (e.g. boys and girls) where the primary research had made an appropriate distinction. We then looked to see whether some of our synthesis findings could be attributed to a particular group of children or setting. In the event, we did not find any themes that belonged to a specific group, but another outcome of this process was a realisation that the contextual information given in the reports of studies was very restricted indeed. It was therefore difficult to make the best use of context in our synthesis.

In checking that we were not translating concepts into situations where they did not belong, we were following a principle that others have followed when using synthesis methods to build grounded formal theory: that of grounding a text in the context in which it was constructed. As Margaret Kearney has noted "the conditions under which data were collected, analysis was done, findings were found, and products were written for each contributing report should be taken into consideration in developing a more generalized and abstract model" [[ 14 ], p1353]. Britten et al . [ 32 ] suggest that it may be important to make a deliberate attempt to include studies conducted across diverse settings to achieve the higher level of abstraction that is aimed for in a meta-ethnography.

Study quality and sensitivity analyses

We assessed the 'quality' of our studies with regard to the degree to which they represented the views of their participants. In doing this, we were locating the concept of 'quality' within the context of the purpose of our review – children's views – and not necessarily the context of the primary studies themselves. Our 'hierarchy of evidence', therefore, did not prioritise the research design of studies but emphasised the ability of the studies to answer our review question. A traditional systematic review of controlled trials would contain a quality assessment stage, the purpose of which is to exclude studies that do not provide a reliable answer to the review question. However, given that there were no accepted – or empirically tested – methods for excluding qualitative studies from syntheses on the basis of their quality [ 57 , 12 , 58 ], we included all studies regardless of their quality.

Nevertheless, our studies did differ according to the quality criteria they were assessed against and it was important that we considered this in some way. In systematic reviews of trials, 'sensitivity analyses' – analyses which test the effect on the synthesis of including and excluding findings from studies of differing quality – are often carried out. Dixon-Woods et al . [ 12 ] suggest that assessing the feasibility and worth of conducting sensitivity analyses within syntheses of qualitative research should be an important focus of synthesis methods work. After our thematic synthesis was complete, we examined the relative contributions of studies to our final analytic themes and recommendations for interventions. We found that the poorer quality studies contributed comparatively little to the synthesis and did not contain many unique themes; the better studies, on the other hand, appeared to have more developed analyses and contributed most to the synthesis.
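The kind of sensitivity check described here, re-examining which studies drive each analytic theme, can be sketched as a simple tally. The study labels, quality ratings and theme attributions below are invented for illustration, not data from the review:

```python
# Hypothetical mapping from analytic theme to the studies contributing to it,
# with an invented binary quality rating per study; the actual review assessed
# studies against 12 criteria rather than a single label.
theme_sources = {
    "not my role to be interested in health": {"S1", "S2", "S5"},
    "health messages lack credibility": {"S1", "S3"},
    "eating as a social occasion": {"S2", "S5", "S7"},
}
quality = {"S1": "high", "S2": "high", "S3": "high", "S5": "high", "S7": "low"}

def themes_lost_if_excluded(rating: str) -> list:
    """Themes that would vanish entirely if studies with this rating were dropped."""
    kept = {s for s, q in quality.items() if q != rating}
    return [t for t, srcs in theme_sources.items() if not srcs & kept]

# In this toy example, excluding the low-quality study removes no theme
# outright, mirroring the review's observation that poorer studies
# contributed few unique themes.
print(themes_lost_if_excluded("low"))
```

Running the same check per theme (rather than per rating) would also show which themes rest on a single study, another useful indicator of the robustness of a synthesis.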

This paper has discussed the rationale for reviewing and synthesising qualitative research in a systematic way and has outlined one specific approach for doing this: thematic synthesis. While it is not the only method which might be used – and we have discussed some of the other options available – we present it here as a tested technique that has worked in the systematic reviews in which it has been employed.

We have observed that one of the key tasks in the synthesis of qualitative research is the translation of concepts between studies. While the activity of translating concepts is usually undertaken in the few syntheses of qualitative research that exist, there are few examples that specify the detail of how this translation is actually carried out. The example above shows how we achieved the translation of concepts across studies through the use of line-by-line coding, the organisation of these codes into descriptive themes, and the generation of analytical themes through the application of a higher level theoretical framework. This paper therefore also demonstrates how the methods and process of a thematic synthesis can be written up in a transparent way.

This paper goes some way to addressing concerns regarding the use of thematic analysis in research synthesis raised by Dixon-Woods and colleagues, who argue that the approach can lack transparency due to a failure to distinguish between 'data-driven' and 'theory-driven' approaches. Moreover, they suggest that, "if thematic analysis is limited to summarising themes reported in primary studies, it offers little by way of theoretical structure within which to develop higher order thematic categories..." [[ 35 ], p47]. Part of the problem, they observe, is that the precise methods of thematic synthesis are unclear. Our approach contains a clear separation between the 'data-driven' descriptive themes and the 'theory-driven' analytical themes and demonstrates how the review questions provided a theoretical structure within which it became possible to develop higher order thematic categories.

The theme of 'going beyond' the content of the primary studies was discussed earlier. Citing Strike and Posner [ 59 ], Campbell et al . [[ 11 ], p672] also suggest that synthesis "involves some degree of conceptual innovation, or employment of concepts not found in the characterisation of the parts and a means of creating the whole" . This was certainly true of the example given in this paper. We used a series of questions, derived from the main topic of our review, to focus an examination of our descriptive themes and we do not find our recommendations for interventions contained in the findings of the primary studies: these were new propositions generated by the reviewers in the light of the synthesis. The method also demonstrates that it is possible to synthesise without conceptual innovation. The initial synthesis, involving the translation of concepts between studies, was necessary in order for conceptual innovation to begin. One could argue that the conceptual innovation, in this case, was only necessary because the primary studies did not address our review question directly. In situations in which the primary studies are concerned directly with the review question, it may not be necessary to go beyond the contents of the original studies in order to produce a satisfactory synthesis (see, for example, Marston and King, [ 60 ]). Conceptually, our analytical themes are similar to the ultimate product of meta-ethnographies: third order interpretations [ 11 ], since both are explicit mechanisms for going beyond the content of the primary studies and presenting this in a transparent way. The main difference between them lies in their purposes. Third order interpretations bring together the implications of translating studies into one another in their own terms, whereas analytical themes are the result of interrogating a descriptive synthesis by placing it within an external theoretical framework (our review question and sub-questions). 
It may be, therefore, that analytical themes are more appropriate when a specific review question is being addressed (as often occurs when informing policy and practice), and third order interpretations should be used when a body of literature is being explored in and of itself, with broader, or emergent, review questions.
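The two-stage structure described above can be sketched as a toy data model. Everything here is hypothetical, loosely inspired by the healthy-eating example used in the review, not the authors' actual codes or themes: free codes from primary studies are translated into shared descriptive themes, which are then interrogated with the review question to generate analytical themes that go beyond the primary studies.

```python
# Toy sketch of the two-stage thematic synthesis structure.
# All codes and theme labels below are invented for illustration.

# Line-by-line codes extracted from (hypothetical) primary studies
study_codes = {
    "study_1": ["children prefer taste over health", "fruit as snack"],
    "study_2": ["health messages ignored", "fruit eaten when convenient"],
}

# Stage 1-2 ('data-driven'): codes translated across studies into
# shared descriptive themes that stay close to the primary findings
descriptive_themes = {
    "taste outweighs health": ["children prefer taste over health",
                               "health messages ignored"],
    "fruit and convenience": ["fruit as snack", "fruit eaten when convenient"],
}

# Stage 3 ('theory-driven'): analytical themes answer the review question
# ("what would help promote healthy eating?") and are new propositions
# generated by the reviewers, not statements found in the primary studies
analytical_themes = {
    "promote fruit on taste, not health grounds": ["taste outweighs health"],
    "make fruit the convenient option": ["fruit and convenience"],
}

for implication, basis in analytical_themes.items():
    print(f"{implication} <- derived from: {', '.join(basis)}")
```

The point of the structure is traceability: each analytical theme can be followed back through descriptive themes to the codes, and hence the studies, that support it.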

This paper is a contribution to the current developmental work taking place in understanding how best to bring together the findings of qualitative research to inform policy and practice. It is by no means the only method on offer but, by drawing on methods and principles from qualitative primary research, it benefits from the years of methodological development that underpins the research it seeks to synthesise.

References

Chalmers I: Trying to do more good than harm in policy and practice: the role of rigorous, transparent and up-to-date evaluations. Ann Am Acad Pol Soc Sci. 2003, 589: 22-40. 10.1177/0002716203254762.

Oakley A: Social science and evidence-based everything: the case of education. Educ Rev. 2002, 54: 277-286. 10.1080/0013191022000016329.

Cooper H, Hedges L: The Handbook of Research Synthesis. 1994, New York: Russell Sage Foundation

EPPI-Centre: EPPI-Centre Methods for Conducting Systematic Reviews. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=89 ]

Higgins J, Green S, (Eds): Cochrane Handbook for Systematic Reviews of Interventions 4.2.6. 2006, Updated September 2006. Accessed 24th January 2007, [ http://www.cochrane.org/resources/handbook/ ]

Petticrew M, Roberts H: Systematic Reviews in the Social Sciences: A practical guide. 2006, Oxford: Blackwell Publishing

Chalmers I, Hedges L, Cooper H: A brief history of research synthesis. Eval Health Prof. 2002, 25: 12-37. 10.1177/0163278702025001003.

Juni P, Altman D, Egger M: Assessing the quality of controlled clinical trials. BMJ. 2001, 323: 42-46. 10.1136/bmj.323.7303.42.

Mulrow C: Systematic reviews: rationale for systematic reviews. BMJ. 1994, 309: 597-599.

White H: Scientific communication and literature retrieval. The Handbook of Research Synthesis. Edited by: Cooper H, Hedges L. 1994, New York: Russell Sage Foundation

Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J: Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Soc Sci Med. 2003, 56: 671-684. 10.1016/S0277-9536(02)00064-3.

Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Sutton AJ, Shaw RL, Smith JA, Young B: How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res. 2006, 6: 27-44. 10.1177/1468794106058867.

Sandelowski M, Barroso J: Handbook for Synthesising Qualitative Research. 2007, New York: Springer

Thorne S, Jensen L, Kearney MH, Noblit G, Sandelowski M: Qualitative meta-synthesis: reflections on methodological orientation and ideological agenda. Qual Health Res. 2004, 14: 1342-1365. 10.1177/1049732304269888.

Harden A, Garcia J, Oliver S, Rees R, Shepherd J, Brunton G, Oakley A: Applying systematic review methods to studies of people's views: an example from public health. J Epidemiol Community Health. 2004, 58: 794-800. 10.1136/jech.2003.014829.

Harden A, Brunton G, Fletcher A, Oakley A: Young People, Pregnancy and Social Exclusion: A systematic synthesis of research evidence to identify effective, appropriate and promising approaches for prevention and support. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=674 ]

Thomas J, Sutcliffe K, Harden A, Oakley A, Oliver S, Rees R, Brunton G, Kavanagh J: Children and Healthy Eating: A systematic review of barriers and facilitators. 2003, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, accessed 4 th July 2008, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=246 ]

Thomas J, Kavanagh J, Tucker H, Burchett H, Tripney J, Oakley A: Accidental Injury, Risk-Taking Behaviour and the Social Circumstances in which Young People Live: A systematic review. 2007, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=1910 ]

Bryman A: Quantity and Quality in Social Research. 1998, London: Unwin

Hammersley M: What's Wrong with Ethnography?. 1992, London: Routledge

Harden A, Thomas J: Methodological issues in combining diverse study types in systematic reviews. Int J Soc Res Meth. 2005, 8: 257-271. 10.1080/13645570500155078.

Oakley A: Experiments in Knowing: Gender and methods in the social sciences. 2000, Cambridge: Polity Press

Harden A, Oakley A, Oliver S: Peer-delivered health promotion for young people: a systematic review of different study designs. Health Educ J. 2001, 60: 339-353. 10.1177/001789690106000406.

Harden A, Rees R, Shepherd J, Brunton G, Oliver S, Oakley A: Young People and Mental Health: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=256 ]

Rees R, Harden A, Shepherd J, Brunton G, Oliver S, Oakley A: Young People and Physical Activity: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=260 ]

Shepherd J, Harden A, Rees R, Brunton G, Oliver S, Oakley A: Young People and Healthy Eating: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=258 ]

Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J: Integrating qualitative research with trials in systematic reviews: an example from public health. BMJ. 2004, 328: 1010-1012. 10.1136/bmj.328.7446.1010.

Davies P: What is evidence-based education?. Br J Educ Stud. 1999, 47: 108-121. 10.1111/1467-8527.00106.

Newman M, Thompson C, Roberts AP: Helping practitioners understand the contribution of qualitative research to evidence-based practice. Evid Based Nurs. 2006, 9: 4-7. 10.1136/ebn.9.1.4.

Popay J: Moving Beyond Effectiveness in Evidence Synthesis. 2006, London: National Institute for Health and Clinical Excellence

Noblit GW, Hare RD: Meta-Ethnography: Synthesizing qualitative studies. 1988, Newbury Park: Sage

Britten N, Campbell R, Pope C, Donovan J, Morgan M, Pill R: Using meta-ethnography to synthesise qualitative research: a worked example. J Health Serv Res Policy. 2002, 7: 209-215. 10.1258/135581902320432732.

Paterson B, Thorne S, Canam C, Jillings C: Meta-Study of Qualitative Health Research. 2001, Thousand Oaks, California: Sage

Dixon-Woods M, Cavers D, Agarwal S, Annandale E, Arthur A, Harvey J, Katbamna S, Olsen R, Smith L, Riley R, Sutton AJ: Conducting a critical interpretative synthesis of the literature on access to healthcare by vulnerable groups. BMC Med Res Methodol. 2006, 6: 35-10.1186/1471-2288-6-35.

Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A: Synthesising qualitative and quantitative evidence: a review of possible methods. J Health Serv Res Policy. 2005, 10: 45-53. 10.1258/1355819052801804.

Boyatzis RE: Transforming Qualitative Information. 1998, Sage: Cleveland

Braun V, Clarke V: Using thematic analysis in psychology. Qual Res Psychol. 2006, 3: 77-101. 10.1191/1478088706qp063oa. [ http://science.uwe.ac.uk/psychology/drvictoriaclarke_files/thematicanalysis%20.pdf ]

Silverman D, Ed: Qualitative Research: Theory, method and practice. 1997, London: Sage

Doyle LH: Synthesis through meta-ethnography: paradoxes, enhancements, and possibilities. Qual Res. 2003, 3: 321-344. 10.1177/1468794103033003.

Barroso J, Gollop C, Sandelowski M, Meynell J, Pearce PF, Collins LJ: The challenges of searching for and retrieving qualitative studies. Western J Nurs Res. 2003, 25: 153-178. 10.1177/0193945902250034.

Walters LA, Wilczynski NL, Haynes RB, Hedges Team: Developing optimal search strategies for retrieving clinically relevant qualitative studies in EMBASE. Qual Health Res. 2006, 16: 162-8. 10.1177/1049732305284027.

Wong SSL, Wilczynski NL, Haynes RB: Developing optimal search strategies for detecting clinically relevant qualitative studies in Medline. Medinfo. 2004, 11: 311-314.

Murphy E, Dingwall R, Greatbatch D, Parker S, Watson P: Qualitative research methods in health technology assessment: a review of the literature. Health Technol Assess. 1998, 2 (16):

Seale C: Quality in qualitative research. Qual Inq. 1999, 5: 465-478.

Spencer L, Ritchie J, Lewis J, Dillon L: Quality in Qualitative Evaluation: A framework for assessing research evidence. 2003, London: Cabinet Office

Boulton M, Fitzpatrick R, Swinburn C: Qualitative research in healthcare II: a structured review and evaluation of studies. J Eval Clin Pract. 1996, 2: 171-179. 10.1111/j.1365-2753.1996.tb00041.x.

Cobb A, Hagemaster J: Ten criteria for evaluating qualitative research proposals. J Nurs Educ. 1987, 26: 138-143.

Mays N, Pope C: Rigour and qualitative research. BMJ. 1995, 311: 109-12.

Medical Sociology Group: Criteria for the evaluation of qualitative research papers. Med Sociol News. 1996, 22: 68-71.

Alderson P: Listening to Children. 1995, London: Barnardo's

Egger M, Davey-Smith G, Altman D: Systematic Reviews in Health Care: Meta-analysis in context. 2001, London: BMJ Publishing

Sandelowski M, Barroso J: Finding the findings in qualitative studies. J Nurs Scholarsh. 2002, 34: 213-219. 10.1111/j.1547-5069.2002.00213.x.

Sandelowski M: Using qualitative research. Qual Health Res. 2004, 14: 1366-1386. 10.1177/1049732304269672.

Thomas J, Brunton J: EPPI-Reviewer 3.0: Analysis and management of data for research synthesis. EPPI-Centre software. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education

Fisher M, Qureshi H, Hardyman W, Homewood J: Using Qualitative Research in Systematic Reviews: Older people's views of hospital discharge. 2006, London: Social Care Institute for Excellence

Dixey R, Sahota P, Atwal S, Turner A: Children talking about healthy eating: data from focus groups with 300 9–11-year-olds. Nutr Bull. 2001, 26: 71-79. 10.1046/j.1467-3010.2001.00078.x.

Daly A, Willis K, Small R, Green J, Welch N, Kealy M, Hughes E: Hierarchy of evidence for assessing qualitative health research. J Clin Epidemiol. 2007, 60: 43-49. 10.1016/j.jclinepi.2006.03.014.

Popay J: Moving beyond floccinaucinihilipilification: enhancing the utility of systematic reviews. J Clin Epidemiol. 2005, 58: 1079-80. 10.1016/j.jclinepi.2005.08.004.

Strike K, Posner G: Types of synthesis and their criteria. Knowledge Structure and Use: Implications for synthesis and interpretation. Edited by: Ward S, Reed L. 1983, Philadelphia: Temple University Press

Marston C, King E: Factors that shape young people's sexual behaviour: a systematic review. The Lancet. 2006, 368: 1581-86. 10.1016/S0140-6736(06)69662-1.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/8/45/prepub

Download references

Acknowledgements

The authors would like to thank Elaine Barnett-Page for her assistance in producing the draft paper, and David Gough, Ann Oakley and Sandy Oliver for their helpful comments. The review used as an example in this paper was funded by the Department of Health (England). The methodological development was supported by the Department of Health (England) and the ESRC through the Methods for Research Synthesis Node of the National Centre for Research Methods. In addition, Angela Harden held a senior research fellowship funded by the Department of Health (England) from December 2003 to November 2007. The views expressed in this paper are those of the authors and are not necessarily those of the funding bodies.

Author information

Authors and Affiliations

EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, UK

James Thomas & Angela Harden

Corresponding author

Correspondence to James Thomas .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Both authors contributed equally to the paper and read and approved the final manuscript.

James Thomas and Angela Harden contributed equally to this work.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Reprints and permissions

About this article

Cite this article

Thomas, J., Harden, A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol 8 , 45 (2008). https://doi.org/10.1186/1471-2288-8-45

Download citation

Received : 17 April 2008

Accepted : 10 July 2008

Published : 10 July 2008

DOI : https://doi.org/10.1186/1471-2288-8-45


Keywords

  • Qualitative Research
  • Primary Study
  • Analytical Theme
  • Healthy Eating
  • Review Question

BMC Medical Research Methodology

ISSN: 1471-2288

Should I do a synthesis (i.e. literature review)?

  • Questions & Quandaries
  • Published: 18 April 2024

  • H. Carrie Chen 1 ,
  • Ayelet Kuper 2 , 3 , 4 ,
  • Jennifer Cleland 5 &
  • Patricia O’Sullivan 6  

This column is intended to address the kinds of knotty problems and dilemmas with which many scholars grapple in studying health professions education. In this article, the authors address the question of whether one should conduct a literature review or knowledge synthesis, considering the why, when, and how, as well as its potential pitfalls. The goal is to guide supervisors and students who are considering whether to embark on a literature review in education research.

Two junior colleagues come to you to ask your advice about carrying out a literature review on a particular topic. "Should they?" immediately pops into your mind, followed closely by: if yes, what kind of literature review is appropriate? Our experience is that colleagues often suggest a literature review to "kick start" their research (in fact, some academic programs require one as part of degree requirements) without a full understanding of the work involved, the different types of literature review, and what type of literature review might be most suitable for their research question. In this Questions and Quandaries, we address the question of literature reviews in education research, considering the why, when, and how, as well as potential pitfalls.

First, what is meant by literature review? The term literature review has been used to refer to both a review of the literature and a knowledge synthesis (Maggio et al., 2018 ; Siddaway et al., 2019 ). For our purposes, we employ the term as commonly used to refer to a knowledge synthesis , which is a formal comprehensive review of the existing body of literature on a topic. It is a research approach that critically integrates and synthesizes available evidence from multiple studies to provide insight and allow the drawing of conclusions. It is an example of Boyer’s scholarship of integration (Boyer, 1990 ). In contrast, a review of the literature is a relatively casual and expedient method for attaining a general overview of the state of knowledge on a given topic to make the argument that a new study is needed. In this interpretation, a literature review serves as a key starting point for anyone conducting research by identifying gaps in the literature, informing the study question, and situating one’s study in the field.

Whether a formal knowledge synthesis should be done depends on whether a review is needed and what the rationale is for the review. The first question to consider is whether a literature review already exists. If no, is there enough literature published on the topic to warrant a review? If yes, does the previous review need updating? How long has it been since the last review, and has the literature expanded so much, or are there important new studies to integrate, that an updated review is justified? Or were there flaws in the previous review that one intends to address with a new review? Or does one intend to address a different question than the focus of the previous review?

If the knowledge synthesis is to be done, it should be driven by a research question. What is the research question? Can it be answered by a review? What is the purpose of the synthesis? There are two main purposes for knowledge synthesis: knowledge support and decision support. Knowledge support summarizes the evidence, while decision support takes additional analytical steps to allow for decision-making in particular contexts (Mays et al., 2005 ).

If the purpose is to provide knowledge support, then the question is how or what will the knowledge synthesis add to the literature? Will it establish the state of knowledge in an area, identify gaps in the literature/knowledge base, and/or map opportunities for future research? Cornett et al. performed a scoping review of the literature on professional identity, focusing on how professional identity is described, why the studies were done, and what constructs of identity were used. Their findings advanced understanding of the state of knowledge by indicating that professional identity studies were driven primarily by the desire to examine the impact of political, social and healthcare reforms and advances, and that the various constructs of professional identity across the literature could be categorized into five themes (Cornett et al., 2023 ).

If, on the other hand, the purpose of the knowledge synthesis is to provide decision support, for whom will the synthesis be relevant and how will it improve practice? Will the synthesis result in tools such as guidelines or recommendations for practitioners and policymakers? An example of a knowledge synthesis for decision support is a systematic review conducted by Spencer and colleagues to examine the validity evidence for use of the Ottawa Surgical Competency Operating Room Evaluation (OSCORE) assessment tool. The authors summarized their findings with recommendations for educational practice– namely supporting the use of the OSCORE for in-the-moment entrustment decisions by frontline supervisors in surgical fields but cautioning about the limited evidence for support of its use in summative promotions decisions or non-surgical contexts (Spencer et al., 2022 ).

If a knowledge synthesis is indeed appropriate, its methodology should be informed by its research question and purpose. We do not have the space to discuss the various types of knowledge synthesis except to say that several types have been described in the literature. The five most common types in health professions education are narrative reviews, systematic reviews, umbrella reviews (meta-syntheses), scoping reviews, and realist reviews (Maggio et al., 2018 ). These represent different epistemologies, serve different review purposes, use different methods, and result in different review outcomes (Gordon, 2016 ).

Each type of review lends itself best to answering a certain type of research question. For instance, narrative reviews generally describe what is known about a topic without necessarily answering a specific empirical question (Maggio et al., 2018 ). A recent example of a narrative review focused on schoolwide wellbeing programs, describing what is known about the key characteristics and mediating factors that influence student support and identifying critical tensions around confidentiality that could make or break programs (Tan et al., 2023 ). Umbrella reviews, on the other hand, synthesize evidence from multiple reviews or meta-analyses and can illuminate agreement, inconsistencies, or evolution of evidence on a topic. For example, an umbrella review on problem-based learning highlighted the shift in research focus over time from "does it work?", to "how does it work?", to "how does it work in different contexts?", and pointed to directions for new research (Hung et al., 2019 ).

Practical questions for those considering a literature review include whether one has the time required and an appropriate team to conduct a high-quality knowledge synthesis. Regardless of the type of knowledge synthesis and use of quantitative or qualitative methods, all require rigorous and clear methods that allow for reproducibility. This can take time, up to 12–18 months. A high-quality knowledge synthesis also requires a team whose members have expertise not only in the content matter, but also in knowledge synthesis methodology and in literature searches (i.e. a librarian). A team with multiple reviewers with a variety of perspectives can also help manage the volume of large reviews, minimize potential biases, and strengthen the critical analysis.

Finally, a pitfall one should be careful to avoid is merely summarizing everything in the literature without critical evaluation and integration of the information. A knowledge synthesis that merely bean counts or presents a collection of unconnected information that has not been reflected upon or critically analyzed does not truly advance knowledge or decision-making. Rather, it leads us back to our original question of whether it should have been done in the first place.

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate (pp. 18–21). Princeton University Press.

Cornett, M., Palermo, C., & Ash, S. (2023). Professional identity research in the health professions—a scoping review. Advances in Health Sciences Education , 28 (2), 589–642.

Gordon, M. (2016). Are we talking the same paradigm? Considering methodological choices in health education systematic review. Medical Teacher , 38 (7), 746–750.

Hung, W., Dolmans, D. H. J. M., & van Merrienboer, J. J. G. (2019). A review to identify key perspectives in PBL meta-analyses and reviews: Trends, gaps and future research directions. Advances in Health Sciences Education , 24 , 943–957.

Maggio, L. A., Thomas, A., & Durning, S. J. (2018). Knowledge synthesis. In T. Swanwick, K. Forrest, & B. C. O’Brien (Eds.), Understanding Medical Education: Evidence, theory, and practice (pp. 457–469). Wiley.

Mays, N., Pope, C., & Popay, J. (2005). Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. Journal of Health Services Research & Policy , 10 (1_suppl), 6–20.

Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual Review of Psychology , 70 , 747–770.

Spencer, M., Sherbino, J., & Hatala, R. (2022). Examining the validity argument for the Ottawa Surgical Competency operating room evaluation (OSCORE): A systematic review and narrative synthesis. Advances in Health Sciences Education , 27 , 659–689.

Tan, E., Frambach, J., Driessen, E., & Cleland, J. (2023). Opening the black box of school-wide student wellbeing programmes: A critical narrative review informed by activity theory. Advances in Health Sciences Education . https://doi.org/10.1007/s10459-023-10261-8 . Epub ahead of print 02 July 2023.

Download references

Author information

Authors and Affiliations

Georgetown University School of Medicine, Washington, DC, USA

H. Carrie Chen

Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Canada

Ayelet Kuper

Division of General Internal Medicine, Sunnybrook Health Sciences Center, Toronto, Canada

Department of Medicine, University of Toronto, Toronto, Canada

Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Singapore, Singapore

Jennifer Cleland

University of California San Francisco School of Medicine, San Francisco, CA, USA

Patricia O’Sullivan

Corresponding author

Correspondence to H. Carrie Chen .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article

Chen, H.C., Kuper, A., Cleland, J. et al. Should I do a synthesis (i.e. literature review)? Adv in Health Sci Educ (2024). https://doi.org/10.1007/s10459-024-10335-1

Download citation

Published : 18 April 2024

DOI : https://doi.org/10.1007/s10459-024-10335-1



Systematic Reviews & Literature Reviews

Evidence Synthesis: Part 1

This blog post is the first in a series exploring Evidence Synthesis . We're going to start by looking at two types of evidence synthesis: literature reviews and systematic reviews . To help me with this topic I looked at a number of research guides from other institutions, e.g., Cornell University Libraries.

The Key Differences Between a Literature Review and a Systematic Review

Overall, while both literature reviews and systematic reviews involve reviewing existing research literature, systematic reviews adhere to more rigorous and transparent methods to minimize bias and provide robust evidence to inform decision-making in education and other fields. If you are interested in learning about other types of evidence synthesis, this decision tree created by Cornell Libraries (Robinson, n.d.) is a nice visual introduction.

Along with exploring evidence synthesis, I am also interested in generative A.I. I want to be transparent about how I used A.I. to create the table above. I fed this prompt into ChatGPT:

"List the differences between a literature review and a systemic review for a graduate student of education"

I wanted to see what it would produce. I reformatted the list into a table so that it would be easier to compare and contrast these two reviews, much like the one created by Cornell University Libraries (Kibbee, 2024). I think ChatGPT did a pretty good job. I did have to do quite a bit of editing and make sure that what was created matched what I already knew. There are things ChatGPT left out, for example time frames and how many people are needed for a systematic review, but we can revisit that in a later post.

Kibbee, M. (2024, April 10). Libguides: A guide to evidence synthesis: Cornell University Library Evidence Synthesis Service. Cornell University Library. https://guides.library.cornell.edu/evidence-synthesis/intro


Prevalence of Mental Health Disorders Among Individuals Experiencing Homelessness: A Systematic Review and Meta-Analysis

  • 1 Department of Psychiatry, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
  • 2 Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
  • 3 Faculty of Social Work, University of Calgary, Calgary, Alberta, Canada
  • 4 Mathison Centre for Mental Health Research and Education, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
  • 5 Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
  • 6 Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
  • 7 Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada

Question   What is the prevalence of mental health disorders among people experiencing homelessness?

Findings   In this systematic review and meta-analysis, the prevalence of current and lifetime mental health disorders among people experiencing homelessness was high, with male individuals exhibiting a significantly higher lifetime prevalence of any mental health disorder compared to female individuals.

Meaning   These findings demonstrate that most people experiencing homelessness have mental health disorders, with current and lifetime prevalence generally much greater than that observed in general community samples.

Importance   Several factors may place people with mental health disorders, including substance use disorders, at increased risk of experiencing homelessness, and experiencing homelessness may also increase the risk of developing mental health disorders. Meta-analyses examining the prevalence of mental health disorders among people experiencing homelessness globally are lacking.

Objective   To determine the current and lifetime prevalence of mental health disorders among people experiencing homelessness and identify associated factors.

Data Sources   A systematic search of electronic databases (PubMed, MEDLINE, PsycInfo, Embase, Cochrane, CINAHL, and AMED) was conducted from inception to May 1, 2021.

Study Selection   Studies investigating the prevalence of mental health disorders among people experiencing homelessness aged 18 years and older were included.

Data Extraction and Synthesis   Data extraction was completed using standardized forms in Covidence. All extracted data were reviewed for accuracy by consensus between 2 independent reviewers. Random-effects meta-analysis was used to estimate the prevalence (with 95% CIs) of mental health disorders in people experiencing homelessness. Subgroup analyses were performed by sex, study year, age group, region, risk of bias, and measurement method. Meta-regression was conducted to examine the association between mental health disorders and age, risk of bias, and study year.
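As a concrete illustration of the pooling step, the sketch below implements a minimal random-effects meta-analysis of proportions, assuming logit-transformed prevalences and the DerSimonian-Laird between-study variance estimator (a common default; the summary above does not state which estimator or transformation the authors used). The study counts are made up for illustration.

```python
import math

def logit_prevalence(events, n):
    """Logit-transformed prevalence and its approximate (delta-method) variance."""
    p = events / n
    y = math.log(p / (1 - p))
    v = 1.0 / events + 1.0 / (n - events)
    return y, v

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird estimator."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical per-study counts: (participants with any disorder, sample size)
studies = [(130, 200), (85, 100), (240, 400), (60, 75)]
ys, vs = zip(*(logit_prevalence(e, n) for e, n in studies))
pooled, se, tau2 = dersimonian_laird(ys, vs)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled prevalence {inv_logit(pooled):.2f} "
      f"(95% CI {inv_logit(lo):.2f}-{inv_logit(hi):.2f}), tau^2 = {tau2:.3f}")
```

The logit transformation keeps the pooled estimate and confidence limits inside (0, 1); the back-transform with `inv_logit` is applied only for reporting.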

Main Outcomes and Measures   Current and lifetime prevalence of mental health disorders among people experiencing homelessness.

Results   A total of 7729 citations were retrieved, with 291 undergoing full-text review and 85 included in the final review (N = 48 414 participants, 11 154 [23%] female and 37 260 [77%] male). The current prevalence of mental health disorders among people experiencing homelessness was 67% (95% CI, 55-77), and the lifetime prevalence was 77% (95% CI, 61-88). Male individuals exhibited a significantly higher lifetime prevalence of mental health disorders (86%; 95% CI, 74-92) compared to female individuals (69%; 95% CI, 48-84). The prevalence of several specific disorders was estimated, including any substance use disorder (44%), antisocial personality disorder (26%), major depression (19%), schizophrenia (7%), and bipolar disorder (8%).
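A quick consistency check on the headline figure: if pooling was done on the logit scale (a common approach for prevalence meta-analysis; the scale is an assumption here, not stated in this summary), the reported 95% CI implies a logit-scale standard error, and its midpoint should back-transform to roughly the reported point estimate.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_from_ci(lo, hi, z=1.96):
    """Approximate logit-scale standard error recovered from a prevalence CI."""
    return (logit(hi) - logit(lo)) / (2 * z)

# Reported pooled current prevalence: 67% (95% CI, 55-77)
se = se_from_ci(0.55, 0.77)
center = (logit(0.55) + logit(0.77)) / 2  # midpoint on the logit scale
print(f"logit-scale SE ~= {se:.3f}, "
      f"implied point estimate ~= {inv_logit(center):.2f}")
# The implied point estimate, about 0.67, matches the reported 67%.
```

Recovering standard errors this way is handy when re-using published pooled estimates in an umbrella review.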

Conclusions and Relevance   The findings demonstrate that most people experiencing homelessness have mental health disorders, with higher prevalences than those observed in general community samples. Specific interventions are needed to support the mental health needs of this population, including close coordination of mental health, social, and housing services and policies to support people experiencing homelessness with mental disorders.

Barry R , Anderson J , Tran L, et al. Prevalence of Mental Health Disorders Among Individuals Experiencing Homelessness : A Systematic Review and Meta-Analysis . JAMA Psychiatry. Published online April 17, 2024. doi:10.1001/jamapsychiatry.2024.0426

  • Int J Prev Med

How to Write a Systematic Review: A Narrative Review

Ali Hasanpour Dehkordi

Social Determinants of Health Research Center, Shahrekord University of Medical Sciences, Shahrekord, Iran

Elaheh Mazaheri

1 Health Information Technology Research Center, Student Research Committee, Department of Medical Library and Information Sciences, School of Management and Medical Information Sciences, Isfahan University of Medical Sciences, Isfahan, Iran

Hanan A. Ibrahim

2 Department of International Relations, College of Law, Bayan University, Erbil, Kurdistan, Iraq

Sahar Dalvand

3 MSc in Biostatistics, Health Promotion Research Center, Iran University of Medical Sciences, Tehran, Iran

Reza Ghanei Gheshlagh

4 Spiritual Health Research Center, Research Institute for Health Development, Kurdistan University of Medical Sciences, Sanandaj, Iran

In recent years, the number of published systematic reviews, both worldwide and in Iran, has been increasing. These studies are an important resource for answering evidence-based clinical questions and assist health policy-makers and students who want to identify evidence gaps in published research. Systematic review studies, with or without meta-analysis, synthesize all available evidence from studies focused on the same research question. In this study, the steps of a systematic review, such as designing the research question, searching for qualified published studies, extracting and synthesizing the information that pertains to the research question, and interpreting the results, are presented in detail. This will be helpful to all interested researchers.

A systematic review, as its name suggests, is a systematic way of collecting, evaluating, integrating, and presenting findings from several studies on a specific question or topic.[ 1 ] A systematic review is a form of research that answers a defined research question by identifying and combining evidence from an assessment of all relevant studies.[ 2 , 3 ] Among the most important reasons for conducting systematic reviews are to identify, assess, and interpret available research; to distinguish effective from ineffective health-care interventions; to provide integrated evidence to support decision-making; and to identify gaps between studies.[ 4 ]

Review studies critically appraise the latest scientific information about a particular topic, and the terms review, systematic review, and meta-analysis are often used interchangeably. A systematic review can be conducted in one of two ways, quantitative (meta-analysis) or qualitative. In a meta-analysis, the results of two or more studies evaluating, say, a health intervention are combined statistically to measure the effect of treatment, whereas in the qualitative approach the findings of the studies are synthesized without using statistical methods.[ 5 ]

Since 1999, various guidelines, including QUOROM, MOOSE, STROBE, CONSORT, and QUADAS, have been introduced for reporting meta-analyses, but recently the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement has gained widespread popularity.[ 6 , 7 , 8 , 9 ] The systematic review process based on the PRISMA statement includes four steps: formulating the research questions and defining the eligibility criteria, identifying all relevant studies, extracting and synthesizing the data, and deducing and presenting the results (answers to the research questions).[ 2 ]

Systematic Review Protocol

Systematic reviews start with a protocol. The protocol is the researcher's road map, outlining the goals, methodology, and outcomes of the research. Many journals advise authors to use the PRISMA statement to write the protocol.[ 10 ] The PRISMA checklist includes 27 items related to the content of a systematic review and meta-analysis, covering the abstract, methods, results, discussion, and funding.[ 11 ] PRISMA helps authors improve their systematic review and meta-analysis reports. Reviewers and editors of medical journals acknowledge that while PRISMA may not be used as a tool to assess methodological quality, it does help them publish a better article [ Figure 1 ].[ 12 ]

Figure 1. Screening process and article selection according to the PRISMA guidelines

The main step in designing the protocol is to define the main objectives of the study and provide some background information. Before starting a systematic review, it is important to verify that your study is not a duplicate; therefore, it is necessary to check PROSPERO and the Cochrane Database of Systematic Reviews for published research. It is often useful to search four sources: systematic reviews that have already been published (PubMed, Web of Science, Scopus, Cochrane), published systematic review protocols (PubMed, Web of Science, Scopus, Cochrane), systematic review protocols that have been registered but not yet published (PROSPERO, Cochrane), and related published articles (PubMed, Web of Science, Scopus, Cochrane). The goal is to reduce duplicate research and keep systematic reviews up to date.[ 13 ]

Research questions

Writing a research question is the first step in a systematic review and summarizes the main goal of the study.[ 14 ] The research question determines which types of studies should be included in the analysis (quantitative, qualitative, mixed methods, review overviews, or other studies). Sometimes a research question may be broken down into several more detailed questions.[ 15 ] A vague question (such as: is walking helpful?) prevents the researcher from focusing on the collected studies or analyzing them appropriately.[ 16 ] On the other hand, if the research question is rigid and restrictive (e.g., is walking for 43 min 3 times a week better than walking for 38 min 4 times a week?), there may not be enough studies in this area to answer it, and the generalizability of the findings to other populations will be reduced.[ 16 , 17 ] A good systematic review question should include the components of the PICOS framework: population (P), intervention (I), comparison (C), outcome (O), and setting (S).[ 18 ] Depending on the purpose of the study, C may instead denote the control group in clinical trials or pre-post studies.[ 19 ]
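
As a concrete illustration, a research question such as "does brisk walking lower blood pressure in adults with mild hypertension?" can be decomposed into the five PICOS components. The sketch below uses a small Python mapping; every field value is invented for illustration:

```python
# A hypothetical research question decomposed into the five PICOS
# components (all values below are invented for illustration):
question = {
    "population":   "adults with mild hypertension",
    "intervention": "brisk walking, 30 min, 3 times a week",
    "comparison":   "usual daily activity",
    "outcome":      "systolic blood pressure at 12 weeks",
    "setting":      "community",
}

# Print each component with its PICOS initial:
for component, value in question.items():
    print(f"{component[0].upper()}: {value}")
```

Writing the question down in this explicit form makes it easy to check that no component has been left unspecified before the search begins.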

Search and identify eligible texts

After clarifying the research question and before searching the databases, it is necessary to specify the methods for searching, screening articles, checking study eligibility, checking the references of eligible studies, extracting data, and analyzing data. This helps researchers ensure that potential biases in the selection of studies are minimized.[ 14 , 17 ] The protocol should also specify which published and unpublished literature will be searched, how and by what mechanism the searches will be run, and what the inclusion and exclusion criteria are.[ 4 ] First, all studies are searched and collected according to predefined keywords; then the title, abstract, and full text are screened for relevance by the authors.[ 13 ] By screening articles based on their titles, researchers can quickly decide whether to retain or remove an article. If more information is needed, the abstracts of the articles are also reviewed. In the next step, the full text of the articles is reviewed to identify the relevant ones, and the reason for the removal of excluded articles is reported.[ 20 ] Finally, it is recommended that the process of searching, selecting, and screening articles be reported as a flowchart.[ 21 ] As the volume of published research grows, finding up-to-date and relevant information becomes more difficult.[ 22 ]

Currently, there is no specific guideline as to which databases should be searched, which database is best, or how many should be searched; overall, it is advisable to search broadly. Because no single database covers all health topics, it is recommended to search several databases.[ 23 ] According to A MeaSurement Tool to Assess Systematic Reviews (AMSTAR), at least two databases should be searched in a systematic review and meta-analysis, although more comprehensive and accurate results can be obtained by increasing the number of databases searched.[ 24 ] The type of database to be searched depends on the systematic review question. For example, for a clinical trial study, it is recommended to search Cochrane, multi-regional clinical trials (mRCTs), and the International Clinical Trials Registry Platform.[ 25 ]

For example, MEDLINE, a product of the National Library of Medicine in the United States of America, focuses on peer-reviewed articles in biomedical and health issues, while Embase covers the broad field of pharmacology and conference abstracts. CINAHL is a great resource for nursing and health research, and PsycINFO is a great database for psychology, psychiatry, counseling, addiction, and behavioral problems. National and regional databases can also be used to find related articles.[ 26 , 27 ] In addition, searching conference proceedings and gray literature helps to address the file-drawer problem (negative studies that may not have been published).[ 26 ] If a systematic review is carried out on articles in a particular country or region, the databases of that region or country should also be investigated. For example, Iranian researchers can use national databases such as the Scientific Information Database and MagIran. A comprehensive search that identifies the maximum number of existing studies minimizes selection bias. In the search process, the available databases should be used as fully as possible, even though many databases overlap.[ 17 ] Searching 12 databases (PubMed, Scopus, Web of Science, EMBASE, GHL, VHL, Cochrane, Google Scholar, ClinicalTrials.gov, mRCTs, POPLINE, and SIGLE) covers all articles published in the field of medicine and health.[ 25 ] Some have suggested using reference management software for easier identification and removal of duplicate articles retrieved from several different databases.[ 20 ] At least one full search strategy should be presented in the article.[ 21 ]
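
The duplicate-removal step that reference managers automate can be sketched in a few lines of Python. This is a simplified heuristic (matching on a normalized title) and not the exact logic of any particular reference manager; the records below are invented:

```python
def normalize(title):
    """Lowercase and strip punctuation/whitespace so trivially different
    renderings of the same title compare equal."""
    return "".join(ch.lower() for ch in title if ch.isalnum())

def deduplicate(records):
    """Keep the first record seen for each distinct normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Invented records, as if exported from two databases:
records = [
    {"title": "Walking and health", "source": "PubMed"},
    {"title": "Walking and Health.", "source": "Scopus"},  # duplicate
    {"title": "Cycling and health", "source": "Scopus"},
]
print(len(deduplicate(records)))  # 2
```

Real tools also match on DOI, authors, and publication year, which is more robust than title matching alone.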

Quality assessment

The methodological quality assessment of articles is a key step in a systematic review that helps identify systematic errors (bias) in results and interpretations. In systematic review studies, unlike other review studies, quality assessment or risk-of-bias assessment is required. There are currently several tools available to assess the quality of articles. The overall score of these tools may not provide sufficient information on the strengths and weaknesses of the studies.[ 28 ] At least two reviewers should independently evaluate the quality of the articles; any disagreement should be resolved through discussion or by consulting a third reviewer. Some believe that quality assessment should be done blinded, with the journal name, title, authors, and institutions removed.[ 29 ]

There are several tools for quality assessment, such as Sack's quality assessment (1988),[ 30 ] the Overview Quality Assessment Questionnaire (1991),[ 31 ] CASP (Critical Appraisal Skills Programme),[ 32 , 34 ] AMSTAR (2007),[ 33 ] the National Institute for Health and Care Excellence checklists,[ 35 ] and the Joanna Briggs Institute System for the Unified Management, Assessment and Review of Information checklists.[ 30 , 36 ] However, it is worth mentioning that there is no single tool for assessing the quality of all types of studies; each is more applicable to some types than others. Often, the STROBE tool is used to check the quality of observational articles. It reviews the title and abstract (item 1), introduction (items 2 and 3), methods (items 4-12), findings (items 13-17), discussion (items 18-21), and funding (item 22). Eighteen items apply to all articles, while four items (6, 12, 14, and 15) apply only in certain situations.[ 9 ] The quality of interventional articles is often evaluated with the JADAD tool, which consists of three sections: randomization (2 points), blinding (2 points), and accounting for all patients (1 point).[ 29 ]

Data extraction

At this stage, the researchers extract the necessary information from the selected articles. Elamin believes that reviewing titles and abstracts and extracting data are key steps in the review process, often carried out independently by two members of the research team whose results are then compared.[ 37 ] This step aims to prevent selection bias, and it is recommended that the chance-corrected agreement between the two researchers (kappa coefficient) be reported at the end.[ 26 ] Although data collection forms may differ between systematic reviews, they all record information such as first author, year of publication, sample size, target community, region, and outcome. The purpose of data synthesis is to collect the findings of eligible studies, evaluate the strength of those findings, and summarize the results. In data synthesis, different analysis frameworks can be used, such as meta-ethnography, meta-analysis, or thematic synthesis.[ 38 ] Finally, after quality assessment, data analysis is conducted. The first step in this section is to provide a descriptive evaluation of each study and present the findings in tabular form. Reviewing this table can determine how to combine and analyze the various studies.[ 28 ] The data synthesis approach depends on the nature of the research question and the nature of the primary research studies.[ 39 ] After assessing bias and summarizing the data, it is decided whether the synthesis will be carried out quantitatively or qualitatively. In case of conceptual heterogeneity (systematic differences in study design, population, and interventions), the generalizability of the findings is reduced and a meta-analysis is not appropriate. A meta-analysis allows the estimation of the effect size, which is reported as an odds ratio, relative risk, hazard ratio, prevalence, correlation, sensitivity, specificity, or incidence with a confidence interval.[ 26 ]

Estimation of the effect size in systematic review and meta-analysis studies varies according to the type of studies entered into the analysis. Unlike the mean, prevalence, or incidence, the odds ratio, relative risk, and hazard ratio must be combined on the logarithmic scale, using the logarithm of each statistic and its standard error [ Table 1 ].
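
For instance, the odds ratio from a 2x2 table with cells a, b, c, d is ad/bc, and its log has the standard large-sample standard error sqrt(1/a + 1/b + 1/c + 1/d). A minimal Python sketch with invented counts:

```python
import math

def log_odds_ratio(a, b, c, d):
    """a, b = events/non-events in the exposed group; c, d = events/
    non-events in the control group.  Returns ln(OR) and its
    large-sample standard error."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

# Invented 2x2 table: 20/80 events/non-events in the exposed group,
# 10/90 in the controls.
log_or, se = log_odds_ratio(20, 80, 10, 90)
ci_low = math.exp(log_or - 1.96 * se)    # back-transform CI limits
ci_high = math.exp(log_or + 1.96 * se)
print(round(math.exp(log_or), 2))           # OR = 2.25
print(round(ci_low, 2), round(ci_high, 2))  # 95% CI: 0.99 5.09
```

The log statistic and its standard error are what enter the pooling step; the pooled result is then back-transformed with exp() for reporting.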

Table 1. Effect size in systematic review and meta-analysis

OR = odds ratio; RR = relative risk; RCT = randomized controlled trial; PPV = positive predictive value; NPV = negative predictive value; PLR = positive likelihood ratio; NLR = negative likelihood ratio; DOR = diagnostic odds ratio

Interpreting and presenting results (answers to research questions)

A systematic review ends with the interpretation of results. At this stage, the results of the study are summarized and conclusions are presented to improve clinical and therapeutic decision-making. A systematic review, with or without meta-analysis, provides the best available evidence in the hierarchy of evidence-based practice.[ 14 ] Using meta-analysis can provide explicit conclusions. Conceptually, meta-analysis is used to combine the results of two or more studies that address a similar intervention and similar outcomes. In a meta-analysis, instead of a simple average of the results of the various studies, a weighted average is reported, meaning that studies with larger sample sizes carry more weight. To combine the results of various studies, two models can be used: fixed effects and random effects. In the fixed-effect model, it is assumed that the parameter studied is constant across all studies; in the random-effects model, the parameter is assumed to vary across studies, with each study estimating its own value. The random-effects model offers a more conservative estimate.[ 40 ]
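
The fixed-effect weighted average described above is an inverse-variance average: each study is weighted by the reciprocal of its squared standard error, so more precise (typically larger) studies count more. A minimal sketch with invented study estimates:

```python
def fixed_effect(estimates, ses):
    """Inverse-variance (fixed-effect) pooling: weight each study by
    1/SE^2 and return the pooled estimate and its standard error."""
    weights = [1 / s ** 2 for s in ses]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Invented log odds ratios and standard errors from three studies:
pooled, pooled_se = fixed_effect([0.5, 0.8, 0.3], [0.20, 0.25, 0.40])
print(round(pooled, 3), round(pooled_se, 3))  # 0.575 0.145
```

A random-effects model follows the same pattern but inflates each study's variance by an estimate of the between-study variance (tau-squared), which widens the pooled confidence interval.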

Three approaches can be used to assess heterogeneity: (1) the forest plot, (2) Cochran's Q test (chi-squared), and (3) the Higgins I² statistic. In the forest plot, more overlap between confidence intervals indicates more homogeneity. For the Q statistic, a P value of less than 0.1 indicates that heterogeneity exists and a random-effects model should be used.[ 41 ] The I² index takes values between 0% and 100%; values below 25%, between 25% and 75%, and above 75% indicate low, moderate, and high levels of heterogeneity, respectively.[ 26 , 42 ] The results of a meta-analysis are presented graphically using a forest plot, which shows each study's estimate and 95% confidence interval along with its statistical weight.[ 40 ]
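
Cochran's Q and Higgins I² can be computed directly from the study estimates and their standard errors. A minimal sketch with invented inputs; Q is the weighted sum of squared deviations from the pooled estimate, and I² rescales Q against its degrees of freedom:

```python
def heterogeneity(estimates, ses):
    """Cochran's Q and the Higgins I^2 statistic, using fixed-effect
    (inverse-variance) weights."""
    weights = [1 / s ** 2 for s in ses]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Invented study estimates and standard errors:
q, i2 = heterogeneity([0.5, 0.8, 0.3], [0.20, 0.25, 0.40])
print(round(q, 2), i2)  # Q = 1.42, I2 = 0.0 (low heterogeneity here)
```

Because Q is compared against df = k - 1, I² is truncated at zero when the observed spread is no larger than chance alone would produce, as in this toy example.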

The importance of meta-analyses and systematic reviews in providing evidence useful for clinical and policy decisions is ever-increasing. Nevertheless, they are prone to publication bias, which occurs when positive or significant results are preferred for publication.[ 43 ] Song maintains that studies reporting results in a certain direction, or strong associations, may be more likely to be published than those that do not.[ 44 ] In addition, when searching for meta-analyses, gray literature (e.g., dissertations, conference abstracts, or book chapters) and unpublished studies may be missed. Moreover, meta-analyses based only on published studies may exaggerate estimates of effect sizes; as a result, patients may be exposed to harmful or ineffective treatments.[ 44 , 45 ] However, there are tests that can help detect when expected negative results are missing from a review because of publication bias.[ 46 ] Publication bias can also be reduced by searching for unpublished data.
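
One widely used such test is Egger's regression, which regresses the standardized effect (effect/SE) on precision (1/SE); an intercept far from zero suggests funnel-plot asymmetry consistent with publication bias. The sketch below is a simplified ordinary-least-squares version on invented data and omits the significance test a full implementation would include:

```python
def egger_intercept(estimates, ses):
    """Regress the standardized effect (y/SE) on precision (1/SE) by
    ordinary least squares and return the intercept.  An intercept far
    from zero suggests funnel-plot asymmetry."""
    xs = [1 / s for s in ses]
    ys = [y / s for y, s in zip(estimates, ses)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

# Invented effect estimates and standard errors from four studies:
b0 = egger_intercept([0.5, 0.8, 0.3, 0.9], [0.1, 0.2, 0.3, 0.4])
print(round(b0, 2))  # 0.73
```

In practice the intercept is reported with a t test, and the result is interpreted cautiously when the number of studies is small.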

Systematic reviews and meta-analyses have certain advantages; some of the most important ones are as follows: examining differences in the findings of different studies, summarizing results from various studies, increased accuracy of estimating effects, increased statistical power, overcoming problems related to small sample sizes, resolving controversies from disagreeing studies, increased generalizability of results, determining the possible need for new studies, overcoming the limitations of narrative reviews, and making new hypotheses for further research.[ 47 , 48 ]

Despite the importance of systematic reviews, authors may face numerous problems in searching, screening, and synthesizing data during this process. A systematic review requires extensive access to databases and journals, which can be costly for nonacademic researchers.[ 13 ] Also, when applying the inclusion and exclusion criteria, reviewers' preconceptions inevitably come into play, and the criteria may be interpreted differently by different reviewers.[ 49 ] Lee refers to some disadvantages of these studies, the most significant of which are as follows: a research field cannot be summarized by one number, publication bias, heterogeneity, combining unrelated things, vulnerability to subjectivity, failing to account for all confounders, comparing variables that are not comparable, focusing only on main effects, and possible inconsistency with the results of randomized trials.[ 47 ] Different types of programs are available to perform meta-analysis. Some of the most commonly used are general statistical packages, including SAS, SPSS, R, and Stata. Using their flexible commands, meta-analyses can be run easily and the results readily plotted. However, these statistical programs are often expensive. An alternative is to use programs designed specifically for meta-analysis, including MetaWin, RevMan, and Comprehensive Meta-Analysis. These programs may have limitations, however: they accept only a few data formats and provide little opportunity to adjust the graphical display of findings. Another alternative is to use Microsoft Excel. Although it is not free software, it is installed on many computers.[ 20 , 50 ]

A systematic review study is a powerful and valuable tool for answering research questions, generating new hypotheses, and identifying areas where there is a lack of tangible knowledge. A systematic review study provides an excellent opportunity for researchers to improve critical assessment and evidence synthesis skills.

Authors' contributions

All authors contributed equally to this work.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

IMAGES

  1. Evidence Summaries/Synthesis

    research synthesis systematic review

  2. The importance of meta-analysis and systematic review: How research

    research synthesis systematic review

  3. What is a Systematic Review

    research synthesis systematic review

  4. [PDF] How to Write a Systematic Review : A Step-by-Step Guide

    research synthesis systematic review

  5. Overview & the Systematic Review Team

    research synthesis systematic review

  6. Before you begin

    research synthesis systematic review

VIDEO

  1. Statistical Procedure in Meta-Essentials

  2. Lecture Designing Organic Syntheses 4 Prof G Dyker 151014

  3. Lecture Designing Organic Syntheses 7 Prof G Dyker 291014

  4. Types of Literature Review| Narrative| Theoretical| Scoping|Systematic|Meta Analysis|Meta Synthesis

  5. Systematic Review _ 05 (Narrative Synthesis & SWiM in Systematic Reviews)

  6. 1 What are systematic reviews?

COMMENTS

  1. What Synthesis Methodology Should I Use? A Review and Analysis of Approaches to Research Synthesis

    The first is a well-developed research question that gives direction to the synthesis (e.g., meta-analysis, systematic review, meta-study, concept analysis, rapid review, realist synthesis). The second begins as a broad general question that evolves and becomes more refined over the course of the synthesis (e.g., meta-ethnography, scoping ...

  2. PDF Checklist for Systematic Reviews and Research Syntheses

    JBI Systematic Reviews The core of evidence synthesis is the systematic review of literature of a particular intervention, condition or issue. The systematic review is essentially an analysis of the available literature (that is, evidence) and a judgment of the effectiveness or otherwise of a practice, involving a series of complex steps.

  3. An overview of methodological approaches in systematic reviews

    1. INTRODUCTION. Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the "gold standard" of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search ...

  4. Research Guides: Systematic Reviews & Evidence Synthesis Methods

    One commonly used form of evidence synthesis is a systematic review. This table compares a traditional literature review with a systematic review. ... Topics may be broad in scope; the goal of the review may be to place one's own research within the existing body of knowledge, or to gather information that supports a particular viewpoint.

  5. Meta-analysis and the science of research synthesis

    Meta-analysis is the quantitative, scientific synthesis of research results. Since the term and modern approaches to research synthesis were first introduced in the 1970s, meta-analysis has had a ...

  6. Systematic reviews: Structure, form and content

    As well as synthesis of these studies' findings, there should be an element of evaluation and quality assessment. ... 2015) - although a systematic review may be an inappropriate or unnecessary research methodology for answering many research questions. Systematic reviews can be inadvisable for a variety of reasons. It may be that the topic ...

  7. Guidance on Conducting a Systematic Literature Review

    A search on EBSCOhost using keywords "review methodology," "literature review," and "research synthesis" returned 653 records of peer-reviewed articles. ... Britten Nicky, Roen Katrina, Duffy Steven. 2006. "Guidance on the Conduct of Narrative Synthesis in Systematic Reviews." A product from the ESRC methods programme Version 1 ...

  8. How to Do a Systematic Review: A Best Practice Guide for ...

    The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information.

  9. Synthesis and systematic maps

    Synthesis is the process of combining the findings of research studies. A synthesis is also the product and output of the combined studies. This output may be a written narrative, a table, or graphical plots, including statistical meta-analysis. ... If a systematic review question is about the effectiveness of an intervention, then the included ...

  10. Systematic Reviews and Meta-Analyses: Synthesis & Discussion

    Qualitative Synthesis in Systematic Reviews and/or Meta-Analyses. Selecting the best approach for synthesis will depend on your scope, included material, field of research, etc.Therefore, it is important to follow methodological guidance that best matches your scope and field (e.g., a heath-focused review guided by the Cochrane Handbook).It can also be helpful to check out the synthesis and ...

  11. Systematic reviews: Structure, form and content

    A systematic review collects secondary data, and is a synthesis of all available, relevant evidence which brings together all existing primary studies for review (Cochrane 2016). A systematic review differs from other types of literature review in several major ways.

  12. Research Guides: Systematic Reviews & Evidence Synthesis Methods

    Systematic Reviews & Evidence Synthesis Methods. A detailed, step-by-step guide to the first several stages of an evidence synthesis review. Email this link: ... Given the time and effort needed to create a systematic review, research questions with the potential to have significant impact are preferred.

  13. Getting Started

    A systematic review is guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduce the risk of bias in identifying, selecting and analyzing relevant studies.

  14. Systematic Review and Evidence Synthesis

    This guide is directly informed by and selectively reuses, with permission, content from: Systematic Reviews, Scoping Reviews, and other Knowledge Syntheses by Genevieve Gore and Jill Boruff, McGill University (CC-BY-NC-SA); A Guide to Evidence Synthesis, Cornell University Library Evidence Synthesis Service; Primary University of Minnesota Libraries authors are: Meghan Lafferty, Scott ...

  15. Overview

    Systematic Review: a comprehensive literature synthesis on a specific research question; typically requires a team. Search: systematic, exhaustive and comprehensive search of all available evidence. Output: narrative and tables describing what is known and unknown, recommendations for future research, and limitations of findings.

  16. Systematic Reviews & Evidence Synthesis Methods

    A systematic review gathers, assesses, and synthesizes all available empirical research on a specific question using a comprehensive search method, with an aim to minimize bias. Or, put another way: a systematic review begins with a specific research question. Authors of the review gather and evaluate all experimental studies that address the question.

  17. Synthesise

    In a qualitative systematic review, data can be presented in a number of different ways. A typical procedure in the health sciences is thematic analysis. As explained by James Thomas and Angela Harden (2008) in an article for BMC Medical Research Methodology: "Thematic synthesis has three stages: the coding of text 'line-by-line'; the development of 'descriptive themes'; and the generation of 'analytical themes'."

  18. Systematic Review

    A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer. Example: Systematic review. In 2008, Dr. Robert Boyle and his colleagues published a systematic review in ...

  19. Qualitative Evidence Synthesis: Where Are We at?

    The term QES is used, and is the preferred term of the Cochrane Qualitative and Implementation Methods Group, because it acknowledges that qualitative research requires its own methods of synthesis that reflect the nature of the qualitative paradigm, rather than simply reusing the methods devised for systematic reviews of quantitative research (Booth et al., 2016).

  20. Methods for the thematic synthesis of qualitative research in

    The systematic review is an important technology for the evidence-informed policy and practice movement, which aims to bring research closer to decision-making [1, 2]. This type of review uses rigorous and explicit methods to bring together the results of primary research in order to provide reliable answers to particular questions [3-6]. The picture that is presented aims to be distorted ...

  21. Systematic Reviews and Meta-analysis: Understanding the Best Evidence

    With a view to addressing this challenge, the systematic review method was developed. Systematic reviews aim to inform and facilitate this process through research synthesis of multiple studies, enabling increased and efficient access to evidence.[1,3,4] Systematic reviews and meta-analyses have become increasingly important in healthcare settings.

  22. Should I do a synthesis (i.e. literature review)?

    In this Questions and Quandaries, we address the question of literature reviews in education research, considering the why, when, and how, as well as potential pitfalls. ... Examining the validity argument for the Ottawa Surgical Competency operating room evaluation (OSCORE): A systematic review and narrative synthesis. Advances in Health ...

  23. Finding and Appraising Existing Systematic Reviews

    A systematic review is a comprehensive literature search and synthesis project that tries to answer a well-defined question using existing primary research as evidence. A protocol is used to plan the systematic review methods prior to the project, including what is and is not included in the search. Systematic reviews are often used as the foundation for a meta analysis (a statistical process ...
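    The snippet above describes meta-analysis as a statistical process built on top of a systematic review. As a minimal sketch of that process, the core of a fixed-effect meta-analysis is an inverse-variance weighted average of per-study effect sizes; the function name and the three example studies below are hypothetical, chosen only to illustrate the arithmetic:

    ```python
    import math

    def fixed_effect_meta(effects, variances):
        """Inverse-variance weighted (fixed-effect) pooled estimate.

        effects: per-study effect sizes (e.g., standardized mean differences);
        variances: their sampling variances.
        Returns the pooled effect and its standard error.
        """
        weights = [1.0 / v for v in variances]           # precise studies count more
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))               # SE of the pooled estimate
        return pooled, se

    # Three hypothetical studies: effect sizes with their sampling variances
    pooled, se = fixed_effect_meta([0.30, 0.50, 0.40], [0.04, 0.02, 0.08])
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)        # approximate 95% CI
    ```

    Real meta-analyses usually also fit a random-effects model and quantify between-study heterogeneity, which this sketch omits.
    
    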

  24. Systematic Reviews & Literature Reviews

    We're going to start by looking at two types of evidence synthesis: literature reviews and systematic reviews. To help me with this topic I looked at a number of research guides from other institutions, e.g., Cornell University Libraries. The Key Differences Between a Literature Review and a Systematic Review

  25. Prevalence of Mental Health Disorders Among Individuals Experiencing

    This systematic review and meta-analysis assesses the prevalence of mental health disorders among people experiencing homelessness. ... 4 Mathison Centre for Mental Health Research and Education, Cumming School of Medicine, University of Calgary, Calgary ... Data Extraction and Synthesis Data extraction was completed using standardized forms in ...

  26. Promising practices for culturally relevant assessment: A systematic review

    Olasunkanmi Kehinde is a Ph.D. candidate in Educational Psychology at Washington State University, with a background in applied mathematics and statistics. His research interests include assessment and measurement, psychometrics, large-scale assessment, cognitive diagnostic models (CDMs), multilevel modeling, systematic reviews/meta-analyses, and structural equation modeling in social, medical ...

  27. Consumer responses toward smart technology: A systematic review

    This article is a comprehensive review of the literature on smart technology in consumer studies from 1996 to 2023. While the paper provides information about the development of the field by identifying important publications and authors, it employs topic modeling to pinpoint key topics in papers published in marketing and business journals.

  28. Consumer responses toward smart technology: A systematic review

    Design/methodology/approach: This study undertakes a framework-based systematic review of 239 articles on AI in marketing from the consumer perspective published in peer-reviewed journals from 2007 ...

  29. Informatics

    Using the Preferred Reporting Items for Systematic Review and Meta-Analysis, the paper reviewed 60 studies conducted since the beginning of the twenty-first century and classified them by different metrics to identify relevant trends and research gaps. ... (PRISMA-P) in data identification, screening, synthesis and analysis. The research ...

  30. How to Write a Systematic Review: A Narrative Review

    In this study, the steps of a systematic review, such as designing and identifying the research question, searching for qualified published studies, extracting and synthesizing the information that pertains to the research question, and interpreting the results, are presented in detail. This will be helpful to all interested researchers.