
Systematic Review | Definition, Examples & Guide

Published on 15 June 2022 by Shaun Turney. Revised on 17 October 2022.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesise all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, in one systematic review of probiotics for eczema (used as a running example throughout this guide), Boyle and colleagues answered the question ‘What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?’

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs meta-analysis
  • Systematic review vs literature review
  • Systematic review vs scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce research bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesise the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesising all available evidence and evaluating the quality of the evidence. Synthesising means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesise the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesise results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarise and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimise bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimise research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinised by others.
  • They’re thorough: they summarise all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?
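To make the template concrete, here is a minimal sketch (in Python, purely illustrative) that assembles a research question from its PICO components; the component values are taken from the eczema example used in this guide.

```python
# Minimal sketch: assembling a PICO research question from its components.
# The values below follow the probiotics-for-eczema example used in this guide.
pico = {
    "P": "patients with eczema",                               # Population or problem
    "I": "probiotics",                                          # Intervention
    "C": "no treatment, placebo, or non-probiotic treatment",   # Comparison
    "O": "reducing eczema symptoms and improving quality of life",  # Outcome
}

question = (
    f"What is the effectiveness of {pico['I']} versus {pico['C']} "
    f"for {pico['O']} in {pico['P']}?"
)
print(question)
```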

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the example eczema review, the question components were:
  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo , or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomised control trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesise the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (a small sketch of combining synonyms with Boolean operators follows this list).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Grey literature: Grey literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of grey literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of grey literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
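As noted above, synonyms can be combined with Boolean operators. The sketch below is a rough illustration only; the terms and groupings are invented and are not a validated search strategy.

```python
# Minimal sketch: building a Boolean search string from groups of synonyms.
# Synonyms within a concept are joined with OR; concepts are joined with AND.
# The terms below are illustrative only, not a validated search strategy.
concept_groups = [
    ["probiotic*", "lactobacillus", "bifidobacterium"],
    ["eczema", "atopic dermatitis"],
    ["randomised controlled trial", "randomized controlled trial"],
]

def build_query(groups):
    """Quote multi-word terms, OR the synonyms, then AND the concept groups."""
    ored = []
    for terms in groups:
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        ored.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(ored)

print(build_query(concept_groups))
# (probiotic* OR lactobacillus OR bifidobacterium) AND (eczema OR "atopic dermatitis") AND ...
```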

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Grey literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
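If you want to quantify inter-rater reliability, Cohen’s kappa is one common measure. The sketch below assumes each reviewer’s decisions are stored as simple include/exclude labels; the data shown are invented.

```python
# Minimal sketch: Cohen's kappa for two reviewers screening the same records.
# Decisions are "include"/"exclude"; the example data are invented.
from collections import Counter

reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "include", "include", "exclude", "exclude"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each reviewer's marginal label frequencies.
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[label] * freq_b[label] for label in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(reviewer_a, reviewer_b), 2))  # 0.67 for this invented data
```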

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarise what you did using a PRISMA flow diagram.
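One lightweight way to keep that record is to log every screening decision with its reason and tally the counts for the PRISMA flow diagram afterwards. The sketch below is a minimal illustration; the records, stages, and exclusion reasons are hypothetical.

```python
# Minimal sketch: logging screening decisions and tallying them for a PRISMA-style summary.
# The records, stages, and exclusion reasons below are hypothetical.
from collections import Counter

decisions = [
    {"id": "rec001", "stage": "title/abstract", "excluded_for": "not an intervention study"},
    {"id": "rec002", "stage": "title/abstract", "excluded_for": None},
    {"id": "rec003", "stage": "full text", "excluded_for": "no relevant outcome reported"},
    {"id": "rec004", "stage": "full text", "excluded_for": None},
]

included = [d["id"] for d in decisions if d["excluded_for"] is None]
reasons = Counter(d["excluded_for"] for d in decisions if d["excluded_for"])

print(f"Included: {len(included)} records")
for reason, count in reasons.items():
    print(f"Excluded ({count}): {reason}")
```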

In the example review, Boyle and colleagues retrieved the full texts of the studies that remained after title and abstract screening. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgement of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group.
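To keep extraction consistent between reviewers, it helps to fix the form’s fields in advance. Below is a minimal, hypothetical sketch of such a form; the field names and example values are illustrative and should follow your protocol rather than this guide.

```python
# Minimal sketch: a structured data extraction form as a dataclass.
# Field names and example values are illustrative; a real form should follow the review protocol.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionRecord:
    study_id: str
    year: int
    design: str                  # e.g., "randomised controlled trial"
    sample_size: int
    context: str                 # setting, country, population details
    findings: str                # results relevant to the review question
    risk_of_bias: str            # reviewer judgement, e.g., "low" / "unclear" / "high"
    authors_contacted: bool = False   # whether missing data were requested
    notes: Optional[str] = None

record = ExtractionRecord(
    study_id="example-study-01", year=2020, design="randomised controlled trial",
    sample_size=100, context="hypothetical outpatient setting",
    findings="placeholder summary of the study's reported results",
    risk_of_bias="unclear",
)
print(record.study_id, record.risk_of_bias)
```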

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example review, the researchers also collected data about possible sources of bias, such as how the study participants were randomised into the control and treatment groups.

Step 6: Synthesise the data

Synthesising the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesising the data:

  • Narrative (qualitative): Summarise the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarise and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
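As a rough illustration of the quantitative approach, one common method (a fixed-effect, inverse-variance meta-analysis) weights each study’s effect estimate by the inverse of its variance and combines them into a summary estimate. The effect sizes and standard errors below are invented.

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of study effect sizes.
# The effect sizes and standard errors below are invented for illustration.
import math

studies = [
    {"effect": -0.20, "se": 0.10},
    {"effect": -0.05, "se": 0.15},
    {"effect": -0.30, "se": 0.12},
]

weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect (normal approximation).
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```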

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analysed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a dissertation, thesis, research paper, or proposal.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

Cite this Scribbr article

Turney, S. (2022, October 17). Systematic Review | Definition, Examples & Guide. Scribbr. Retrieved 22 April 2024, from https://www.scribbr.co.uk/research-methods/systematic-reviews/


Evidence Synthesis and Systematic Reviews

Systematic Reviews, Rapid Reviews, and Scoping Reviews


Definition: A systematic review is a summary of research results (evidence) that uses explicit and reproducible methods to systematically search, critically appraise, and synthesize the evidence on a specific issue. It synthesizes the results of multiple primary studies related to each other by using strategies that reduce biases and errors.

When to use: If you want to identify, appraise, and synthesize all available research that is relevant to a particular question with reproducible search methods.

Limitations: It requires extensive time and a team.

Resources :

  • Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare
  • The 8 stages of a systematic review
  • Determining the scope of the review and the questions it will address
  • Reporting the review

Definition: Rapid reviews are a form of evidence synthesis that may provide more timely information for decision making compared with standard systematic reviews.

When to use: When you want to evaluate new or emerging research topics using some systematic review methods at a faster pace.

Limitations: It is not as rigorous or as thorough as a systematic review and therefore may be more likely to be biased.

  • Cochrane guidance for rapid reviews
  • Steps for conducting a rapid review
  • Expediting systematic reviews: methods and implications of rapid reviews

Definition: Scoping reviews are often used to categorize or group existing literature in a given field in terms of its nature, features, and volume.

When to use: To label a body of literature with relevance to time, location (e.g., country or context), source (e.g., peer-reviewed or grey literature), and origin (e.g., healthcare discipline or academic field). Scoping reviews are also used to clarify working definitions and conceptual boundaries of a topic or field, or to identify gaps in the existing literature/research.

Limitations: There are more citations to screen, and a scoping review takes as long as or longer than a systematic review. Larger teams may be required because of the larger volume of literature, and the screening criteria and process differ from those of a systematic review.

  • PRISMA-ScR for scoping reviews
  • JBI Updated methodological guidance for the conduct of scoping reviews
  • JBI Manual: Scoping Reviews (2020)
  • Equator Network-Current Best Practices for the Conduct of Scoping Reviews
  • URL: https://guides.temple.edu/systematicreviews


Introduction to Systematic Reviews

Reference work entry, first published online 20 July 2022.

Tianjing Li, Ian J. Saldanha, and Karen A. Robinson

A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question. Systematic review methods can be used to answer many types of research questions. The type of question most relevant to trialists is the effects of treatments and is thus the focus of this chapter. We discuss the motivation for and importance of performing systematic reviews and their relevance to trialists. We introduce the key steps in completing a systematic review, including framing the question, searching for and selecting studies, collecting data, assessing risk of bias in included studies, conducting a qualitative synthesis and a quantitative synthesis (i.e., meta-analysis), grading the certainty of evidence, and writing the systematic review report. We also describe how to identify systematic reviews and how to assess their methodological rigor. We discuss the challenges and criticisms of systematic reviews, and how technology and innovations, combined with a closer partnership between trialists and systematic reviewers, can help identify effective and safe evidence-based practices more quickly.




About this entry

Li, T., Saldanha, I.J., Robinson, K.A. (2022). Introduction to Systematic Reviews. In: Piantadosi, S., Meinert, C.L. (eds) Principles and Practice of Clinical Trials. Springer, Cham. https://doi.org/10.1007/978-3-319-52636-2_194


Study protocol article

A protocol for the use of case reports/studies and case series in systematic reviews for clinical toxicology

  • 1 Univ Angers, CHU Angers, Univ Rennes, INSERM, EHESP, Institut de Recherche en Santé, Environnement et Travail-UMR_S 1085, Angers, France
  • 2 Department of Occupational Medicine, Epidemiology and Prevention, Donald and Barbara Zucker School of Medicine, Northwell Health, Feinstein Institutes for Medical Research, Hofstra University, Great Neck, NY, United States
  • 3 Department of Health Sciences, University of California, San Francisco and California State University, Hayward, CA, United States
  • 4 Program on Reproductive Health and the Environment, University of California, San Francisco, San Francisco, CA, United States
  • 5 Cesare Maltoni Cancer Research Center, Ramazzini Institute, Bologna, Italy
  • 6 Department of Research and Public Health, Reims Teaching Hospitals, Robert Debré Hospital, Reims, France
  • 7 CHU Angers, Univ Angers, Poisoning Control Center, Clinical Data Center, Angers, France

Introduction: Systematic reviews are routinely used to synthesize current science and evaluate the evidential strength and quality of resulting recommendations. For specific events, such as rare acute poisonings or preliminary reports of new drugs, we posit that case reports/studies and case series (human subjects research with no control group) may provide important evidence for systematic reviews. Our aim, therefore, is to present a protocol that uses rigorous selection criteria, to distinguish high quality case reports/studies and case series for inclusion in systematic reviews.

Methods: This protocol will adapt the existing Navigation Guide methodology for specific inclusion of case studies. The usual procedure for systematic reviews will be followed. Case reports/studies and case series will be specified in the search strategy and included in separate sections. Data from these sources will be extracted and where possible, quantitatively synthesized. Criteria for integrating cases reports/studies and case series into the overall body of evidence are that these studies will need to be well-documented, scientifically rigorous, and follow ethical practices. The instructions and standards for evaluating risk of bias will be based on the Navigation Guide. The risk of bias, quality of evidence and the strength of recommendations will be assessed by two independent review teams that are blinded to each other.

Conclusion: This is a protocol specified for systematic reviews that use case reports/studies and case series to evaluate the quality of evidence and strength of recommendations in disciplines like clinical toxicology, where case reports/studies are the norm.

Introduction

Systematic reviews are routinely relied upon to qualitatively synthesize current knowledge in a subject area. These reviews are often paired with a meta-analysis for quantitative syntheses. These qualitative and quantitative summaries of pooled data, collectively evaluate the quality of the evidence and the strength of the resulting research recommendations.

There currently exist several guidance documents to instruct on the rigors of systematic review methodology: (i) the Cochrane Collaboration, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and PRISMA-P (for protocols) that offer directives on data synthesis; and (ii) the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) guidelines that establish rules for the development of scientific recommendations ( 1 – 5 ). This systematic review guidance is based predominantly on clinical studies, where randomized controlled trials (RCTs) are the gold standard. For that reason, a separate group of researchers has designed the Navigation Guide, specific to environmental health studies that are often observational ( 6 , 7 ). To date, systematic review guidelines (GRADE, PRISMA, PRISMA-P, and Navigation Guide) remove case reports/studies and case series (human subjects research with no control group) from consideration in systematic reviews, in part due to the challenges in evaluating the internal validity of these kinds of study designs. We hypothesize, however, that under certain circumstances, such as in rare acute poisonings, or preliminary reports of new drugs, some case reports and case series may contribute relevant knowledge that would be informative to systematic review recommendations. This is particularly important in clinical settings, where such evidence could potentially change our understanding of the screening, presentation, and potential treatment of rare conditions, such as poisoning from obscure toxins. The Cochrane Collaboration handbook states that “ for some rare or delayed adverse outcomes only case series or case-control studies may be available. Non-randomized studies of interventions with some study design features that are more susceptible to bias may be acceptable for evaluation of serious adverse events in the absence of better evidence, but the risk of bias must still be assessed and reported ” ( 8 ). In addition, the Cochrane Adverse Effects group has shown that case studies may be the best settings in which to observe adverse effects, especially when they are rare and acute ( 9 ). We believe that there may be an effective way to consider case reports/studies and case series in systematic reviews, specifically by developing specific criteria for their inclusion and accounting for their inherent bias.

We propose here a systematic review protocol that has been specifically developed to consider the inclusion and integration of case reports/studies and case series. Our main objective is to create a protocol that is an adaptation of the Navigation Guide ( 6 , 10 ) that presents methodology to examine high quality case reports/studies and case series through cogent inclusion and exclusion criteria. This methodology is in concordance with the Cochrane Methods for Adverse Effects for scoping reviews ( 11 ).

This protocol was prepared in accordance with the usual structured methodology for systematic reviews (PRISMA, PRISMA-P, and Navigation guide) ( 3 – 7 , 10 ). The protocol will be registered on an appropriate website, such as one of the following:

(i) The International Prospective Register of Systematic Reviews (PROSPERO) database ( https://www.crd.york.ac.uk/PROSPERO/ ) is an international database of prospectively registered systematic reviews in health and social welfare, public health, education, crime, justice, and international development, where there is a health-related outcome. It aims to provide a comprehensive listing of systematic reviews registered at inception to help avoid duplication and reduce opportunity for reporting bias by enabling comparison of the completed review with what was planned in the protocol. PROSPERO accepts registrations for systematic reviews, rapid reviews, and umbrella reviews. Key elements of the review protocol are permanently recorded and stored.

(ii) The Open Science Framework (OSF) platform ( https://osf.io/ ) is a free, open, and integrated platform that facilitates open collaboration in research science. It allows for the management and sharing of research projects at all stages of research for broad dissemination. It also enables capture of different aspects and products of the research lifecycle, from the development of a research idea, through the design of a study, the storage and analysis of collected data, to the writing and publication of reports or research articles.

(iii) The Research Registry (RR) database ( https://www.researchregistry.com/ ) is a one-stop repository for the registration of all types of research studies, from “first-in-man” case reports/studies to observational/interventional studies to systematic reviews and meta-analyses. The goal is to ensure that every study involving human participants is registered in accordance with the 2013 Declaration of Helsinki. The RR enables prospective or retrospective registrations of studies, including those types of studies that cannot be registered in existing registries. It specifically publishes systematic reviews and meta-analyses and does not register case reports/studies that are not first-in-man or animal studies.

Any significant future changes to the protocol resulting from knowledge gained during the development stages of this project will be documented in detail and a rationale for all changes will be proposed and reported in PROSPERO, OSF, or RR.

The overall protocol will differentiate itself from other known methodologies, by defining two independent teams of reviewers: a classical team and a case team. The classical team will review studies with control groups and an acceptable comparison group (case reports/studies and case series will be excluded). In effect, this team will conduct a more traditional systematic review where evidence from case reports/studies and case series are not considered. The case team will review classical studies, case reports, and case series. This case team will act as a comparison group to identify differences in systematic review conclusions due to the inclusion of evidence from case reports/studies and case series. Both teams will identify studies that meet specified inclusion criteria, conduct separate analyses and risk of bias evaluations, along with overall quality assessments, and syntheses of strengths of evidence. Each team will be blinded to the results of the other team throughout the process. Upon completion of the systematic review, results from each team will be presented, evaluated, and compared.

Patient and Public Involvement

No patients were involved.

Eligibility Criteria

Studies will be selected according to the criteria outlined below.

Study Designs

Studies of any design reported in any language translatable to English by online programs (e.g., Google Translate) will be included at the beginning. These studies will span interventional studies with control groups (Randomized Controlled Trials: RCTs), as well as observational studies with and without exposed groups. All observational studies will be eligible for inclusion in accordance with the objectives of this systematic review. Thereafter, only the case team will include case reports/studies and case series, as specified in their search strategy. The case team will include a separate section for human subjects research that has been conducted with no control groups.

Type of Population

All types of studies examining the general adult human population or healthy adult humans will be included. Studies that involve both adults and children will also be included if data for adults are reported separately. Animal studies will be excluded for the methodological purpose of this (case reports/studies and case series) protocol given that the framework for systematic reviews in toxicology already adequately retrieves this type of toxin data on animals.

Inclusion/Exclusion Criteria

Studies of any design will be included if they fulfill all the eligibility criteria. To be integrated into the overall body of evidence, case reports/studies and case series must meet pre-defined criteria indicating that they are well-documented, scientifically rigorous, and follow ethical practices, under the CARE guidelines (for CAse REports) ( 12 , 13 ) and the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Case Reports/Studies and for Case Series ( 14 , 15 ), which classify case reports/studies in terms of completeness, transparency, and data analysis. Studies that were conducted using unethical practices will be excluded.

Type of Exposure/Intervention

Either the prescribed treatment or described exposure to a chemical substance (toxin/toxicant) will be detailed here.

Type of Comparators

In this protocol we plan to compare two review methodologies: one will include and the other will exclude high quality case reports/studies and case series. The comparator will be (the presence or absence of) an available control group that has been specified and is acceptable scientifically and ethically.

Type of Outcomes

The outcome of mortality or morbidity related to the toxicological exposure will be detailed here.

Information Sources and Search Strategy

There will be no design, date or language limitations applied to the search strategy. A systematic search in electronic academic databases, electronic grey literature, organizational websites, and internet search engines will be performed. We will search at least the following major databases:

- Electronic academic databases: PubMed, Web of Science, Toxline, Poisondex, and databases specific to case reports/studies and case series (e.g., PMC, Scopus, Medline) ( 13 )

- Electronic grey literature databases: OpenGrey ( http://www.opengrey.eu/ ), Grey Literature Report ( http://greylit.org/ )

- Organizational websites: AHRQ Patient Safety Network ( https://psnet.ahrq.gov/webmm ), World Health Organization ( www.who.int )

- Internet search engines: Google ( https://www.google.com/ ), Google Scholar ( https://scholar.google.com/ ).

Study Records

Following a systematic search in all the databases above, each of the two independent teams of reviewers (the classical team and the case team) will separately upload the literature search results, in accordance with the eligibility criteria, to the systematic review management software Covidence, a primary screening and data extraction tool ( 16 ).

All study records identified during the search will be downloaded and duplicate records will be identified and deleted. Thereafter, two research team members will independently screen the titles and abstracts (step 1) and then the full texts (step 2) of potentially relevant studies for inclusion. If necessary, information will be requested from the publication authors to resolve questions about eligibility. Finally, any disagreements that may potentially exist between the two research team members will be resolved first by discussion and then by consulting a third research team member for arbitration.
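Duplicate removal is normally handled inside the review software, but conceptually it amounts to matching records on an identifier such as the DOI or, failing that, a normalized title. The sketch below is a minimal illustration with hypothetical records, not the Covidence procedure itself.

```python
# Minimal sketch: removing duplicate search records before screening.
# Records are matched on DOI when available, otherwise on a normalized title.
# The records below are hypothetical.
def dedup_key(record):
    if record.get("doi"):
        return ("doi", record["doi"].lower())
    return ("title", " ".join(record["title"].lower().split()))

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "A Trial of Compound X", "doi": "10.1000/example.1"},
    {"title": "A trial of compound X ", "doi": "10.1000/EXAMPLE.1"},  # same study, second database
    {"title": "An unrelated case series", "doi": None},
]
print(len(deduplicate(records)))  # 2
```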

If a study record identified during the search was authored by a reviewing research team member, or that team member participated in the identified study, that study record will be re-assigned to another reviewing team member.

Data Collection Process, Items Included, and Prioritization if Needed

All reviewing team members will use standardized forms or software (e.g., Covidence), and each review member will independently extract the data from included studies. If possible, the extracted data will be synthesized numerically. To ensure consistency across reviewers, calibration exercises (reviewer training) will be conducted prior to starting the reviews. Extracted information will include the minimum study characteristics (study authors, study year, study country, participants, intervention/exposure, outcome), study design (summary of study design, comparator, models used, and effect estimate measure) and study context (e.g., data on simultaneous exposure to other risk factors that would be relevant contributors to morbidity or mortality). As specified in the section on study records, a third review team member will resolve any conflicts that arise during data extraction that are not resolved by consensus between the two initial data extractors.

Data on potential conflict of interest for included studies, as well as financial disclosures and funding sources, will also be extracted. If no financial statement or conflict of interest declaration is available, the names of the authors will be searched in other studies published within the previous 36 months and in other publicly available declarations of interests, for funding information ( 17 , 18 ).

Risk of Bias Assessment

To assess the risk of bias within included studies, the internal validity of potential studies will be assessed by using the Navigation Guide tool ( 6 , 19 ), which covers nine domains of bias for human studies: (a) source population representation; (b) blinding; (c) exposure or intervention assessment; (d) outcome assessment; (e) confounding; (f) incomplete outcome data; (g) selective outcome reporting; (h) conflict of interest; and (i) other sources of bias. For each section of the tool, the procedures undertaken for each study will be described and the risk of bias will be rated as “low risk,” “probably low risk,” “probably high risk,” “high risk,” or “not applicable.” Risk of bias will be assessed at the level of the individual study and of the entire body of evidence. Most of the text from these instructions and criteria for judging risk of bias has been adopted verbatim or adapted from one of the latest Navigation Guide systematic reviews used by WHO/ILO ( 6 , 19 , 20 ).
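To keep the per-domain judgements consistent across reviewers, they can be recorded in a fixed structure mirroring the nine domains listed above. The sketch below is a minimal illustration; the example study and ratings are hypothetical.

```python
# Minimal sketch: recording per-study judgements for the nine Navigation Guide
# bias domains listed above. The example study and ratings are hypothetical.
DOMAINS = [
    "source population representation", "blinding", "exposure or intervention assessment",
    "outcome assessment", "confounding", "incomplete outcome data",
    "selective outcome reporting", "conflict of interest", "other sources of bias",
]
RATINGS = {"low risk", "probably low risk", "probably high risk", "high risk", "not applicable"}

def assess(study_id, judgements):
    """Check that every domain receives exactly one allowed rating."""
    assert set(judgements) == set(DOMAINS), "every domain must be judged"
    assert all(r in RATINGS for r in judgements.values()), "unknown rating"
    return {"study": study_id, "judgements": judgements}

example = assess("hypothetical-case-report-01", {d: "probably low risk" for d in DOMAINS})
print(example["study"], len(example["judgements"]))
```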

For case reports/studies and case series, the text from these instructions and criteria for judging risk of bias has been adopted verbatim or adapted from one of the latest Navigation Guide systematic reviews ( 21 ), and is given in Supplementary Material . Specific criteria are listed below. To ensure consistency across reviewers, calibration exercises (reviewer training) will be conducted prior to starting the risk of bias assessments for case reports/studies and case series.

Are the Study Groups at Risk of Not Representing Their Source Populations in a Manner That Might Introduce Selection Bias?

The source population is viewed as the population for which study investigators are targeting their study question of interest.

Examples of considerations for this risk of bias domain include: (1) the context of the case report; (2) level of detail reported for participant inclusion/exclusion (including details from previously published papers referenced in the article), with inclusion of all relevant consecutive patients in the considered period; ( 14 , 15 ) (3) exclusion rates, attrition rates and reasons.

Were Exposure/Intervention (Toxic, Treatment) Assessment Methods Lacking Accuracy?

The following list of considerations represents a collection of factors proposed by experts in various fields that may potentially influence the internal validity of the exposure assessment in a systematic manner (not those that may randomly affect overall study results). These should be interpreted only as suggested considerations and should not be viewed as scoring or a checklist. Considering there are no controls in such designs, this should be evaluated carefully to be sure the report really contributes to actual knowledge.

List of Considerations :

Possible sources of exposure assessment metrics:

1) Identification of the exposure

2) Dose evaluation

3) Toxicological values

4) Clinical effects *

5) Biological effects *

6) Treatments given (dose, timing, route)

* Some clinical and biological effects might be related to exposure

For each, overall considerations include:

1) What is the quality of the source of the metric being used?

2) Is the exposure measured in the study a surrogate for the exposure?

3) What was the temporal coverage (i.e., short or long-term exposure)?

4) Did the analysis account for prediction uncertainty?

5) How was missing data accounted for, and any data imputations incorporated?

6) Were sensitivity analyses performed?

Were Outcome Assessment Methods Lacking Accuracy?

This item is similar to actual Navigation guidelines that require an assessment of the accuracy of the measured outcome.

Was Potential Confounding Inadequately Incorporated?

This is a very important issue for case reports/studies and case series. Case reports/studies and case series do not include controls and so, to be considered in a systematic review, these types of studies will need to be well-documented with respect to treatment or other contextual factors that may explain or influence the outcome. Prior to initiating the study screening, review team members should collectively generate a list of potential confounders that are based on expert opinion and knowledge gathered from the scientific literature:

Tier I: Important confounders

• Other associated treatment (i.e., intoxication, insufficient dose, history, or context)

• Medical history

Tier II: Other potentially important confounders and effect modifiers:

• Age, sex, country.

Were Incomplete Outcome Data Inadequately Addressed?

This item is similar to actual Navigation Guide instructions, though it may be very unlikely that outcome data would be incomplete in published case reports/studies and case series.

Does the Study Report Appear to Have Selective Outcome Reporting?

This item is similar to actual Navigation Guide instructions, though it may be very unlikely that there would be selective outcome reporting in published case reports/studies and case series.

Did the Study Receive Any Support From a Company, Study Author, or Other Entity Having a Financial Interest?

This item is similar to actual Navigation Guide instructions.

Did the Study Appear to Have Other Problems That Could Put It at a Risk of Bias?

Data Synthesis Criteria and Summary Measures if Feasible

Meta-analyses will be conducted using a random-effects model if studies are sufficiently homogeneous in terms of design and comparator. For dichotomous outcomes, effects of associations will be determined by using risk ratios (RR) or odds ratios (OR) with 95% confidence intervals (CI). Continuous outcomes will be analyzed using weighted mean differences (with 95% CI) or standardized mean differences (with 95% CI) if different measurement scales are used. Skewed data and non-quantitative data will be presented descriptively. Where data are missing, a request will be made to the original authors of the study to obtain the relevant missing data. If these data cannot be obtained, an imputation method will be performed. The statistical heterogeneity of the studies will be assessed using the Chi-squared test (significance level: 0.1) and the I² statistic (0–40%: might not be important; 30–60%: may represent moderate heterogeneity; 50–90%: may represent substantial heterogeneity; 75–100%: considerable heterogeneity). If there is heterogeneity, an attempt will be made to explain its source through a subgroup or sensitivity analysis.

Finally, the meta-analysis will be conducted in the latest version of the statistical software RevMan. The Mantel-Haenszel method will be used for the fixed-effects model if tests of heterogeneity are not significant. If statistical heterogeneity is observed (I² ≥ 50% or p < 0.1), the random-effects model will be chosen. If quantitative synthesis is not feasible (e.g., if heterogeneity exists), a meta-analysis will not be performed and a narrative, qualitative summary of the study findings will be provided.
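For readers unfamiliar with the statistics mentioned above, the sketch below shows how Cochran's Q, the I² statistic, and a risk ratio with its 95% confidence interval can be computed from per-study estimates. All numbers are invented for illustration and the sketch is not a substitute for the RevMan analysis described in the protocol.

```python
# Minimal sketch: Cochran's Q, the I-squared statistic, and a risk ratio with 95% CI.
# All numbers are invented for illustration; they do not come from the protocol.
import math

# Per-study log risk ratios and standard errors (hypothetical).
log_rr = [0.10, -0.25, 0.05]
se = [0.20, 0.18, 0.25]

weights = [1 / s ** 2 for s in se]
pooled = sum(w * e for w, e in zip(weights, log_rr)) / sum(weights)

# Cochran's Q and I-squared, where I-squared = max(0, (Q - df) / Q).
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, log_rr))
df = len(log_rr) - 1
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

# Risk ratio with 95% CI for a single 2x2 table: a/n1 events vs. b/n2 events.
a, n1, b, n2 = 12, 100, 20, 100
rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"Q = {q:.2f}, I-squared = {100 * i_squared:.0f}%")
print(f"RR = {rr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```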

Separate analyses will be conducted for studies that use expected mortality/morbidity as control groups, in order to include them in the quantitative synthesis of case reports/studies and case series.

If quantitative synthesis is not appropriate, a systematic narrative synthesis will be provided with information presented in the text and tables to summarize and explain the characteristics and findings of the included studies. The narrative synthesis will explore the relationship and findings both within and between the included studies.

Possible Additional Analyses

If feasible, subgroup analyses will be used to explore possible sources of heterogeneity if there is evidence for differences in effect estimates by country, study design, or patient characteristics (e.g., sex and age). In addition, sensitivity analyses will be performed to explore sources of heterogeneity, for example published vs. unpublished data, full-text publications vs. abstracts, and risk of bias (by omitting studies judged to be at high risk of bias).
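As an illustration of one common sensitivity analysis, the sketch below re-pools the estimate after omitting each study in turn (leave-one-out), using simple inverse-variance fixed-effect pooling of log risk ratios. The study values are invented placeholders, and this is only one of the sensitivity analyses mentioned above.

```python
# Minimal leave-one-out sensitivity analysis: re-pool the estimate after
# dropping each study in turn. Values are illustrative placeholders.
import numpy as np

log_rr = np.array([0.41, 0.15, 0.62, 0.05])   # per-study log risk ratios
var    = np.array([0.09, 0.04, 0.12, 0.06])   # per-study variances

def pooled(y, v):
    """Inverse-variance fixed-effect pooled risk ratio."""
    w = 1 / v
    return float(np.exp(np.sum(w * y) / np.sum(w)))

print("All studies:", round(pooled(log_rr, var), 2))
for i in range(len(log_rr)):
    keep = np.arange(len(log_rr)) != i
    print(f"Omitting study {i + 1}:", round(pooled(log_rr[keep], var[keep]), 2))
```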

Overall Quality of Evidence Assessment

The quality of evidence will be assessed using an adapted version of the Evidence Quality Assessment Tool in the Navigation Guide. This tool is based on the GRADE approach ( 1 ). The assessment will be conducted by two teams, again blinded to each other: one with the results of the case reports/studies and case series/control synthesis, and one without.

Data synthesis will be conducted independently by the classical and case teams. Evidence ratings will start at “high” for randomized controlled studies, “moderate” for observational studies, and “low” for case reports/studies and case series. It is important to be clear that sufficient levels of evidence cannot be achieved without study comparators. With regard to case reports/studies and case series, we classify these as starting at the lowest point of evidence and therefore cannot consider evidence higher than “low” for these kinds of studies. Complete instructions for making quality of evidence judgments are presented in the Supplementary Material.

Synthesis of Strength of Evidence

The standard Navigation Guide methodology will be applied to rate the strength of recommendations. The classical and case teams, blinded to the results from each other during the process, will independently assess the strength of evidence. The evidence quality ratings will be translated into strength of evidence for each population based on a combination of four criteria: (a) quality of the body of evidence; (b) direction of effect; (c) confidence in effect; and (d) other compelling attributes of the data that may influence certainty. The ratings for strength of evidence will be “sufficient evidence of harmfulness,” “limited evidence of harmfulness,” “inadequate evidence of harmfulness,” and “evidence of lack of harmfulness.”

Once we complete the synthesis of case reports/studies and case series, findings of this separate evidence stream will only be considered if RCTs and observational studies are not available. They will not be used to upgrade or downgrade the strength of other evidence streams.

To the best of our knowledge, this protocol is one of the first to specifically address the incorporation of case reports/studies and case series in a systematic review ( 9 ). The protocol was adapted from the Navigation Guide with the intent of integrating case reports/studies and case series into systematic review recommendations, while following traditional systematic review methodology to the greatest extent possible. To be included, these case reports/studies and case series will need to be well-documented, scientifically rigorous, and follow ethical practices. In addition, we believe that some case reports/studies and case series might bring relevant knowledge that should be considered in systematic review recommendations when data from RCTs and observational studies are not available, especially when even a small number of studies report an important and possibly causal association in an epidemic or a side effect of a newly marketed medicine. Our methodology will be the first to effectively incorporate case reports/studies and case series in systematic reviews that synthesize evidence for clinicians, researchers, and drug developers. These types of studies will be incorporated mostly through paper selection and risk of bias assessments. In addition, we will conduct meta-analyses if the eligible studies provide sufficient data.

This protocol has limitations related primarily to the constraints of case reports/studies and case series, which are descriptive studies. In addition, a case series is subject to selection bias because the clinician or researcher selects the cases themselves, and those cases may represent outliers in clinical practice. Furthermore, this kind of study does not have a control group, so it is not possible to compare what happens to other people who do not have the disease or who do not receive the treatment. These sources of bias mean that reported results may not be generalizable to a larger patient population and cannot generate information on incidence or prevalence rates and ratios ( 22 , 23 ). However, it is important to note that promoting the need to synthesize these types of studies (case reports/studies and case series) in a formal systematic review should not deter or delay immediate action from being taken when a few small studies report a plausible causal association between exposure and disease, such as in the event of an epidemic or a side effect of a newly marketed medicine ( 23 ). In this study protocol, we will not consider animal studies that might give relevant toxicological information because we are focusing on study areas where a paucity of information exists. Finally, we must note that case reports/studies and case series do not provide independent proof, and therefore the findings of this separate evidence stream will only be considered if evidence from RCTs and observational studies is not available. Case reports/studies and case series will not be used to upgrade or downgrade the strength of other evidence streams. In any case, it is very important to remember that these kinds of studies are there to quickly alert agencies of the need to take immediate action to prevent further harm.

Despite these limitations, case reports/studies and case series are a first line of evidence because they are where new issues and ideas emerge (hypothesis-generating) and can contribute to a change in clinical practice ( 23 – 25 ). We therefore believe that data from case reports/studies and case series, when synthesized and presented with completeness and transparency, may provide important details that are relevant to systematic review recommendations.

Author Contributions

AD and GS designed the protocol study. JL, TW, and DM reviewed it. MF, ALG, RV, NC, CB, GLR, MD, ML, and AN made significant improvements. AN and AD wrote the manuscript. GS improved the language. All authors reviewed and commented on the final manuscript, and read and approved the final manuscript for publication.

This project was supported by the French Pays de la Loire region and Angers Loire Métropole, University of Angers and Centre Hospitalo-Universitaire CHU Angers. The project is entitled TEC-TOP (no award/grant number).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2021.708380/full#supplementary-material

1. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. (2008) 336:924–6. doi: 10.1136/bmj.39489.470347.AD


2. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020) . Cochrane (2020). Available online at: www.training.cochrane.org/handbook


3. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. (2009) 62:e1–34. doi: 10.1016/j.jclinepi.2009.06.006

4. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. (2015) 4:1. doi: 10.1186/2046-4053-4-1

5. Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 : elaboration and explanation. BMJ . (2015) 350:g7647. doi: 10.1136/bmj.g7647


6. Woodruff TJ, Sutton P, Navigation Guide Work Group. An evidence-based medicine methodology to bridge the gap between clinical and environmental health sciences. Health Aff (Millwood). (2011) 30:931–7. doi: 10.1377/hlthaff.2010.1219

7. Woodruff TJ, Sutton P. The Navigation Guide systematic review methodology: a rigorous and transparent method for translating environmental health science into better health outcomes. Environ Health Perspect. (2014) 122:1007–14. doi: 10.1289/ehp.1307175

8. Reeves BC, Deeks JJ, Higgins JPT, Shea B, Tugwell P, Wells GA. Chapter 24: Including non-randomized studies on intervention effects. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020). Cochrane (2020). Available online at: www.training.cochrane.org/handbook

9. Loke YK, Price D, Herxheimer A, the Cochrane Adverse Effects Methods Group. Systematic reviews of adverse effects: framework for a structured approach. BMC Med Res Methodol. (2007) 7:32. doi: 10.1186/1471-2288-7-32

10. Lam J, Koustas E, Sutton P, Johnson PI, Atchley DS, Sen S, et al. The Navigation Guide - evidence-based medicine meets environmental health: integration of animal and human evidence for PFOA effects on fetal growth. Environ Health Perspect. (2014) 122:1040–51. doi: 10.1289/ehp.1307923

11. Peryer G, Golder S, Junqueira DR, Vohra S, Loke YK. Chapter 19: Adverse effects. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020) . Cochrane (2020). Available online at: www.training.cochrane.org/handbook

12. Gagnier JJ, Kienle G, Altman DG, Moher D, Sox H, Riley D, et al. The CARE guidelines: consensus-based clinical case reporting guideline development. J Med Case Rep. (2013) 7:223. doi: 10.1186/1752-1947-7-223

13. Riley DS, Barber MS, Kienle GS, Aronson JK, von Schoen-Angerer T, Tugwell P, et al. CARE guidelines for case reports: explanation and elaboration document. J Clin Epidemiol. (2017) 89:218–35. doi: 10.1016/j.jclinepi.2017.04.026

14. Moola S, Munn Z, Tufanaru C, Aromataris E, Sears K, Sfetcu R, et al. Chapter 7: Systematic reviews of etiology and risk. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI (2020). doi: 10.46658/JBIMES-20-08. Available online at: https://synthesismanual.jbi.global


15. Munn Z, Barker TH, Moola S, Tufanaru C, Stern C, McArthur A, et al. Methodological quality of case series studies: an introduction to the JBI critical appraisal tool. JBI Evidence Synthesis. (2020) 18:2127–33. doi: 10.11124/JBISRIR-D-19-00099

16. Covidence Systematic Review Software. Veritas Health Innovation, Melbourne, Australia. Available online at: www.covidence.org ; https://support.covidence.org/help/how-can-i-cite-covidence

17. Drazen JM, de Leeuw PW, Laine C, Mulrow C, DeAngelis CD, Frizelle FA, et al. Toward More Uniform Conflict Disclosures: The Updated ICMJE Conflict of Interest Reporting Form. JAMA. (2010) 304:212. doi: 10.1001/jama.2010.918

18. Drazen JM, Weyden MBVD, Sahni P, Rosenberg J, Marusic A, Laine C, et al. Uniform Format for Disclosure of Competing Interests in ICMJE Journals. N Engl J Med. (2009) 361:1896–7. doi: 10.1056/NEJMe0909052

19. Johnson PI, Sutton P, Atchley DS, Koustas E, Lam J, Sen S, et al. The navigation guide—evidence-based medicine meets environmental health: systematic review of human evidence for PFOA effects on fetal growth. Environ Health Perspect. (2014) 122:1028–39. doi: 10.1289/ehp.1307893

20. Descatha A, Sembajwe G, Baer M, Boccuni F, Di Tecco C, Duret C, et al. WHO/ILO work-related burden of disease and injury: protocol for systematic reviews of exposure to long working hours and of the effect of exposure to long working hours on stroke. Environ Int. (2018) 119:366–78. doi: 10.1016/j.envint.2018.06.016

21. Lam J, Lanphear BP, Bellinger D, Axelrad DA, McPartland J, Sutton P, et al. Developmental PBDE exposure and IQ/ADHD in childhood: a systematic review and meta-analysis. Environ Health Perspect. (2017) 125:086001. doi: 10.1289/EHP1632

22. Hay JE, Wiesner RH, Shorter RG, LaRusso NF, Baldus WP. Primary sclerosing cholangitis and celiac disease. Ann Intern Med. (1988) 109:713–7. doi: 10.7326/0003-4819-109-9-713

23. Nissen T, Wynn R. The clinical case report: a review of its merits and limitations. BMC Res Notes. (2014) 7:264. doi: 10.1186/1756-0500-7-264

24. Buonfrate D, Requena-Mendez A, Angheben A, Muñoz J, Gobbi F, Van Den Ende J, et al. Severe strongyloidiasis: a systematic review of case reports. BMC Infect Dis. (2013) 13:78. doi: 10.1186/1471-2334-13-78

25. Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E, Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, et al. Clinical Practice Guidelines We Can Trust . Washington, D.C.: National Academies Press (2011).

Keywords: toxicology, epidemiology, public health, protocol, systematic review, case reports/studies, case series

Citation: Nambiema A, Sembajwe G, Lam J, Woodruff T, Mandrioli D, Chartres N, Fadel M, Le Guillou A, Valter R, Deguigne M, Legeay M, Bruneau C, Le Roux G and Descatha A (2021) A Protocol for the Use of Case Reports/Studies and Case Series in Systematic Reviews for Clinical Toxicology. Front. Med. 8:708380. doi: 10.3389/fmed.2021.708380

Received: 19 May 2021; Accepted: 11 August 2021; Published: 06 September 2021.


Copyright © 2021 Nambiema, Sembajwe, Lam, Woodruff, Mandrioli, Chartres, Fadel, Le Guillou, Valter, Deguigne, Legeay, Bruneau, Le Roux and Descatha. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Aboubakari Nambiema, aboubakari.nambiema@univ-angers.fr ; orcid.org/0000-0002-4258-3764


Case reports, case series and systematic reviews.


Case reports, case series and systematic reviews, QJM: An International Journal of Medicine , Volume 95, Issue 4, April 2002, Pages 197–198, https://doi.org/10.1093/qjmed/95.4.197


Large numbers of reports of single cases and case series drop through the letterbox of the QJM's editorial office. A glance back through the journal's archive shows why. We have carried on publishing these sorts of descriptive studies long after the editors of journals that picked up the torch for evidence‐based medicine fell out of love with them.

According to the NHS Centre for Reviews and Dissemination, the randomized controlled trial is top of the hierarchy of what counts as reliable evidence for clinical decision‐making. Case series are at the bottom, contained in the rubric: opinions of respected authorities based on clinical experience; descriptive studies; and reports of expert committees. But, despite this lowly position, there are many instances where valuable knowledge has come from someone taking the trouble to write up cases that are out of the ordinary. Two modern classics are the reports of Pneumocystis carinii pneumonia and mucosal candidiasis in previously healthy homosexual men [1] and of hepatocellular adenomata in women taking oral contraceptives [2]. When such reports are published, they alert other doctors and may stimulate further investigation. Where a mechanism can be devised for collating accounts of unusual events, as for instance in the UK's yellow card scheme for the reporting of suspected adverse drug reactions, case reports turn into a system for population surveillance.

There is another circumstance in which authors should write, and journals should publish, descriptive accounts of case series. This is in rare conditions where population‐based studies or treatment trials are difficult or impossible to organize. Lachmann and colleagues' article on the treatment of amyloidosis in unicentric Castleman's disease in this issue (see pp 211–18) is an example. What's more, this style of report often makes more enjoyable reading than accounts of randomized controlled trials that several journals now insist must conform to the procrustean requirements of the Consolidated Standards of Reporting Trials (CONSORT) checklist and flow‐chart.

On the other hand, it must be admitted that descriptive studies have serious limitations. One is that retrospective reviews of case notes are rarely complete. Who can say whether the outcomes of the missing cases might have been very different? Another is that quirks in the way that unusual cases get referred make it hard to feel confident in generalizing from the experience of one centre. A third, as Grimes and Schulz have pointed out [3], is that without a comparison group, causal inferences about temporal associations need to be treated with deep suspicion.

At best, descriptions of case series act as catalysts for further investigation by methods that are more systematic. At worst, they can cause useful treatments to be abandoned (think of recent events concerning the MMR vaccine) or potentially harmful procedures to be adopted (remember past obstetric enthusiasm for routine fetal monitoring in pregnancy). Sometimes they construct diseases of doubtful validity but remarkable longevity. Forty years ago, Elwood, in a survey of more than 4000 people, showed that the presence of dysphagia, post‐cricoid web and iron‐deficiency anaemia in the same person occurred no more often than would be expected from the background prevalence of the three conditions [4]. Tellingly, he also showed that agreement between radiologists over whether a post‐cricoid web was present on a barium swallow was only slightly better than chance [5]. Yet, cases of Patterson‐Kelly syndrome continue to find their way into print.

Systematic reviews of observational studies are much harder to carry out and interpret than systematic reviews of randomized controlled trials. The main difficulties lie in coping with the diversity of study designs used by investigators, and the biases inherent in most observational studies. Methods are still being developed and argued over [6], but it is already clear that applying an evidence‐based approach to traditional descriptive studies is useful.

So what should authors and readers expect from the QJM? No prizes for guessing that we are becoming unenthusiastic about reports of single cases. Even so, where they are of exceptional interest and written concisely, we may offer publication in the correspondence columns. Reports of case series will be given a warmer welcome, particularly if the circumstances are such that it would be unreasonable to demand a more systematic approach. (At the same time, the condition that they describe must not be so rare that few readers of the journal will ever encounter a case.) We encourage the submission of systematic reviews and meta‐analyses that are directed at questions of clinical relevance. The investigators will need, however, to have been rigorous in their methodology and to have synthesized a useful amount of evidence. Reviews that conclude that there have been few studies, all of poor quality, and that further research is needed make poor reading.

1. Gottlieb MS, Schroff R, Schanker HM, Weisman JD, Fan PT, Wolf RA, Saxon A. Pneumocystis carinii pneumonia and mucosal candidiasis in previously healthy homosexual men: evidence of a new acquired cellular immunodeficiency. N Engl J Med 1981; 305:1425–31.

2. Rooks JB, Ory HW, Ishak KG, Strauss LT, Greenspan JR, Hill AP, Tyler CW Jr. Epidemiology of hepatocellular adenoma: the role of oral contraceptive use. JAMA 1979; 242:644–8.

3. Grimes DA, Schulz KF. Descriptive studies: what they can and cannot do. Lancet 2002; 359:145–9.

4. Elwood PC, Jacobs A, Pitman RG, Entwistle CC. Epidemiology of the Patterson‐Kelly syndrome. Lancet 1964; ii:716–19.

5. Elwood PC, Pitman RG. Observer error in the radiological diagnosis of Patterson‐Kelly webs. Br J Radiol 1966; 39:587–9.

6. http://www.consort‐statement.org/MOOSE.pdf


Literature Review vs Systematic Review


Definitions

It’s common to confuse systematic and literature reviews because both are used to provide a summary of the existing literature or research on a specific topic. Despite this commonality, the two types of review differ significantly. The comparison by Kysh (2013), cited below, provides a detailed explanation of the differences between systematic and literature reviews.

Kysh, Lynn (2013): Difference between a systematic review and a literature review. [figshare]. Available at:  http://dx.doi.org/10.6084/m9.figshare.766364


Systematic reviews vs meta-analysis: what’s the difference?

Posted on 24th July 2023 by Verónica Tanco Tellechea

""

You may hear the terms ‘systematic review’ and ‘meta-analysis’ being used interchangeably. Although they are related, they are distinctly different. Learn more in this blog for beginners.

What is a systematic review?

According to Cochrane (1), a systematic review attempts to identify, appraise and synthesize all the empirical evidence to answer a specific research question. Thus, a systematic review is where you might find the most relevant, adequate, and current information regarding a specific topic. In the levels of evidence pyramid, systematic reviews are only surpassed by meta-analyses.

To conduct a systematic review, you will need, among other things: 

  • A specific research question, usually in the form of a PICO question.
  • Pre-specified eligibility criteria, to decide which articles will be included in or excluded from the review.
  • To follow a systematic method that will minimize bias.

You can find protocols that will guide you from both Cochrane and the Equator Network , among other places, and if you are a beginner to the topic then have a read of an overview about systematic reviews.

What is a meta-analysis?

A meta-analysis is a quantitative, epidemiological study design used to systematically assess the results of previous research (2) . Usually, they are based on randomized controlled trials, though not always. This means that a meta-analysis is a mathematical tool that allows researchers to mathematically combine outcomes from multiple studies.

When can a meta-analysis be implemented?

A meta-analysis can, in principle, always be conducted; however, it will yield the most reliable results when the studies included in the systematic review are of good quality, have similar designs, and use similar outcome measures.

Why are meta-analyses important?

Outcomes from a meta-analysis may provide more precise information regarding the estimate of the effect of what is being studied, because it merges outcomes from multiple studies. In a meta-analysis, data from various trials are combined to generate an average result (1), which is portrayed in a forest plot diagram. Moreover, meta-analyses often also include a funnel plot diagram to help visually detect publication bias.

Conclusions

A systematic review is an article that synthesizes the available evidence on a certain topic using a specific research question, pre-specified eligibility criteria for including articles, and a systematic method for its production. A meta-analysis, in contrast, is a quantitative, epidemiological study design used to assess the results of the articles included in a systematic review.

Remember: All meta-analyses involve a systematic review, but not all systematic reviews involve a meta-analysis.

If you would like some further reading on this topic, we suggest the following:

The systematic review – a S4BE blog article

Meta-analysis: what, why, and how – a S4BE blog article

The difference between a systematic review and a meta-analysis – a blog article via Covidence

Systematic review vs meta-analysis: what’s the difference? A 5-minute video from Research Masterminds:

  • About Cochrane reviews [Internet]. Cochranelibrary.com. [cited 2023 Apr 30]. Available from: https://www.cochranelibrary.com/about/about-cochrane-reviews
  • Haidich AB. Meta-analysis in medical research. Hippokratia. 2010;14(Suppl 1):29–37.


Verónica Tanco Tellechea




Levels of Evidence


Resources That Rate The Evidence

  • ACP Smart Medicine
  • Agency for Healthcare Research and Quality
  • Clinical Evidence
  • Cochrane Library
  • Health Services/Technology Assessment Texts (HSTAT)
  • PDQ® Cancer Information Summaries from NCI
  • Trip Database

Critically Appraised Individual Articles

  • Evidence-Based Complementary and Alternative Medicine
  • Evidence-Based Dentistry
  • Evidence-Based Nursing
  • Journal of Evidence-Based Dental Practice

Grades of Recommendation

Critically-appraised individual articles and synopses include:

Filtered evidence:

  • Level I: Evidence from a systematic review of all relevant randomized controlled trials.
  • Level II: Evidence from a meta-analysis of all relevant randomized controlled trials.
  • Level III: Evidence from evidence summaries developed from systematic reviews
  • Level IV: Evidence from guidelines developed from systematic reviews
  • Level V: Evidence from meta-syntheses of a group of descriptive or qualitative studies
  • Level VI: Evidence from evidence summaries of individual studies
  • Level VII: Evidence from one properly designed randomized controlled trial

Unfiltered evidence:

  • Level VIII: Evidence from nonrandomized controlled clinical trials, nonrandomized clinical trials, cohort studies, case series, case reports, and individual qualitative studies.
  • Level IX: Evidence from opinions of authorities and/or reports of expert committees

Two things to remember:

1. Studies in which randomization occurs represent a higher level of evidence than those in which subject selection is not random.

2. Controlled studies carry a higher level of evidence than those in which control groups are not used.

Strength of Recommendation Taxonomy (SORT)

  • SORT The American Academy of Family Physicians uses the Strength of Recommendation Taxonomy (SORT) to label key recommendations in clinical review articles. In general, only key recommendations are given a Strength-of-Recommendation grade. Grades are assigned on the basis of the quality and consistency of available evidence.


What is the difference between a systematic review and a systematic literature review?

By Carol Hollier on 07-Jan-2020 12:42:03


For those not immersed in systematic reviews, understanding the difference between a systematic review and a systematic literature review can be confusing.  It helps to realise that a “systematic review” is a clearly defined thing, but ambiguity creeps in around the phrase “systematic literature review” because people can and do use it in a variety of ways. 

A systematic review is a research study of research studies.  To qualify as a systematic review, a review needs to adhere to standards of transparency and reproducibility.  It will use explicit methods to identify, select, appraise, and synthesise empirical results from different but similar studies.  The study will be done in stages:  

  • In stage one, the question, which must be answerable, is framed
  • Stage two is a comprehensive literature search to identify relevant studies
  • In stage three the identified literature’s quality is scrutinised and decisions made on whether or not to include each article in the review
  • In stage four the evidence is summarised and, if the review includes a meta-analysis, the data extracted; in the final stage, findings are interpreted. [1]

Some reviews also state what degree of confidence can be placed on that answer, using the GRADE scale.  By going through these steps, a systematic review provides a broad evidence base on which to make decisions about medical interventions, regulatory policy, safety, or whatever question is analysed.   By documenting each step explicitly, the review is not only reproducible, but can be updated as more evidence on the question is generated.

Sometimes when people talk about a “systematic literature review”, they are using the phrase interchangeably with “systematic review”.  However, people can also use the phrase systematic literature review to refer to a literature review that is done in a fairly systematic way, but without the full rigor of a systematic review. 

For instance, for a systematic review, reviewers would strive to locate relevant unpublished studies in grey literature and possibly by contacting researchers directly.  Doing this is important for combatting publication bias, which is the tendency for studies with positive results to be published at a higher rate than studies with null results.  It is easy to understand how this well-documented tendency can skew a review’s findings, but someone conducting a systematic literature review in the loose sense of the phrase might, for lack of resource or capacity, forgo that step. 

Another difference might be in who is doing the research for the review. A systematic review is generally conducted by a team including an information professional for searches and a statistician for meta-analysis, along with subject experts.  Team members independently evaluate the studies being considered for inclusion in the review and compare results, adjudicating any differences of opinion.   In contrast, a systematic literature review might be conducted by one person. 

Overall, while a systematic review must comply with set standards, you would expect any review called a systematic literature review to strive to be quite comprehensive.  A systematic literature review would contrast with what is sometimes called a narrative or journalistic literature review, where the reviewer’s search strategy is not made explicit, and evidence may be cherry-picked to support an argument.

FSTA is a key tool for systematic reviews and systematic literature reviews in the sciences of food and health.


The patents indexed in FSTA help you find the results of research that is not otherwise publicly available because it was done for commercial purposes.

The FSTA thesaurus will surface results that would be missed with keyword searching alone. Since the thesaurus is designed for the sciences of food and health, it is the most comprehensive for the field. 

All indexing and abstracting in FSTA is in English, so you can do your searching in English yet pick up non-English language results, and get those results translated if they meet the criteria for inclusion in a systematic review.

FSTA includes grey literature (conference proceedings) which can be difficult to find, but is important to include in comprehensive searches.

FSTA content has a deep archive. It goes back to 1969 for farm to fork research, and back to the late 1990s for food-related human nutrition literature—systematic reviews (and any literature review) should include not just the latest research but all relevant research on a question. 

You can also use FSTA to find literature reviews.

FSTA allows you to easily search for review articles (both narrative and systematic reviews) by using the subject heading or thesaurus term “REVIEWS" and an appropriate free-text keyword.

On the Web of Science or EBSCO platform, an FSTA search for reviews about cassava would look like this: DE "REVIEWS" AND cassava.

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND cassava.af.

In 2011 FSTA introduced the descriptor META-ANALYSIS, making it easy to search specifically for systematic reviews that include a meta-analysis published from that year onwards.

On the EBSCO or Web of Science platform, an FSTA search for systematic reviews with meta-analyses about staphylococcus aureus would look like this: DE "META-ANALYSIS" AND staphylococcus aureus.

On the Ovid platform using the multi-field search option, the search would look like this: meta-analysis.sh. AND staphylococcus aureus.af.

Systematic reviews with meta-analyses published before 2011 are included in the REVIEWS controlled vocabulary term in the thesaurus.

An easy way to locate pre-2011 systematic reviews with meta-analyses is to search the subject heading or thesaurus term "REVIEWS" AND meta-analysis as a free-text keyword AND another appropriate free-text keyword.

On the Web of Science or EBSCO platform, the FSTA search would look like this: DE "REVIEWS" AND meta-analysis AND carbohydrate*

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND meta-analysis.af. AND carbohydrate*.af.

Related resources:

  • Literature Searching Best Practise Guide
  • Predatory publishing: Investigating researchers’ knowledge & attitudes
  • The IFIS Expert Guide to Journal Publishing





Review Typologies

There are many types of evidence synthesis projects, including systematic reviews as well as others. The selection of review type is wholly dependent on the research question. Not all research questions are well-suited for systematic reviews.

  • Review Typologies (from LITR-EX) This site explores different review methodologies such as, systematic, scoping, realist, narrative, state of the art, meta-ethnography, critical, and integrative reviews. The LITR-EX site has a health professions education focus, but the advice and information is widely applicable.

Review the table to peruse review types and associated methodologies. Librarians can also help your team determine which review type might be appropriate for your project. 

Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91-108.  doi:10.1111/j.1471-1842.2009.00848.x



The Differences Between a Randomized-Controlled Trial vs Systematic Review


Through these reviews, you’ll be able to evaluate the available literature and collect credible data that can be used as evidence to base decisions on. But you need to understand the pros and cons of each type of review so that you can choose the one that fits your objectives. Fortunately, you can find numerous online resources that discuss the differences between an integrated review vs systematic review, a rapid review vs systematic review, and many other comparisons. This article compares a systematic review with a randomized-controlled trial (RCT).

Systematic Review

A systematic review uses a clearly defined research question and a systematic, reproducible method to find, select, and critically appraise all pertinent research. It gathers and analyzes eligible studies from reputable research sources to support the evidence. The main point to remember from this systematic review definition is that the review needs to answer a specific research question. The research question, which should be clearly stated, the study objectives, and the topic of study define the scope of the study. The scope prevents the author from straying from the intention of conducting the review.


Randomized-Controlled Trial

An RCT is a type of scientific trial meant to control factors that aren’t under direct experimental control. A perfect example of an RCT is a clinical trial aiming to assess the effects of pharmacological treatment, surgical procedure, medical apparatus, diagnostic procedure, or other medical intervention.

This type of study randomly assigns participants (or subjects) to either experimental groups (EG) or control groups (CG), ensuring there isn’t any bias involved in the assignments. The defining aspect of RCTs is that the allocation of participants to either an EG or CG is completely randomized. The assignment may be blinded or not: not only the participants but also the assessment professionals may be blinded to the group allocations. During the course of the trial, the design may be single-blinded, double-blinded, or not blinded at all. The experimental group in an RCT receives the dose or procedure, while those in the control group receive a placebo, a different type of treatment, or no treatment at all. When an RCT is “double-blind”, neither the participants nor the assessors know who is assigned to which group, so there is no way to influence the results.
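As a rough illustration of the randomization step described above, the sketch below shuffles a list of hypothetical participant IDs and splits them evenly into an experimental and a control group. Real trials use concealed, pre-generated allocation sequences rather than ad-hoc code like this; the participant IDs here are placeholders.

```python
# Minimal sketch of random allocation to experimental (EG) and control (CG)
# groups; illustrative only, not a trial-grade allocation procedure.
import random

participants = [f"P{i:03d}" for i in range(1, 21)]   # 20 hypothetical subjects
random.seed(42)                                       # reproducible example
random.shuffle(participants)

half = len(participants) // 2
experimental_group = participants[:half]   # receives the intervention
control_group = participants[half:]        # receives placebo or comparator

print("EG:", experimental_group)
print("CG:", control_group)
```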

There are several advantages and disadvantages of using an RCT format. For instance, because an RCT ensures that possible population biases are not a factor in the results, you are assured of receiving impartial evidence that can help you to make informed decisions.

Unlike observational studies, the subjects and researchers involved in a double-blind RCT study don’t know which subjects are receiving the treatment. Masking is essential in medical trials that rely on subjective outcomes to ensure that the drug being evaluated does what it is intended to do. With an RCT, it’s easier to analyze results because you’re using recognized statistical tools, and the population of participants is clearly defined.

However, an RCT can be costly and time-consuming because it requires a large number of participants for more statistical power, and a longer duration to do all of the follow-up analysis. But you can keep the cost of your RCTs down by conducting simple, single, and easily assessed outcome measures.



Clarifying the distinction between case series and cohort studies in systematic reviews of comparative studies: potential impact on body of evidence and workload

Institute for Research in Operative Medicine, Chair of Surgical Research, Faculty of Health, School of Medicine, Witten/Herdecke University, Ostmerheimer Str. 200, 51109 Cologne, Germany

Dawid Pieper

Associated data.

Not applicable.

Distinguishing cohort studies from case series is difficult.

We propose a conceptualization of cohort studies in systematic reviews of comparative studies. The main aim of this conceptualization is to clarify the distinction between cohort studies and case series. We discuss the potential impact of the proposed conceptualization on the body of evidence and workload.

All studies with exposure-based sampling that gather multiple exposures (i.e., at least two different exposures or levels of exposure) and enable calculation of relative risks should be considered cohort studies in systematic reviews that include non-randomized studies. The term “enables/can” means that a predefined analytic comparison (i.e., one in which the absolute risks per group and/or a risk ratio are provided) is not a prerequisite. Instead, all studies for which sufficient data are available for reanalysis to compare different exposures (e.g., sufficient data in the publication) are classified as cohort studies.

There are possibly large numbers of studies without a comparison for the exposure of interest but that do provide the necessary data to calculate effect measures for a comparison. Consequently, more studies could be included in a systematic review. Therefore, on the one hand, the outlined approach can increase the confidence in effect estimates and the strengths of conclusions. On the other hand, the workload would increase (e.g., additional data extraction and risk of bias assessment, as well as reanalyses).

Systematic reviews that include non-randomized studies often consider different observational study designs [1]. However, the distinction between different non-randomized study designs is difficult. One key design feature to classify observational study designs is to distinguish comparative from non-comparative studies [2, 3]. The lack of a comparison group is of particular importance for distinguishing cohort studies from case series because in many definitions, they share a main design feature of having a follow-up period examining the exposed individuals over time [2, 3]. The only difference between cohort studies and case series in many definitions is that cohort studies compare different groups (i.e., examine the association between exposure and outcome), while case series are uncontrolled [3–5]. Table 1 shows an example definition [3]. The problem with this definition is that vague terms, such as comparison and examination of association, might be interpreted as an analytic comparison of at least two exposures (i.e., interventions, risk factors or prognostic factors).

Table 1: Example definitions of cohort studies and case series [2]

For example, imagine a study of 20 consecutive patients with a certain disease that can be treated in two different ways. A study that divides the 20 patients into two groups according to the treatment received and compares the outcomes of these groups (e.g., provides aggregated absolute risks per group or a risk ratio) would probably be classified as a cohort study (the example used in the following sections is denoted “study 1”). A sample of this study type is illustrated in Fig. 1 and Table 2.

[Figure 1: Cohort study (vague definition)]

Table 2: Possible presentation of a study with a preexisting exposure-based comparison (cohort study not requiring a reanalysis)

In contrast, a publication that describes the interventions received and outcomes for each patient/case separately would probably be classified as a case series (the example in the following sections is denoted “study 2”). An example of this study type is illustrated in Fig. 2 and Table 3. In the medical literature, the data on exposure and outcomes are usually provided in either running text or spreadsheet formats [6–21]. A good example is the study by Wong et al. [10]. In this study, information on placental invasion (exposure) and blood loss (outcome) is provided separately for 40 pregnant women in a table. The study by Cheng et al. is an example of a study providing information in the running text (i.e., anticoagulation management [exposure] and recovery [outcome] for paediatric stroke) [6].

[Figure 2: Case series (vague definition)]

Table 3: Possible presentation of a study without a preexisting exposure-based comparison (cohort study requiring a reanalysis)

These examples illustrate that distinguishing between cohort studies and case series is difficult. Vague definitions are probably the reason for the common confusion between study designs. A recent study found that approximately 72% of cohort studies are mislabelled as case series [ 22 ]. Many systematic reviews of non-randomized studies included cohort studies but excluded case series (see examples in [ 23 – 28 ]). Therefore, the unclear distinction between case series and cohort studies can result in inconsistent study selection and unjustified exclusions from a systematic review. The risk of misclassification is particularly high because study authors also often mislabel their study or studies are not classified by their authors at all (see examples in [ 6 – 21 ]).

We propose a conceptualization of cohort studies in systematic reviews of comparative studies. The main objective of this conceptualization is to clarify the distinction between cohort studies and case series in systematic reviews, including non-randomized comparative studies. We discuss the potential impact of the proposed conceptualization on the body of evidence and workload.

Clarifying the distinction between case series and cohort studies (the solution)

In the following report, we propose a conceptualization for cohort studies and case series (e.g., sampling) for systematic reviews, including comparative non-randomized studies. Our proposal is based on a recent conceptualization of cohort studies and case series by Dekkers et al. [ 29 ]. The main feature of this conceptualization is that it is exclusively based on inherent design features and is not affected by the analysis.

Cohort studies of one exposure/one group

Dekkers et al. [29] defined cohort studies with one exposure as studies with exposure-based sampling that enable calculation of absolute effect measures for the risk of an outcome. This definition means that “the absence of a control group in an exposure-based study does not define a case series” [29]. The definition of cohort studies according to Dekkers et al. [29] is summarized in Table 4.

Table 4: Summary of the distinction proposed by Dekkers et al. [28]

Cohort studies of multiple exposures/more than one group

This idea can be easily extended to studies with more than one exposure. In this case, all studies with exposure-based sampling that gather multiple exposures (i.e., at least two different exposures, manifestations of exposures or levels of exposures) can be considered (comparative) cohort studies (Fig. 3). The sampling is based on exposure, and there are different groups. Consequently, relative risks can be calculated [29]. The term “enables/can” implies that a predefined analytic comparison is not a prerequisite; rather, all studies with sufficient data to enable a reanalysis (e.g., in the publication, study reports, or supplementary material) would be classified as cohort studies.

[Figure 3: Cohort study (deduced from Dekkers et al. [28])]

In short, all studies that enable calculation of a relative risk to quantify a difference in outcomes between different groups should be considered cohort studies.

Case series

According to Dekkers et al. [29], the sampling of a case series is either based on exposure and outcome (e.g., all patients are treated and have an adverse event), or case series include patients with a certain outcome regardless of exposure (see Fig. 4). Consequently, neither absolute risks nor relative effect measures for an outcome can be calculated in a case series. Note that sampling in a case series does not need to be consecutive. Consecutiveness would increase the quality of the case series, but a non-consecutive series is also a case series [29].

Fig. 4 Case series (Dekkers et al. [29])

In short, in a case series neither absolute risks nor risk ratios can be calculated. Consequently, a case series cannot be comparative. The definition of a case series by Dekkers et al. [29] is summarized in Table 4.

It is noteworthy that the conceptualization also ensures a clear distinction of case series from other study designs that apply outcome-based sampling. Case series, case-control studies (including case-time-control), and self-controlled case-control designs (e.g., case-crossover) all have outcome-based sampling in common [29].

Case series have no control group at all because only patients with a certain manifestation of the outcome are sampled (e.g., individuals with a disease or deceased individuals). In contrast, case-control designs as well as self-controlled case-control designs have a control group. In case-control studies, the control group consists of individuals with another manifestation of the outcome (e.g., healthy individuals or survivors). Such a design can be considered as two case series (i.e., a case group and a non-case group).

Self-controlled case-control studies are characterized by an intra-individual comparison (each individual serves as their own control) [30]; information is also sampled for periods in which patients are not exposed. Therefore, case-control designs as well as self-controlled case-control studies enable the calculation of risk ratios, which is not possible in a case series.

Illustrating example

Above, we illustrated that, under a vague definition, the classification of a study design might be influenced by the preparation and analysis of the study data. The proposed conceptualization is based exclusively on inherent design features (e.g., sampling, exposure). Reconsidering the example studies under the proposed conceptualization, all of them would be classified as cohort studies because a relative risk can be calculated. This becomes clear when comparing Table 2 and Table 3. If the patients in Table 3 are rearranged according to exposure and the data are reanalysed (i.e., calculation of the absolute risk per group and of relative risks to compare groups), Table 3 can be converted into Table 2 (and likewise, Fig. 2 can be converted to Fig. 3). In the study by Wong et al. [10], the mean blood loss in the group with placental invasion and in the group without placental invasion can be calculated and compared (e.g., relative risk with 95% confidence limits). Because the data on gestational age are also provided in the table, it is even possible to adjust the results for gestational age (e.g., using a logistic regression).
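
To illustrate what such a reanalysis could look like, the sketch below uses pandas and statsmodels on hypothetical patient-level data; the column names, the values, and the dichotomised blood-loss outcome are our own illustrative assumptions, not Wong et al.'s data. It rearranges the patients by exposure, computes the per-group risks and the crude relative risk, and then adjusts for gestational age with a logistic regression.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data as it might appear in a case-series table:
# binary exposure (placental invasion), binary outcome (major blood loss),
# and a potential confounder (gestational age in weeks).
df = pd.DataFrame({
    "placental_invasion": [1, 1, 1, 0, 0, 0, 1, 0, 1, 0],
    "major_blood_loss":   [1, 1, 0, 0, 1, 0, 1, 0, 0, 0],
    "gestational_age":    [36, 34, 37, 38, 35, 39, 33, 40, 36, 38],
})

# Step 1: rearrange the patients by exposure and compute the absolute risk per group
risks = df.groupby("placental_invasion")["major_blood_loss"].mean()
crude_rr = risks.loc[1] / risks.loc[0]
print(f"Risk (exposed) = {risks.loc[1]:.2f}, risk (unexposed) = {risks.loc[0]:.2f}, "
      f"crude RR = {crude_rr:.2f}")

# Step 2: adjust for gestational age with a logistic regression
model = smf.logit("major_blood_loss ~ placental_invasion + gestational_age", data=df)
result = model.fit(disp=False)  # disp=False suppresses optimizer output
print(result.params)            # adjusted log-odds estimates

Judging whether such a reanalysis is feasible for a given publication, and whether adjustment for confounders is possible, requires the kind of biostatistical expertise discussed below.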

Discussion (the impact)

Influence on the body of evidence

The proposed conceptualization is based exclusively on inherent study design features; therefore, there is less room for misinterpretation than with existing conceptualizations, because the classification does not depend on the analysis, the presentation of the data, or the labelling of the study. Thus, the conceptualization ensures consistent study selection for systematic reviews.

Requiring an analytical comparison in the publication can lead to the unjustified exclusion of relevant studies from a systematic review. Study 1 would likely be included, and Study 2 would be excluded. The only differences between Study 1 and Study 2 are the preparation and analysis of the data. If the data source (e.g., chart review) and the reanalysis (calculation of effect measures and statistical tests) comparing the intervention and control groups in Study 2 follow exactly the same approach as the existing analysis in Study 1, there can be no difference in the effect estimates between the studies, and both studies are at the same risk of bias. Thus, including Study 1 while excluding Study 2 contradicts the requirement that systematic reviews identify all available evidence [31].

Considering that more studies would be eligible for inclusion and that the hierarchical paradigm of the levels of evidence is not valid per se, the proposed conceptualization can potentially enrich bodies of evidence and increase confidence in effect estimates.

Influence on workload

The additional inclusion of all studies that enable the calculation of a relative risk for the comparison of interest might affect the workload of systematic reviews. There might be a considerable number of studies that do not report a comparison but provide sufficient data for reanalysis. Usually, the electronic search strategy for systematic reviews of non-randomized studies is not limited to certain study types because no sensitive search filters are available yet [32]. Therefore, the search results will usually already contain the cohort studies discussed above. However, for many abstracts it would not be directly clear whether sufficient data for recalculation are reported in the full-text article (e.g., a table like Table 3). Consequently, many additional, potentially relevant full-text articles would have to be screened. Additionally, studies often assess various exposures (e.g., different baseline characteristics), and it might thus be difficult to identify the relevant exposures. Considering the large number of wrongly labelled studies, this approach can lead to additional screening effort [22].

As a result, more studies would be included in systematic reviews. All articles that provide potentially relevant data would have to be assessed in detail to decide whether a reanalysis is feasible. For these studies, data extraction and a risk of bias assessment would have to be performed. Challenges in the risk of bias assessment would arise because most assessment tools are constructed to assess a predefined control group [33]; for example, items regarding the adequacy of the analysis (e.g., adjustment for confounders) can no longer be assessed. Effect measures would have to be calculated (e.g., risks by group and the relative risk with a 95% confidence limit), and further analyses (e.g., adjustment for confounders) might be necessary for studies that provide sufficient data. Moreover, advanced biostatistical expertise would be necessary to judge the feasibility of a reanalysis (i.e., to determine whether relative risks can be calculated and whether there are sufficient data to adjust for confounders) and to conduct the reanalysis.

Promising areas of application

In the medical literature, mislabelled cohort studies are probably more often retrospective (i.e., the comparison is planned after data collection) and based on routinely collected data (e.g., chart reviews, radiology databases) than prospectively planned (i.e., the comparison is planned before data collection). Thus, it can be assumed that wrongly labelled studies tend to have lower methodological quality than studies that already include a comparison. This aspect should be considered when deciding whether to include studies that must be reanalysed. In research areas in which randomized controlled trials or large, prospectively planned, and well-conducted cohort studies can be expected (e.g., risk factors for widespread diseases), the approach is less promising for enriching the body of evidence. Consequently, in these areas, the additional effort might not be worthwhile.

In contrast, the conceptualization is particularly promising in research areas in which evidence is sparse because studies are difficult to conduct, populations are small, or event rates are low. These areas include rare diseases, adverse events/complications, sensitive groups (e.g., children or individuals with cognitive deficiencies), and rarely used interventions (e.g., costly innovations). In these areas, there might be no well-conducted studies at all [34, 35]. Therefore, the conceptualization proposed in this report has great potential to increase confidence in effect estimates.

We proposed a conceptualization of cohort studies with multiple exposures that ensures a clear distinction from case series. In this conceptualization, all studies that contain sufficient data to conduct a reanalysis, and not only studies with a pre-existing analytic comparison, are classified as cohort studies and are considered eligible for inclusion in systematic reviews. To the best of our knowledge, no systematic reviews exist that reanalyse (mislabelled) case series to create cohort studies. The outlined approach can potentially enrich the body of evidence and thereby enhance confidence in effect estimates and the strength of conclusions. However, this enrichment of the body of evidence should be balanced against the additional workload.

Acknowledgements

There was no external funding for the research or publication of this article.

Availability of data and materials

Authors’ contributions

All authors have made substantial contributions to the work. Both authors read and approved the final manuscript.

Ethics approval and consent to participate

Not applicable. No human data involved.

Consent for publication

Not applicable. The manuscript contains no individual person’s data.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Tim Mathes, Phone: +0049 221 98957-43, Email: [email protected].

Dawid Pieper, Phone: +0049 221 98957-40, Email: [email protected].
