Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, in a systematic review of probiotics for treating eczema, Boyle and colleagues answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.

Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)
You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)

In the eczema example, the PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
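
To see how the PICO(T) pieces slot into the question template above, here is a minimal Python sketch. The dataclass, its field names, and the way the question string is assembled are illustrative assumptions for this guide, not part of any standard systematic review tooling; the example values come from the eczema review described in this article.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PicotQuestion:
        population: str          # P: population(s) or problem(s)
        intervention: str        # I: intervention(s)
        comparison: str          # C: comparison(s)
        outcome: str             # O: outcome(s)
        study_design: Optional[str] = None  # T: type of study design(s), optional

        def as_text(self) -> str:
            # Template: "What is the effectiveness of I versus C for O in P?"
            text = (f"What is the effectiveness of {self.intervention} "
                    f"versus {self.comparison} for {self.outcome} "
                    f"in {self.population}?")
            if self.study_design:
                text += f" (restricted to {self.study_design})"
            return text

    # The eczema example from this article:
    question = PicotQuestion(
        population="patients with eczema",
        intervention="probiotics",
        comparison="no treatment, a placebo, or a non-probiotic treatment",
        outcome="reducing eczema symptoms and improving quality of life",
        study_design="randomized controlled trials",
    )
    print(question.as_text())

Calling as_text() on this object reproduces the example research question, with the study-design restriction appended when one is given, which is a quick check that no PICO component has been left out.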

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective (s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (a sketch of such a query appears after this list).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
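
To make the database-search step above concrete, the sketch below runs a Boolean query against PubMed through Biopython's Entrez module. It assumes the Biopython package is installed and that you register a contact email as NCBI requests; the query terms are only a rough illustration for the probiotics-and-eczema example, not a validated search strategy.

    from Bio import Entrez  # Biopython; assumed to be installed

    Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

    # Boolean query combining synonyms for the intervention and the condition
    query = (
        '(probiotic* OR lactobacillus OR bifidobacterium) '
        'AND (eczema OR "atopic dermatitis")'
    )

    handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
    record = Entrez.read(handle)
    handle.close()

    print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")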

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the eczema example, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearching: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.
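
One common way to quantify inter-rater reliability at this stage is Cohen's kappa. The sketch below uses scikit-learn's implementation; the ten include/exclude decisions are invented for illustration.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical screening decisions for ten abstracts (1 = include, 0 = exclude)
    reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    reviewer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]

    kappa = cohen_kappa_score(reviewer_a, reviewer_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level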

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .

In the example, after screening titles and abstracts, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang then read through the articles to decide whether any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

Frequently asked questions about systematic reviews

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.


Study Protocol Article

A Protocol for the Use of Case Reports/Studies and Case Series in Systematic Reviews for Clinical Toxicology

  • 1 Univ Angers, CHU Angers, Univ Rennes, INSERM, EHESP, Institut de Recherche en Santé, Environnement et Travail-UMR_S 1085, Angers, France
  • 2 Department of Occupational Medicine, Epidemiology and Prevention, Donald and Barbara Zucker School of Medicine, Northwell Health, Feinstein Institutes for Medical Research, Hofstra University, Great Neck, NY, United States
  • 3 Department of Health Sciences, University of California, San Francisco and California State University, Hayward, CA, United States
  • 4 Program on Reproductive Health and the Environment, University of California, San Francisco, San Francisco, CA, United States
  • 5 Cesare Maltoni Cancer Research Center, Ramazzini Institute, Bologna, Italy
  • 6 Department of Research and Public Health, Reims Teaching Hospitals, Robert Debré Hospital, Reims, France
  • 7 CHU Angers, Univ Angers, Poisoning Control Center, Clinical Data Center, Angers, France

Introduction: Systematic reviews are routinely used to synthesize current science and evaluate the evidential strength and quality of resulting recommendations. For specific events, such as rare acute poisonings or preliminary reports of new drugs, we posit that case reports/studies and case series (human subjects research with no control group) may provide important evidence for systematic reviews. Our aim, therefore, is to present a protocol that uses rigorous selection criteria, to distinguish high quality case reports/studies and case series for inclusion in systematic reviews.

Methods: This protocol will adapt the existing Navigation Guide methodology for specific inclusion of case studies. The usual procedure for systematic reviews will be followed. Case reports/studies and case series will be specified in the search strategy and included in separate sections. Data from these sources will be extracted and where possible, quantitatively synthesized. Criteria for integrating cases reports/studies and case series into the overall body of evidence are that these studies will need to be well-documented, scientifically rigorous, and follow ethical practices. The instructions and standards for evaluating risk of bias will be based on the Navigation Guide. The risk of bias, quality of evidence and the strength of recommendations will be assessed by two independent review teams that are blinded to each other.

Conclusion: This is a protocol specified for systematic reviews that use case reports/studies and case series to evaluate the quality of evidence and strength of recommendations in disciplines like clinical toxicology, where case reports/studies are the norm.

Introduction

Systematic reviews are routinely relied upon to qualitatively synthesize current knowledge in a subject area. These reviews are often paired with a meta-analysis for quantitative syntheses. These qualitative and quantitative summaries of pooled data, collectively evaluate the quality of the evidence and the strength of the resulting research recommendations.

There currently exist several guidance documents to instruct on the rigors of systematic review methodology: (i) the Cochrane Collaboration, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and PRISMA-P (for protocols) that offer directives on data synthesis; and (ii) the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) guidelines that establish rules for the development of scientific recommendations ( 1 – 5 ). This systematic review guidance is based predominantly on clinical studies, where randomized controlled trials (RCTs) are the gold standard. For that reason, a separate group of researchers has designed the Navigation Guide, specific to environmental health studies that are often observational ( 6 , 7 ). To date, systematic review guidelines (GRADE, PRISMA, PRISMA-P, and Navigation Guide) remove case reports/studies and case series (human subjects research with no control group) from consideration in systematic reviews, in part due to the challenges in evaluating the internal validity of these kinds of study designs. We hypothesize, however, that under certain circumstances, such as in rare acute poisonings, or preliminary reports of new drugs, some case reports and case series may contribute relevant knowledge that would be informative to systematic review recommendations. This is particularly important in clinical settings, where such evidence could potentially change our understanding of the screening, presentation, and potential treatment of rare conditions, such as poisoning from obscure toxins. The Cochrane Collaboration handbook states that “ for some rare or delayed adverse outcomes only case series or case-control studies may be available. Non-randomized studies of interventions with some study design features that are more susceptible to bias may be acceptable for evaluation of serious adverse events in the absence of better evidence, but the risk of bias must still be assessed and reported ” ( 8 ). In addition, the Cochrane Adverse Effects group has shown that case studies may be the best settings in which to observe adverse effects, especially when they are rare and acute ( 9 ). We believe that there may be an effective way to consider case reports/studies and case series in systematic reviews, specifically by developing specific criteria for their inclusion and accounting for their inherent bias.

We propose here a systematic review protocol that has been specifically developed to consider the inclusion and integration of case reports/studies and case series. Our main objective is to create a protocol that is an adaptation of the Navigation Guide ( 6 , 10 ) that presents methodology to examine high quality case reports/studies and case series through cogent inclusion and exclusion criteria. This methodology is in concordance with the Cochrane Methods for Adverse Effects for scoping reviews ( 11 ).

This protocol was prepared in accordance with the usual structured methodology for systematic reviews (PRISMA, PRISMA-P, and Navigation guide) ( 3 – 7 , 10 ). The protocol will be registered on an appropriate website, such as one of the following:

(i) The International Prospective Register of Systematic Reviews (PROSPERO) database ( https://www.crd.york.ac.uk/PROSPERO/ ) is an international database of prospectively registered systematic reviews in health and social welfare, public health, education, crime, justice, and international development, where there is a health-related outcome. It aims to provide a comprehensive listing of systematic reviews registered at inception to help avoid duplication and reduce opportunity for reporting bias by enabling comparison of the completed review with what was planned in the protocol. PROSPERO accepts registrations for systematic reviews, rapid reviews, and umbrella reviews. Key elements of the review protocol are permanently recorded and stored.

(ii) The Open Science Framework (OSF) platform ( https://osf.io/ ) is a free, open, and integrated platform that facilitates open collaboration in research science. It allows for the management and sharing of research projects at all stages of research for broad dissemination. It also enables capture of different aspects and products of the research lifecycle, from the development of a research idea, through the design of a study, the storage and analysis of collected data, to the writing and publication of reports or research articles.

(iii) The Research Registry (RR) database ( https://www.researchregistry.com/ ) is a one-stop repository for the registration of all types of research studies, from “first-in-man” case reports/studies to observational/interventional studies to systematic reviews and meta-analyses. The goal is to ensure that every study involving human participants is registered in accordance with the 2013 Declaration of Helsinki. The RR enables prospective or retrospective registrations of studies, including those types of studies that cannot be registered in existing registries. It specifically publishes systematic reviews and meta-analyses and does not register case reports/studies that are not first-in-man or animal studies.

Any significant future changes to the protocol resulting from knowledge gained during the development stages of this project will be documented in detail and a rationale for all changes will be proposed and reported in PROSPERO, OSF, or RR.

The overall protocol will differentiate itself from other known methodologies, by defining two independent teams of reviewers: a classical team and a case team. The classical team will review studies with control groups and an acceptable comparison group (case reports/studies and case series will be excluded). In effect, this team will conduct a more traditional systematic review where evidence from case reports/studies and case series are not considered. The case team will review classical studies, case reports, and case series. This case team will act as a comparison group to identify differences in systematic review conclusions due to the inclusion of evidence from case reports/studies and case series. Both teams will identify studies that meet specified inclusion criteria, conduct separate analyses and risk of bias evaluations, along with overall quality assessments, and syntheses of strengths of evidence. Each team will be blinded to the results of the other team throughout the process. Upon completion of the systematic review, results from each team will be presented, evaluated, and compared.

Patient and Public Involvement

No patients were involved.

Eligibility Criteria

Studies will be selected according to the criteria outlined below.

Study Designs

Studies of any design reported in any language translatable to English by online programs (e.g., Google Translate) will be included at the outset. These studies will span interventional studies with control groups (Randomized Controlled Trials: RCTs), as well as observational studies with and without exposed groups. All observational studies will be eligible for inclusion in accordance with the objectives of this systematic review. Thereafter, only the case team will include case reports/studies and case series, as specified in their search strategy. The case team will include a separate section for human subjects research that has been conducted with no control groups.

Type of Population

All types of studies examining the general adult human population or healthy adult humans will be included. Studies that involve both adults and children will also be included if data for adults are reported separately. Animal studies will be excluded for the methodological purpose of this (case reports/studies and case series) protocol given that the framework for systematic reviews in toxicology already adequately retrieves this type of toxin data on animals.

Inclusion/Exclusion Criteria

Studies of any design will be included if they fulfill all the eligibility criteria. To be integrated into the overall body of evidence, case reports/studies and case series must meet pre-defined criteria indicating that they are well-documented, scientifically rigorous, and follow ethical practices, under the CARE guidelines (for CAse REports) ( 12 , 13 ) and the Joanna Briggs Institute (JBI) Critical Appraisal Checklists for Case Reports/Studies and for Case Series ( 14 , 15 ), which classify case reports/studies in terms of completeness, transparency, and data analysis. Studies that were conducted using unethical practices will be excluded.

Type of Exposure/Intervention

Either the prescribed treatment or described exposure to a chemical substance (toxin/toxicant) will be detailed here.

Type of Comparators

In this protocol we plan to compare two review methodologies: one will include and the other will exclude high quality case reports/studies and case series. The comparator will be (the presence or absence of) an available control group that has been specified and is acceptable scientifically and ethically.

Type of Outcomes

The outcome of mortality or morbidity related to the toxicological exposure will be detailed here.

Information Sources and Search Strategy

There will be no design, date or language limitations applied to the search strategy. A systematic search in electronic academic databases, electronic grey literature, organizational websites, and internet search engines will be performed. We will search at least the following major databases:

- Electronic academic databases : PubMed, Web of Science, Toxline, Poisondex, and databases specific to case reports/studies and case series (e.g., PMC, Scopus, Medline) ( 13 )

- Electronic grey literature databases : OpenGrey ( http://www.opengrey.eu/ ), grey literature Report ( http://greylit.org/ )

- Organizational websites : AHRQ Patient Safety Network ( https://psnet.ahrq.gov/webmm ), World Health Organization ( www.who.int )

- Internet search engines : Google ( https://www.google.com/ ), GoogleScholar ( https://scholar.google.com/ ).

Study Records

Following a systematic search of all the databases above, each of the two independent review teams (the classical team and the case team) will separately upload their literature search results, in accordance with the eligibility criteria, to the systematic review management software Covidence, a primary screening and data extraction tool ( 16 ).

All study records identified during the search will be downloaded and duplicate records will be identified and deleted. Thereafter, two research team members will independently screen the titles and abstracts (step 1) and then the full texts (step 2) of potentially relevant studies for inclusion. If necessary, information will be requested from the publication authors to resolve questions about eligibility. Finally, any disagreements that may potentially exist between the two research team members will be resolved first by discussion and then by consulting a third research team member for arbitration.
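
As a rough sketch of the duplicate-removal step (which the protocol itself delegates to Covidence), records exported from several databases could be collapsed on a normalized DOI or title. The record format and field names below are assumptions for illustration only.

    import re

    def dedup_key(record: dict) -> str:
        """Prefer the DOI; otherwise fall back to a normalized title."""
        doi = (record.get("doi") or "").strip().lower()
        if doi:
            return doi
        title = (record.get("title") or "").lower()
        return re.sub(r"[^a-z0-9]+", " ", title).strip()

    def deduplicate(records: list[dict]) -> list[dict]:
        seen: set[str] = set()
        unique = []
        for rec in records:
            key = dedup_key(rec)
            if key not in seen:
                seen.add(key)
                unique.append(rec)
        return unique

    # Hypothetical search results from two databases
    records = [
        {"title": "Severe poisoning with agent X: a case report", "doi": "10.1000/abc"},
        {"title": "Severe Poisoning with Agent X: A Case Report.", "doi": "10.1000/abc"},
        {"title": "A case series of agent X exposures", "doi": ""},
    ]
    print(len(deduplicate(records)), "unique records")  # -> 2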

If a study record identified during the search was authored by a reviewing research team member, or that team member participated in the identified study, that study record will be re-assigned to another reviewing team member.

Data Collection Process, Items Included, and Prioritization if Needed

All reviewing team members will use standardized forms or software (e.g., Covidence), and each review member will independently extract the data from included studies. If possible, the extracted data will be synthesized numerically. To ensure consistency across reviewers, calibration exercises (reviewer training) will be conducted prior to starting the reviews. Extracted information will include the minimum study characteristics (study authors, study year, study country, participants, intervention/exposure, outcome), study design (summary of study design, comparator, models used, and effect estimate measure) and study context (e.g., data on simultaneous exposure to other risk factors that would be relevant contributors to morbidity or mortality). As specified in the section on study records, a third review team member will resolve any conflicts that arise during data extraction that are not resolved by consensus between the two initial data extractors.

Data on potential conflict of interest for included studies, as well as financial disclosures and funding sources, will also be extracted. If no financial statement or conflict of interest declaration is available, the names of the authors will be searched in other studies published within the previous 36 months and in other publicly available declarations of interests, for funding information ( 17 , 18 ).
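
As an illustration of what a standardized extraction form might look like when captured in code, the sketch below mirrors the study characteristics listed above. The field names are assumptions made for this example; they are not the Covidence or Navigation Guide templates.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ExtractionRecord:
        # Minimum study characteristics
        authors: str
        year: int
        country: str
        participants: str
        exposure_or_intervention: str
        outcome: str
        # Study design
        design_summary: str
        comparator: Optional[str]          # None for case reports/series
        effect_estimate: Optional[str]     # e.g., "OR 2.1 (95% CI 1.3-3.4)"
        # Study context and funding
        co_exposures: list[str] = field(default_factory=list)
        funding_and_conflicts: str = "not reported"

    # Hypothetical entry for a case series
    example = ExtractionRecord(
        authors="Doe et al.",
        year=2019,
        country="France",
        participants="7 adults with acute exposure",
        exposure_or_intervention="toxin X ingestion",
        outcome="morbidity (hospital admission)",
        design_summary="retrospective case series",
        comparator=None,
        effect_estimate=None,
    )
    print(example.authors, example.year)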

Risk of Bias Assessment

To assess the risk of bias within included studies, the internal validity of potential studies will be assessed by using the Navigation Guide tool ( 6 , 19 ), which covers nine domains of bias for human studies: (a) source population representation; (b) blinding; (c) exposure or intervention assessment; (d) outcome assessment; (e) confounding; (f) incomplete outcome data; (g) selective outcome reporting; (h) conflict of interest; and (i) other sources of bias. For each section of the tool, the procedures undertaken for each study will be described and the risk of bias will be rated as “low risk,” “probably low risk,” “probably high risk,” “high risk,” or “not applicable.” Risk of bias on the levels of the individual study and the entire body of evidence will be assessed. Most of the text from these instructions and criteria for judging risk of bias has been adopted verbatim or adapted from one of the latest Navigation Guide systematic reviews used by WHO/ILO ( 6 , 19 , 20 ).

For case reports/studies and case series, the text from these instructions and criteria for judging risk of bias has been adopted verbatim or adapted from one of the latest Navigation Guide systematic reviews ( 21 ), and is given in Supplementary Material . Specific criteria are listed below. To ensure consistency across reviewers, calibration exercises (reviewer training) will be conducted prior to starting the risk of bias assessments for case reports/studies and case series.
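
A small sketch of how the nine bias domains and the rating scale described above could be recorded for each study is given below; the domain labels follow the list above, but the dictionary layout and the example ratings are illustrative assumptions, not output of the actual Navigation Guide tool.

    DOMAINS = [
        "source population representation", "blinding",
        "exposure or intervention assessment", "outcome assessment",
        "confounding", "incomplete outcome data",
        "selective outcome reporting", "conflict of interest", "other",
    ]
    RATINGS = {"low risk", "probably low risk", "probably high risk",
               "high risk", "not applicable"}

    def check_assessment(assessment: dict) -> None:
        """Verify that every domain received exactly one allowed rating."""
        for domain in DOMAINS:
            rating = assessment.get(domain)
            if rating not in RATINGS:
                raise ValueError(f"Missing or invalid rating for: {domain}")

    # Hypothetical assessment of one case report
    example = {d: "probably low risk" for d in DOMAINS}
    example["confounding"] = "probably high risk"
    check_assessment(example)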

Are the Study Groups at Risk of Not Representing Their Source Populations in a Manner That Might Introduce Selection Bias?

The source population is viewed as the population for which study investigators are targeting their study question of interest.

Examples of considerations for this risk of bias domain include: (1) the context of the case report; (2) level of detail reported for participant inclusion/exclusion (including details from previously published papers referenced in the article), with inclusion of all relevant consecutive patients in the considered period; ( 14 , 15 ) (3) exclusion rates, attrition rates and reasons.

Were Exposure/Intervention (Toxic, Treatment) Assessment Methods Lacking Accuracy?

The following list of considerations represents a collection of factors proposed by experts in various fields that may potentially influence the internal validity of the exposure assessment in a systematic manner (not those that may randomly affect overall study results). These should be interpreted only as suggested considerations and should not be viewed as scoring or a checklist. Considering that there are no controls in such designs, this should be evaluated carefully to be sure the report really contributes to actual knowledge.

List of Considerations :

Possible sources of exposure assessment metrics:

1) Identification of the exposure

2) Dose evaluation

3) Toxicological values

4) Clinical effects *

5) Biological effects *

6) Treatments given (dose, timing, route)

* Some clinical and biological effects might be related to exposure

For each, overall considerations include:

1) What is the quality of the source of the metric being used?

2) Is the exposure measured in the study a surrogate for the exposure?

3) What was the temporal coverage (i.e., short or long-term exposure)?

4) Did the analysis account for prediction uncertainty?

5) How was missing data accounted for, and any data imputations incorporated?

6) Were sensitivity analyses performed?

Were Outcome Assessment Methods Lacking Accuracy?

This item is similar to actual Navigation guidelines that require an assessment of the accuracy of the measured outcome.

Was Potential Confounding Inadequately Incorporated?

This is a very important issue for case reports/studies and case series. Case reports/studies and case series do not include controls and so, to be considered in a systematic review, these types of studies will need to be well-documented with respect to treatment or other contextual factors that may explain or influence the outcome. Prior to initiating the study screening, review team members should collectively generate a list of potential confounders that are based on expert opinion and knowledge gathered from the scientific literature:

Tier I: Important confounders

• Other associated treatment (i.e., intoxication, insufficient dose, history, or context)

• Medical history

Tier II: Other potentially important confounders and effect modifiers:

• Age, sex, country.

Were Incomplete Outcome Data Inadequately Addressed?

This item is similar to actual Navigation Guide instructions, though it may be very unlikely that outcome data would be incomplete in published case reports/studies and case series.

Does the Study Report Appear to Have Selective Outcome Reporting?

This item is similar to actual Navigation Guide instructions, though it may be very unlikely that there would be selective outcome reporting in published case reports/studies and case series.

Did the Study Receive Any Support From a Company, Study Author, or Other Entity Having a Financial Interest?

This item is similar to actual Navigation Guide instructions.

Did the Study Appear to Have Other Problems That Could Put It at a Risk of Bias?

Data Synthesis Criteria and Summary Measures, if Feasible

Meta-analyses will be conducted using a random-effects model if studies are sufficiently homogeneous in terms of design and comparator. For dichotomous outcomes, effects of associations will be determined by using risk ratios (RR) or odds ratios (OR) with 95% confidence intervals (CI). Continuous outcomes will be analyzed using weighted mean differences (with 95% CI) or standardized mean differences (with 95% CI) if different measurement scales are used. Skewed data and non-quantitative data will be presented descriptively. Where data are missing, a request will be made to the original authors of the study to obtain the relevant missing data. If these data cannot be obtained, an imputation method will be performed. The statistical heterogeneity of the studies will be assessed using the chi-squared test (significance level: 0.1) and the I² statistic (0–40%: might not be important; 30–60%: may represent moderate heterogeneity; 50–90%: may represent substantial heterogeneity; 75–100%: considerable heterogeneity). If there is heterogeneity, an attempt will be made to explain its source through a subgroup or sensitivity analysis.

Finally, the meta-analysis will be conducted in the latest version of the statistical software RevMan. The Mantel-Haenszel method will be used for the fixed-effects model if tests of heterogeneity are not significant. If statistical heterogeneity is observed (I² ≥ 50% or p < 0.1), the random-effects model will be chosen. If quantitative synthesis is not feasible (e.g., if the studies are too heterogeneous to pool), a meta-analysis will not be performed and a narrative, qualitative summary of the study findings will be provided.
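
As a worked illustration of the quantities described above (pooled risk ratios with 95% CIs, the chi-squared heterogeneity test, and the I² statistic), here is a minimal NumPy/SciPy sketch using inverse-variance pooling and a DerSimonian-Laird random-effects estimate. The 2x2 counts are invented, and this is not the RevMan/Mantel-Haenszel workflow the protocol specifies; it is only meant to show how the reported statistics fit together.

    import numpy as np
    from scipy import stats

    # Hypothetical 2x2 data: events/total in exposed and comparison groups per study
    events_exp  = np.array([10, 15,  8])
    n_exp       = np.array([50, 60, 40])
    events_ctrl = np.array([ 5,  9,  7])
    n_ctrl      = np.array([50, 55, 42])

    # Log risk ratios and their variances
    log_rr = np.log((events_exp / n_exp) / (events_ctrl / n_ctrl))
    var    = 1/events_exp - 1/n_exp + 1/events_ctrl - 1/n_ctrl

    # Fixed-effect (inverse variance) pooling
    w       = 1 / var
    theta_f = np.sum(w * log_rr) / np.sum(w)

    # Heterogeneity: Cochran's Q (chi-squared test) and I-squared
    q  = np.sum(w * (log_rr - theta_f) ** 2)
    df = len(log_rr) - 1
    p_het = stats.chi2.sf(q, df)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird random-effects estimate
    tau2    = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re    = 1 / (var + tau2)
    theta_r = np.sum(w_re * log_rr) / np.sum(w_re)
    se_r    = 1 / np.sqrt(np.sum(w_re))
    ci      = np.exp([theta_r - 1.96 * se_r, theta_r + 1.96 * se_r])

    print(f"I^2 = {i2:.0f}%, Q p-value = {p_het:.2f}")
    print(f"Pooled RR (random effects) = {np.exp(theta_r):.2f} "
          f"(95% CI {ci[0]:.2f} to {ci[1]:.2f})")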

Separate analyses will be conducted for the studies that contain control groups using expected mortality/morbidity, in order to include them in the quantitative synthesis of case reports/studies and case series.

If quantitative synthesis is not appropriate, a systematic narrative synthesis will be provided with information presented in the text and tables to summarize and explain the characteristics and findings of the included studies. The narrative synthesis will explore the relationship and findings both within and between the included studies.

Possible Additional Analyses

If feasible, subgroup analyses will be used to explore possible sources of heterogeneity, if there is evidence for differences in effect estimates by country, study design, or patient characteristics (e.g., sex and age). In addition, sensitivity analyses will be performed to explore sources of heterogeneity, for example, published vs. unpublished data, full-text publications vs. abstracts, and risk of bias (by omitting studies that are judged to be at high risk of bias).

Overall Quality of Evidence Assessment

The quality of evidence will be assessed using an adapted version of the Evidence Quality Assessment Tool in the Navigation Guide. This tool is based on the GRADE approach ( 1 ). The assessment will be conducted by the two teams, again blinded to each other: one with the results of the case report/study and case series synthesis, and the other without.

Data synthesis will be conducted independently by the classical and case teams. Evidence ratings will start at “high” for randomized controlled studies, “moderate” for observational studies, and “low” for case reports/studies and case series. It is important to be clear that sufficient levels of evidence cannot be achieved without study comparators. With regard to case reports/studies and case series, we classify these as starting at the lowest point of evidence, and therefore we cannot consider evidence higher than “low” for these kinds of studies. Complete instructions for making quality of evidence judgments are presented in the Supplementary Material.

Synthesis of Strength of Evidence

The standard Navigation Guide methodology will be applied to rate the strength of recommendations. The classical and case teams, blinded to each other's results during the process, will independently assess the strength of evidence. The evidence quality ratings will be translated into strength of evidence for each population based on a combination of four criteria: (a) Quality of body of evidence; (b) Direction of effect; (c) Confidence in effect; and (d) Other compelling attributes of the data that may influence certainty. The ratings for strength of evidence will be “sufficient evidence of harmfulness,” “limited evidence of harmfulness,” “inadequate evidence of harmfulness,” and “evidence of lack of harmfulness.”

Once we complete the synthesis of case reports/studies and case series, findings of this separate evidence stream will only be considered if RCTs and observational studies are not available. They will not be used to upgrade or downgrade the strength of other evidence streams.

To the best of our knowledge, this protocol is one of the first to specifically address the incorporation of case reports/studies and case series in a systematic review ( 9 ). The protocol was adapted from the Navigation Guide with the intent of integrating the case reports/studies and case series in systematic review recommendations, while following traditional systematic review methodology to the greatest extent possible. To be included, these case reports/studies and case series will need to be well-documented, scientifically rigorous, and follow ethical practices. In addition, we believe that some case reports/studies and case series might bring relevant knowledge that should be considered in systematic review recommendations when data from RCTs and observational studies are not available, especially when even a small number of studies report an important and possibly causal association in an epidemic or a side effect of a newly marketed medicine. Our methodology will be the first to effectively incorporate case reports/studies and case series in systematic reviews that synthesize evidence for clinicians, researchers, and drug developers. These types of studies will be incorporated mostly through paper selection and risk of bias assessments. In addition, we will conduct meta-analyses if the eligible studies provide sufficient data.

This protocol has limitations related primarily to the constraints of case reports/studies and case series. These are descriptive studies. In addition, a case series is subject to selection bias because the clinician or researcher selects the cases themselves and may represent outliers in clinical practice. Furthermore, this kind of study does not have a control group, so it is not possible to compare what happens to other people who do not have the disease or receive treatment. These sources of bias mean that reported results may not be generalizable to a larger patient population and therefore cannot generate information on incidences or prevalence rates and ratios ( 22 , 23 ). However, it is important to note that promoting the need to synthesize these types of studies (case reports/studies and case series) in a formal systematic review, should not deter or delay immediate action from being taken when a few small studies report a plausible causal association between exposure and disease, such as, in the event of an epidemic or a side effect of a newly marketed medicine ( 23 ). In this study protocol, we will not consider animal studies that might give relevant toxicological information because we are focusing on study areas where a paucity of information exists. Finally, we must note that, case reports/studies and case series do not provide independent proof, and therefore, the findings of this separate evidence stream (case reports/studies and case series) will only be considered if evidence from RCTs and observational studies is not available. Case reports/studies and case series will not be used to upgrade or downgrade the strength of other evidence streams. In any case, it is very important to remember that these kinds of studies (case reports/studies and case series) are there to quickly alert agencies of the need to take immediate action to prevent further harm.

Despite these limitations, case reports/studies and case series are a first line of evidence because they are where new issues and ideas emerge (hypothesis-generating) and can contribute to a change in clinical practice ( 23 – 25 ). We therefore believe that data from case reports/studies and case series, when synthesized and presented with completeness and transparency, may provide important details that are relevant to systematic review recommendations.

Author Contributions

AD and GS designed the protocol study. JL, TW, and DM reviewed it. MF, ALG, RV, NC, CB, GLR, MD, ML, and AN made significant improvements. AN and AD wrote the manuscript. GS improved the language. All authors reviewed and commented on the final manuscript and approved it for publication.

Funding

This project was supported by the French Pays de la Loire region and Angers Loire Métropole, University of Angers and Centre Hospitalo-Universitaire CHU Angers. The project is entitled TEC-TOP (no award/grant number).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2021.708380/full#supplementary-material

1. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. (2008) 336:924–6. doi: 10.1136/bmj.39489.470347.AD

2. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020) . Cochrane (2020). Available online at: www.training.cochrane.org/handbook

3. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. (2009) 62:e1–34. doi: 10.1016/j.jclinepi.2009.06.006

4. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. (2015) 4:1. doi: 10.1186/2046-4053-4-1

5. Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 : elaboration and explanation. BMJ . (2015) 350:g7647. doi: 10.1136/bmj.g7647

6. Woodruff TJ, Sutton P, Navigation Guide Work Group. An evidence-based medicine methodology to bridge the gap between clinical and environmental health sciences. Health Aff (Millwood). (2011) 30:931–7. doi: 10.1377/hlthaff.2010.1219

7. Woodruff TJ, Sutton P. The Navigation Guide systematic review methodology: a rigorous and transparent method for translating environmental health science into better health outcomes. Environ Health Perspect. (2014) 122:1007–14. doi: 10.1289/ehp.1307175

8. Reeves BC, Deeks JJ, Higgins JPT, Shea B, Tugwell P, Wells GA. Chapter 24: Including non-randomized studies on intervention effects. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020). Cochrane (2020). Available online at: www.training.cochrane.org/handbook

9. Loke YK, Price D, Herxheimer A, the Cochrane Adverse Effects Methods Group. Systematic reviews of adverse effects: framework for a structured approach. BMC Med Res Methodol. (2007) 7:32. doi: 10.1186/1471-2288-7-32

10. Lam J, Koustas E, Sutton P, Johnson PI, Atchley DS, Sen S, et al. The Navigation Guide - evidence-based medicine meets environmental health: integration of animal and human evidence for PFOA effects on fetal growth. Environ Health Perspect. (2014) 122:1040–51. doi: 10.1289/ehp.1307923

11. Peryer G, Golder S, Junqueira DR, Vohra S, Loke YK. Chapter 19: Adverse effects. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020) . Cochrane (2020). Available online at: www.training.cochrane.org/handbook

12. Gagnier JJ, Kienle G, Altman DG, Moher D, Sox H, Riley D, et al. The CARE guidelines: consensus-based clinical case reporting guideline development. J Med Case Rep. (2013) 7:223. doi: 10.1186/1752-1947-7-223

13. Riley DS, Barber MS, Kienle GS, Aronson JK, von Schoen-Angerer T, Tugwell P, et al. CARE guidelines for case reports: explanation and elaboration document. J Clin Epidemiol. (2017) 89:218–35. doi: 10.1016/j.jclinepi.2017.04.026

14. Moola S, Munn Z, Tufanaru C, Aromataris E, Sears K, Sfetcu R, et al. Chapter 7: Systematic reviews of etiology and risk. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI (2020). doi: 10.46658/JBIMES-20-08. Available online at: https://synthesismanual.jbi.global

15. Munn Z, Barker TH, Moola S, Tufanaru C, Stern C, McArthur A, et al. Methodological quality of case series studies: an introduction to the JBI critical appraisal tool. JBI Evidence Synthesis. (2020) 18:2127–33. doi: 10.11124/JBISRIR-D-19-00099

16. Covidence Systematic Review Software. Veritas Health Innovation, Melbourne. Available online at: www.covidence.org ; https://support.covidence.org/help/how-can-i-cite-covidence

17. Drazen JM, de Leeuw PW, Laine C, Mulrow C, DeAngelis CD, Frizelle FA, et al. Toward More Uniform Conflict Disclosures: The Updated ICMJE Conflict of Interest Reporting Form. JAMA. (2010) 304:212. doi: 10.1001/jama.2010.918

18. Drazen JM, Weyden MBVD, Sahni P, Rosenberg J, Marusic A, Laine C, et al. Uniform Format for Disclosure of Competing Interests in ICMJE Journals. N Engl J Med. (2009) 361:1896–7. doi: 10.1056/NEJMe0909052

19. Johnson PI, Sutton P, Atchley DS, Koustas E, Lam J, Sen S, et al. The navigation guide—evidence-based medicine meets environmental health: systematic review of human evidence for PFOA effects on fetal growth. Environ Health Perspect. (2014) 122:1028–39. doi: 10.1289/ehp.1307893

20. Descatha A, Sembajwe G, Baer M, Boccuni F, Di Tecco C, Duret C, et al. WHO/ILO work-related burden of disease and injury: protocol for systematic reviews of exposure to long working hours and of the effect of exposure to long working hours on stroke. Environ Int. (2018) 119:366–78. doi: 10.1016/j.envint.2018.06.016

21. Lam J, Lanphear BP, Bellinger D, Axelrad DA, McPartland J, Sutton P, et al. Developmental PBDE exposure and IQ/ADHD in childhood: a systematic review and meta-analysis. Environ Health Perspect. (2017) 125:086001. doi: 10.1289/EHP1632

22. Hay JE, Wiesner RH, Shorter RG, LaRusso NF, Baldus WP. Primary sclerosing cholangitis and celiac disease. Ann Intern Med. (1988) 109:713–7. doi: 10.7326/0003-4819-109-9-713

23. Nissen T, Wynn R. The clinical case report: a review of its merits and limitations. BMC Res Notes. (2014) 7:264. doi: 10.1186/1756-0500-7-264

24. Buonfrate D, Requena-Mendez A, Angheben A, Muñoz J, Gobbi F, Van Den Ende J, et al. Severe strongyloidiasis: a systematic review of case reports. BMC Infect Dis. (2013) 13:78. doi: 10.1186/1471-2334-13-78

25. Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E, Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, et al. Clinical Practice Guidelines We Can Trust . Washington, D.C.: National Academies Press (2011).

Keywords: toxicology, epidemiology, public health, protocol, systematic review, case reports/studies, case series

Citation: Nambiema A, Sembajwe G, Lam J, Woodruff T, Mandrioli D, Chartres N, Fadel M, Le Guillou A, Valter R, Deguigne M, Legeay M, Bruneau C, Le Roux G and Descatha A (2021) A Protocol for the Use of Case Reports/Studies and Case Series in Systematic Reviews for Clinical Toxicology. Front. Med. 8:708380. doi: 10.3389/fmed.2021.708380

Received: 19 May 2021; Accepted: 11 August 2021; Published: 06 September 2021.

Copyright © 2021 Nambiema, Sembajwe, Lam, Woodruff, Mandrioli, Chartres, Fadel, Le Guillou, Valter, Deguigne, Legeay, Bruneau, Le Roux and Descatha. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Aboubakari Nambiema, aboubakari.nambiema@univ-angers.fr ; orcid.org/0000-0002-4258-3764

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • Systematic Review | Definition, Examples & Guide

Systematic Review | Definition, Examples & Guide

Published on 15 June 2022 by Shaun Turney . Revised on 17 October 2022.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesise all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

They answered the question ‘What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?’

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs meta-analysis
  • Systematic review vs literature review
  • Systematic review vs scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce research bias . The methods are repeatable , and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesise the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesising all available evidence and evaluating the quality of the evidence. Synthesising means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic reviews often quantitatively synthesise the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesise results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
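To make the idea of an effect size concrete, here is a minimal Python sketch (illustrative only, with made-up numbers) of one common effect size, the risk ratio, together with its 95% confidence interval:

```python
import math

def risk_ratio(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a 95% CI from 2x2 counts (hypothetical numbers)."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    # Standard error of log(RR), the usual large-sample approximation
    se_log_rr = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lo, hi)

# Example: 12/100 events with treatment vs 20/100 with placebo (made-up data)
print(risk_ratio(12, 100, 20, 100))  # roughly (0.6, (0.31, 1.16))
```

In a meta-analysis, each included study contributes an effect size like this one, and the individual estimates are then pooled.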

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarise and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimise bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros .

  • They minimise research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinised by others.
  • They’re thorough : they summarise all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)

In the example review by Boyle and colleagues on probiotics for eczema, the research question had the following components:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo , or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomised control trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
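Purely as an illustration (the class and field names below are assumptions, not part of any published review), the PICO components can be slotted into the question template given above:

```python
from dataclasses import dataclass

@dataclass
class PICO:
    population: str
    intervention: str
    comparison: str
    outcome: str

    def question(self) -> str:
        # "What is the effectiveness of I versus C for O in P?"
        return (f"What is the effectiveness of {self.intervention} "
                f"versus {self.comparison} for {self.outcome} "
                f"in {self.population}?")

eczema = PICO(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, placebo, or non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
)
print(eczema.question())
```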

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesise the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Grey literature: Grey literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of grey literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of grey literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the example eczema review, the search covered the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Grey literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics
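Database searches with Boolean operators can also be scripted. The sketch below is not part of the example review; it simply sends an assumed Boolean query to PubMed's public E-utilities search endpoint and prints the number of matching records:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Boolean query: synonyms grouped with OR, concepts joined with AND
query = '(probiotic* OR lactobacillus) AND (eczema OR "atopic dermatitis")'
params = urlencode({"db": "pubmed", "term": query, "retmode": "json", "retmax": 20})

with urlopen(f"{ESEARCH}?{params}") as resp:
    result = json.load(resp)["esearchresult"]

print("Records found:", result["count"])
print("First PMIDs:", result["idlist"])
```

Saving the exact search string and the date it was run alongside the results helps make the search reproducible when the review is later updated.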

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarise what you did using a PRISMA flow diagram .

In the example review, Boyle and colleagues then found the full texts for each of the studies remaining after the title and abstract screening. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.
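Inter-rater reliability between two screeners is often summarised with Cohen's kappa. Here is a minimal sketch, assuming each reviewer's include/exclude decisions are stored as a list (the decisions shown are made up):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters screening the same set of records."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical title/abstract screening decisions by two reviewers
a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.67 for these made-up decisions
```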

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgement of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example review, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomised into the control and treatment groups.
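A data-extraction form can be mirrored by a small structured record per study. The field names below are illustrative assumptions, not the form Boyle and colleagues used:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ExtractionRecord:
    study_id: str
    year: int
    design: str
    sample_size: int
    effect_estimate: Optional[float]   # e.g. a risk ratio; None if not reported
    effect_se: Optional[float]         # standard error of the estimate
    risk_of_bias: str                  # e.g. "low", "some concerns", "high"
    notes: str = ""

record = ExtractionRecord(
    study_id="Trial A (hypothetical)",
    year=2004,
    design="randomised controlled trial",
    sample_size=120,
    effect_estimate=0.82,
    effect_se=0.21,
    risk_of_bias="some concerns",
    notes="randomisation method unclear",
)
print(asdict(record))
```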

Step 6: Synthesise the data

Synthesising the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesising the data:

  • Narrative ( qualitative ): Summarise the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarise and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analysed the effect sizes within each group.
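To show what a quantitative synthesis can look like, here is a minimal fixed-effect (inverse-variance) pooling sketch on made-up log risk ratios. Real reviews normally use dedicated meta-analysis packages, and this is not the model used in the example review:

```python
import math

# Hypothetical study results: (log risk ratio, standard error)
studies = [(-0.51, 0.34), (-0.22, 0.25), (-0.36, 0.40)]

# Inverse-variance weights: more precise studies count for more
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se))
print(f"Pooled risk ratio {rr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```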

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a dissertation , thesis, research paper , or proposal .

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.



The Differences Between a Randomized-Controlled Trial vs Systematic Review


Reviews let you evaluate the available literature and collect credible data that can be used as evidence for decision-making. But you need to understand the pros and cons of each type of review so that you can choose the one that fits your objectives. Fortunately, you can find numerous online resources that talk about the differences between an integrated review vs systematic review , a rapid review vs systematic review, and many other comparisons. This article compares a systematic review with a randomized-controlled trial (RCT).

Systematic Review

A systematic review starts from a clearly defined research question and uses a systematic, reproducible method to find, select, and critically assess all pertinent research. It gathers and analyzes eligible studies from reputable research sources to support the evidence. The main point to remember from this systematic review definition is that the review needs to answer a specific research question. The research question, which should be clearly stated, the study objectives, and the topic of study define the scope of the review, and this scope keeps the author from straying from the intention of conducting the review.


Randomized-Controlled Trial

An RCT is a type of scientific trial meant to control factors that aren’t under direct experimental control. A perfect example of an RCT is a clinical trial aiming to assess the effects of pharmacological treatment, surgical procedure, medical apparatus, diagnostic procedure, or other medical intervention.

This type of study randomly assigns participants (or subjects) to either an experimental group (EG) or a control group (CG), so that no bias is involved in the assignments. The defining feature of an RCT is that this allocation is completely randomized. The trial may also be blinded: the participants, the assessment professionals, or both may be unaware of the group allocations, making the trial single-blinded, double-blinded, or not blinded at all. The experimental group receives the dose or procedure under study, while the control group receives a placebo, a different type of treatment, or no treatment at all. In a double-blind RCT, neither the participants nor the investigators know who is assigned to which group, so expectations cannot influence the results.
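A hedged sketch of the allocation idea (not any specific trial's procedure): shuffle the participant list with a seeded random number generator and split it evenly between the experimental and control groups.

```python
import random

def randomize(participant_ids, seed=None):
    """Simple 1:1 random allocation to experimental (EG) and control (CG) groups."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)            # random order removes allocation bias
    half = len(ids) // 2
    return {"EG": ids[:half], "CG": ids[half:]}

# Hypothetical participant identifiers
allocation = randomize([f"P{i:03d}" for i in range(1, 21)], seed=42)
print(allocation)
```

Real trials typically use more elaborate schemes, such as blocked or stratified randomization, but the principle is the same.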

There are several advantages and disadvantages of using an RCT format. For instance, because an RCT ensures that possible population biases are not a factor in the results, you are assured of receiving impartial evidence that can help you to make informed decisions.

Unlike in observational studies, the subjects and researchers in a double-blind RCT don't know who is receiving the treatment. This masking is essential in trials that rely on subjective outcomes, because it prevents expectations from distorting the assessment of whether the drug being evaluated does what it is intended to do. RCT results are also easier to analyze, because the population of participants is clearly defined and recognized statistical tools can be applied.

However, an RCT can be costly and time-consuming: it requires a large number of participants to achieve sufficient statistical power, and a long duration to complete all of the follow-up analysis. You can keep the cost of your RCTs down by choosing simple, single, easily assessed outcome measures.



Case reports, case series and systematic reviews.


QJM: An International Journal of Medicine, Volume 95, Issue 4, April 2002, Pages 197–198, https://doi.org/10.1093/qjmed/95.4.197


Large numbers of reports of single cases and case series drop through the letterbox of the QJM's editorial office. A glance back through the journal's archive shows why. We have carried on publishing these sorts of descriptive studies long after the editors of journals that picked up the torch for evidence‐based medicine fell out of love with them.

According to the NHS Centre for Reviews and Dissemination, the randomized controlled trial is top of the hierarchy of what counts as reliable evidence for clinical decision‐making. Case series are at the bottom, contained in the rubric: opinions of respected authorities based on clinical experience; descriptive studies; and reports of expert committees . But, despite this lowly position, there are many instances where valuable knowledge has come from someone taking the trouble to write up cases that are out of the ordinary. Two modern classics are the reports of Pneumocystis carinii pneumonia and mucosal candidiasis in previously healthy homosexual men 1 and of hepatocellular adenomata in women taking oral contraceptives. 2 When such reports are published, they alert other doctors and may stimulate further investigation. Where a mechanism can be devised for collating accounts of unusual events, as for instance in the UK's yellow card scheme for the reporting of suspected adverse drug reactions, case reports turn into a system for population surveillance.

There is another circumstance in which authors should write, and journals should publish, descriptive accounts of case series. This is in rare conditions where population‐based studies or treatment trials are difficult or impossible to organize. Lachmann and colleagues' article on the treatment of amyloidosis in unicentric Castleman's disease in this issue (see pp 211–18) is an example. What's more, this style of report often makes more enjoyable reading than accounts of randomized controlled trials that several journals now insist must conform to the procrustean requirements of the Consolidated Standards of Reporting Trials (CONSORT) checklist and flow‐chart.

On the other hand, it must be admitted that descriptive studies have serious limitations. One is that retrospective reviews of case notes are rarely complete. Who can say whether the outcomes of the missing cases might have been very different? Another is that quirks in the way that unusual cases get referred make it hard to feel confident in generalizing from the experience of one centre. A third, as Grimes and Schulz have pointed out, 3 is that without a comparison group, causal inferences about temporal associations need to be treated with deep suspicion.

At best, descriptions of case series act as catalysts for further investigation by methods that are more systematic. At worst, they can cause useful treatments to be abandoned (think of recent events concerning the MMR vaccine) or potentially harmful procedures to be adopted (remember past obstetric enthusiasm for routine fetal monitoring in pregnancy). Sometimes they construct diseases of doubtful validity but remarkable longevity. Forty years ago, Elwood, in a survey of more than 4000 people, showed that the presence of dysphagia, post‐cricoid web and iron‐deficiency anaemia in the same person occurred no more often than would be expected from the background prevalence of the three conditions. 4 Tellingly, he also showed that agreement between radiologists over whether a post‐cricoid web was present on a barium swallow was only slightly better than chance. 5 Yet, cases of Patterson‐Kelly syndrome continue to find their way into print.

Systematic reviews of observational studies are much harder to carry out and interpret than systematic reviews of randomized controlled trials. The main difficulties lie in coping with the diversity of study designs used by investigators, and the biases inherent in most observational studies. Methods are still being developed and argued over, 5 but it is already clear that applying an evidence‐based approach to traditional descriptive studies is useful.

So what should authors and readers expect from the QJM? No prizes for guessing that we are becoming unenthusiastic about reports of single cases. Even so, where they are of exceptional interest and written concisely, we may offer publication in the correspondence columns. Reports of case series will be given a warmer welcome, particularly if the circumstances are such that it would be unreasonable to demand a more systematic approach. (At the same time, the condition that they describe must not be so rare that few readers of the journal will ever encounter a case.) We encourage the submission of systematic reviews and meta‐analyses that are directed at questions of clinical relevance. The investigators will need, however, to have been rigorous in their methodology and to have synthesized a useful amount of evidence. Reviews that conclude that there have been few studies, all of poor quality, and that further research is needed make poor reading.

1. Gottlieb MS, Schroff R, Schanker HM, Weisman JD, Fan PT, Wolf RA, Saxon A. Pneumocystis carinii pneumonia and mucosal candidiasis in previously healthy homosexual men: evidence of a new acquired cellular immunodeficiency. N Engl J Med 1981;305:1425–31.

2. Rooks JB, Ory HW, Ishak KG, Strauss LT, Greenspan JR, Hill AP, Tyler CW Jr. Epidemiology of hepatocellular adenoma: the role of oral contraceptive use. JAMA 1979;242:644–8.

3. Grimes DA, Schulz KF. Descriptive studies: what they can and cannot do. Lancet 2002;359:145–9.

4. Elwood PC, Jacobs A, Pitman RG, Entwistle CC. Epidemiology of the Patterson‐Kelly syndrome. Lancet 1964;ii:716–19.

5. Elwood PC, Pitman RG. Observer error in the radiological diagnosis of Patterson‐Kelly webs. Br J Radiol 1966;39:587–9.

6. MOOSE (Meta-analysis Of Observational Studies in Epidemiology) guidelines: http://www.consort-statement.org/MOOSE.pdf



Levels of Evidence


Resources That Rate The Evidence

  • ACP Smart Medicine
  • Agency for Healthcare Research and Quality
  • Clinical Evidence
  • Cochrane Library
  • Health Services/Technology Assessment Texts (HSTAT)
  • PDQ® Cancer Information Summaries from NCI
  • Trip Database

Critically Appraised Individual Articles

  • Evidence-Based Complementary and Alternative Medicine
  • Evidence-Based Dentistry
  • Evidence-Based Nursing
  • Journal of Evidence-Based Dental Practice

Grades of Recommendation

Critically-appraised individual articles and synopses include:

Filtered evidence:

  • Level I: Evidence from a systematic review of all relevant randomized controlled trials.
  • Level II: Evidence from a meta-analysis of all relevant randomized controlled trials.
  • Level III: Evidence from evidence summaries developed from systematic reviews
  • Level IV: Evidence from guidelines developed from systematic reviews
  • Level V: Evidence from meta-syntheses of a group of descriptive or qualitative studies
  • Level VI: Evidence from evidence summaries of individual studies
  • Level VII: Evidence from one properly designed randomized controlled trial

Unfiltered evidence:

  • Level VIII: Evidence from nonrandomized controlled clinical trials, nonrandomized clinical trials, cohort studies, case series, case reports, and individual qualitative studies.
  • Level IX: Evidence from opinion of authorities and/or reports of expert committees

Two things to remember:

1. Studies in which randomization occurs represent a higher level of evidence than those in which subject selection is not random.

2. Controlled studies carry a higher level of evidence than those in which control groups are not used.

Strength of Recommendation Taxonomy (SORT)

  • SORT: The American Academy of Family Physicians uses the Strength of Recommendation Taxonomy (SORT) to label key recommendations in clinical review articles. In general, only key recommendations are given a Strength-of-Recommendation grade. Grades are assigned on the basis of the quality and consistency of available evidence.


What is the difference between a systematic review and a systematic literature review?

By Carol Hollier on 07-Jan-2020 12:42:03


For those not immersed in systematic reviews, understanding the difference between a systematic review and a systematic literature review can be confusing.  It helps to realise that a “systematic review” is a clearly defined thing, but ambiguity creeps in around the phrase “systematic literature review” because people can and do use it in a variety of ways. 

A systematic review is a research study of research studies.  To qualify as a systematic review, a review needs to adhere to standards of transparency and reproducibility.  It will use explicit methods to identify, select, appraise, and synthesise empirical results from different but similar studies.  The study will be done in stages:  

  • In stage one, the question, which must be answerable, is framed
  • Stage two is a comprehensive literature search to identify relevant studies
  • In stage three the identified literature’s quality is scrutinised and decisions made on whether or not to include each article in the review
  • In stage four the evidence is summarised and, if the review includes a meta-analysis, the data are extracted; in the final stage, findings are interpreted. [1]

Some reviews also state what degree of confidence can be placed on that answer, using the GRADE scale.  By going through these steps, a systematic review provides a broad evidence base on which to make decisions about medical interventions, regulatory policy, safety, or whatever question is analysed.   By documenting each step explicitly, the review is not only reproducible, but can be updated as more evidence on the question is generated.

Sometimes when people talk about a “systematic literature review”, they are using the phrase interchangeably with “systematic review”.  However, people can also use the phrase systematic literature review to refer to a literature review that is done in a fairly systematic way, but without the full rigor of a systematic review. 

For instance, for a systematic review, reviewers would strive to locate relevant unpublished studies in grey literature and possibly by contacting researchers directly.  Doing this is important for combatting publication bias, which is the tendency for studies with positive results to be published at a higher rate than studies with null results.  It is easy to understand how this well-documented tendency can skew a review’s findings, but someone conducting a systematic literature review in the loose sense of the phrase might, for lack of resource or capacity, forgo that step. 

Another difference might be in who is doing the research for the review. A systematic review is generally conducted by a team including an information professional for searches and a statistician for meta-analysis, along with subject experts.  Team members independently evaluate the studies being considered for inclusion in the review and compare results, adjudicating any differences of opinion.   In contrast, a systematic literature review might be conducted by one person. 

Overall, while a systematic review must comply with set standards, you would expect any review called a systematic literature review to strive to be quite comprehensive.  A systematic literature review would contrast with what is sometimes called a narrative or journalistic literature review, where the reviewer’s search strategy is not made explicit, and evidence may be cherry-picked to support an argument.

FSTA is a key tool for systematic reviews and systematic literature reviews in the sciences of food and health.


The patents indexed help find results of research not otherwise publicly available because it has been done for commercial purposes.

The FSTA thesaurus will surface results that would be missed with keyword searching alone. Since the thesaurus is designed for the sciences of food and health, it is the most comprehensive for the field. 

All indexing and abstracting in FSTA is in English, so you can do your searching in English yet pick up non-English language results, and get those results translated if they meet the criteria for inclusion in a systematic review.

FSTA includes grey literature (conference proceedings) which can be difficult to find, but is important to include in comprehensive searches.

FSTA content has a deep archive. It goes back to 1969 for farm to fork research, and back to the late 1990s for food-related human nutrition literature—systematic reviews (and any literature review) should include not just the latest research but all relevant research on a question. 

You can also use FSTA to find literature reviews.

FSTA allows you to easily search for review articles (both narrative and systematic reviews) by using the subject heading or thesaurus term “REVIEWS" and an appropriate free-text keyword.

On the Web of Science or EBSCO platform, an FSTA search for reviews about cassava would look like this: DE "REVIEWS" AND cassava.

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND cassava.af.

In 2011 FSTA introduced the descriptor META-ANALYSIS, making it easy to search specifically for systematic reviews that include a meta-analysis published from that year onwards.

On the EBSCO or Web of Science platform, an FSTA search for systematic reviews with meta-analyses about staphylococcus aureus would look like this: DE "META-ANALYSIS" AND staphylococcus aureus.

On the Ovid platform using the multi-field search option, the search would look like this: meta-analysis.sh. AND staphylococcus aureus.af.

Systematic reviews with meta-analyses published before 2011 are included in the REVIEWS controlled vocabulary term in the thesaurus.

An easy way to locate pre-2011 systematic reviews with meta-analyses is to search the subject heading or thesaurus term "REVIEWS" AND meta-analysis as a free-text keyword AND another appropriate free-text keyword.

On the Web of Science or EBSCO platform, the FSTA search would look like this: DE "REVIEWS" AND meta-analysis AND carbohydrate*

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND meta-analysis.af. AND carbohydrate*.af.




Traditional reviews vs. systematic reviews

Posted on 3rd February 2016 by Weyinmi Demeyin


Millions of articles are published yearly (1) , making it difficult for clinicians to keep abreast of the literature. Reviews of literature are necessary in order to provide clinicians with accurate, up to date information to ensure appropriate management of their patients. Reviews usually involve summaries and synthesis of primary research findings on a particular topic of interest and can be grouped into 2 main categories; the ‘traditional’ review and the ‘systematic’ review with major differences between them.

Traditional reviews provide a broad overview of a research topic with no clear methodological approach (2) . Information is collected and interpreted unsystematically with subjective summaries of findings. Authors aim to describe and discuss the literature from a contextual or theoretical point of view. Although the reviews may be conducted by topic experts, due to preconceived ideas or conclusions, they could be subject to bias.

Systematic reviews are overviews of the literature undertaken by identifying, critically appraising and synthesising results of primary research studies using an explicit, methodological approach (3). They aim to summarise the best available evidence on a particular research topic.

The main differences between traditional reviews and systematic reviews are summarised below in terms of the following characteristics: Authors, Study protocol, Research question, Search strategy, Sources of literature, Selection criteria, Critical appraisal, Synthesis, Conclusions, Reproducibility, and Update.

Traditional reviews

  • Authors: One or more authors usually experts in the topic of interest
  • Study protocol: No study protocol
  • Research question: Broad to specific question, hypothesis not stated
  • Search strategy: No detailed search strategy, search is probably conducted using keywords
  • Sources of literature: Not usually stated and non-exhaustive, usually well-known articles. Prone to publication bias
  • Selection criteria: No specific selection criteria, usually subjective. Prone to selection bias
  • Critical appraisal: Variable evaluation of study quality or method
  • Synthesis: Often qualitative synthesis of evidence
  • Conclusions: Sometimes evidence based but can be influenced by author’s personal belief
  • Reproducibility: Findings cannot be reproduced independently as conclusions may be subjective
  • Update: Cannot be continuously updated

Systematic reviews

  • Authors: Two or more authors are involved in good quality systematic reviews, may comprise experts in the different stages of the review
  • Study protocol: Written study protocol which includes details of the methods to be used
  • Research question: Specific question which may have all or some of PICO components (Population, Intervention, Comparator, and Outcome). Hypothesis is stated
  • Search strategy: Detailed and comprehensive search strategy is developed
  • Sources of literature: List of databases, websites and other sources of included studies are listed. Both published and unpublished literature are considered
  • Selection criteria: Specific inclusion and exclusion criteria
  • Critical appraisal: Rigorous appraisal of study quality
  • Synthesis: Narrative, quantitative or qualitative synthesis
  • Conclusions: Conclusions drawn are evidence based
  • Reproducibility: Accurate documentation of method means results can be reproduced
  • Update: Systematic reviews can be periodically updated to include new evidence

Decisions and health policies about patient care should be evidence based in order to provide the best treatment for patients. Systematic reviews provide a means of systematically identifying and synthesising the evidence, making it easier for policy makers and practitioners to assess such relevant information and hopefully improve patient outcomes.

  • Fletcher RH, Fletcher SW. Evidence-Based Approach to the Medical Literature. Journal of General Internal Medicine. 1997; 12(Suppl 2):S5-S14. doi:10.1046/j.1525-1497.12.s2.1.x. Available from:  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1497222/
  • Rother ET. Systematic literature review X narrative review. Acta paul. enferm. [Internet]. 2007 June [cited 2015 Dec 25]; 20(2): v-vi. Available from: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-21002007000200001&lng=en. http://dx.doi.org/10.1590/S0103-21002007000200001
  • Khan KS, Ter Riet G, Glanville J, Sowden AJ, Kleijnen J. Undertaking systematic reviews of research on effectiveness: CRD’s guidance for carrying out or commissioning reviews. NHS Centre for Reviews and Dissemination; 2001.



Comments


Thank you very much for the information here. My question is : Is it possible for me to do a systematic review which is not directed toward patients but just a specific population? To be specific can I do a systematic review on the mental health needs of students?


Hi Rosemary, I wonder whether it would be useful for you to look at Module 1 of the Cochrane Interactive Learning modules. This is a free module, open to everyone (you will just need to register for a Cochrane account if you don’t already have one). This guides you through conducting a systematic review, with a section specifically around defining your research question, which I feel will help you in understanding your question further. Head to this link for more details: https://training.cochrane.org/interactivelearning

I wonder if you have had a search on the Cochrane Library as yet, to see what Cochrane systematic reviews already exist? There is one review, titled “Psychological interventions to foster resilience in healthcare students” which may be of interest: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD013684/full You can run searches on the library by the population and intervention you are interested in.

I hope these help you start in your investigations. Best wishes. Emma.


Is a systematic review valid if there is only one author?

HI Alex, so sorry for the delay in replying to you. Yes, that is a very good point. I have copied a paragraph from the Cochrane Handbook, here, which does say that for a Cochrane Review, you should have more than one author.

“Cochrane Reviews should be undertaken by more than one person. In putting together a team, authors should consider the need for clinical and methodological expertise for the review, as well as the perspectives of stakeholders. Cochrane author teams are encouraged to seek and incorporate the views of users, including consumers, clinicians and those from varying regions and settings to develop protocols and reviews. Author teams for reviews relevant to particular settings (e.g. neglected tropical diseases) should involve contributors experienced in those settings”.

Thank you for the discussion point, much appreciated.


Hello, I’d like to ask you a question: what’s the difference between systematic review and systematized review? In addition, if the screening process of the review was made by only one author, is still a systematic or is a systematized review? Thanks

Hi. This article from Grant & Booth is a really good one to look at explaining different types of reviews: https://onlinelibrary.wiley.com/doi/10.1111/j.1471-1842.2009.00848.x It includes Systematic Reviews and Systematized Reviews. In answer to your second question, have a look at this Chapter from the Cochrane handbook. It covers the question about ‘Who should do a systematic review’. https://training.cochrane.org/handbook/current/chapter-01

A really relevant part of this chapter is this: “Systematic reviews should be undertaken by a team. Indeed, Cochrane will not publish a review that is proposed to be undertaken by a single person. Working as a team not only spreads the effort, but ensures that tasks such as the selection of studies for eligibility, data extraction and rating the certainty of the evidence will be performed by at least two people independently, minimizing the likelihood of errors.”

I hope this helps with the question. Best wishes. Emma.



Systematic Review VS Meta-Analysis


How you organize your research is incredibly important, whether you're preparing a report, a research review, a thesis, or an article to be published. The methodology you choose can make or break your work's chances of getting out into the world, so let's take a look at two main types: the systematic review and the meta-analysis.

Let’s start with what they have in common – essentially, they are both based on high-quality filtered evidence related to a specific research topic. They’re both highly regarded as generally resulting in reliable findings, though there are differences, which we’ll discuss below. Additionally, they both support conclusions based on expert reviews, case-controlled studies, data analysis, etc., versus mere opinions and musings.

What is a Systematic Review?

A systematic review is a form of research that collects, appraises, and synthesizes evidence to answer a particular question in a very transparent and systematic way. Data (or evidence) used in systematic reviews have their origin in scholarly literature – published or unpublished. So, findings are typically very reliable. In addition, they are normally collated and appraised by an independent panel of experts in the field. Unlike traditional reviews, systematic reviews are very comprehensive and don't rely on a single author's point of view, thus avoiding bias.

Systematic reviews are especially important in the medical field, where health practitioners need to be constantly up to date with new, high-quality information to guide their daily decisions. Since systematic reviews, by definition, collect information from previous research, the pitfalls of new primary studies are avoided. They often, in fact, identify a lack of evidence or knowledge limitations, and consequently recommend further study, if needed.

Why are systematic reviews important?

  • They combine and synthesize various studies and their findings.
  • Systematic reviews appraise the validity of the results and findings of the collected studies in an impartial way.
  • They define clear objectives and reproducible methodologies.

What is a Meta-analysis?

This form of research relies on combining statistical results from two or more existing studies. When multiple studies are addressing the same problem or question, it’s to be expected that there will be some potential for error. Most studies account for this within their results. A meta-analysis can help iron out any inconsistencies in data, as long as the studies are similar.

For instance, suppose your research is about the influence of the Mediterranean diet on diabetic people between the ages of 30 and 45, but you only find a study about the Mediterranean diet in healthy people and another about the Mediterranean diet in diabetic teenagers. In this case, undertaking a meta-analysis would probably be a poor choice: you could either pursue the comparison of such different material, at the risk of findings that don't really answer the review question, or decide to explore a different (perhaps more qualitative) research method.

Why is meta-analysis important?

  • They help improve precision about evidence since many studies are too small to provide convincing data.
  • Meta-analyses can settle divergences between conflicting studies. By formally assessing the conflicting study results, it is possible to eventually reach new hypotheses and explore the reasons for controversy.
  • They can also answer questions with a broader influence than individual studies. For example, the effect of a disease on several populations across the world, by comparing other modest research studies completed in specific countries or continents.

Systematic Reviews VS Meta-Analysis

Undertaking research approaches like systematic reviews and/or meta-analyses involves great responsibility. They provide reliable information that has a real impact on society. Elsevier offers a number of services that aim to help researchers achieve excellence in written text, suggesting the necessary amendments to fit it into a targeted format. A perfectly written text, whether translated or edited from a manuscript, is the key to being respected within the scientific community, leading to more and more important positions like, let's say…being part of an expert panel leading a systematic review or a widely acknowledged meta-analysis.



Study designs: Part 7 – Systematic reviews

Priya Ranganathan

Department of Anaesthesiology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India

Rakesh Aggarwal

1 Director, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India

In this series on research study designs, we have so far looked at different types of primary research designs which attempt to answer a specific question. In this segment, we discuss systematic review, which is a study design used to summarize the results of several primary research studies. Systematic reviews often also use meta-analysis, which is a statistical tool to mathematically collate the results of various research studies to obtain a pooled estimate of treatment effect; this will be discussed in the next article.

In the previous six articles in this series on study designs, we have looked at different types of primary research study designs which are used to answer research questions. In this article, we describe the systematic review, a type of secondary research design that is used to summarize the results of prior primary research studies. Systematic reviews are considered the highest level of evidence for a particular research question.[ 1 ]

SYSTEMATIC REVIEWS

As defined in the Cochrane Handbook for Systematic Reviews of Interventions , “Systematic reviews seek to collate evidence that fits pre-specified eligibility criteria in order to answer a specific research question. They aim to minimize bias by using explicit, systematic methods documented in advance with a protocol.”[ 2 ]

NARRATIVE VERSUS SYSTEMATIC REVIEWS

Review of available data has been done since times immemorial. However, the traditional narrative reviews (“expert reviews”) do not involve a systematic search of the literature. Instead, the author of the review, usually an expert on the subject, used informal methods to identify (what he or she thinks are) the key studies on the topic. The final review thus is a summary of these “selected” studies. Since studies are chosen at will (haphazardly!) and without clearly defined criteria, such reviews preferentially include those studies that favor the author's views, leading to a potential for subjectivity or selection bias.

In contrast, systematic reviews involve a formal prespecified protocol with explicit, transparent criteria for the inclusion and exclusion of studies, thereby ensuring completeness of coverage of the available evidence and providing a more objective, replicable, and comprehensive overview of it.

META-ANALYSIS

Many systematic reviews use an additional tool, known as meta-analysis, which is a statistical technique for combining the results of multiple studies in a systematic review in a mathematically appropriate way, to create a single (pooled) and more precise estimate of treatment effect. The feasibility of performing a meta-analysis in a systematic review depends on the number of studies included in the final review and the degree of heterogeneity in the inclusion criteria as well as the results between the included studies. Meta-analysis will be discussed in detail in the next article in this series.
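The degree of heterogeneity mentioned above is commonly quantified with Cochran's Q and the I² statistic. The following is a minimal sketch with made-up effect estimates, not data from any review cited in this article:

```python
import math

# Hypothetical per-study effect estimates (log risk ratios) and standard errors
effects = [-0.40, -0.10, -0.55, 0.05]
ses = [0.20, 0.25, 0.30, 0.22]

weights = [1 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled estimate
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100  # % of variability beyond chance

print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```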

THE PROCESS OF A SYSTEMATIC REVIEW

The conduct of a systematic review involves several sequential key steps.[ 3 , 4 ] As in other research study designs, a clearly stated research question and a well-written research protocol are essential before commencing a systematic review.

Step 1: Stating the review question

Systematic reviews can be carried out in any field of medical research, e.g. efficacy or safety of interventions, diagnostics, screening or health economics. In this article, we focus on systematic reviews of studies looking at the efficacy of interventions. As for the other study designs, for a systematic review too, the question is best framed using the Population, Intervention, Comparator, and Outcome (PICO) format.

For example, Safi et al . carried out a systematic review on the effect of beta-blockers on the outcomes of patients with myocardial infarction.[ 5 ] In this review, the Population was patients with suspected or confirmed myocardial infarction, the Intervention was beta-blocker therapy, the Comparator was either placebo or no intervention, and the Outcomes were all-cause mortality and major adverse cardiovascular events. The review question was “ In patients with suspected or confirmed myocardial infarction, does the use of beta-blockers affect mortality or major adverse cardiovascular outcomes? ”

Step 2: Listing the eligibility criteria for studies to be included

It is essential to explicitly define a priori the criteria for selection of studies which will be included in the review. Besides the PICO components, some additional criteria used frequently for this purpose include language of publication (English versus non-English), publication status (published as full paper versus unpublished), study design (randomized versus quasi-experimental), age group (adults versus children), and publication year (e.g. in the last 5 years, or since a particular date). The PICO criteria used need not be very specific; for example, it is possible to include studies that use any one of several drugs belonging to the same class. For instance, the systematic review by Safi et al. included all randomized clinical trials, irrespective of setting, blinding, publication status, publication year, language, or reported outcomes, that had used any beta-blocker in a broad range of doses.[ 5 ]
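Purely as an illustration of how a priori criteria translate into a reproducible selection rule, the hypothetical sketch below encodes two such criteria as an explicit filter over candidate study records (the field names, criteria, and records are invented):

```python
# Hypothetical sketch: a priori eligibility criteria applied as an explicit filter.
candidate_studies = [
    {"id": "trial_A", "design": "randomized", "intervention": "beta-blocker"},
    {"id": "trial_B", "design": "quasi-experimental", "intervention": "beta-blocker"},
    {"id": "trial_C", "design": "randomized", "intervention": "placebo-only"},
]

def is_eligible(study):
    # Criteria defined in the protocol, before screening begins
    return study["design"] == "randomized" and study["intervention"] == "beta-blocker"

included = [s["id"] for s in candidate_studies if is_eligible(s)]
print(included)  # ['trial_A']
```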

Step 3: Comprehensive search for studies that meet the eligibility criteria

A thorough literature search is essential to identify all articles related to the research question and to ensure that no relevant article is left out. The search may include one or more electronic databases and trial registries; in addition, it is common to hand-search the cross-references in the articles identified through such searches. One could also plan to reach out to experts in the field to identify unpublished data, and to search the grey literature (theses, conference abstracts, and non-peer-reviewed journals). These sources are particularly helpful when the intervention is relatively new, since data on it may not yet have been published as full papers and hence are unlikely to be found in literature databases. In the review by Safi et al., the search strategy included not only several electronic databases (Cochrane, MEDLINE, EMBASE, LILACS, etc.) but also other resources (e.g. Google Scholar, the WHO International Clinical Trials Registry Platform, and reference lists of identified studies).[ 5 ] It is not essential to include all the above databases in one's search. However, it is mandatory to define in advance which of these will be searched.

Step 4: Identifying and selecting relevant studies

Once the search strategy defined in the previous step has been run to identify potentially relevant studies, a two-step process is followed. First, the titles and abstracts of the identified studies are processed to exclude any duplicates and to discard obviously irrelevant studies. In the next step, full-text papers of the remaining articles are retrieved and closely reviewed to identify studies that meet the eligibility criteria. To minimize bias, these selection steps are usually performed independently by at least two reviewers, who also assign a reason for non-selection to each discarded study. Any discrepancies are then resolved either by an independent reviewer or by mutual consensus of the original reviewers. In the Cochrane review on beta-blockers referred to above, two review authors independently screened the titles for inclusion, and then, four review authors independently reviewed the screen-positive studies to identify the trials to be included in the final review.[ 5 ] Disagreements were resolved by discussion or by taking the opinion of a separate reviewer. A summary of this selection process, showing the degree of agreement between reviewers, and a flow diagram that depicts the numbers of screened, included and excluded (with reason for exclusion) studies are often included in the final review.
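The degree of agreement between two independent screeners is often summarized with Cohen's kappa. The sketch below computes it for a handful of invented include/exclude decisions; it is illustrative only and is not part of the review described above:

```python
# Hypothetical sketch: Cohen's kappa for two independent title/abstract screeners.
decisions_reviewer1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
decisions_reviewer2 = ["include", "exclude", "include", "include", "exclude", "include"]

n = len(decisions_reviewer1)
observed = sum(a == b for a, b in zip(decisions_reviewer1, decisions_reviewer2)) / n

# Chance agreement, estimated from each reviewer's marginal proportions
labels = set(decisions_reviewer1) | set(decisions_reviewer2)
expected = sum((decisions_reviewer1.count(label) / n) * (decisions_reviewer2.count(label) / n)
               for label in labels)

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")
```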

Step 5: Data extraction

In this step, relevant data are extracted from each selected study. This should be done independently by at least two reviewers, and the data then compared to identify any errors in extraction. Standard data extraction forms help in objective data extraction. The data extracted usually contain the name of the author, the year of publication, details of the intervention and control treatments, and the number of participants and outcome data in each group. In the review by Safi et al., four review authors independently extracted data and resolved any differences by discussion.[ 5 ]
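A standard data extraction form can be thought of as a fixed set of fields filled in for every included study. The sketch below is a hypothetical, minimal version of such a form; real forms follow the review protocol and usually contain many more fields:

```python
# Hypothetical sketch of a standard data extraction record for a two-arm trial.
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    first_author: str
    year: int
    intervention: str
    comparator: str
    n_intervention: int
    n_control: int
    events_intervention: int   # e.g., deaths in the intervention arm
    events_control: int        # e.g., deaths in the control arm

record = ExtractionRecord(
    first_author="Example", year=2015,
    intervention="beta-blocker", comparator="placebo",
    n_intervention=250, n_control=248,
    events_intervention=20, events_control=28,
)
print(record)
```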

Handling missing data

Some of the studies included in the review may not report outcomes in accordance with the review methodology. Such missing data can be handled in two ways – by contacting authors of the original study to obtain the necessary data and by using data imputation techniques. Safi et al . used both these approaches – they tried to get data from the trial authors; however, where that failed, they analyzed the primary outcome (mortality) using the best case (i.e. presuming that all the participants in the experimental arm with missing data had survived and those in the control arm with missing mortality data had died – representing the maximum beneficial effect of the intervention) and the worst case (all the participants with missing data in the experimental arm assumed to have died and those in the control arm to have survived – representing the least beneficial effect of the intervention) scenarios.
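The best-case/worst-case logic can be made concrete with a small calculation. The numbers below are entirely hypothetical and are not taken from Safi et al.; they only show how the two extreme assumptions bound the estimated risk ratio:

```python
# Hypothetical sketch of best-case / worst-case imputation for missing mortality data.
def risk_ratio(events_exp, n_exp, events_ctrl, n_ctrl):
    return (events_exp / n_exp) / (events_ctrl / n_ctrl)

# Observed deaths, group sizes, and participants with missing outcome data
deaths_exp, n_exp, missing_exp = 20, 240, 10
deaths_ctrl, n_ctrl, missing_ctrl = 28, 238, 12

# Best case: missing participants in the experimental arm survived, missing controls died
best = risk_ratio(deaths_exp, n_exp + missing_exp,
                  deaths_ctrl + missing_ctrl, n_ctrl + missing_ctrl)

# Worst case: missing participants in the experimental arm died, missing controls survived
worst = risk_ratio(deaths_exp + missing_exp, n_exp + missing_exp,
                   deaths_ctrl, n_ctrl + missing_ctrl)

print(f"Best-case RR {best:.2f}, worst-case RR {worst:.2f}")
```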

Evaluating the quality (or risk of bias) in the included studies

The overall quality of a systematic review depends on the quality of each of the included studies. Quality of a study is inversely proportional to the potential for bias in its design. In our previous articles on interventional study design in this series, we discussed various methods to reduce bias – such as randomization, allocation concealment, participant and assessor blinding, using objective endpoints, minimizing missing data, the use of intention-to-treat analysis, and complete reporting of all outcomes.[ 6 , 7 ] These features form the basis of the Cochrane Risk of Bias Tool (RoB 2), which is a commonly used instrument to assess the risk of bias in the studies included in a systematic review.[ 8 ] Based on this tool, one can classify each study in a review as having low risk of bias, having some concerns regarding bias, or at high risk of bias. Safi et al . used this tool to classify the included studies as having low or high risk of bias and presented these data in both tabular and graphical formats.[ 5 ]

In some reviews, the authors decide to summarize only studies with a low risk of bias and to exclude those with a high risk of bias. Alternatively, some authors undertake a separate analysis of studies with low risk of bias, besides an analysis of all the studies taken together. The conclusions from such analyses of only high-quality studies may be more robust.

Step 6: Synthesis of results

The data extracted from the various studies are pooled either quantitatively (known as a meta-analysis) or qualitatively (if pooling of results is not considered feasible). For qualitative syntheses, data are usually presented in tabular format, showing the characteristics of each included study, to allow easier interpretation.

Sensitivity analyses

Sensitivity analyses are used to test the robustness of the results of a systematic review by examining the impact of excluding or including studies with certain characteristics. As referred to above, this can be based on the risk of bias (methodological quality), studies with a specific study design, studies with a certain dosage or schedule, or sample size. If results of these different analyses are more-or-less the same, one can be more certain of the validity of the findings of the review. Furthermore, such analyses can help identify whether the effect of the intervention could vary across different levels of another factor. In the beta-blocker review, sensitivity analysis was performed depending on the risk of bias of included studies.[ 5 ]
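A simple form of such a sensitivity analysis is to repeat the pooled estimate after excluding studies judged to be at high risk of bias. The sketch below does this for four invented studies; the effect estimates and risk-of-bias ratings are hypothetical:

```python
# Hypothetical sketch: sensitivity analysis by risk of bias.
import math

# (study, log risk ratio, standard error, risk-of-bias rating)
studies = [("A", -0.22, 0.10, "low"), ("B", -0.05, 0.08, "high"),
           ("C", -0.30, 0.12, "low"), ("D", -0.10, 0.09, "high")]

def pooled_rr(subset):
    weights = [1 / se ** 2 for _, _, se, _ in subset]
    pooled = sum(w * lrr for (_, lrr, _, _), w in zip(subset, weights)) / sum(weights)
    return math.exp(pooled)

print("All studies:      RR", round(pooled_rr(studies), 2))
print("Low risk of bias: RR", round(pooled_rr([s for s in studies if s[3] == "low"]), 2))
```

If the two estimates are broadly similar, the overall conclusion is less likely to be driven by the methodologically weaker studies.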

IMPORTANT RESOURCES FOR CARRYING OUT SYSTEMATIC REVIEWS AND META-ANALYSES

Cochrane is an organization that works to produce good-quality, updated systematic reviews related to human healthcare and policy, which are accessible to people across the world.[ 9 ] There are more than 7000 Cochrane reviews on various topics. One of its main resources is the Cochrane Library (available at https://www.cochranelibrary.com/ ), which incorporates several databases with different types of high-quality evidence to inform healthcare decisions, including the Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials (CENTRAL), and Cochrane Clinical Answers.

The Cochrane Handbook for Systematic Reviews of Interventions

The Cochrane handbook is an official guide, prepared by the Cochrane Collaboration, to the process of preparing and maintaining Cochrane systematic reviews.[ 10 ]

Review Manager software

Review Manager (RevMan) is software developed by Cochrane to support the preparation and maintenance of systematic reviews, including tools for performing meta-analysis.[ 11 ] It is freely available in both online (RevMan Web) and offline (RevMan 5.3) versions.

Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement is an evidence-based minimum set of items for reporting of systematic reviews and meta-analyses of randomized trials.[ 12 ] It can be used both by authors of such studies to improve the completeness of reporting and by reviewers and readers to critically appraise a systematic review. There are several extensions to the PRISMA statement for specific types of reviews. An update is currently underway.

Meta-analysis of Observational Studies in Epidemiology statement

The Meta-analysis of Observational Studies in Epidemiology statement summarizes the recommendations for reporting of meta-analyses in epidemiology.[ 13 ]

PROSPERO is an international database for the prospective registration of protocols for systematic reviews in healthcare.[ 14 ] It aims to avoid duplication of reviews and to improve transparency in the reporting of their results.


Open access | Published: 17 July 2017

Clarifying the distinction between case series and cohort studies in systematic reviews of comparative studies: potential impact on body of evidence and workload

Tim Mathes & Dawid Pieper

BMC Medical Research Methodology, volume 17, Article number: 107 (2017)


Distinguishing cohort studies from case series is difficult.

We propose a conceptualization of cohort studies in systematic reviews of comparative studies. The main aim of this conceptualization is to clarify the distinction between cohort studies and case series. We discuss the potential impact of the proposed conceptualization on the body of evidence and workload.

All studies with exposure-based sampling that gather multiple exposures (at least two different exposures or levels of exposure) and enable calculation of relative risks should be considered cohort studies in systematic reviews that include non-randomized studies. The term "enables/can" means that a predefined analytic comparison (i.e., absolute risks per group and/or a risk ratio provided in the publication) is not a prerequisite. Instead, all studies for which sufficient data are available for a reanalysis comparing different exposures (e.g., sufficient data in the publication) are classified as cohort studies.

There are possibly large numbers of studies without a comparison for the exposure of interest but that do provide the necessary data to calculate effect measures for a comparison. Consequently, more studies could be included in a systematic review. Therefore, on the one hand, the outlined approach can increase the confidence in effect estimates and the strengths of conclusions. On the other hand, the workload would increase (e.g., additional data extraction and risk of bias assessment, as well as reanalyses).


Systematic reviews that include non-randomized studies often consider different observational study designs [ 1 ]. However, the distinction between different non-randomized study designs is difficult. One key design feature to classify observational study designs is to distinguish comparative from non-comparative studies [ 2 , 3 ]. The lack of a comparison group is of particular importance for distinguishing cohort studies from case series because in many definitions, they share a main design feature of having a follow-up period examining the exposed individuals over time [ 2 , 3 ]. The only difference between cohort studies and case series in many definitions is that cohort studies compare different groups (i.e., examine the association between exposure and outcome), while case series are uncontrolled [ 3 , 4 , 5 ]. Table 1 shows an example definition [ 3 ]. The problem with this definition is that vague terms, such as comparison and examination of association, might be interpreted as an analytic comparison of at least two exposures (i.e., interventions, risk factors or prognostic factors).

For example, imagine a study of 20 consecutive patients with a certain disease that can be treated in two different ways. A study that divides the 20 patients into two groups according to the treatment received and compares the outcomes of these groups (e.g., provides aggregated absolute risks per group or a risk ratio) would probably be classified as a cohort study (this example is denoted "study 1" in the following sections). An example of this study type is illustrated in Fig. 1 and Table 2.

Fig. 1: Cohort study (vague definition)

In contrast, a publication that describes the interventions received and outcomes for each patient/case separately would probably be classified as a case series (the example in the following sections is denoted “study 2”). An example of this study type is illustrated in Fig. 2 and Table 3 . In the medical literature, the data on exposure and outcomes are usually provided in either running text or spreadsheet formats [ 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 ]. A good example is the study by Wong et al. [ 10 ]. In this study, information on placental invasion (exposure) and blood loss (outcome) is separately provided for 40 pregnant women in a table. The study by Cheng et al. is an example of a study providing information in the running text (i.e., anticoagulation management [exposure] and recovery [outcome] for paediatric stroke) [ 6 ].

Fig. 2: Case series (vague definition)

These examples illustrate that distinguishing between cohort studies and case series is difficult. Vague definitions are probably the reason for the common confusion between study designs. A recent study found that approximately 72% of cohort studies are mislabelled as case series [ 22 ]. Many systematic reviews of non-randomized studies included cohort studies but excluded case series (see examples in [ 23 , 24 , 25 , 26 , 27 , 28 ]). Therefore, the unclear distinction between case series and cohort studies can result in inconsistent study selection and unjustified exclusions from a systematic review. The risk of misclassification is particularly high because study authors also often mislabel their study or studies are not classified by their authors at all (see examples in [ 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 ]).

We propose a conceptualization of cohort studies in systematic reviews of comparative studies. The main objective of this conceptualization is to clarify the distinction between cohort studies and case series in systematic reviews, including non-randomized comparative studies. We discuss the potential impact of the proposed conceptualization on the body of evidence and workload.

Clarifying the distinction between case series and cohort studies (the solution)

In the following, we propose a conceptualization of cohort studies and case series, based on inherent design features (e.g., sampling), for systematic reviews that include comparative non-randomized studies. Our proposal builds on a recent conceptualization of cohort studies and case series by Dekkers et al. [ 29 ]. The main feature of this conceptualization is that it is exclusively based on inherent design features and is not affected by how the study data are analysed.

Cohort studies of one exposure/one group

Dekkers et al. [ 29 ] defined cohort studies with one exposure as studies with exposure-based sampling that enable the calculation of absolute measures of outcome risk. This definition means that "the absence of a control group in an exposure-based study does not define a case series" [ 29 ]. The definition of cohort studies according to Dekkers et al. [ 29 ] is summarized in Table 4.

Cohort studies of multiple exposures/more than one group

This idea can be easily extended to studies with more than one exposure. In this case, all studies with exposure-based sampling gathering multiple exposures (i.e., at least two different exposures, manifestations of exposures or levels of exposures) can be considered as (comparative) cohort studies (Fig. 3 ). The sampling is based on exposure, and there are different groups. Consequently, relative risks can be calculated [ 29 ]. The term “enables/can” implies that a predefined analytic comparison is not a prerequisite but that all studies with sufficient data to enable a reanalysis (e.g., in the publication, study reports, and supplementary material) would be classified as cohort studies.

Fig. 3: Cohort study (deduced from Dekkers et al. [ 29 ])

In short, all studies that enable calculation of a relative risk to quantify a difference in outcomes between different groups should be considered cohort studies.
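As a hypothetical illustration of this criterion, the sketch below shows the calculation that must be possible for a study to count as a cohort study under this conceptualization: absolute risks per exposure group and a relative risk comparing them (all counts invented):

```python
# Hypothetical sketch: absolute risks per exposure group and the relative risk.
events_a, n_a = 6, 10   # exposure A: events / group size
events_b, n_b = 2, 10   # exposure B: events / group size

risk_a = events_a / n_a
risk_b = events_b / n_b
relative_risk = risk_a / risk_b
print(f"Risk A {risk_a:.2f}, Risk B {risk_b:.2f}, relative risk {relative_risk:.2f}")
```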

Case series

According to Dekkers et al. [ 29 ], the sampling of a case series is either based on exposure and outcome (e.g., all patients are treated and have an adverse event) or case series include patients with a certain outcome regardless of exposure (see Fig. 4 ). Consequently, no absolute risk and also no relative effect measures for an outcome can be calculated in a case series. Note that sampling in a case series does not need to be consecutive. Consecutiveness would increase the quality of the case series, but a non-consecutive series is also a case series [ 29 ].

Fig. 4: Case series (Dekkers et al. [ 29 ])

In short, for a case series, neither absolute risks nor risk ratios can be calculated. Consequently, a case series cannot be comparative. The definition of a case series by Dekkers et al. [ 29 ] is summarized in Table 4.

It is noteworthy that the conceptualization also ensures a clear distinction of case series from other study designs that apply outcome-based sampling. Case series, case-control studies (including case-time-control), and self-controlled case-control designs (e.g., case-crossover) all have outcome-based sampling in common [ 29 ].

Case series have no control group at all because only patients with a certain manifestation of the outcome are sampled (e.g., individuals with a disease or deceased individuals). In contrast, all case-control designs as well as self-controlled case-control designs have a control group. In case-control studies, the control group consists of individuals with another manifestation of the outcome (e.g., healthy individuals or survivors); such a study can be thought of as two case series (a case group and a non-case group).

Self-controlled case-control studies are characterized by an intra-individual comparison (each individual is their own control) [ 30 ]. Information is also sampled when patients are not exposed. Therefore, case-control designs as well as self-controlled case-control studies enable the calculation of risk ratios. This approach is not possible for a case series.

Illustrating example

Above, we illustrated that when a vague definition is used, the classification of a study design might be influenced by the preparation and analysis of the study data. The proposed conceptualization is exclusively based on inherent design features (e.g., sampling, exposure). Reconsidering the example studies under the proposed conceptualization, both would be classified as cohort studies because the relative risk can be calculated. This becomes clear from Table 2 and Table 3. If the patients in Table 3 are rearranged according to the exposure and the data are reanalysed (i.e., calculation of absolute risks per group and of relative risks to compare groups), Table 3 can be converted into Table 2 (and, likewise, Fig. 2 can be converted to Fig. 3). In the study by Wong et al. [ 10 ], the mean blood loss in the group with placental invasion and in the group without placental invasion can be calculated and compared (e.g., with a relative risk and 95% confidence limits). In this study, data on gestational age are also provided in the table. Therefore, it is even possible to adjust the results for gestational age (e.g., using a logistic regression).
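The reanalysis described above (turning a "Table 3"-style listing into a "Table 2"-style comparison) amounts to regrouping individual patient rows by exposure and recomputing group risks. The sketch below uses invented patients, not data from Wong et al. or any of the cited studies:

```python
# Hypothetical sketch: regrouping case-series-style rows by exposure and
# recalculating absolute risks per group and a relative risk.
patients = [
    {"id": 1, "exposure": "treatment_A", "event": True},
    {"id": 2, "exposure": "treatment_B", "event": False},
    {"id": 3, "exposure": "treatment_A", "event": True},
    {"id": 4, "exposure": "treatment_B", "event": True},
    {"id": 5, "exposure": "treatment_A", "event": False},
    {"id": 6, "exposure": "treatment_B", "event": False},
]

def group_risk(rows, exposure):
    group = [r for r in rows if r["exposure"] == exposure]
    return sum(r["event"] for r in group) / len(group)

risk_a = group_risk(patients, "treatment_A")
risk_b = group_risk(patients, "treatment_B")
print(f"Risk A {risk_a:.2f}, Risk B {risk_b:.2f}, relative risk {risk_a / risk_b:.2f}")
```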

Discussion (the impact)

Influence on the body of evidence

The proposed conceptualization is exclusively based on inherent study design features; therefore, there is less room for misinterpretation compared with existing conceptualizations, because analysis features, the presentation of data, and the labelling of the study do not determine the classification. Thus, the conceptualization ensures consistent study selection for systematic reviews.

The prerequisite of an analytical comparison in the publication can lead to the unjustified exclusion of relevant studies from a systematic review. Study 1 would likely be included, whereas Study 2 would be excluded. The only differences between Study 1 and Study 2 are the preparation and analysis of the data. If the data source (e.g., chart review) is the same and the reanalysis of Study 2 (calculation of effect measures and statistical tests comparing the intervention and control groups) is performed with exactly the same approach as the existing analysis in Study 1, there can be no difference in the effect estimates between the studies, and the studies are at the same risk of bias. Thus, the inclusion of Study 1 and the exclusion of Study 2 contradict the requirement that systematic reviews identify all available evidence [ 31 ].

Considering that more studies would be eligible for inclusion and that the hierarchical paradigm of the levels of evidence is not valid per se, the proposed conceptualization can potentially enrich bodies of evidence and increase confidence in effect estimates.

Influence on workload

The additional inclusion of all studies that enable calculating a relative risk for the comparison of interest might affect the workload of systematic reviews. There might be a considerable number of studies that do not perform a comparison themselves but provide sufficient data for reanalysis. Usually, the electronic search strategy for systematic reviews of non-randomized studies is not limited to certain study types because no sufficiently sensitive search filters are available yet [ 32 ]. The search results will therefore usually already contain studies of the kind discussed above. However, from many abstracts it would not be directly clear whether sufficient data for recalculations are reported in the full-text article (e.g., a table like Table 3). Consequently, many additional potentially relevant full-text articles would have to be screened. Additionally, studies often assess various exposures (e.g., different baseline characteristics), and it might thus be difficult to identify the relevant exposures. Considering the large number of wrongly labelled studies, this approach can lead to additional screening effort [ 22 ].

As a result, more studies would be included in systematic reviews. All articles that provide potentially relevant data would have to be assessed in detail to decide whether reanalysis is feasible. For these studies, data extraction and a risk of bias assessment would have to be performed. Challenges in the risk of bias assessment would arise because most assessment tools are constructed to assess a predefined control group [ 33 ]. For example, items regarding the adequacy of the analysis (e.g., adjustment for confounders) could no longer be assessed. Effect measures would have to be calculated (e.g., risks by group and a relative risk with a 95% confidence limit), and possibly further analyses (e.g., adjustments for confounders) might be necessary for studies that provide sufficient data. Moreover, advanced biometrical expertise would be necessary to judge the feasibility of a reanalysis (i.e., whether relative risks can be calculated and whether there are sufficient data to adjust for confounders) and to conduct it.

Promising areas of application

In the medical literature, mislabelled cohort studies are more likely to be retrospective (the comparison planned after data collection) and based on routinely collected data (e.g., chart reviews or radiology databases) than prospectively planned (the comparison planned before data collection). Thus, it can be assumed that wrongly labelled studies tend to have lower methodological quality than studies that already include a comparison. This aspect should be considered in decisions about including studies that must be reanalysed. In research areas in which randomized controlled trials or large, prospectively planned, and well-conducted cohort studies can be expected (e.g., risk factors for widespread diseases), the approach is less promising for enriching the body of evidence. Consequently, in these areas, the additional effort might not be worthwhile.

Again, the conceptualization is particularly promising in research areas in which evidence is sparse because studies are difficult to conduct or populations are small or the event rates are low. These areas include rare diseases, adverse events/complications, sensitive groups (e.g., children or individuals with cognitive deficiencies) or rarely used interventions (e.g., costly innovations). In these areas, there might be no well-conducted studies at all [ 34 , 35 ]. Therefore, the proposed conceptualization in this report has great potential to increase confidence in effect estimates.

We have proposed a conceptualization for cohort studies with multiple exposures that ensures a clear distinction from case series. In this conceptualization, all studies that contain sufficient data for a reanalysis, and not only studies with a pre-existing analytic comparison, are classified as cohort studies and are considered appropriate for inclusion in systematic reviews. To the best of our knowledge, no systematic reviews exist that reanalyse (mislabelled) case series to create cohort studies. The outlined approach can potentially enrich the body of evidence and thereby enhance confidence in effect estimates and the strength of conclusions. However, this enrichment should be balanced against the additional workload.

References

Ijaz S, Verbeek JH, Mischke C, Ruotsalainen J. Inclusion of nonrandomized studies in Cochrane systematic reviews was found to be in need of improvement. J Clin Epidemiol. 2014;67(6):645–53.


von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ. 2007;335(7624):806–8.


Reeves BC, Deeks JJ, Higgins JP. Chapter 13: Including non-randomized studies. In: Cochrane Handbook for Systematic Reviews of Interventions. 2008;1:391.


Hartling L, Bond K, Santaguida PL, Viswanathan M, Dryden DM. Testing a tool for the classification of study designs in systematic reviews of interventions and exposures showed moderate reliability and low accuracy. J Clin Epidemiol. 2011;64(8):861–71.

EPOC-specific resources for review authors: What study designs should be included in an EPOC review and what should they be called? [ http://epoc.cochrane.org/resources/epoc-resources-review-authors ]. Accessed 12 July 2017.

Cheng WW, Ko CH, Chan AK. Paediatric stroke: case series. Hong Kong Med J. 2002;8(3):216–20.


Hernot S, Wadhera R, Kaintura M, Bhukar S, Pillai DS, Sehrawat U, George JS. Tracheocutaneous fistula closure: comparison of rhomboid flap repair with Z Plasty repair in a case series of 40 patients. Aesthet Plast Surg. 2016.

Stacchiotti S, Provenzano S, Dagrada G, Negri T, Brich S, Basso U, Brunello A, Grosso F, Galli L, Palassini E, et al. Sirolimus in advanced Epithelioid Hemangioendothelioma: a retrospective case-series analysis from the Italian rare cancer network database. Ann Surg Oncol. 2016;23(9):2735–44.

Sofiah S, Fung LYC. Placenta accreta: clinical risk factors, accuracy of antenatal diagnosis and effect on pregnancy outcome. Med J Malays. 2009;64(4):298–302.


Wong HS, Hutton J, Zuccollo J, Tait J, Pringle KC. The maternal outcome in placenta accreta: the significance of antenatal diagnosis and non-separation of placenta at delivery. N Z Med J. 2008;121(1277):30–8.


Mayorandan S, Meyer U, Gokcay G, Segarra NG, de Baulny HO, van Spronsen F, Zeman J, de Laet C, Spiekerkoetter U, Thimm E, et al. Cross-sectional study of 168 patients with hepatorenal tyrosinaemia and implications for clinical practice. Orphanet J Rare Dis. 2014;9(1):107.


Bartlett DC, Lloyd C, McKiernan PJ, Newsome PN. Early nitisinone treatment reduces the need for liver transplantation in children with tyrosinaemia type 1 and improves post-transplant renal function. J Inherit Metab Dis. 2014;37(5):745–52.


El-Karaksy H, Fahmy M, El-Raziky M, El-Koofy N, El-Sayed R, Rashed MS, El-Kiki H, El-Hennawy A, Mohsen N. Hereditary tyrosinemia type 1 from a single center in Egypt: clinical study of 22 cases. World J Pediatr. 2011;7(3):224–31.

Zeybek AC, Kiykim E, Soyucen E, Cansever S, Altay S, Zubarioglu T, Erkan T, Aydin A. Hereditary tyrosinemia type 1 in Turkey: twenty year single-center experience. Pediatr Int. 2015;57(2):281–9.

Helmy N, Akl Y, Kaddah S, El Hafiz HA, El Makhzangy H. A case series: Egyptian experience in using chemical pleurodesis as an alternative management in refractory hepatic hydrothorax. Arch Med Sci. 2010;6(3):336–42.

Niesen AD, Sprung J, Prakash YS, Watson JC, Weingarten TN. Case series: anesthetic management of patients with spinal and bulbar muscular atrophy (Kennedy's disease). Can J Anaesth. 2009;56(2):136–41.

de Mauroy JC, Journe A, Gagaliano F, Lecante C, Barral F, Pourret S. The new Lyon ARTbrace versus the historical Lyon brace: a prospective case series of 148 consecutive scoliosis with short time results after 1 year compared with a historical retrospective case series of 100 consecutive scoliosis; SOSORT award 2015 winner. Scoliosis. 2015;10:26.

Forner D, Phillips T, Rigby M, Hart R, Taylor M, Trites J. Submental island flap reconstruction reduces cost in oral cancer reconstruction compared to radial forearm free flap reconstruction: a case series and cost analysis. J Otolaryngol Head Neck Surg. 2016;45:11.


Kuhnt D, Bauer MHA, Sommer J, Merhof D, Nimsky C. Optic radiation fiber Tractography in Glioma patients based on high angular resolution diffusion imaging with compressed sensing compared with diffusion tensor imaging - initial experience. PLoS One. 2013;8(7):e70973.

Naesens R, Vlieghe E, Verbrugghe W, Jorens P, Ieven M. A retrospective observational study on the efficacy of colistin by inhalation as compared to parenteral administration for the treatment of nosocomial pneumonia associated with multidrug-resistant Pseudomonas Aeruginosa. BMC Infect Dis. 2011;11:317.

Toktas ZO, Konakci M, Yilmaz B, Eksi MS, Aksoy T, Yener Y, Koban O, Kilic T, Konya D. Pain control following posterior spine fusion: patient-controlled continuous epidural catheter infusion method yields better post-operative analgesia control compared to intravenous patient controlled analgesia method. A retrospective case series. Eur Spine J. 2016;25(5):1608–13.

Esene IN, Ngu J, Zoghby M, Solaroglu I, Sikod AM, Kotb A, Dechambenoit G, Husseiny H. Case series and descriptive cohort studies in neurosurgery: the confusion and solution. Childs Nerv Syst. 2014;30(8):1321–32.

Kellesarian SV, Yunker M, Ramakrishnaiah R, Malmstrom H, Kellesarian TV, Ros Malignaggi V, Javed F. Does incorporating zinc in titanium implant surfaces influence osseointegration? A systematic review. J Prosthet Dent. 2017;117(1):41–7.

Wijnands TF, Gortjes AP, Gevers TJ, Jenniskens SF, Kool LJ, Potthoff A, Ronot M, Drenth JP. Efficacy and safety of aspiration Sclerotherapy of simple hepatic cysts: a systematic review. AJR Am J Roentgenol. 2017;208(1):201–7.

Zapata LB, Oduyebo T, Whiteman MK, Houtchens MK, Marchbanks PA, Curtis KM. Contraceptive use among women with multiple sclerosis: a systematic review. Contraception. 2016;94(6):612–20.

Dogramaci EJ, Rossi-Fedele G. Establishing the association between nonnutritive sucking behavior and malocclusions: a systematic review and meta-analysis. J Am Dent Assoc. 2016;147(12):926–34. e926.

Kellesarian SV, Abduljabbar T, Vohra F, Gholamiazizi E, Malmstrom H, Romanos GE, Javed F. Does local Ibandronate and/or Pamidronate delivery enhance Osseointegration? A systematic review. J Prosthodont. 2016.

Crandall M, Eastman A, Violano P, Greene W, Allen S, Block E, Christmas AB, Dennis A, Duncan T, Foster S, et al. Prevention of firearm-related injuries with restrictive licensing and concealed carry laws: an eastern Association for the Surgery of trauma systematic review. J Trauma Acute Care Surg. 2016;81(5):952–60.

Dekkers OM, Egger M, Altman DG, Vandenbroucke JP. Distinguishing case series from cohort studies. Ann Intern Med. 2012;156(1_Part_1):37–40.

Petersen I, Douglas I, Whitaker H. Self controlled case series methods: an alternative to standard epidemiological study designs. BMJ. 2016;354.

Higgins JP, Green S. Cochrane handbook for systematic reviews of interventions, vol. 5: Wiley Online Library; 2008.

Marcano Belisario JS, Tudor Car L, Reeves TJA, Gunn LH, Car J. Search strategies to identify observational studies in MEDLINE and EMBASE. Cochrane Database Syst Rev. 2013;12.

Hayden JA, van der Windt DA, Cartwright JL, Cote P, Bombardier C. Assessing bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.

Institute for Quality and Efficiency in Health Care (IQWIG). Newborn screening for severe combined immunodeficiency (S15–02). 2017.

Institute for Quality and Efficiency in Health Care (IQWIG). Newborn screening for tyrosinaemia type 1 (S15–01). 2017.


Author information

Tim Mathes & Dawid Pieper, Institute for Research in Operative Medicine, Chair of Surgical Research, Faculty of Health, School of Medicine, Witten/Herdecke University, Ostmerheimer Str. 200, 51109, Cologne, Germany


Cite this article: Mathes, T., Pieper, D. Clarifying the distinction between case series and cohort studies in systematic reviews of comparative studies: potential impact on body of evidence and workload. BMC Med Res Methodol 17, 107 (2017). https://doi.org/10.1186/s12874-017-0391-8


