
Critical Appraisal Tools

JBI’s critical appraisal tools assist in assessing the trustworthiness, relevance and results of published papers.

These tools have recently been revised; the following articles detail the revision process:

"Assessing the risk of bias of quantitative analytical studies: introducing the vision for critical appraisal within JBI systematic reviews"

"revising the jbi quantitative critical appraisal tools to improve their applicability: an overview of methods and the development process".


Analytical Cross Sectional Studies

Checklist for analytical cross sectional studies.

Case Control Studies

Checklist for case control studies.

Case Reports

Checklist for case reports.

Case Series

Checklist for case series.

Munn Z, Barker TH, Moola S, Tufanaru C, Stern C, McArthur A, Stephenson M, Aromataris E. Methodological quality of case series studies: an introduction to the JBI critical appraisal tool. JBI Evidence Synthesis. 2020;18(10):2127-2133


Cohort Studies

Checklist for cohort studies.

Diagnostic Test Accuracy Studies

Checklist for diagnostic test accuracy studies.

Campbell JM, Klugar M, Ding S, Carmody DP, Hakonsen SJ, Jadotte YT, White S, Munn Z. Chapter 9: Diagnostic test accuracy systematic reviews. In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020


Economic Evaluations

Checklist for economic evaluations.

Prevalence Studies

Checklist for prevalence studies.

Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Chapter 5: Systematic reviews of prevalence and incidence. In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020


Qualitative Research  

Checklist for qualitative research.

Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based Healthc. 2015;13(3):179–187

Chapter 2: Systematic reviews of qualitative evidence


Quasi-Experimental Studies  

Checklist for quasi-experimental studies.

Barker TH, Habibi N, Aromataris E, Stone JC, Leonardi-Bee J, Sears K, et al. The revised JBI critical appraisal tool for the assessment of risk of bias for quasi-experimental studies. JBI Evid Synth. 2024;22(3):378-88.


Randomized Controlled Trials

Checklist for randomized controlled trials.

Barker TH, Stone JC, Sears K, Klugar M, Tufanaru C, Leonardi-Bee J, Aromataris E, Munn Z. The revised JBI critical appraisal tool for the assessment of risk of bias for randomized controlled trials. JBI Evidence Synthesis. 2023;21(3):494-506


Randomized controlled trials checklist (archive).

Systematic Reviews

Checklist for systematic reviews.

Aromataris E, Fernandez R, Godfrey C, Holly C, Khalil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. Int J Evid Based Healthc. 2015;13(3):132-40.

Chapter 10: Umbrella Reviews

Textual Evidence: Expert Opinion  

Checklist for textual evidence: expert opinion.

McArthur A, Klugarova J, Yan H, Florescu S. Chapter 4: Systematic reviews of text and opinion. In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020

Chapter 4: Systematic reviews of text and opinion

Textual Evidence: Narrative  

Checklist for textual evidence: narrative.

Textual Evidence: Policy

Checklist for textual evidence: policy.


Nuffield Department of Primary Care Health Sciences, University of Oxford

Critical Appraisal tools

Critical appraisal worksheets to help you appraise the reliability, importance and applicability of clinical evidence.

Critical appraisal is the systematic evaluation of clinical research papers in order to establish:

  • Does this study address a clearly focused question?
  • Did the study use valid methods to address this question?
  • Are the valid results of this study important?
  • Are these valid, important results applicable to my patient or population?

If the answer to any of these questions is “no”, you can save yourself the trouble of reading the rest of the paper.

This section contains useful tools and downloads for the critical appraisal of different types of medical evidence. Appraisal sheets are provided together with several helpful examples.

Critical Appraisal Worksheets

  • Systematic Reviews  Critical Appraisal Sheet
  • Diagnostics  Critical Appraisal Sheet
  • Prognosis  Critical Appraisal Sheet
  • Randomised Controlled Trials  (RCT) Critical Appraisal Sheet
  • Critical Appraisal of Qualitative Studies  Sheet
  • IPD Review  Sheet

Chinese - translated by Chung-Han Yang and Shih-Chieh Shao

  • Systematic Reviews  Critical Appraisal Sheet
  • Diagnostic Study  Critical Appraisal Sheet
  • Prognostic Critical Appraisal Sheet
  • RCT  Critical Appraisal Sheet
  • IPD reviews Critical Appraisal Sheet
  • Qualitative Studies Critical Appraisal Sheet 

German - translated by Johannes Pohl and Martin Sadilek

  • Systematic Review  Critical Appraisal Sheet
  • Diagnosis Critical Appraisal Sheet
  • Prognosis Critical Appraisal Sheet
  • Therapy / RCT Critical Appraisal Sheet

Lithuanian - translated by Tumas Beinortas

  • Systematic review appraisal Lithuanian (PDF)
  • Diagnostic accuracy appraisal Lithuanian  (PDF)
  • Prognostic study appraisal Lithuanian  (PDF)
  • RCT appraisal sheets Lithuanian  (PDF)

Portuguese - translated by Enderson Miranda, Rachel Riera and Luis Eduardo Fontes

  • Portuguese – Systematic Review Study Appraisal Worksheet
  • Portuguese – Diagnostic Study Appraisal Worksheet
  • Portuguese – Prognostic Study Appraisal Worksheet
  • Portuguese – RCT Study Appraisal Worksheet
  • Portuguese – Systematic Review Evaluation of Individual Participant Data Worksheet
  • Portuguese – Qualitative Studies Evaluation Worksheet

Spanish - translated by Ana Cristina Castro

  • Systematic Review  (PDF)
  • Diagnosis  (PDF)
  • Prognosis  Spanish Translation (PDF)
  • Therapy / RCT  Spanish Translation (PDF)

Persian - translated by Ahmad Sofi Mahmudi

  • Prognosis  (PDF)
  • PICO  Critical Appraisal Sheet (PDF)
  • PICO Critical Appraisal Sheet (MS-Word)
  • Educational Prescription  Critical Appraisal Sheet (PDF)

Explanations & Examples

  • Pre-test probability
  • SpPin and SnNout
  • Likelihood Ratios
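These three concepts come together in a single piece of arithmetic: convert the pre-test probability to odds, multiply by the likelihood ratio for the test result, and convert back to a probability. The sketch below is a minimal illustration of that calculation; the probabilities and likelihood ratios in the comments are invented for demonstration and are not taken from the worksheets above.

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Apply a likelihood ratio to a pre-test probability (Bayes' theorem in odds form)."""
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

# Hypothetical example: pre-test probability 30%, positive result on a highly
# specific test (LR+ = 10). "SpPin": a Specific test, when Positive, helps rule
# the diagnosis in.
print(round(post_test_probability(0.30, 10), 2))   # ~0.81

# Hypothetical example: same pre-test probability, negative result on a highly
# sensitive test (LR- = 0.1). "SnNout": a Sensitive test, when Negative, helps
# rule the diagnosis out.
print(round(post_test_probability(0.30, 0.1), 2))  # ~0.04
```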


Critical appraisal tools and resources

CASP has produced simple critical appraisal checklists for the key study designs. These are not meant to replace considered thought and judgement when reading a paper but are for use as a guide and aide memoire. All CASP checklists cover three main areas: validity, results and clinical relevance.

What is Critical Appraisal?

Critical Appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context. It is an essential skill for evidence-based medicine because it allows people to find and use research evidence reliably and efficiently.


CASP also maintains a bibliography: a complete list (published and unpublished) of articles and research papers about CASP and other critical appraisal tools and approaches, covering 1993–2012.



Critical Appraisal of Studies

Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value/relevance in a particular context by providing a framework to evaluate the research. During the critical appraisal process, researchers can:

  • Decide whether studies have been undertaken in a way that makes their findings reliable as well as valid and unbiased
  • Make sense of the results
  • Know what these results mean in the context of the decision they are making
  • Determine if the results are relevant to their patients/schoolwork/research

Burls, A. (2009). What is critical appraisal? In the What is…? series: Evidence-based medicine. Available online.

Critical appraisal is part of writing high-quality reviews, such as systematic and integrative reviews, and of evaluating evidence from RCTs and other study designs. For more information on systematic reviews, check out our Systematic Review guide.



Evidence-based Practice in Healthcare

Critical Appraisal


Critically Appraised Topics

CATs are critical summaries of a research article.  They are concise, standardized, and provide an appraisal of the research.

If a CAT already exists for an article, it can be read quickly and the clinical bottom line can be put to use as the clinician sees fit.  If a CAT does not exist, the CAT format provides a template to appraise the article of interest.

Critical appraisal is the process of carefully and systematically assessing the outcome of scientific research (evidence) to judge its trustworthiness, value and relevance in a particular context. Critical appraisal looks at the way a study is conducted and examines factors such as internal validity, generalizability and relevance.

First, some initial appraisal questions you could ask are:

  • Is the evidence from a known, reputable source?
  • Has the evidence been evaluated in any way? If so, how and by whom?
  • How up-to-date is the evidence?

Second, look at the study itself and ask the following general appraisal questions:

  • Is the methodology used appropriate for the researcher's question? Is the aim clear?
  • How was the outcome measured? Is that a reliable way to measure? How large was the sample size? Does the sample accurately reflect the population?
  • Can the results be replicated?
  • Have exclusions or limitations been listed?
  • What implications does the study have for your practice? Is it relevant, logical?
  • Can the results be applied to your organization/purpose?
  • Centre for Evidence Based Medicine - Critical Appraisal Tools
  • Duke University Medical Center Library - Appraising Evidence

CASP Checklists 

CASP Case Control Checklist

CASP Clinical Prediction Rule Checklist

CASP Cohort Study Checklist

CASP Diagnostic Checklist

CASP Economic Evaluation Checklist

CASP Qualitative Study Checklist

CASP Randomized Controlled Trial (RCT) Checklist

CASP Systematic Review Checklist

Appraisal: Validity vs. Reliability & Calculators

Appraisal is the third step in the Evidence Based Medicine process. It requires that the evidence found be evaluated for its validity and clinical usefulness. 

What is validity?

  • Internal validity is the extent to which the experiment demonstrated a cause-effect relationship between the independent and dependent variables.
  • External validity is the extent to which one may safely generalize from the sample studied to the defined target population and to other populations.

What is reliability?

Reliability is the extent to which the results of the experiment are replicable.  The research methodology should be described in detail so that the experiment could be repeated with similar results.

Statistical Calculators for Appraisal

  • Diagnostic Test Calculator
  • Risk Reduction Calculator
  • Diagnostic Test - calculates the Sensitivity, Specificity, PPV, NPV, LR+, and LR-
  • Prospective Study - calculates the Relative Risk (RR), Absolute Risk Reduction (ARR), and Number Needed to Treat (NNT)
  • Case-control Study - calculates the Odds Ratio (OR)
  • Randomized Controlled Trial (RCT) - calculates the Relative Risk Reduction (RRR), ARR, and NNT
  • Chi-Square Calculator
  • Likelihood Ratio (LR) Calculations - The LR is used to assess how good a diagnostic test is and to help in selecting an appropriate diagnostic test(s) or sequence of tests. LRs have advantages over sensitivity and specificity because they are less likely to change with the prevalence of the disorder, they can be calculated for several levels of the symptom/sign or test, they can be used to combine the results of multiple diagnostic tests, and they can be used to calculate post-test probability for a target disorder (see the sketch after this list).
  • Odds Ratio - In statistics, the odds ratio (usually abbreviated "OR") is one of three main ways to quantify how strongly the presence or absence of property A is associated with the presence or absence of property B in a given population.
  • Odds Ratio to NNT Converter - To convert odds ratios to NNTs, enter a number that is > 1 or < 1 in the odds ratio textbox and a number that is not equal to 0 or 1 for the Patient's Expected Event Rate (PEER). After entering the numbers, click "Calculate" to convert the odds ratio to NNT.
  • One Factor ANOVA
  • Relative Risk Calculator - In statistics and epidemiology, relative risk or risk ratio (RR) is the ratio of the probability of an event occurring (for example, developing a disease, being injured) in an exposed group to the probability of the event occurring in a comparison, non-exposed group.
  • Two Factor ANOVA
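To make the quantities behind these calculators concrete, here is a minimal sketch that derives the usual appraisal statistics from a 2×2 table and converts an odds ratio to an NNT given the patient's expected event rate (PEER). The numbers are invented for illustration, and the formulas are the standard epidemiological definitions rather than the code behind any particular calculator linked above.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values and likelihood ratios
    from a 2x2 diagnostic table (disease present/absent vs. test +/-)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
    }

def treatment_stats(events_treated, n_treated, events_control, n_control):
    """Relative risk, relative/absolute risk reduction, NNT and odds ratio
    from a 2x2 treatment (or exposure) table."""
    risk_t = events_treated / n_treated
    risk_c = events_control / n_control
    arr = risk_c - risk_t                        # absolute risk reduction
    odds_t = events_treated / (n_treated - events_treated)
    odds_c = events_control / (n_control - events_control)
    return {
        "RR": risk_t / risk_c,                   # relative risk
        "RRR": 1 - risk_t / risk_c,              # relative risk reduction
        "ARR": arr,
        "NNT": 1 / arr if arr else float("inf"),
        "OR": odds_t / odds_c,                   # odds ratio
    }

def odds_ratio_to_nnt(odds_ratio, peer):
    """Convert an odds ratio to an NNT for a given patient's expected event
    rate (PEER); assumes 0 < PEER < 1 and odds_ratio != 1."""
    return (1 - peer * (1 - odds_ratio)) / (peer * (1 - peer) * (1 - odds_ratio))

# Invented numbers, for illustration only.
print(diagnostic_stats(tp=90, fp=20, fn=10, tn=180))
print(treatment_stats(events_treated=10, n_treated=100, events_control=20, n_control=100))
print(odds_ratio_to_nnt(odds_ratio=0.5, peer=0.2))  # ≈ 11.25
```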


  • Review Article
  • Published: 20 January 2009

How to critically appraise an article

  • Jane M Young 1 &
  • Michael J Solomon 2  

Nature Clinical Practice Gastroenterology & Hepatology, volume 6, pages 82–91 (2009)


Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article in order to assess the usefulness and validity of research findings. The most important components of a critical appraisal are an evaluation of the appropriateness of the study design for the research question and a careful assessment of the key methodological features of this design. Other factors that also should be considered include the suitability of the statistical methods used and their subsequent interpretation, potential conflicts of interest and the relevance of the research to one's own practice. This Review presents a 10-step guide to critical appraisal that aims to assist clinicians to identify the most relevant high-quality studies available to guide their clinical practice.

Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article

Critical appraisal provides a basis for decisions on whether to use the results of a study in clinical practice

Different study designs are prone to various sources of systematic bias

Design-specific, critical-appraisal checklists are useful tools to help assess study quality

Assessments of other factors, including the importance of the research question, the appropriateness of statistical analysis, the legitimacy of conclusions and potential conflicts of interest are an important part of the critical appraisal process




Author information

Authors and affiliations.

JM Young is an Associate Professor of Public Health and the Executive Director of the Surgical Outcomes Research Centre at the University of Sydney and Sydney South-West Area Health Service, Sydney, Australia.

Jane M Young

MJ Solomon is Head of the Surgical Outcomes Research Centre and Director of Colorectal Research at the University of Sydney and Sydney South-West Area Health Service, Sydney, Australia.

Michael J Solomon


Corresponding author

Correspondence to Jane M Young .

Ethics declarations

Competing interests.

The authors declare no competing financial interests.


Cite this article.

Young, J., Solomon, M. How to critically appraise an article. Nat Rev Gastroenterol Hepatol 6 , 82–91 (2009). https://doi.org/10.1038/ncpgasthep1331


Received: 10 August 2008

Accepted: 03 November 2008

Published: 20 January 2009

Issue Date: February 2009

DOI: https://doi.org/10.1038/ncpgasthep1331



  • Volume 27, Issue Suppl 2
  • 12 Critical appraisal tools for qualitative research – towards ‘fit for purpose’

  • Veronika Williams 1,
  • Anne-Marie Boylan 2,
  • Nikki Newhouse 2,
  • David Nunan 2
  • 1 Nipissing University, North Bay, Canada
  • 2 University of Oxford, Oxford, UK

Qualitative research has an important place within evidence-based health care (EBHC), contributing to policy on patient safety and quality of care, supporting understanding of the impact of chronic illness, and explaining contextual factors surrounding the implementation of interventions. However, the question of whether, when and how to critically appraise qualitative research persists. Whilst there is consensus that we cannot, and should not, simplistically adopt existing approaches for appraising quantitative methods, it is nonetheless crucial that we develop a better understanding of how to subject qualitative evidence to robust and systematic scrutiny in order to assess its trustworthiness and credibility. Currently, most appraisal methods and tools for qualitative health research use one of two approaches: checklists or frameworks. We have previously outlined the specific issues with these approaches (Williams et al 2019). A fundamental challenge still to be addressed, however, is the lack of differentiation between different methodological approaches when appraising qualitative health research. We do this routinely when appraising quantitative research: we have specific checklists and tools to appraise randomised controlled trials, diagnostic studies, observational studies and so on. Current checklists for qualitative research typically treat the entire paradigm as a single design (illustrated by titles of tools such as ‘CASP Qualitative Checklist’, ‘JBI checklist for qualitative research’), and frameworks tend to require substantial understanding of a given methodological approach without providing guidance on how they should be applied. Given the fundamental differences in the aims and outcomes of different methodologies, such as ethnography, grounded theory, and phenomenological approaches, as well as specific aspects of the research process, such as sampling, data collection and analysis, we cannot treat qualitative research as a single approach. Rather, we must strive to recognise core commonalities relating to rigour while considering key methodological differences. We have argued for a reconsideration of current approaches to the systematic appraisal of qualitative health research (Williams et al 2021), and propose the development of a tool or tools that allow differentiated evaluations of multiple methodological approaches rather than continuing to treat qualitative health research as a single, unified method. Here we propose a workshop for researchers interested in the appraisal of qualitative health research and invite them to develop an initial consensus regarding core aspects of a new appraisal tool that differentiates between the different qualitative research methodologies and thus provides a ‘fit for purpose’ tool for both educators and clinicians.

https://doi.org/10.1136/ebm-2022-EBMLive.36



  • Volume 24, Issue 2
  • Five tips for developing useful literature summary tables for writing review articles

  • Ahtisham Younas 1, 2 (http://orcid.org/0000-0003-0157-5319),
  • Parveen Ali 3, 4 (http://orcid.org/0000-0002-7839-8130)
  • 1 Memorial University of Newfoundland, St John's, Newfoundland, Canada
  • 2 Swat College of Nursing, Pakistan
  • 3 School of Nursing and Midwifery, University of Sheffield, Sheffield, South Yorkshire, UK
  • 4 Sheffield University Interpersonal Violence Research Group, Sheffield University, Sheffield, UK
  • Correspondence to Ahtisham Younas, Memorial University of Newfoundland, St John's, NL A1C 5C4, Canada; ay6133{at}mun.ca

https://doi.org/10.1136/ebnurs-2021-103417


Introduction

Literature reviews offer a critical synthesis of empirical and theoretical literature to assess the strength of evidence, develop guidelines for practice and policymaking, and identify areas for future research. 1 A literature review is often essential and usually the first task in any research endeavour, particularly in masters or doctoral level education. For effective data extraction and rigorous synthesis in reviews, the use of literature summary tables is of utmost importance. A literature summary table provides a synopsis of an included article. It succinctly presents its purpose, methods, findings and other relevant information pertinent to the review. The aim of developing these literature summary tables is to provide the reader with the information at one glance. Since there are multiple types of reviews (eg, systematic, integrative, scoping, critical and mixed methods) with distinct purposes and techniques, 2 there could be various approaches for developing literature summary tables, making it a complex task, especially for novice researchers or reviewers. Here, we offer five tips for authors of review articles, relevant to all types of reviews, for creating useful and relevant literature summary tables. We also provide examples from our published reviews to illustrate how useful literature summary tables can be developed and what sort of information should be provided.

Tip 1: provide detailed information about frameworks and methods


Figure 1. Tabular literature summaries from a scoping review. Source: Rasheed et al.3

The provision of information about conceptual and theoretical frameworks and methods is useful for several reasons. First, in quantitative (reviews synthesising the results of quantitative studies) and mixed reviews (reviews synthesising the results of both qualitative and quantitative studies to address a mixed review question), it allows the readers to assess the congruence of the core findings and methods with the adapted framework and tested assumptions. In qualitative reviews (reviews synthesising results of qualitative studies), this information is beneficial for readers to recognise the underlying philosophical and paradigmatic stance of the authors of the included articles. For example, imagine the authors of an article, included in a review, used phenomenological inquiry for their research. In that case, the review authors and the readers of the review need to know what kind of (transcendental or hermeneutic) philosophical stance guided the inquiry. Review authors should, therefore, include the philosophical stance in their literature summary for the particular article. Second, information about frameworks and methods enables review authors and readers to judge the quality of the research, which allows for discerning the strengths and limitations of the article. For example, suppose the authors of an included article intended to develop a new scale and test its psychometric properties. To achieve this aim, they used a convenience sample of 150 participants and performed exploratory (EFA) and confirmatory factor analysis (CFA) on the same sample. Such an approach would indicate a flawed methodology because EFA and CFA should not be conducted on the same sample. The review authors must include this information in their summary table. Omitting this information from a summary could lead to the inclusion of a flawed article in the review, thereby jeopardising the review’s rigour.

Tip 2: include strengths and limitations for each article

Critical appraisal of individual articles included in a review is crucial for increasing the rigour of the review. Despite using various templates for critical appraisal, authors often do not provide detailed information about each reviewed article’s strengths and limitations. Merely noting the quality score based on standardised critical appraisal templates is not adequate because the readers should be able to identify the reasons for assigning a weak or moderate rating. Many recent critical appraisal checklists (eg, Mixed Methods Appraisal Tool) discourage review authors from assigning a quality score and recommend noting the main strengths and limitations of included studies. It is also vital that methodological and conceptual limitations and strengths of the articles included in the review are provided because not all review articles include empirical research papers. Rather, some reviews synthesise the theoretical aspects of articles. Providing information about conceptual limitations is also important for readers to judge the quality of the foundations of the research. For example, if you included a mixed-methods study in the review, reporting the methodological and conceptual limitations about ‘integration’ is critical for evaluating the study’s strength. Suppose the authors only collected qualitative and quantitative data and did not state the intent and timing of integration. In that case, the strength of the study is weak. Integration only occurred at the level of data collection. However, integration may not have occurred at the analysis, interpretation and reporting levels.

Tip 3: write conceptual contribution of each reviewed article

While reading and evaluating review papers, we have observed that many review authors only provide core results of the article included in a review and do not explain the conceptual contribution offered by the included article. We refer to conceptual contribution as a description of how the article’s key results contribute towards the development of potential codes, themes or subthemes, or emerging patterns that are reported as the review findings. For example, the authors of a review article noted that one of the research articles included in their review demonstrated the usefulness of case studies and reflective logs as strategies for fostering compassion in nursing students. The conceptual contribution of this research article could be that experiential learning is one way to teach compassion to nursing students, as supported by case studies and reflective logs. This conceptual contribution of the article should be mentioned in the literature summary table. Delineating each reviewed article’s conceptual contribution is particularly beneficial in qualitative reviews, mixed-methods reviews, and critical reviews that often focus on developing models and describing or explaining various phenomena. Figure 2 offers an example of a literature summary table. 4

Figure 2. Tabular literature summaries from a critical review. Source: Younas and Maddigan.4

Tip 4: compose potential themes from each article during summary writing

While developing literature summary tables, many authors use themes or subthemes reported in the given articles as the key results of their own review. Such an approach prevents the review authors from understanding the article’s conceptual contribution, developing rigorous synthesis and drawing reasonable interpretations of results from an individual article. Ultimately, it affects the generation of novel review findings. For example, one of the articles about women’s healthcare-seeking behaviours in developing countries reported a theme ‘social-cultural determinants of health as precursors of delays’. Instead of using this theme as one of the review findings, the reviewers should read and interpret beyond the given description in an article, and compare and contrast the themes and findings from one article with those from another to find similarities and differences, and to understand and explain the bigger picture for their readers. Therefore, while developing literature summary tables, think twice before using the predeveloped themes. Including your themes in the summary tables (see figure 1) demonstrates to the readers that a robust method of data extraction and synthesis has been followed.

Tip 5: create your personalised template for literature summaries

Often templates are available for data extraction and development of literature summary tables. The available templates may be in the form of a table, chart or a structured framework that extracts some essential information about every article. The commonly used information may include authors, purpose, methods, key results and quality scores. While extracting all relevant information is important, such templates should be tailored to meet the needs of the individual review. For example, for a review about the effectiveness of healthcare interventions, a literature summary table must include information about the intervention, its type, content, timing, duration, setting, effectiveness, negative consequences, and receivers and implementers’ experiences of its usage. Similarly, literature summary tables for articles included in a meta-synthesis must include information about the participants’ characteristics, research context and conceptual contribution of each reviewed article so as to help the reader make an informed decision about the usefulness or lack of usefulness of the individual article in the review and the whole review.
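As a sketch of what such a personalised template might look like for a hypothetical review of healthcare interventions, the column names below follow the elements suggested in this tip; they are illustrative assumptions, not a published template, and reviewers would add one row per included article.

```python
import csv

# Columns tailored to a hypothetical review of healthcare interventions,
# combining general elements (authors, purpose, methods) with the
# intervention-specific elements suggested above.
COLUMNS = [
    "authors_year", "purpose", "design_methods", "sample_setting",
    "intervention_type", "intervention_content", "timing_duration",
    "effectiveness", "negative_consequences", "user_experiences",
    "strengths", "limitations", "conceptual_contribution",
]

# Write an empty template; each included article later becomes one row.
with open("literature_summary_template.csv", "w", newline="") as f:
    csv.DictWriter(f, fieldnames=COLUMNS).writeheader()
```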

In conclusion, narrative or systematic reviews are almost always conducted as a part of any educational project (thesis or dissertation) or academic or clinical research. Literature reviews are the foundation of research on a given topic. Robust and high-quality reviews play an instrumental role in guiding research, practice and policymaking. However, the quality of reviews is also contingent on rigorous data extraction and synthesis, which require developing literature summaries. We have outlined five tips that could enhance the quality of the data extraction and synthesis process by developing useful literature summaries.


Twitter @Ahtisham04, @parveenazamali

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Not commissioned; externally peer reviewed.


  • Open access
  • Published: 11 April 2024

The role of champions in the implementation of technology in healthcare services: a systematic mixed studies review

  • Sissel Pettersen 1 ,
  • Hilde Eide 2 &
  • Anita Berg 1  

BMC Health Services Research, volume 24, Article number: 456 (2024)


Champions play a critical role in implementing technology within healthcare services. While prior studies have explored the presence and characteristics of champions, this review delves into the experiences of healthcare personnel holding champion roles, as well as the experiences of healthcare personnel interacting with them. By synthesizing existing knowledge, this review aims to inform decisions regarding the inclusion of champions as a strategy in technology implementation and guide healthcare personnel in these roles.

A systematic mixed studies review, covering qualitative, quantitative, or mixed designs, was conducted from September 2022 to March 2023. The search spanned Medline, Embase, CINAHL, and Scopus, focusing on studies published from 2012 onwards. The review centered on health personnel serving as champions in technology implementation within healthcare services. Quality assessments utilized the Mixed Methods Appraisal Tool (MMAT).

From 1629 screened studies, 23 were included. The champion role was often examined within the broader context of technology implementation. Limited studies explicitly explored experiences related to the champion role from both champions’ and health personnel’s perspectives. Champions emerged as promoters of technology, supporting its adoption. Success factors included anchoring and selection processes, champions’ expertise, and effective role performance.

The specific tasks and responsibilities assigned to champions differed across reviewed studies, highlighting that the role of champion is a broad one, dependent on the technology being implemented and the site implementing it. Findings indicated a correlation between champion experiences and organizational characteristics. The role’s firm anchoring within the organization is crucial. Limited evidence suggests that volunteering, hiring newly graduated health personnel, and having multiple champions can facilitate technology implementation. Existing studies predominantly focused on client health records and hospitals, emphasizing the need for broader research across healthcare services.

Conclusions

With a clear mandate, dedicated time, and proper training, health personnel in champion roles can significantly contribute professional, technological, and personal competencies to facilitate technology adoption within healthcare services. The review finds that the concept of a champion is broad, with varied definitions of the champion role across studies. This underscores the importance of describing organizational characteristics, and highlights areas for future research to enhance technology implementation strategies in different healthcare settings with the support of a champion.


Digital health technologies play a transformative role in healthcare service systems [ 1 , 2 ]. The utilization of technology and digitalization is essential for ensuring patient safety, delivering high quality, cost-effective, and sustainable healthcare services [ 3 , 4 ]. The implementation of technology in healthcare services is a complex process that demands systematic changes in roles, workflows, and service provision [ 5 , 6 ].

The successful implementation of new technologies in healthcare services relies on the adaptability of health professionals [ 7 , 8 , 9 ]. Champions have been identified as a key factor in the successful implementation of technology among health personnel [ 10 , 11 , 12 ]. However, they have rarely been studied as an independent strategy; instead, they are often part of a broader array of strategies in implementation studies (e.g., Hudson [ 13 ], Gullslett and Bergmo [ 14 ]). Prior research has frequently focused on determining the presence or absence of champions [ 10 , 12 , 15 ], as well as investigating the characteristics of individuals assuming the champion role (e.g., George et al. [ 16 ], Shea and Belden [ 17 ]).

Recent reviews on champions [ 18 , 19 , 20 ] have studied their effects on adherence to guidelines, implementation of innovations and facilitation of evidence-based practice. While these reviews suggest that having champions yields positive effects, they underscore the need for studies that offer detailed insights into the champion’s role concerning specific types of interventions.

There is limited understanding of the practical role requirements and the actual experiences of health personnel in performing the champion role in the context of technology implementation within healthcare services. Further, this knowledge is needed to guide future research on the practical, professional, and relational prerequisites for health personnel in this role and for organizations to successfully employ champions as a strategy in technology implementation processes.

This review seeks to synthesize the existing empirical knowledge concerning the experiences of those in the champion role and the perspectives of health personnel involved in technology implementation processes. The aim is to contribute valuable insights that enhance our understanding of practical role requirements, the execution of the champion role, and best practices in this domain.

The terminology for champions varies [ 10 , 19 ] and there is a lack of explicit conceptualization of the term ‘champion’ in the implementation literature [ 12 , 18 ]. Various terms for individuals with similar roles also exist in the literature, such as implementation leader, opinion leader, facilitator, change agent and superuser. For the purpose of this study, we have adopted the terminology utilized in the recent review by Rigby, Redley and Hutchinson [ 21 ], collectively referring to these roles as ‘champions’. This review aims to explore the experiences of health personnel in their role as champions and the experiences of health personnel interacting with them in the implementation of technology in the healthcare services.

Prior review studies on champions in healthcare services have employed various designs [ 10 , 18 , 19 , 20 ]. In this review, we utilized a comprehensive mixed studies search to identify relevant empirical studies [ 22 ]. The search was conducted utilizing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, ensuring a transparent and comprehensive overview that can be replicated or updated by others [ 23 ]. The study protocol is registered in PROSPERO (ID CRD42022335750), providing a more comprehensive description of the methods [ 24 ]. A systematic mixed studies review, examining research using diverse study designs, is well-suited for synthesizing existing knowledge and identifying gaps by harnessing the strengths of both qualitative and quantitative methods [ 22 ]. Our search encompassed qualitative, quantitative, and mixed methods designs to capture experiences with the role of champions in technology implementation.

Search strategy and study selection

Search strategy.

The first author, in collaboration with a librarian, developed the search strategy based on initial searches to identify appropriate terms and truncations that align with the eligibility criteria. The search was constructed utilizing a combination of MeSH terms and keywords related to technology, implementation, champion, and attitudes/experiences. Conducted in August/September 2022, the search encompassed four databases: Medline, Embase, CINAHL, and Scopus, with an updated search conducted in March 2023. The full search strategy for Medline is provided in Appendix 1. The searches in Embase, CINAHL and Scopus employed the same strategy, with adapted terms and phrases to meet the requirements of each respective database.

Eligibility criteria

We included all empirical studies employing qualitative, quantitative, and mixed methods designs that detailed the experiences and/or attitudes of health personnel regarding the champion role in the implementation of technology in healthcare services. Articles in the English language published between 2012 and 2023 were considered. The selected studies involved technology implemented or adapted within healthcare services.

Conference abstracts and review articles were excluded from consideration. Articles published prior to 2012 were excluded as a result of the rapid development of technology, which could impact the experiences reported. Furthermore, articles involving surgical technology and pre-implementation studies were also excluded, as the focus was on capturing experiences and attitudes from the adoption and daily use of technology. The study also excluded articles that involved champions without clinical health care positions.

Study selection

A total of 1629 studies were identified and downloaded from the selected databases, with Covidence [ 25 ] utilized as a software platform for screening. After removing 624 duplicate records, all team members collaborated to calibrate the screening process utilizing the eligibility criteria on the initial 50 studies. Subsequently, the remaining abstracts were independently screened by two researchers, blinded to each other, to ensure adherence to the eligibility criteria. Studies were included if the title and abstract included the term champion or its synonyms, along with technology in healthcare services, implementation, and health personnel’s experiences or attitudes. Any discrepancies were resolved through consensus among all team members. A total of 949 abstracts were excluded for not meeting this inclusion condition. During the initial search, 56 remaining studies underwent full-text screening, resulting in identification of 22 studies qualified for review.

In the updated search covering the period September 2022 to March 2023, 64 new studies were identified. Of these, 18 studies underwent full-text screening, and one study was included in our review. The total number of included studies is 23. The PRISMA flowchart (Fig.  1 ) illustrates the process.

Figure 1. Flow chart illustrating the study selection and screening process.

Data extraction

The research team developed an extraction form for the included studies utilizing an Excel spreadsheet. Following data extraction, the information included the name of author(s), year of publication, country/countries, title of the article, setting, aim, design, participants and sample size of the studies, technology utilized in healthcare services, the name/title used to describe the champion role, how the studies were analyzed, and details of attitudes/experiences with the role of champion. Data extraction was conducted by SP, and the results were deliberated in a workshop with the other researchers, AB and HE, until a consensus was reached. Any discrepancies were resolved through discussions. The data extraction was categorized into three categories: qualitative, quantitative, and mixed methods, in preparation for quality appraisal.

Quality appraisal

The MMAT [ 26 ] was employed to assess the quality of the 23 included studies. Specifically designed for mixed studies reviews, the MMAT allows for appraisal of the methodological quality of studies falling into five categories. The studies in our review encompassed qualitative, quantitative descriptive, and mixed methods studies. The MMAT begins with two screening questions to confirm the empirical nature of each study. Subsequently, all studies were categorized by type and evaluated against specific criteria based on their research methods, with ratings of ‘Yes,’ ‘No’ or ‘Can’t tell.’ The MMAT discourages overall scores in favor of providing a detailed explanation for each criterion. Consequently, we did not compute overall methodological quality scores and retained all 23 studies in our review. Two researchers independently scored the studies, and any discrepancies were discussed among all team members until a consensus was reached. The results of the MMAT assessments are provided in Appendix  2 .
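
The sketch below illustrates how per-criterion ratings can be recorded and summarized without collapsing them into an overall score; the studies, criterion numbers, and ratings shown are invented for illustration and do not reproduce Appendix 2.

```python
from collections import Counter

# Invented ratings for two hypothetical studies; values are 'Yes', 'No', or "Can't tell".
mmat_ratings = {
    "Study A (qualitative)":   {"1.1": "Yes", "1.2": "Yes", "1.3": "Can't tell",
                                "1.4": "Yes", "1.5": "Yes"},
    "Study B (mixed methods)": {"5.1": "Yes", "5.2": "No", "5.3": "Yes",
                                "5.4": "Yes", "5.5": "Can't tell"},
}

for study, ratings in mmat_ratings.items():
    # Report the distribution of ratings per study rather than a single score.
    distribution = Counter(ratings.values())
    print(study, dict(distribution))
```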

Data synthesis

Based on discussions of this material, additional tables were formulated to present a comprehensive overview of the study characteristics categorized by study design, study settings, technology included, and descriptions/characteristics of the champion role. To capture attitudes and experiences associated with the champion role, the findings from the included studies were translated into narrative texts [ 22 ]. Subsequently, the reviewers worked collaboratively to conduct a thematic analysis, drawing inspiration from Braun and Clarke [ 27 ]. Throughout the synthesis process, multiple meetings were conducted to discern and define the emerging themes and subthemes.
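
The overview tables of study characteristics could be tabulated with a small script such as the one below; the study entries are invented placeholders, not rows from Table 1.

```python
from collections import Counter

# Invented placeholder entries; Table 1 holds the real characteristics.
studies = [
    {"design": "Qualitative", "setting": "Hospital", "technology": "EHR"},
    {"design": "Mixed methods", "setting": "Primary health care", "technology": "Telemedicine"},
    {"design": "Qualitative", "setting": "Home-based care", "technology": "Nurse call system"},
]

for characteristic in ("design", "setting", "technology"):
    counts = Counter(study[characteristic] for study in studies)
    print(characteristic, dict(counts))
```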

The adoption of new technology in healthcare services can be perceived as both an event and a process. According to Iqbal [ 28 ], experience is defined as the knowledge and understanding gained after an event or the process of living through or undergoing an event. This review synthesizes existing empirical knowledge regarding the experiences of occupying the champion role and the perspectives of health personnel interacting with champions in technology implementation processes.

Study characteristics

The review encompassed a total of 23 studies, and an overview of these studies is presented in Table  1 . Of these, fourteen studies employed a qualitative design, four a quantitative design, and five a mixed methods design. The geographical distribution revealed that the majority of studies were conducted in the USA (8), followed by Australia (5), England (4), Canada (2), Norway (2), Ireland (1), and Malaysia (1). In terms of settings, 11 studies were conducted in hospitals, five in primary health care, three in home-based care settings, and four in mixed settings where two or more settings collaborated. Various technologies were employed across these studies, with client health records (7) and telemedicine (5) being the most frequently utilized. All studies included experiences from champions or health personnel collaborating with champions in their respective healthcare services. Only three studies had the champion role as a main objective [ 29 , 30 , 31 ]. The remaining studies described champions as one of the strategies in technology implementation processes, including 10 evaluation studies (among them feasibility studies [ 32 , 33 , 34 ] and one cost-benefit study [ 30 ]).

Several studies underscored the importance of champions for successful implementation [ 29 , 30 , 31 , 34 , 35 , 36 , 37 , 38 , 40 , 41 , 42 , 43 , 49 ]. Four studies specifically highlighted champions as a key factor for success [ 34 , 36 , 37 , 43 ], and one study went further to describe champions as the most important factor for successful implementation [ 39 ]. Additionally, one study associated champions with reduced labor cost [ 30 ].

Thin descriptions, yet clear expectations for technology champions’ role and attributes

The analyses revealed that the concept of champions in studies pertaining to technology implementation in healthcare services varies, primarily as a result of the diversity of terms utilized to describe the role combined with short role descriptions. Nevertheless, the studies indicated clear expectations for the champion’s role and associated attributes.

The term champion

The term champion was expressed in 20 different forms across the 23 studies included in our review. Three studies utilized multiple terms within the same study [ 32 , 47 , 48 ] and 15 different authors [ 29 , 32 , 33 , 35 , 36 , 37 , 39 , 40 , 41 , 42 , 43 , 44 , 46 , 47 , 50 ] employed the term with different compositions (Table  1 ). Furthermore, four authors utilized the term Super user [ 30 , 31 , 49 , 51 ], while four authors employed the terms Facilitator [ 38 ], IT clinician [ 48 ], Leader [ 45 ], and Manager [ 34 ], each in combination with more specific terms (such as local opinion leaders, IT nurse, or practice manager).

Most studies associated champion roles with specific professions. In seven studies, the professional title was explicitly linked to the concept of champions, such as physician champions or clinical nurse champions, or through the strategic selection of specific professions [ 29 , 33 , 36 , 40 , 43 , 47 , 50 ]. Additionally, some studies did not specify professions, but utilized terms like clinicians [ 45 ] or health professionals [ 41 ].

All included articles portray the champion’s role as facilitating the implementation and daily use of technology among staff. In four studies, the champion’s role was not elaborated beyond indicating that the individual holding the role is confident and has an interest in technology [ 35 , 41 , 42 , 44 ]. The champion’s role was explicitly examined in six studies [ 29 , 30 , 31 , 33 , 46 , 50 ]. Furthermore, seven studies described the champion in both the methods and results [ 32 , 36 , 38 , 47 , 48 , 49 , 51 ]. In ten of the studies, champions were solely mentioned in the results [ 34 , 35 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 ].

Eight studies provided a specific description or definition of the champion [ 29 , 30 , 31 , 32 , 38 , 48 , 49 , 50 ]. The champion’s role was described as involving training in the specific technology, being an expert on the technology, and providing support and assisting peers when needed. In some instances, the champion had a role in leading the implementation [ 50 ], while in other situations, the champion operated as a mediator [ 48 ].

The champion’s tasks

In the included studies, the champion role encompassed two interrelated facilitation tasks: promoting the technology and supporting others in adopting the technology in their daily practice. Promoting the technology involved encouraging staff adoption [ 32 , 34 , 35 , 37 , 40 , 41 , 49 ], generally described as being enthusiastic about the technology [ 32 , 35 , 37 , 41 , 48 ], influencing the attitudes and beliefs of colleagues [ 42 , 45 ] and legitimizing the introduction of the technology [ 42 , 46 , 48 ]. Supporting others in technology adoption involved training and teaching [ 31 , 35 , 38 , 40 , 51 ], as well as providing technical support [ 30 , 31 , 39 , 43 , 49 ] and social support [ 49 ]. Only four studies reported that the champions received their own training to enable them to support their colleagues [ 30 , 31 , 39 , 48 ]. Furthermore, eight studies [ 32 , 34 , 38 , 40 , 48 , 49 , 50 , 51 ] specified that the champion role included leadership and management responsibilities, mentioning tasks such as planning, organizing, coordinating, and mediating technology adoption without providing further details.

Desirable champion attributes

To effectively fulfill their role, champions should ideally possess clinical expertise and experience [ 29 , 35 , 38 , 40 , 48 ], stay professionally updated [ 37 , 48 ], and possess knowledge of the organization and workflows [ 29 , 34 , 46 ]. They should have the ability to understand and communicate effectively with healthcare personnel [ 31 , 32 , 46 , 49 ] and be proficient in IT language [ 51 ]. Moreover, champions should demonstrate general technological interest and competence, along with specific knowledge of the technology to be implemented [ 32 , 37 , 49 ]. It is also emphasized that they should command formal and/or informal respect and authority in the organization [ 36 , 45 ], be accessible to others [ 39 , 43 ], possess leadership qualities [ 34 , 37 , 38 , 46 ], and understand and balance the needs of stakeholders [ 43 ]. Lastly, champions should be enthusiastic promoters of the technology, engaging and supporting others [ 31 , 32 , 33 , 34 , 37 , 39 , 40 , 41 , 43 , 49 ], while also effectively coping with cultural resistance to change [ 31 , 46 ].

Anchoring and recruiting for the champion role

The champions were organized differently within the services, held various positions in the organizations, and were recruited for the role in different ways.

Anchoring the champion role

The champion’s role is primarily anchored at two levels: the management level and/or the clinical level, with two studies having champions at both levels [ 34 , 49 ]. Those working with the management actively participated in the planning of the technology implementation [ 29 , 36 , 40 , 41 , 45 ]. Serving as advisors to management, they leveraged their clinical knowledge to guide the implementation in alignment with the necessities and possibilities of daily work routines in the clinics. Champions in this capacity experienced having a clear formal position that enabled them to fulfil their role effectively [ 29 , 40 ]. Moreover, these champions served as bridge builders between the management and department levels [ 36 , 45 ], ensuring the necessary flow of information in both directions.

Champions anchored at the clinic level played a pivotal role in the practical implementation and facilitation of the daily use of technology [ 31 , 33 , 35 , 37 , 38 , 43 , 48 , 51 ]. Additionally, these champions actively participated in meetings with senior management to discuss the technology and its implementation in the clinic. This position conferred potential influence over health personnel [ 33 , 35 ]. Champions at the clinic level facilitated collaboration between employees, management, and suppliers [ 48 ]. Fontaine et al. [ 36 ] identified respected champions at the clinical level, possessing authority and formal support from all leadership levels, as the most important factor for success.

Only one study reported that the champions received additional compensation for their role [ 36 ], while another study mentioned champions having dedicated time to fulfil their role [ 46 ]. The remaining studies did not provide this information.

Recruiting for the role as champion

Several studies reported different experiences regarding the management’s selection of champions. One study highlighted the distinctions between a volunteer role and an appointed champion role [ 31 ]. Some studies underscored that appointed champions were chosen based on technological expertise and skills [ 41 , 48 , 51 ]. Moreover, the selection criteria included champions’ interest in the specific technology [ 42 ] or experiential skills [ 40 ]. The remaining studies did not provide this information.

While the champion role was most frequently held by health personnel with clinical experience, one study deviated by hiring 150 newly qualified nurses as champions [ 30 ] for a large-scale implementation of an Electronic Health Record (EHR). Opting for clinical novices assisted in reducing implementation costs, as it avoided disrupting daily tasks and interfering with daily operations. According to Bullard [ 30 ], these super-user nurses became highly sought after post-implementation as a result of their technological confidence and competence.

Reported experiences of champions and health personnel

Drawing from the experiences of both champions and health personnel, it is essential for a champion to possess a combination of general knowledge and specific champion characteristics. Furthermore, champions are required to collaborate with individuals both within and outside the organization. The subsequent paragraphs delineate these experiences, categorizing them into four subsets: champions’ contextual knowledge and expertise, preferred performance of the champion role, recognizing that a champion alone is insufficient, and distinguishing between reactive and proactive champions.

Champions’ contextual knowledge and know-how

Health personnel with experience interacting with champions emphasized that a champion must be familiar with the department and its daily work routines [ 35 , 40 ]. Knowledge of the department’s daily routines made it easier for champions to facilitate the adaptation of technology. However, there was a divergence of opinions on whether champions were required to possess extensive clinical experience to fulfil their role. In most studies, having an experienced and competent clinician as a champion instilled a sense of confidence among health personnel. Conversely, Bullard’s study [ 30 ] showed that health personnel were satisfied with newly qualified nurses in the role of champion, despite their initial skepticism.

It is generally expected that champions possess technological knowledge beyond that of other health professionals [ 37 , 41 ]. Some health personnel perceived the champions as uncritical promoters of technology, with the impression that health personnel were being compelled to utilize technology [ 46 ]. Champions could also overestimate the readiness of health personnel to implement a technology, especially during the early phases of the implementation process [ 32 ]. Regardless of whether the champion is at the management level or the clinic level, champions themselves have acknowledged the importance of providing time and space for innovation. Moreover, the recruitment of champions should span all levels of the organization [ 34 , 46 ]. Furthermore, champions must be familiar with daily work routines, work tools, and work surfaces [ 38 , 40 , 43 ].

Preferable performance of the champion role

The studies identified several preferable characteristics of successful champions. Health personnel favored champions utilizing positive words when discussing technology and exhibiting positive attitudes while facilitating and adapting it [ 33 , 34 , 37 , 38 , 41 , 46 ]. Additionally, champions who were enthusiastic and engaging were considered good role models for the adoption of technology. Successful champions were perceived as knowledgeable and adept problem solvers who motivated and supported health personnel [ 41 , 43 , 44 , 48 ]. They were also valued for being available and responding promptly when contacted [ 42 ]. Health professionals noted that champions perceived as competent garnered respect in the organization [ 40 ]. Moreover, some health personnel felt that certain champions wielded greater influence based on how they encouraged the use of the system [ 48 ]. It was also emphasized that health personnel needed to feel it was safe to provide feedback to champions, especially when encountering difficulties or uncertainties [ 49 ].

A champion is not enough

The role of champions proved to be more demanding than expected [ 29 , 31 , 38 ], involving tasks such as handling an overwhelming number of questions or actively participating in the installation process to ensure the technology functions effectively in the department [ 29 ]. Regardless of the organizational characteristics or the champion’s profile, appointing the champion as a “solo implementation agent” is deemed unsuitable. If the organization begins with one champion, it is recommended that this individual promptly recruits others into the role [ 42 ].

Health personnel, reliant on champions’ expertise, found it beneficial to have champions in all departments, and these champions had to be actively engaged in day-to-day operations [ 31 , 33 , 34 , 37 ]. Champions themselves also noted that health personnel increased their technological expertise through their role as champions in the department [ 39 ].

Furthermore, the successful implementation of technology requires the collaboration of various professions and support functions, a task that cannot be addressed by a champion alone [ 29 , 43 , 48 ]. In Orchard et al.’s study [ 34 ], champions explicitly emphasized the necessity of support from other personnel in the organization, such as those responsible for the technical aspects and archiving routines, to provide essential assistance.

According to health personnel, the role of champions is vulnerable if champions become sick or leave their position [ 42 , 51 ]. In some of the included studies, only one or a few individuals held the position of champion [ 37 , 38 , 42 , 48 ]. Two studies observed that their implementations were not completed because champions left or were reassigned for various reasons [ 32 , 51 ]. The health professionals in the study by Owens and Charles [ 32 ] expressed that champions must be replaced in such cases. Further, the study by Olsen et al. [ 42 ] highlights the need for quickly building a champion network within the organization.

Reactive and proactive champions

Health personnel and champions alike noted that champions played both a reactive and proactive role. The proactive role entailed facilitating measures such as training and coordination [ 31 , 32 , 33 , 34 , 37 , 39 , 40 , 41 , 43 , 48 , 49 ] as initiatives to generate enthusiasm for the technology [ 31 , 32 , 33 , 34 , 35 , 37 , 39 , 40 , 41 , 43 , 49 ]. On the other hand, the reactive role entailed hands-on support and troubleshooting [ 30 , 31 , 39 , 43 , 49 ].

In a study presenting experiences from both health personnel and champions, Yuan et al. [ 31 ] found that personnel observed differences in the assistance provided by appointed and self-selected champions. Appointed champions demonstrated the technology and answered questions from health personnel, but quickly lost patience and lost track of which employees had received training [ 31 ]. Health personnel perceived that self-selected champions were proactive and well prepared to facilitate the utilization of technology, communicating with the staff as a group and being more competent in utilizing the technology in daily practice [ 31 ]. Health personnel also noted that volunteer champions were supportive, positive, and proactive in promoting the technology, whereas appointed champions acted on request and had a more reactive approach [ 31 ].

This review underscores the breadth of the concept of champion and the significant variation in the champion’s role in the implementation of technology in healthcare services. This finding supports the results from previous reviews [ 10 , 18 , 19 , 20 ]. The majority of studies meeting our inclusion criteria did not specifically focus on the experiences of champions and health personnel regarding the champion role, with the exception of studies by Bullard [ 30 ], Gui et al. [ 29 ], Helmer-Smith et al. [ 33 ], Hogan-Murphy et al. [ 46 ], Rea et al. [ 50 ], and Yuan et al. [ 31 ].

The 23 studies encompassed in this review utilized 20 different terms for the champion role. In most studies, the champion’s role was only briefly described in terms of the duties it entailed or should entail. This may be linked to the fact that the role of champions was not the primary focus of the study, but rather one of the strategies in the implementation process being investigated. This result reinforces the conclusions drawn by Miech et al. [ 10 ] and Shea [ 12 ] regarding the lack of a unified understanding of the concept. Furthermore, in Santos et al.‘s [ 19 ] review, champions were operationalized only through presence or absence in 71.4% of the included studies. However, our review finds that there is a consistent and shared understanding that champions should promote and support technology implementation.

Several studies advocate for champions as an effective and recommended strategy for implementing technology [ 30 , 31 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 42 , 43 , 45 , 46 ]. However, we identified few studies that exclusively explore health personnel’s experiences with the champion role when implementing technology in healthcare services.

This suggests a general lack of information essential for understanding the pros, cons, and prerequisites for champions as a strategy within this field of knowledge. However, this review identifies, on a general basis, the types of support and structures required for champions to perform their role successfully from the perspectives of health personnel, contributing to Shea’s conceptual model [ 12 ].

Regarding the organization of the role, this review identified champions holding both formally appointed and informal roles, working in management or clinical settings, being recruited for their clinical and/or technological expertise, and either volunteering or being hired with specific benefits for the role. Regardless of these variations, anchoring the role is crucial for both the individuals holding the champion role and the health personnel interacting with them. Anchoring, in this context, is associated with the clarity of the role’s content and a match between role expectations and opportunities for fulfilment. Furthermore, the role should be valued by the management, preferably through dedicated time and/or salary support [ 34 , 36 , 46 ]. Additionally, our findings indicate that relying on a “solo champion” is vulnerable to issues such as illness, turnover, excessive workload, and individual champion performance [ 32 , 37 ]. Based on these insights, it appears preferable to appoint multiple champions, with roles at both management and clinical levels [ 33 ].

Some studies have explored the selection of champions and its impact on role performance, revealing diverse experiences [ 30 , 31 ]. Notably, Bullard [ 30 ] stands out for not emphasizing long clinical experience, instead hiring newly trained nurses as superusers to facilitate the use of electronic health records. Despite facing initial reluctance, these newly trained nurses gradually succeeded in their roles. This underscores the importance of considering contextual factors in champion selection [ 30 , 52 ]. In Bullard’s study [ 30 ], the collaboration between newly trained nurses as digital natives and clinically experienced health personnel proved beneficial, highlighting the need to align champion selection with the organization’s needs based on personal characteristics. This finding aligns with Melkas et al.‘s [ 9 ] argument that implementing technology requires a deeper understanding of users, access to contextual know-how, and health personnel’s tacit knowledge.

To meet role expectations and effectively leverage their professional and technological expertise, champions should embody personal qualities such as the ability to engage others, take a leadership role, be accessible and supportive, and communicate clearly. These qualities align with the key attributes of change champions in healthcare described by Bonawitz et al. [ 15 ], which include influence, ownership, physical presence, persuasiveness, grit, and a participative leadership style (p. 5). These findings suggest that the active performance of the role, beyond mere presence, is crucial for champions to be a successful strategy in technology implementation. Moreover, the recruitment process is not inconsequential. Identifying the right person for the role and providing them with adequate training, organizational support, and dedicated time to fulfill their responsibilities emerge as important factors based on the insights from champions and health personnel.

Strengths and limitations

While this study benefits from identifying various terms associated with the role of champions, it acknowledges the possibility of missing some studies as a result of diverse descriptions of the role. Nonetheless, a notable strength of the study lies in its specific focus on the health personnel’s experiences in holding the champion role and the broader experiences of health personnel concerning champions in technology implementation within healthcare services. This approach contributes valuable insights into the characteristics of experiences and attitudes toward the role of champions in implementing technology. Lastly, the study emphasizes the relationship between the experiences with the champion role and the organizational setting’s characteristics.

The champion role was frequently inadequately defined [ 30 , 33 , 34 , 35 , 36 , 37 , 39 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 51 ], in line with previous reviews [ 17 , 19 , 21 ]. As indicated by van Laere and Aggestam [ 52 ], this lack of clarity complicates the identification and comparison of champions across studies. Studies lacking a distinct definition of the champion’s role were consequently excluded. Only studies written in English were included, introducing the possibility of overlooking relevant studies based on our chosen terms for identifying the champion’s role. Most of the included studies focused on technology implementation in a general context, with champions being just one of several measures. This resulted in scant descriptions, as champions were often discussed in the results, discussion, or implications sections rather than being the central focus of the research.

As highlighted by Hall et al. [ 18 ], methodological issues and inadequate reporting in studies of the champion role create challenges for conducting high-quality reviews, introducing uncertainty around the findings. We adopted a similar approach to Santos et al. [ 19 ], including all studies even when some issues were identified during the quality assessment. Our review therefore shares the same limitations as the previous review by Santos et al. [ 19 ] on the champion role.

Practical implications, policy, and future research

The findings emphasize the significance of the relationship between experiences with the champion role and the characteristics of organizational settings as crucial factors for success in the champion role. Clear anchoring of the role within the organization is vital and may affect routines, workflows, staffing, and budgets. Despite limited evidence on the experience of the champion’s role, volunteering, hiring newly graduated health personnel, and appointing more than one champion are identified as facilitators of technology implementation. This study underscores the need for future empirical research with clear descriptions of the champion roles and details on study settings and the technologies to be adopted. This will enable the determination of outcomes and success factors of using champions in technology implementation processes, support the transferability of knowledge between contexts and technologies, and enhance the comparability of studies. Furthermore, there is a need for studies that explore experiences with the champion role, preferably from the perspective of multiple stakeholders, as well as focus on the champion role within various healthcare settings.

This study emphasizes that champions can hold significant positions when provided with a clear mandate, dedicated time, and training, contributing their professional, technological, and personal competencies to expedite technology adoption within services. It appears to be an advantage if health personnel volunteer or apply for the role, as this facilitates engaged and proactive champions. The implementation of technology in healthcare services demands efforts from the entire service, and the experiences highlighted in this review show that champions can play an important role. Consequently, empirical studies dedicated to the champion role, employing robust designs based on current knowledge, are still needed to provide a solid understanding of how champions can be a successful initiative when implementing technology in healthcare services.

Data availability

This review relies exclusively on previously published studies. The datasets supporting the conclusions of this article are included within the article and its supplementary files: descriptions and characteristics of the included studies are presented in Table  1 , Study characteristics; the search strategy is provided in Appendix  1 ; and the Critical Appraisal Summary of the included studies utilizing the MMAT is presented in Appendix  2 .

Abbreviations

EHR: Electronic Health Record

IOF: Implementation Outcomes Framework

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

Meskó B, Drobni Z, Bényei É, Gergely B, Győrffy Z. Digital health is a cultural transformation of traditional healthcare. mHealth. 2017;3:38. https://doi.org/10.21037/mhealth.2017.08.07 .

Pérez Sust P, Solans O, Fajardo JC, Medina Peralta M, Rodenas P, Gabaldà J, et al. Turning the crisis into an opportunity: Digital health strategies deployed during the COVID-19 outbreak. JMIR Public Health Surveill. 2020;6:e19106. https://doi.org/10.2196/19106 .

Alotaibi YK, Federico F. The impact of health information technology on patient safety. Saudi Med J. 2017;38:1173-80. https://doi.org/10.15537/smj.2017.12.20631 .

Kuoppamäki S. The application and deployment of welfare technology in Swedish municipal care: a qualitative study of procurement practices among municipal actors. BMC Health Serv Res. 2021;21:918. https://doi.org/10.1186/s12913-021-06944-w .

Kraus S, Schiavone F, Pluzhnikova A, Invernizzi AC. Digital transformation in healthcare: analyzing the current state-of-research. J Bus Res. 2021;123:557–67. https://doi.org/10.1016/j.jbusres.2020.10.030 .

Frennert S. Approaches to welfare technology in municipal eldercare. J Technol Hum Serv. 2020;38:226-46. https://doi.org/10.1080/15228835.2020.1747043 .

Konttila J, Siira H, Kyngäs H, Lahtinen M, Elo S, Kääriäinen M, et al. Healthcare professionals’ competence in digitalisation: a systematic review. J Clin Nurs. 2019;28:745-61. https://doi.org/10.1111/jocn.14710 .

Jacob C, Sanchez-Vazquez A, Ivory C. Social, organizational, and technological factors impacting clinicians’ adoption of mobile health tools: systematic literature review. JMIR mHealth uHealth. 2020;8:e15935. https://doi.org/10.2196/15935 .

Melkas H, Hennala L, Pekkarinen S, Kyrki V. Impacts of robot implementation on care personnel and clients in elderly-care institutions. Int J Med Inf. 2020;134:104041. https://doi.org/10.1016/j.ijmedinf.2019.104041 .

Miech EJ, Rattray NA, Flanagan ME, Damschroder L, Schmid AA, Damush TM. Inside help: an integrative review of champions in healthcare-related implementation. SAGE Open Med. 2018;6. https://doi.org/10.1177/2050312118773261 .

Foong HF, Kyaw BM, Upton Z, Tudor Car L. Facilitators and barriers of using digital technology for the management of diabetic foot ulcers: a qualitative systematic review. Int Wound J. 2020;17:1266-81. https://doi.org/10.1111/iwj.13396 .

Shea CM. A conceptual model to guide research on the activities and effects of innovation champions. Implement Res Pract. 2021;2. https://doi.org/10.1177/2633489521990443 .

Hudson D. Physician engagement strategies in health information system implementations. Healthc Manage Forum. 2023;36:86–9. https://doi.org/10.1177/08404704221131921 .

Gullslett MK, Strand Bergmo T. Implementation of E-prescription for multidose dispensed drugs: qualitative study of general practitioners’ experiences. JMIR Hum Factors. 2022;9:e27431. https://doi.org/10.2196/27431 .

Bonawitz K, Wetmore M, Heisler M, Dalton VK, Damschroder LJ, Forman J, et al. Champions in context: which attributes matter for change efforts in healthcare? Implement Sci. 2020;15:62. https://doi.org/10.1186/s13012-020-01024-9 .

George ER, Sabin LL, Elliott PA, Wolff JA, Osani MC, McSwiggan Hong J, et al. Examining health care champions: a mixed-methods study exploring self and peer perspectives of champions. Implement Res Pract. 2022;3. https://doi.org/10.1177/26334895221077880 .

Shea CM, Belden CM. What is the extent of research on the characteristics, behaviors, and impacts of health information technology champions? A scoping review. BMC Med Inf Decis Mak. 2016;16:2. https://doi.org/10.1186/s12911-016-0240-4 .

Hall AM, Flodgren GM, Richmond HL, Welsh S, Thompson JY, Furlong BM, Sherriff A. Champions for improved adherence to guidelines in long-term care homes: a systematic review. Implement Sci Commun. 2021;2(1):85–85. https://doi.org/10.1186/s43058-021-00185-y .

Santos WJ, Graham ID, Lalonde M, Demery Varin M, Squires JE. The effectiveness of champions in implementing innovations in health care: a systematic review. Implement Sci Commun. 2022;3(1):1–80. https://doi.org/10.1186/s43058-022-00315-0 .

Wood K, Giannopoulos V, Louie E, Baillie A, Uribe G, Lee KS, Haber PS, Morley KC. The role of clinical champions in facilitating the use of evidence-based practice in drug and alcohol and mental health settings: a systematic review. Implement Res Pract. 2020;1:2633489520959072–2633489520959072. https://doi.org/10.1177/2633489520959072 .

Rigby K, Redley B, Hutchinson AM. Change agent’s role in facilitating use of technology in residential aged care: a systematic review. Int J Med Informatics. 2023;105216. https://doi.org/10.1016/j.ijmedinf.2023.105216 .

Pluye P, Hong QN. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annu Rev Public Health. 2014;35:29–45. https://doi.org/10.1146/annurev-publhealth-032013-182440 .

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. https://doi.org/10.1136/bmj.n71 .

Pettersen S, Berg A, Eide H. Experiences and attitudes to the role of champions in implementation of technology in health services. A systematic review. PROSPERO. 2022. https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022335750 . Accessed [15 Feb 2023].

Covidence. Better systematic review management. https://www.covidence.org/ . Accessed 2023.

Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The mixed methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inf. 2018;34:285–91. https://doi.org/10.3233/EFI-180221 .

Braun V, Clarke V. Thematic analysis: a practical guide. 1st ed. SAGE; 2022.

Iqbal MP, Manias E, Mimmo L, Mears S, Jack B, Hay L, Harrison R. Clinicians’ experience of providing care: a rapid review. BMC Health Serv Res. 2020;20:1–10. https://doi.org/10.1186/s12913-020-05812-3 .

Gui X, Chen Y, Zhou X, Reynolds TL, Zheng K, Hanauer DA. Physician champions’ perspectives and practices on electronic health records implementation: challenges and strategies. JAMIA Open. 2020;3:53-61. https://doi.org/10.1093/jamiaopen/ooz051 .

Bullard KL. Cost effective staffing for an EHR implementation. Nurs Econ. 2016;34:72-6.

Yuan CT, Bradley EH, Nembhard IM. A mixed methods study of how clinician ‘super users’ influence others during the implementation of electronic health records. BMC Med Inf Decis Mak. 2015;15:26. https://doi.org/10.1186/s12911-015-0154-6 .

Owens C, Charles N. Implementation of a text-messaging intervention for adolescents who self-harm (TeenTEXT): a feasibility study using normalisation process theory. Child Adolesc Psychiatry Ment Health. 2016;10:14. https://doi.org/10.1186/s13034-016-0101-z .

Helmer-Smith M, Fung C, Afkham A, Crowe L, Gazarin M, Keely E, et al. The feasibility of using electronic consultation in long-term care homes. J Am Med Dir Assoc. 2020;21:1166-1170.e2. https://doi.org/10.1016/j.jamda.2020.03.003 .

Orchard J, Lowres N, Freedman SB, Ladak L, Lee W, Zwar N, et al. Screening for atrial fibrillation during influenza vaccinations by primary care nurses using a smartphone electrocardiograph (iECG): a feasibility study. Eur J Prev Cardiol. 2016;23:13–20. https://doi.org/10.1177/2047487316670255 .

Bee P, Lovell K, Airnes Z, Pruszynska A. Embedding telephone therapy in statutory mental health services: a qualitative, theory-driven analysis. BMC Psychiatry. 2016;16:56. https://doi.org/10.1186/s12888-016-0761-5 .

Fontaine P, Whitebird R, Solberg LI, Tillema J, Smithson A, Crabtree BF. Minnesota’s early experience with medical home implementation: viewpoints from the front lines. J Gen Intern Med. 2015;30(7):899–906. https://doi.org/10.1007/s11606-014-3136-y .

Kolltveit B-CH, Gjengedal E, Graue M, Iversen MM, Thorne S, Kirkevold M. Conditions for success in introducing telemedicine in diabetes foot care: a qualitative inquiry. BMC Nurs. 2017;16:2. https://doi.org/10.1186/s12912-017-0201-y .

Salbach NM, McDonald A, MacKay-Lyons M, Bulmer B, Howe JA, Bayley MT, et al. Experiences of physical therapists and professional leaders with implementing a toolkit to advance walking assessment poststroke: a realist evaluation. Phys Ther. 2021;101:1–11. https://doi.org/10.1093/ptj/pzab232 .

Schwarz M, Coccetti A, Draheim M, Gordon G. Perceptions of allied health staff of the implementation of an integrated electronic medical record across regional and metropolitan settings. Aust Health Rev. 2020;44:965–72. https://doi.org/10.1071/AH19024 .

Stewart J, McCorry N, Reid H, Hart N, Kee F. Implementation of remote asthma consulting in general practice in response to the COVID-19 pandemic: an evaluation using extended normalisation process theory. BJGP Open. 2022;6:1–10. https://doi.org/10.3399/BJGPO.2021.0189 .

Bennett-Levy J, Singer J, DuBois S, Hyde K. Translating mental health into practice: what are the barriers and enablers to e-mental health implementation by aboriginal and Torres Strait Islander health professionals? J Med Internet Res. 2017;19:e1. https://doi.org/10.2196/jmir.6269 .

Olsen J, Peterson S, Stevens A. Implementing electronic health record-based National Diabetes Prevention Program referrals in a rural county. Public Health Nurs (Boston Mass). 2021;38(3):464–9. https://doi.org/10.1111/phn.12860 .

Yang L, Brown-Johnson CG, Miller-Kuhlmann R, Kling SMR, Saliba-Gustafsson EA, Shaw JG, et al. Accelerated launch of video visits in ambulatory neurology during COVID-19: key lessons from the Stanford experience. Neurology. 2020;95:305–11. https://doi.org/10.1212/WNL.0000000000010015 .

Buckingham SA, Sein K, Anil K, Demain S, Gunn H, Jones RB, et al. Telerehabilitation for physical disabilities and movement impairment: a service evaluation in South West England. J Eval Clin Pract. 2022;28:1084-95. https://doi.org/10.1111/jep.13689 .

Chung OS, Robinson T, Johnson AM, Dowling NL, Ng CH, Yücel M, et al. Implementation of therapeutic virtual reality into psychiatric care: clinicians’ and service managers’ perspectives. Front Psychiatry. 2022;12:791123. https://doi.org/10.3389/fpsyt.2021.791123 .

Hogan-Murphy D, Stewart D, Tonna A, Strath A, Cunningham S. Use of normalization process theory to explore key stakeholders’ perceptions of the facilitators and barriers to implementing electronic systems for medicines management in hospital settings. Res Social Adm Pharm. 2021;17:398-405. https://doi.org/10.1016/j.sapharm.2020.03.005 .

Moss SR, Martinez KA, Nathan C, Pfoh ER, Rothberg MB. Physicians’ views on utilization of an electronic health record-embedded calculator to assess risk for venous thromboembolism among medical inpatients: a qualitative study. TH Open. 2022;6:e33–9. https://doi.org/10.1055/s-0041-1742227 .

Yusof MM. A case study evaluation of a critical Care Information System adoption using the socio-technical and fit approach. Int J Med Inf. 2015;84:486–99. https://doi.org/10.1016/j.ijmedinf.2015.03.001 .

Dugstad J, Sundling V, Nilsen ER, Eide H. Nursing staff’s evaluation of facilitators and barriers during implementation of wireless nurse call systems in residential care facilities. A cross-sectional study. BMC Health Serv Res. 2020;20:163. https://doi.org/10.1186/s12913-020-4998-9 .

Rea K, Le-Jenkins U, Rutledge C. A technology intervention for nurses engaged in preventing catheter-associated urinary tract infections. Comput Inf Nurs. 2018;36:305–13. https://doi.org/10.1097/CIN.0000000000000429 .

Bail K, Davey R, Currie M, Gibson J, Merrick E, Redley B. Implementation pilot of a novel electronic bedside nursing chart: a mixed-methods case study. Aust Health Rev. 2020;44:672–6. https://doi.org/10.1071/AH18231 .

van Laere J, Aggestam L. Understanding champion behaviour in a health-care information system development project – how multiple champions and champion behaviours build a coherent whole. Eur J Inf Syst. 2016;25:47–63. https://doi.org/10.1057/ejis.2015.5 .

Acknowledgements

We would like to thank the librarian, Malin E. Norman, at Nord University for her assistance in the development of the search, as well as her guidance regarding the scientific databases.

This study is part of a PhD project undertaken by the first author, SP, and funded by Nord University, Norway. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Open access funding provided by Nord University

Author information

Authors and affiliations.

Faculty of Nursing and Health Sciences, Nord University, P.O. Box 474, N-7801, Namsos, Norway

Sissel Pettersen & Anita Berg

Centre for Health and Technology, Faculty of Health Sciences, University of South-Eastern Norway, PO Box 7053, N-3007, Drammen, Norway

Hilde Eide

Contributions

The first author, SP, was the project manager and was mainly responsible for all phases of the study. The second and third authors, HE and AB, contributed to screening, quality assessment, analysis and discussion of findings. Drafting of the final manuscript was a collaboration between the first author, SP, and the third author, AB. The final manuscript has been approved by all authors.

Corresponding author

Correspondence to Sissel Pettersen .

Ethics declarations

Ethics approval and consent to participate.

This review does not involve the processing of personal data, and given the nature of this study, formal consent is not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Pettersen, S., Eide, H. & Berg, A. The role of champions in the implementation of technology in healthcare services: a systematic mixed studies review. BMC Health Serv Res 24 , 456 (2024). https://doi.org/10.1186/s12913-024-10867-7

Download citation

Received : 19 June 2023

Accepted : 14 March 2024

Published : 11 April 2024

DOI : https://doi.org/10.1186/s12913-024-10867-7

Keywords

  • Technology implementation
  • Healthcare personnel
  • Healthcare services
  • Mixed methods
  • Organizational characteristics
  • Technology adoption
  • Role definitions
  • Healthcare settings
  • Systematic review

  • Open access
  • Published: 06 April 2024

Delayed discharge in inpatient psychiatric care: a systematic review

  • Ashley-Louise Teale   ORCID: orcid.org/0000-0002-1756-7711 1 ,
  • Ceri Morgan   ORCID: orcid.org/0000-0002-2417-8677 1 ,
  • Tom A. Jenkins   ORCID: orcid.org/0000-0001-7875-4417 1 &
  • Pamela Jacobsen   ORCID: orcid.org/0000-0001-8847-7775 1  

International Journal of Mental Health Systems volume 18, Article number: 14 (2024)

Delayed discharge is problematic. It is financially costly and can create barriers to delivering best patient care, by preventing return to usual functioning and delaying admissions of others in need. This systematic review aimed to collate existing evidence on delayed discharge in psychiatric inpatient settings and to develop understanding of factors and outcomes of delays in these services.

A search of relevant literature published between 2002 and 2022 was conducted on Pubmed, PsycInfo and Embase. Studies of any design, which published data on delayed discharge from psychiatric inpatient care in high income countries were included. Studies examining child and adolescent, general medical or forensic settings were excluded. A narrative synthesis method was utilised. Quality of research was appraised using the Mixed Methods Appraisal Tool (MMAT).

Eighteen studies from England, Canada, Australia, Ireland, and Norway met the inclusion criteria. Six main reasons for delayed discharge were identified: (1) accommodation needs, (2) challenges securing community or rehabilitation support, (3) funding difficulties, (4) family/carer factors, (5) forensic considerations and (6) person being out of area. Some demographic and clinical factors were also found to relate to delays, such as having a diagnosis of schizophrenia or other psychotic disorder, cognitive impairment, and increased service input prior to admission. Being unemployed and socially isolated were also linked to delays. Only one study commented on consequences of delays for patients, finding they experienced feelings of lack of choice and control. Four studies examined consequences on services, identifying high financial costs.

Overall, the findings suggest there are multiple interlinked factors relevant in delayed discharge that should be considered in practice and policy. Suggestions for future research are discussed, including investigating delayed discharge in other high-income countries, examining delayed discharge from child and forensic psychiatric settings, and exploring consequences of delays on patients and staff. We suggest that future research be consistent in terms used to define delayed discharge, to enhance the clarity of the evidence base.

Review registration number on PROSPERO

292515

Date of registration

9th December 2021.

Delayed discharge, also termed ‘bed blocking’ and ‘delayed transfer of care,’ refers to when patients remain in hospital beyond the time they are determined to be clinically fit to leave [ 1 , 2 ]. It is an international challenge, costly to individuals, health services and governments [ 3 , 4 ], impacting physical health settings, and also psychiatric inpatient services [ 5 ].

Psychiatric inpatient stays are one of the most expensive forms of treatment for mental health conditions, particularly when compared to care delivered in community settings [ 6 ]. Prolonged stays in mental health hospitals likely increase resource use and, as such, financial expenditure. This is particularly concerning in instances of delayed discharge, when stays are determined not to be of clinical benefit. Delayed discharge could also prevent admission of new patients, contributing to bed crises, where there are not enough beds for all who require admission [ 7 ]. This can have consequences for the course of recovery for newly referred patients, either delaying admission, contributing to inappropriate placements, or leading to individuals being placed out of area [ 7 , 8 ]. Extended hospital stay could also detrimentally impact the delayed patient themselves, preventing their return to usual day-to-day functioning and making returning to the community increasingly difficult [ 9 , 10 ].

Existing reviews have examined predictors of longer stays in psychiatric inpatient settings, finding that substance use and being employed are associated with shorter length of stay, while being female, having a diagnosis of a mood or psychotic disorder, and use of Electroconvulsive Therapy are associated with longer stay [ 11 ]. However, there is not, to our knowledge, a systematic review collating evidence on delayed discharge in psychiatric settings. As delayed discharge is a unique experience, distinct from long stay driven by clinical need, it requires separate focus to further understand this specific experience.

Furthermore, a large body of evidence has examined delayed discharge in physical health settings, with several systematic reviews examining causes and outcomes. Such reviews have found that delayed discharges were linked to problems in discharge planning, transfer of care difficulties and patient age [ 12 , 13 ]. Outcomes for services included overcrowding and financial costs, whereas outcomes for patients included infections, depression, reduction in activities and mortality. There may be both overlapping and non-overlapping factors associated with delayed discharge between physical and psychiatric inpatient settings. For example, inpatient psychiatric services may differ in organisational structure, daily workings, and treatment focus from general medical services. The clinical population might also differ between psychiatric and physical health settings, for example in age, socio-economic status, and other demographic and clinical factors. As such, it is vital that separate attention be given to the area of psychiatric care.

This systematic review aims to fill the current research gap and synthesise existing literature on psychiatric delayed discharges. We aimed to synthesise the available international data from high-income countries, as the prevalence and underlying reasons for delayed discharge are likely to be highly sensitive to context and heterogeneous across countries. This is due to factors such as different models of healthcare funding, and the varying social role of the family in providing care, for example. Developing in-depth understanding of the causes and consequences of delays in a psychiatric inpatient context is important in informing practice and policies at a service, organisational, societal, and government level. This could help develop ways to reduce occurrence of delays and mitigate any negative impacts.

The aim of this review was to increase understanding of what is known about factors influencing delayed discharge in adult psychiatric inpatient settings. Secondary aims were to examine outcomes of delayed discharge for patients and compare findings across different psychiatric settings and age groups.

The systematic review protocol was pre-registered on PROSPERO before the review was started and the searches were run (PROSPERO: 292515). The review is reported in line with PRISMA guidelines [ 14 , 15 ]. The primary research question of this review is: What is known about factors associated with delayed discharge from inpatient psychiatric care settings?

Secondary research questions were:

What are the outcomes for those who have experienced delayed discharge from inpatient psychiatric settings, for example, in mental health outcomes, health outcomes, readmissions and quality of life?

What are the outcomes on services in terms of resources and costs from delayed psychiatric inpatient discharge?

What are the experiences of staff and patients of delayed discharge from inpatient mental health wards?

Are there differences between types of inpatient services, including acute, rehabilitation or specialist inpatient wards, in factors and costs?

Are there differences between working age adults and older adults in experiences of delayed discharge?

Search strategy

Initial searches were conducted on the 15th of January 2022, and updated on the 5th of August 2022. Pubmed, PsycInfo and Embase were searched.

Search terms (Appendix B in supplementary materials) were developed through examining key words of published studies on the topic, reviewing the terms used in comparative reviews based in physical health settings and thesaurus mapping. Terms included: “delayed discharge,” “bed blocking” and “long stays.” Search terms were piloted on each database prior to running the final search.

The search included studies published from 2002. A 20-year search timeframe was selected, as psychiatric inpatient care has adapted in response to changing need and updated knowledge over time. As such, studies published before 2002 are likely to be less relevant to current practice.

Following database searches, reference lists of included papers were examined, to identify any relevant studies missed in the search. A forward citation search was also conducted, to identify any relevant studies that were cited in the included papers.

Inclusion and exclusion criteria

Studies were included if they reported data related to delayed discharge or associated outcomes, in adult psychiatric inpatient wards. Specialist and rehabilitation psychiatric inpatient settings were included. Studies of any design were included, providing they were published in a peer-reviewed journal. Both quantitative and qualitative studies were included.

Studies exploring delayed discharge in child or adolescent units and/or forensic units were excluded. This was because the causes and outcomes of delays in such settings are likely to be unique, given the specialist context. For example, there is likely to be different systemic involvement from families and different governing legislation in these contexts. As such, it was determined that such settings were too disparate, and synthesising studies from these settings together with adult psychiatric settings could lead to inaccurate conclusions. Physical health settings were also excluded, given the different processes, procedures and treatment focus involved in such settings. In addition, reviews have already been conducted examining delayed discharge from such settings. Studies not conducted in high-income countries were also excluded. In this review, we included high-income countries as defined by World Bank criteria, accessed in January 2022 [ 16 ] (see Appendix C in supplementary material for the list of included countries). Globally, countries differ in the conceptualisation of mental health and the provisions offered; therefore, limiting this review to high-income countries enabled more meaningful comparisons.

Study selection and data extraction

Screening was conducted using Covidence Systematic Review Software [ 17 ]. All records were independently double-screened by two reviewers at both title/abstract and full-text stage. Conflicts were resolved by discussion to reach consensus, with referral to the senior author (PJ) when needed.

A standardised template was used for data extraction, with all included studies being independently double extracted by two reviewers, with consensus achieved by discussion where needed.

A narrative synthesis method was used. For data examining reasons for delayed discharge, a deductive approach was taken initially. The authors identified possible reasons for delays based on existing literature and organised the data under these categories/themes. Any data that did not fit into the pre-defined categories was pooled as ‘other’. All categories were then reviewed, with particular attention placed on the ‘other’ categories, to determine whether additional categories needed to be added or existing categories adapted. Sub-categories were identified where appropriate through coding. Once categories were established, the number of papers reporting each reason/factor was tabulated, and the data was reviewed to examine relationships, exploring both links and disparities within and between studies. The final synthesis was checked by three authors (AT, TJ, and CM) to achieve final agreement.
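
A minimal sketch of this tabulation step is shown below; the pre-defined categories mirror the reasons summarised in the abstract, while the paper names and their reported reasons are invented for illustration.

```python
from collections import defaultdict

# Pre-defined reasons for delay (deductive categories); anything else is pooled as 'other'.
predefined = {
    "accommodation needs", "community/rehabilitation support", "funding difficulties",
    "family/carer factors", "forensic considerations", "out of area",
}

# Hypothetical mapping of papers to the reasons they reported.
paper_reasons = {
    "Paper 1": ["accommodation needs", "funding difficulties"],
    "Paper 2": ["community/rehabilitation support", "awaiting legal decision"],
}

papers_per_category = defaultdict(set)
for paper, reasons in paper_reasons.items():
    for reason in reasons:
        category = reason if reason in predefined else "other"
        papers_per_category[category].add(paper)

for category, papers in papers_per_category.items():
    print(f"{category}: {len(papers)} paper(s)")
```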

Data relating to the outcomes/consequences of delayed discharge were synthesised in a similar way, with data initially organised into three categories: (1) consequences for patients, (2) consequences for services, (3) consequences for staff. Categories were reviewed by the authors following synthesis. Financial costs were converted to US dollars by the authors to support comparison.

Quality assessment of the included studies formed part of the synthesis with the appraisal of quality considered in the interpretation of results.

Quality Assessment

Quality assessment of studies was completed during the synthesis stage. In the protocol, we initially outlined that the Quality Assessment Tool for Studies of Diverse Designs (QATSDD) would be utilised [ 18 ]. However, following a trial of this tool with the included papers, we noted disparities in interpretation between authors. Therefore, the Mixed Methods Appraisal Tool (MMAT) was judged to be a more suitable tool for appraising the quality of the included studies. The MMAT was developed for assessing and comparing the quality of quantitative, qualitative and mixed-methods studies within one tool [ 19 ]. This tool was selected because studies of different designs were included in the review, and it allows quality appraisal across five different study types, distinguishing between methodologies.

Two initial screening questions were answered to determine the appropriateness of using the MMAT to assess the quality of each study (are there clear research questions, and do the collected data address the research questions). If the screening questions are not passed, the tool is deemed inappropriate. Provided the screening questions were passed, quality was assessed on five questions within one of five categories; the category in which questions were answered was determined by study design. The MMAT discourages scoring and assigning qualitative labels to describe quality, instead advising a more detailed evaluation of quality [ 19 ]. This approach has therefore been taken in this paper.
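As a rough illustration of this screening-then-criteria structure, the sketch below represents a single MMAT-style appraisal as a plain data structure; the question labels are paraphrased placeholders rather than the official MMAT wording, and the record shown is hypothetical.

```python
# Hypothetical MMAT-style appraisal record for one study. Question labels
# are placeholders, not the official MMAT wording.
appraisal = {
    "design": "quantitative descriptive",
    "screening": {"clear_research_questions": "yes",
                  "data_address_questions": "yes"},
    "criteria": {"sampling strategy relevant": "yes",
                 "sample representative": "can't tell",
                 "measurements appropriate": "yes",
                 "risk of nonresponse bias low": "no",
                 "statistical analysis appropriate": "yes"},
}

def screening_passed(record: dict) -> bool:
    """The tool is only applied when both screening questions are answered 'yes'."""
    return all(answer == "yes" for answer in record["screening"].values())

if screening_passed(appraisal):
    # No overall score or quality label is assigned; criterion-level answers
    # are reported instead, in line with the guidance described above.
    for criterion, answer in appraisal["criteria"].items():
        print(f"{criterion}: {answer}")
else:
    print("Screening not passed: appraisal with this tool not appropriate")
```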

To achieve reliable and accurate quality ratings, every study was quality rated by two members of the research team and conflicts were discussed to reach consensus.

Identification of studies

Figure  1 (PRISMA flowchart) shows the study selection process. After removing duplicates, a total of 4891 papers were identified for screening. 4397 papers were excluded at title and abstract stage. Full texts were then obtained for 492 papers. Two full texts could not be obtained via the library service, and the authors did not respond to a request for the paper. Four of the papers obtained were errata, all of which related to excluded studies that were not examining delayed discharge and as such were not linked to the included studies. Following full text screening, 18 papers were eligible for inclusion. Each paper represented a different study.
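The reported flow counts can be cross-checked with simple arithmetic, as in the sketch below; the number of full-text exclusions is inferred from the stated totals rather than reported explicitly.

```python
# Cross-check of the reported PRISMA flow counts. The full-text exclusion
# figure is inferred (492 - 18); it is not stated explicitly in the text.
records_screened = 4891
excluded_title_abstract = 4397
full_texts_not_retrieved = 2
full_texts_assessed = 492
included = 18

reports_sought = records_screened - excluded_title_abstract      # 494
assert reports_sought - full_texts_not_retrieved == full_texts_assessed
full_text_exclusions = full_texts_assessed - included            # 474 (inferred)
print(reports_sought, full_text_exclusions)
```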

Figure 1: Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flowchart

Study characteristics

Table  1 shows the characteristics of the 18 included studies. Twelve of these studies examined delayed discharge as a primary outcome, with three of these specifically examining Housing-Related Delayed Discharge (HRDD), defined as instances where delayed discharge is attributed to housing issues. The remaining studies ( n  = 6) reported delayed discharge as a secondary outcome. Fifteen studies were of quantitative observational design, two studies used mixed methodologies and one was qualitative.

In the included studies, there was a range of psychiatric inpatient settings: psychiatric/general mental health units ( n  = 11), Psychiatric Intensive Care Units (PICUs) ( n  = 2), older adult psychiatric units ( n  = 3) and Mental Health Trusts ( n  = 1). One study looked across three inpatient settings: acute psychiatric, PICU and older adult. Studies were conducted in five high-income countries (England = 10, Ireland = 1, Australia = 3, Canada = 3, and Norway = 1). No studies from other high-income countries were identified in the search.

The MMAT quality scores are shown (Table  2 ). One included study [ 20 ] did not meet initial criteria to be assessed using this tool, as the research questions were unclear.

All studies were of fairly good quality, with all studies meeting at least three out of five of the quality assessment criteria. Quality was highest in Australian and Canadian studies, with the included papers from these countries meeting all five quality assessment criteria [ 21 , 22 , 23 , 24 , 25 , 26 ]. Quality assessment ratings indicate that three of the included quantitative descriptive studies did not clearly report the use of a representative sample or appropriate measures. Ratings per question are shown in Table 2.

Research Q1

What is known about factors associated with delayed discharge?

Thirteen studies identified reasons for delayed discharge (Table  1 ). The results showed that there are many complex reasons for delays with often overlapping contributing factors. We categorised reasons for delay into six categories: (1) accommodation needs, (2) difficulty securing rehabilitation or community support, (3) finance/funding challenges, (4) family/carer factors, (5) forensic factors, (6) patient being out of area.

The most common reason for delays was accommodation and placement factors. This was identified as a contributing reason for delay in twelve studies, and a further two studies assessed Housing-Related Delayed Discharge (HRDD), indicating that accommodation factors contributed to delay in these cases. Accommodation/placement factors included limited availability of placements ( n  = 7), difficulty finding appropriate placements ( n  = 5), awaiting or undergoing placement assessment ( n  = 3), challenges in the person returning to their accommodation ( n  = 3), e.g., awaiting repairs or adaptations to their home, individuals being rejected from a placement ( n  = 2), patients/family rejecting a placement ( n  = 2) and awaiting transfer ( n  = 1). It should be noted that one of the studies which examined specific accommodation factors could not be quality assessed because it did not have clear research questions and therefore did not meet the screening criteria for assessment with the MMAT [ 20 ]. Two further studies met only three of the five quality assessment criteria, with queries regarding the quality of the measures used and the analysis technique in one study [ 27 ], and some difficulties integrating and meeting the full quality criteria for the mixed methods approaches used in the second [ 28 ].

The second reason identified for delays was difficulty sourcing support for the person to enable discharge, such as community, rehabilitation and homecare support. This contributed to delays in twelve studies. Eight of these studies met four to five of the quality assessment criteria, one could not be assessed [ 20 ], and three met only three of the five quality assessment criteria [ 27 , 28 , 29 ]. A third reason for delay was finance/funding challenges, identified in nine studies. These included challenges obtaining funding, patients’/families’ refusal to pay for placements and funding applications being rejected. Six studies identified family/carer factors in creating delays, such as family conflict, family not wanting the person to live with them and ongoing family discussion. The quality of two of the studies identifying family and finance factors should be considered, as one was unable to be quality assessed due to a lack of clear research questions [ 20 ] and a second met only three of the five quality assessment criteria [ 28 ].

The fifth reason identified in this review as contributing to delay was forensic factors, which accounted for delays in three studies, all of good methodological quality. Forensic delays incorporated delay by the Ministry of Justice and awaiting forensic assessment. The patient being out of area was highlighted as a reason for delay in only one study, which could not be quality assessed because no specific research questions were identified [ 20 ], suggesting limited exploration of, or evidence for, out-of-area status contributing to delays.

Fourteen studies included in this review examined the demographic and clinical factors relevant to delays, with eight conducting significance testing to establish associations. Significant associations with delay were having a diagnosis of schizophrenia or other psychotic disorder ( n  = 4), cognitive impairment ( n  = 3) and the type/amount of service input prior to admission ( n  = 3). All studies reporting these significant results were of good methodological quality, achieving at least four of the five MMAT quality criteria. Results were mainly consistent across those studies which examined significance; however, one study of good quality did not find a significant association with schizophrenia diagnosis [ 22 ]. The impact of physical health differed between Australia and England: in one English study, having fair to excellent health was more associated with delays [ 30 ], whereas two Australian studies found poorer physical health linked to delays [ 24 , 25 ]. Findings related to demographic characteristics, including gender, age, ethnicity and socio-economic status, were inconsistent across studies. The only consistent finding was that a smaller proportion of the delayed group were employed ( n  = 3). One of these studies found a significant association between being unemployed and delayed discharge. The two other studies found that only one member of the delayed group was employed, fewer than in the non-delayed groups, though this was not significance tested. There was some indication that being unmarried and lacking a support network were more common in delayed groups: one study found a significant relationship with being unmarried, and another found that the delayed group was visited significantly less often by relatives. The other studies did not conduct significance testing. However, there was no significant relationship related to marriage between delayed and non-delayed groups in two studies [ 22 , 31 ]. One of these studies only clearly met three of the quality assessment criteria [ 31 ], though the other met all five quality assessment criteria. Being male was significantly associated with delays in two Canadian studies [ 21 , 22 ]. No significant association with gender was found in other studies.

The supplementary materials provide additional analysis of results for research question one, further describing each study’s findings. Additional materials also include tables showing tabulation of which study examined each variable.

Research Q2

What are the outcomes for those who have experienced delayed discharge from inpatient psychiatric settings, for example in mental health outcomes, health outcomes, readmissions and quality of life?

Only one study examined individual outcomes of delayed discharge for patients [ 26 ]. As such, there is limited data to draw conclusions to answer this research question. The study that evaluated patient outcomes was of qualitative design and good quality. The study explored Housing-Related Delayed Discharge (HRDD) in Australia for 10 patients using semi-structured interviews. They found consequences of lack of choice and control for patients, which impacted mental wellbeing, physical health and created a sense of anticipation for transition to community. Some participants highlighted a positive outcome of delayed discharge in preventing homelessness.

Research Q3

What are the outcomes for services, in terms of resources and costs, of delayed psychiatric inpatient discharge?

Four studies assessed the financial costs of delayed discharge for services, providing limited evidence on financial outcomes. Each study focused on a different country. At an old age psychiatry unit in England, delayed discharges were estimated to cost over $855,820 for the year [ 20 ]. Notably, this study was not quality assessed due to the omission of research questions. In a high-quality paper from Australia, HRDD cost the health district $2,828,174 over one year [ 25 ]. While both papers present yearly costs, there is disparity in the area covered, contributing to difficulty making comparisons regarding financial expenditure. Two studies calculated financial expenditure without presenting the cost per year. In a Canadian study, using the median number of delayed days ( M  = 17), it was calculated that the average cost incurred by one episode of delayed days was approximately $5,746 [ 21 ]. Furthermore, in Norway, $491,406 was allocated to delays on the acute ward included in the study, though the methodological quality might be queried, due to a lack of clarity on whether the sample was representative and on the appropriateness of the measures utilised [ 29 ]. The information necessary to calculate costs per year or costs per delayed day, which would enable comparisons to be made across studies, was not included in the studies.
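As an illustration of the normalisation that would aid comparison, the sketch below derives a rough cost per delayed day from the Canadian per-episode figure and median delay; this assumes the episode cost scales with the median delay and is not a calculation reported by the original study.

```python
# Rough normalisation of the Canadian figures [21] to a cost per delayed day.
# This simple division is illustrative only; it assumes the per-episode cost
# scales linearly with the median delay, which the original study does not claim.
cost_per_episode_usd = 5746
median_delayed_days = 17

cost_per_delayed_day = cost_per_episode_usd / median_delayed_days
print(f"Approximate cost per delayed day: ${cost_per_delayed_day:,.0f}")  # ~ $338
```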

Aside from financial costs, no other type of outcome for services was assessed.

Research Q4

None of the included studies explored specific experiences of delayed discharge for staff. Some information on experiences for patients is detailed in question two.

Research Q5

This systematic review identified studies in acute psychiatric, older adult and Psychiatric Intensive Care Unit (PICU) settings. Only one study included Learning Disability inpatient care settings [ 28 ]. This study was of mixed-method design and met three quality assessment criteria. No studies reported data from rehabilitation units. Few differences were identified between types of setting. Prevalence of delayed discharge was highest in older adult settings (56.9%) [ 30 ] and PICU settings (51.1%) [ 32 ], compared to working age adult settings (18–32%) [ 31 , 33 ]. However, the highest proportion of delayed days was found in the Norwegian acute psychiatric unit (54.8%) [ 29 ]. More information on prevalence is provided in the supplementary materials.

Reasons for delay did not vary much across type of setting. There is a potential service difference in the impact of physical health on delays, as having fair to excellent health was more associated with delays in an English older adult study [ 30 ], while in working age adult samples in Australian studies [ 24 , 25 ], poor health was more associated with delays. However, this could represent a difference between countries. Some other differences across countries were found. Forensic reasons for delay were only found in the UK ( n  = 2), as was delay due to the patient being out of area ( n  = 1). In UK settings, there was no significant difference in gender between those delayed and those not [ 30 , 34 ], though there was in Canada [ 21 ]. England and Australia were the only countries identifying funding issues as contributing to delay. Each country has its own funding system, which could impact delays. For example, two Australian studies identified difficulties with the National Disability Insurance Scheme [ 24 , 25 ].

Research Q6

Only five of the included studies looked specifically at older adult settings, all of which were in the UK. A further five studies, from the UK and Canada, included older adults within their sample, despite not examining a specific older adult setting.

The highest proportion of inpatients experiencing delayed discharge was in older adult settings, with one study identifying 56.9% [ 30 ] of inpatients experiencing delays. In comparison, there were lower rates of delayed patients in working age adult psychiatric inpatient settings, with 3.5% [ 21 , 25 ] to 39.1% [ 29 ] of patients experiencing delay. Similarly, two studies in Canada identified that a higher proportion of older adults made up the delayed group compared to the non-delayed group, suggesting that older adult inpatients are more likely to experience delay [ 21 , 22 ]. However, two English studies found delayed discharge was not associated with age [ 31 , 35 ]. One of these studies met only three quality assessment criteria, with a lack of clarity regarding the quality of sampling and the representativeness of the sample [ 31 ].

In terms of reasons for delay, no clear differences were found across age groups, although when limiting comparisons to studies conducted in the UK, family/carer factors were identified as a reason for delay more frequently in older adult samples ( n  = 3) than in studies looking at working age adults ( n  = 1). Supporting this finding, one study in England found that eight older adult trusts identified patient/carer exercising choice as a reason for delay, whereas the same was true for only four working age adult trusts [ 28 ]. However, this finding cannot be generalised across all countries. There is also some indication that cognitive impairment/dementia might increase the likelihood of delay in older adult samples, as two studies identified the role of dementia and greater cognitive impairment in the delayed older adult groups [ 20 , 30 ]. A further two studies examined the impact of cognitive impairment, finding an association with delay [ 21 , 22 ]. However, these studies included working age samples, so it is unclear who in the sample this impacted. In addition, physical health status could cause delays differently in older adult populations. In an older adult UK sample, having fair to excellent health was more associated with delays [ 30 ], whereas two Australian studies in working age adult inpatient settings found poorer physical health increased delays [ 24 , 25 ]. This difference could, however, be attributed to country or setting. Funding was identified as a reason for delay in all studies in older adult settings ( n  = 5), but the same was not true for the other setting types. Forensic factors were not found to be a reason for delay in any of the studies with older adult inpatients; conversely, the patient being out of area was identified as a reason for delay only in an older adult sample [ 20 ].

This systematic review aimed to fill a research gap and examine factors contributing to delayed discharge in adult psychiatric inpatient settings and explore associated consequences. This adds a unique contribution to the evidence base, which predominantly has focused on delayed discharge from physical health settings. Eighteen studies were included for synthesis.

The findings suggest that there are varying inter-related reasons for delay, including accommodation or placement needs, difficulties securing the required support services, funding and finance challenges, family/carer factors, forensic factors and the person being out of area. There were mixed findings regarding the demographic and clinical characteristics associated with delays. However, this review showed that delays could be associated with the person having a diagnosis of schizophrenia or other psychotic disorder, cognitive impairment, being unemployed and receiving increased service input prior to admission.

There were only a few studies that commented on outcomes of delays. Only one study examined outcomes for patients, identifying feelings of lack of choice and control, while four studies looked at financial outcomes for services, finding large costs associated with delays. This points to a lack of evidence examining the outcomes and experiences of psychiatric delayed discharge, and therefore requires further attention in research.

This review adds to and expands on existing findings, identifying similarities and differences with research on longer inpatient stays more generally. For example, one review [ 11 ] found that long stay was associated with mood and psychotic disorders, use of Electroconvulsive Therapy, and being female. Being married, employed, and using substances were associated with a shorter stay [ 11 ]. Our review found that psychiatric delayed discharge was also associated with a diagnosis of schizophrenia or other psychotic disorder and being unemployed. However, we found delayed discharge to be associated with cognitive impairment and increased service input prior to admission, but not gender or treatment. This could suggest some important differences between those at risk of delays and those requiring longer inpatient treatment. It is important to note, however, that the review by Gopalakrishna and colleagues did not distinguish between patients whose long stay was clinically warranted and patients experiencing delayed discharge [ 11 ]. It would be of benefit for future research on long stay patients to better define their samples, distinguishing those who clinically needed longer treatment from those whose stay was prolonged by delayed discharge, allowing similarities and differences to be better explored. This would support policy makers and service managers to better identify those at risk of delays that are not clinically necessary, and those who might need additional clinical input. The findings in this review provide some suggestion that there could be benefit in considering a person’s social context when they are admitted to psychiatric inpatient care, including their living situation at admission, employment status and cognitive functioning. Identifying patients at higher risk of delays earlier in admission might be useful, allowing more time to organise and find appropriate accommodation, placements and service support and so facilitate discharge. Wider policy and structural changes are also needed, such as improving the availability of appropriate accommodation placements.

It is important to highlight that there were discrepancies across studies in the language used to term delayed discharge, e.g., ‘alternate level of care,’ ‘waiting days’ and ‘prolonged stay.’ Due to such discrepancies in definitions and terminology, during the screening process it was at times difficult to determine whether studies were focused on delayed discharge or on longer lengths of stay that were clinically required. In this review, studies were excluded if the focus was unclear, to prevent incorrect conclusions being drawn about the unique experience of delayed discharge. However, this means other relevant findings might have been missed. It would therefore be useful for future research on psychiatric inpatient care to ensure clarity in the terminology and definitions used in reports.

There were also discrepancies in the way financial costs related to delays were reported, e.g., cost per day versus cost per year. This made comparing costs across countries challenging and prevented clear conclusions being drawn. Future research should therefore aim to ensure clarity when reporting financial expenditure, for example, by calculating the daily cost of delays.

It is important to highlight that only eighteen studies were identified over the 20-year search period, suggesting this area has not yet been subject to much research focus. All high-income countries were eligible for inclusion, but the final sample included studies from only five countries. It might have been expected that studies from other high-income countries would be identified, particularly given the expensive nature of inpatient stays and, by extension, delayed discharge. It might be beneficial for future research to further examine delayed discharge in psychiatric settings across other countries, particularly in the USA and EU. For the purposes of this review, studies not conducted in high-income countries were excluded. This was because lower-income countries might experience different factors contributing to delays, due to differences in healthcare funding and social factors. As such, separate attention should be given to these settings, to understand similarities or differences in reasons for delays across low- and middle-income countries. Studies in forensic psychiatric settings and child and adolescent settings were also excluded in this instance, so again, there might be benefit in future research examining these areas.

Furthermore, future research could look not only at factors creating delays, but those causing longer delays. Some of the studies in this review began examining this, but more research in this area could be of interest. Finally, while the quality of included studies was relatively high, the studies were primarily of quantitative audit design and infrequently conducted significance testing. As such, further exploration of associations using significance testing would strengthen the evidence base.

In conclusion, across the 18 included studies, the reasons identified for delayed discharge included accommodation and placement related factors, challenges securing appropriate support, funding difficulties, family/carer factors, forensic factors and the person being out of area. Delay was associated with having a diagnosis of schizophrenia or other psychotic disorder, cognitive impairment, increased service involvement prior to admission, and being unemployed. Service, societal and policy changes might be indicated, to improve accommodation and care provisions following discharge. Future research should continue to examine prolonged inpatient psychiatric stays, ensuring that clinically warranted long stays and delayed discharge are distinguished and that terminology is used clearly.

Data availability

The data on which this review is based will be made publicly available on publication. A link to data for anonymous peer-review is here: https://osf.io/j4kng/?view_only=1fbf2558d9d044bbb1778fccd5fd6f51 .

References

1. Bryan K. Policies for reducing delayed discharge from hospital. Br Med Bull. 2010;95(1):33–46.

2. NHS England. Monthly delayed transfers of care situation reports: definitions and guidance. London: NHS England; 2015.

3. House of Commons Health Committee. Delayed Discharges (third report); 2002.

4. Rojas-García A, Turner S, Pizzo E, Hudson E, Thomas J, Raine R. Impact and experiences of delayed discharge: a mixed-studies systematic review. Health Expect. 2018;21(1):41–56.

5. Christ W. Factors delaying discharge of psychiatric patients. Health Soc Work. 1984;9(3):178–87.

6. Department of Health. NHS reference costs 2013 to 2014. London: Department of Health; 2013.

7. Glasby J, Lester H. Delayed hospital discharge and mental health: the policy implications of recent research. Social Policy Adm. 2004;38(7):744–57.

8. National Institute for Health and Care Excellence [NICE]. Transition between inpatient mental health settings and community or care home settings. National Institute for Health and Care Excellence; 2016.

9. Katsakou C, Rose D, Amos T, Bowers L, McCabe R, Oliver D, et al. Psychiatric patients’ views on why their involuntary hospitalisation was right or wrong: a qualitative study. Soc Psychiatry Psychiatr Epidemiol. 2012;47:1169–79.

10. Csipke E, Williams P, Rose D, Koeser L, McCrone P, Wykes T, et al. Following the Francis report: investigating patient experience of mental health in-patient care. Br J Psychiatry. 2016;209(1):35–9.

11. Gopalakrishna G, Ithman M, Malwitz K. Predictors of length of stay in a psychiatric hospital. Int J Psychiatry Clin Pract. 2015;19(4):238–44.

12. Micallef A, Buttigieg SC, Tomaselli G, Garg L. Defining delayed discharges of inpatients and their impact in acute hospital care: a scoping review. Int J Health Policy Manage. 2022;11(2):103.

13. Cadel L, Guilcher SJ, Kokorelias KM, Sutherland J, Glasby J, Kiran T, et al. Initiatives for improving delayed discharge from a hospital setting: a scoping review. BMJ Open. 2021;11(2):e044291.

14. Moher D, Liberati A, Tetzlaff J, Altman DG; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.

15. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. Updating guidance for reporting systematic reviews: development of the PRISMA 2020 statement. J Clin Epidemiol. 2021;134:103–12.

16. World Bank. World Bank Country and Lending Groups. 2022. https://datahelpdesk.worldbank.org/knowledgebase/articles/906519

17. Covidence systematic review software. Melbourne, Australia: Veritas Health Innovation; 2020. Available at: www.covidence.org

18. Sirriyeh R, Lawton R, Gardner P, Armitage G. Reviewing studies with diverse designs: the development and evaluation of a new tool. J Eval Clin Pract. 2012;18(4):746–52.

19. Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inform. 2018;34(4):285–91.

20. Hanif I, Rathod B. Delays in discharging elderly psychiatric in-patients. Psychiatr Bull. 2008;32(6):211–3.

21. Little J, Hirdes JP, Daniel I. ALC status in in-patient mental health settings: evidence based on the Ontario Mental Health Reporting System. Healthcare Management Forum. SAGE; 2015.

22. Little J, Hirdes JP, Perlman CM, Meyer SB. Clinical predictors of delayed discharges in inpatient mental health settings across Ontario. Adm Policy Ment Health Ment Health Serv Res. 2019;46:105–14.

23. Aflalo M, Soucy N, Xue X, Colacone A, Jourdenais E, Boivin J-F. Characteristics and needs of psychiatric patients with prolonged hospital stay. Can J Psychiatry. 2015;60(4):181–8.

24. Honey A, Arblaster K, Nguyen J, Heard R. Predicting housing related delayed discharge from mental health inpatient units: a case control study. Adm Policy Ment Health Ment Health Serv Res. 2022;49(6):962–72.

25. Nguyen J, Honey A, Arblaster K, Heard R. Housing-related delayed discharge from inpatient mental health units: magnitude and contributors in a metropolitan mental health service. Australian J Social Issues. 2022;57(1):144–63.

26. Chuah CPT, Honey A, Arblaster K. ‘I’m institutionalised… there’s not much I can do’: lived experience of housing related delayed discharge. Aust Occup Ther J. 2022;69(5):574–84.

27. Cowman J, Whitty P. Prevalence of housing needs among inpatients: a 1 year audit of housing needs in the acute mental health unit in Tallaght Hospital. Ir J Psychol Med. 2016;33(3):159–64.

28. Lewis R, Glasby J. Delayed discharge from mental health hospitals: results of an English postal survey. Health Soc Care Commun. 2006;14(3):225–30.

29. Berg JE, Restan A. Duration of bed occupancy as calculated at a random chosen day in an acute care ward. Implications for the use of scarce resources in psychiatric care. Ann Gen Psychiatry. 2005;4(1):1–6.

30. Tucker S, Hargreaves C, Wilberforce M, Brand C, Challis D. What becomes of people admitted to acute old age psychiatry wards? An exploration of factors affecting length of stay, delayed discharge and discharge destination. Int J Geriatr Psychiatry. 2017;32(9):1027–36.

31. Tyrer P, Suryanarayan G, Rao B, Cicchetti D, Fulop N, Roberts F, et al. The bed requirement inventory: a simple measure to estimate the need for a psychiatric bed. Int J Soc Psychiatry. 2006;52(3):267–77.

32. Onyon R, Khan S, George M. Delayed discharges from a psychiatric intensive care unit: are we detaining patients unlawfully? J Psychiatric Intensive Care. 2006;2(2):59–64.

33. Impey M, Milner E. Delayed discharge from mental health inpatient care in the UK. Mental Health Pract. 2013;16(9).

34. Haw C, Otuwehinmi O, Kotterbova E. Out of area admissions to two independent sector PICUs: patient characteristics, length of stay and delayed discharges. J Psychiatric Intensive Care. 2017;13(1):27–36.

35. Poole R, Pearsall A, Ryan T. Delayed discharges in an urban in-patient mental health service in England. Psychiatric Bull. 2014;38(2):66–70.

36. Commander M, Rooprai D. Survey of long-stay patients on acute psychiatric wards. Psychiatr Bull. 2008;32(10):380–3.

37. Paton JM, Fahy MA, Livingston GA. Delayed discharge—a solvable problem? The place of intermediate care in mental health care of older people. Aging Ment Health. 2004;8(1):34–9.


No funding to declare.

Author information

Authors and affiliations.

Department of Psychology, University of Bath, Bath, BA2 7AY, UK

Ashley-Louise Teale, Ceri Morgan, Tom A. Jenkins & Pamela Jacobsen


Contributions

AT and PJ formulated the initial research questions and developed the systematic review protocol. AT ran the searches on databases. AT, CM and TJ conducted the screening, data extraction and quality assessment. PJ acted as senior reviewer to resolve any conflicts. AT synthesised the results. All authors contributed to data synthesis and interpretation. AT wrote the paper. All authors read and approved the final version of the manuscript for submission.

Corresponding author

Correspondence to Ashley-Louise Teale .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Teale, AL., Morgan, C., Jenkins, T.A. et al. Delayed discharge in inpatient psychiatric care: a systematic review. Int J Ment Health Syst 18 , 14 (2024). https://doi.org/10.1186/s13033-024-00635-9

Download citation

Received : 28 July 2023

Accepted : 25 March 2024

Published : 06 April 2024

DOI : https://doi.org/10.1186/s13033-024-00635-9


Keywords

  • Delayed discharge
  • Bed blocking
  • Delayed transfer
  • Psychiatric inpatient
  • Inpatient treatment
  • Prolonged stays
  • Length of stay


J Clin Diagn Res. 2017 May; 11(5)

Critical Appraisal of Clinical Research

Azzam Al-Jundi

1 Professor, Department of Orthodontics, King Saud bin Abdul Aziz University for Health Sciences-College of Dentistry, Riyadh, Kingdom of Saudi Arabia.

Salah Sakka

2 Associate Professor, Department of Oral and Maxillofacial Surgery, Al Farabi Dental College, Riyadh, KSA.

Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research and with patients’ values and expectations in the decision-making process for patient care. It is a fundamental skill to be able to identify and appraise the best available evidence in order to integrate it with your own clinical experience and patients’ values. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.

Introduction

Decisions related to patient care are carefully made through an essential process of integrating the best existing evidence, clinical experience and patient preferences. Critical appraisal is the course of action for carefully and systematically examining research to assess its reliability, value and relevance in order to guide professionals in their vital clinical decision making [ 1 ].

Critical appraisal is essential to:

  • Combat information overload;
  • Identify papers that are clinically relevant;
  • Support Continuing Professional Development (CPD).

Carrying out Critical Appraisal:

Assessing the research methods used in the study is a prime step in its critical appraisal. This is done using checklists which are specific to the study design.

Standard Common Questions:

  • What is the research question?
  • What is the study type (design)?
  • Selection issues.
  • What are the outcome factors and how are they measured?
  • What are the study factors and how are they measured?
  • What important potential confounders are considered?
  • What is the statistical method used in the study?
  • Statistical results.
  • What conclusions did the authors reach about the research question?
  • Are ethical issues considered?

Critical appraisal starts by checking the following main sections:

I. Overview of the paper:

  • The publishing journal and the year
  • The article title: Does it state key trial objectives?
  • The author (s) and their institution (s)

The presence of a peer review process in journal acceptance protocols also adds robustness to the assessment criteria for research papers and hence would indicate a reduced likelihood of publication of poor quality research. Other areas to consider may include authors’ declarations of interest and potential market bias. Attention should be paid to any declared funding or the issue of a research grant, in order to check for a conflict of interest [ 2 ].

II. ABSTRACT: Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.

  • Aim of the study: It should be well and clearly written.
  • Materials and Methods: The study design and type of groups, type of randomization process, sample size, gender, age, and procedure rendered to each group and measuring tool(s) should be evidently mentioned.
  • Results: The measured variables with their statistical analysis and significance.
  • Conclusion: It must clearly answer the question of interest.

III. Introduction/Background section:

An excellent introduction will thoroughly include references to earlier work related to the area under discussion and express the importance and limitations of what is previously acknowledged [ 2 ].

-Why is this study considered necessary? What is the purpose of this study? Was the purpose identified before the study, or was a chance result revealed as part of ‘data searching’?

-What has already been achieved, and how does this study differ?

-Does the scientific approach outline the advantages along with possible drawbacks associated with the intervention or observations?

IV. Methods and Materials section : Full details of how the study was actually carried out should be provided. Precise information is given on the study design, the population, the sample size and the interventions presented. All measurement approaches should be clearly stated [ 3 ].

V. Results section : This section should clearly reveal what actually occurred to the subjects. The results may contain raw data and should explain the statistical analysis. These can be shown in related tables, diagrams and graphs.

VI. Discussion section : This section should include a thorough comparison of what is already known on the topic of interest with the clinical relevance of what has been newly established. A discussion of possible limitations and of the need for further studies should also be included.

Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study?

  • Does it address any source of potential bias?
  • Are interpretations consistent with the results?
  • How are null findings interpreted?
  • Does it mention how do the findings of this study relate to previous work in the area?
  • Can they be generalized (external validity)?
  • Does it mention their clinical implications/applicability?
  • What are the results/outcomes/findings applicable to and will they affect a clinical practice?
  • Does the conclusion answer the study question?
  • Is the conclusion convincing?
  • Does the paper indicate ethics approval?
  • Can you identify potential ethical issues?
  • Do the results apply to the population in which you are interested?
  • Will you use the results of the study?

Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.

1-What is the research question?

For a study to gain value, it should address a significant problem within healthcare and provide new or meaningful results. A useful structure for assessing the problem addressed in an article is the Problem, Intervention, Comparison, Outcome (PICO) method [ 3 ].

P = Patient/Problem/Population: This involves identifying whether the research has a focused question. What is the chief complaint? e.g., disease status, previous ailments, current medications.

I = Intervention: An appropriately and clearly stated management strategy, e.g., a new diagnostic test, treatment or adjunctive therapy.

C = Comparison: A suitable control or alternative, e.g., specific and limited to one alternative choice.

O = Outcomes: The desired results or patient-related consequences have to be identified, e.g., eliminating symptoms, improving function or esthetics.
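A PICO question can be held as simple structured data, as in the sketch below; the example question about delayed discharge is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Structured representation of a clinical question (PICO)."""
    patient_problem: str
    intervention: str
    comparison: str
    outcome: str

# Hypothetical example, phrased around the delayed-discharge topic above.
question = PICOQuestion(
    patient_problem="Adults on acute psychiatric inpatient wards who are clinically ready for discharge",
    intervention="Early housing and community-support planning on admission",
    comparison="Usual discharge planning",
    outcome="Number of delayed-discharge days per admission",
)
print(question)
```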

The clinical question determines which study designs are appropriate. There are five broad categories of clinical questions, as shown in [ Table/Fig-1 ].

[Table/Fig-1]:

Categories of clinical questions and the related study designs.

2- What is the study type (design)?

The study design of the research is fundamental to the usefulness of the study.

In a clinical paper, the methodology employed to generate the results should be fully explained. In general, all questions about the related clinical query, the study design, the subjects and the measures taken to reduce bias and confounding should be adequately and thoroughly explored and answered.

Participants/Sample Population:

Researchers identify the target population they are interested in. A sample population is therefore taken and results from this sample are then generalized to the target population.

The sample should be representative of the target population from which it came. Knowing the baseline characteristics of the sample population is important because this allows researchers to see how closely the subjects match their own patients [ 4 ].

Sample size calculation (Power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if it exists. Statisticians can work out before the trial begins how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups [ 5 ].
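As a minimal illustration of such a calculation, the sketch below applies the standard normal-approximation formula for comparing two proportions; the proportions, significance level and power shown are assumed values chosen only for demonstration.

```python
import math

def sample_size_two_proportions(p1: float, p2: float,
                                z_alpha: float = 1.96,    # two-sided alpha = 0.05
                                z_beta: float = 0.8416):  # power = 80%
    """Approximate sample size per group for detecting a difference between
    two proportions (normal-approximation formula)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Illustrative values: detecting a difference between 20% and 35%
# with 80% power at a two-sided 5% significance level.
print(sample_size_two_proportions(0.20, 0.35))  # ~136 per group
```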

  • Is the sample defined? Human, Animals (type); what population does it represent?
  • Does it mention eligibility criteria with reasons?
  • Does it mention where and how the sample were recruited, selected and assessed?
  • Does it mention where was the study carried out?
  • Is the sample size justified? Rightly calculated? Is it adequate to detect statistical and clinical significant results?
  • Does it mention a suitable study design/type?
  • Is the study type appropriate to the research question?
  • Is the study adequately controlled? Does it mention type of randomization process? Does it mention the presence of control group or explain lack of it?
  • Are the samples similar at baseline? Is sample attrition mentioned?
  • All studies should report the number of participants/specimens at the start of the study, together with details of how many of them completed the study and the reasons for any incomplete follow-up.
  • Does it mention who was blinded? Are the assessors and participants blind to the interventions received?
  • Is it mentioned how was the data analysed?
  • Are any measurements taken likely to be valid?

Researchers use measuring techniques and instruments that have been shown to be valid and reliable.

Validity refers to the extent to which a test measures what it is supposed to measure.

(the extent to which the value obtained represents the object of interest.)

  • -Soundness, effectiveness of the measuring instrument;
  • -What does the test measure?
  • -Does it measure what it is supposed to measure?
  • -How well, how accurately does it measure?

Reliability: In research, the term reliability means “repeatability” or “consistency”

Reliability refers to how consistent a test is on repeated measurements. It is especially important if assessments are made on different occasions and/or by different examiners. Studies should state the method for assessing the reliability of any measurements taken and what the intra-examiner reliability was [ 6 ].
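One way such consistency can be quantified for categorical ratings is Cohen's kappa, illustrated in the sketch below with invented ratings (using scikit-learn); kappa is only one option, and continuous measurements would normally call for an intraclass correlation instead.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorical ratings of the same 8 cases by one examiner on
# two occasions (intra-examiner reliability); 1 = criterion met, 0 = not met.
first_pass  = [1, 1, 0, 1, 0, 1, 1, 0]
second_pass = [1, 1, 0, 0, 0, 1, 1, 1]

# Kappa expresses agreement beyond what would be expected by chance.
kappa = cohen_kappa_score(first_pass, second_pass)
print(f"Cohen's kappa: {kappa:.2f}")
```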

3-Selection issues:

The following questions should be raised:

  • - How were subjects chosen or recruited? If not random, are they representative of the population?
  • - Types of Blinding (Masking) Single, Double, Triple?
  • - Is there a control group? How was it chosen?
  • - How are patients followed up? Who are the dropouts? Why and how many are there?
  • - Are the independent (predictor) and dependent (outcome) variables in the study clearly identified, defined, and measured?
  • - Is there a statement about sample size issues or statistical power (especially important in negative studies)?
  • - If a multicenter study, what quality assurance measures were employed to obtain consistency across sites?
  • - Are there selection biases?
  • • In a case-control study, if exercise habits are to be compared:
  • - Are the controls appropriate?
  • - Were records of cases and controls reviewed blindly?
  • - How were possible selection biases controlled (Prevalence bias, Admission Rate bias, Volunteer bias, Recall bias, Lead Time bias, Detection bias, etc.,)?
  • • Cross Sectional Studies:
  • - Was the sample selected in an appropriate manner (random, convenience, etc.,)?
  • - Were efforts made to ensure a good response rate or to minimize the occurrence of missing data?
  • - Were reliability (reproducibility) and validity reported?
  • • In an intervention study, how were subjects recruited and assigned to groups?
  • • In a cohort study, how many reached final follow-up?
  • - Are the subjects representative of the population to which the findings are applied?
  • - Is there evidence of volunteer bias? Was there adequate follow-up time?
  • - What was the drop-out rate?
  • - Any shortcoming in the methodology can lead to results that do not reflect the truth. If clinical practice is changed on the basis of these results, patients could be harmed.

Researchers employ a variety of techniques to make the methodology more robust, such as matching, restriction, randomization, and blinding [ 7 ].

Bias is the term used to describe an error at any stage of the study that was not due to chance. Bias leads to results in which there is a systematic deviation from the truth. As bias cannot be measured, researchers need to rely on good research design to minimize bias [ 8 ]. To minimize bias within a study, the sample population should be representative of the target population. It is also imperative to consider the sample size and to identify whether the study is adequately powered to detect a statistically significant result (conventionally at p<0.05) if a true effect exists [ 9 ].

4-What are the outcome factors and how are they measured?

  • -Are all relevant outcomes assessed?
  • -Is measurement error an important source of bias?

5-What are the study factors and how are they measured?

  • -Are all the relevant study factors included in the study?
  • -Have the factors been measured using appropriate tools?

Data Analysis and Results:

- Were the tests appropriate for the data?

- Are confidence intervals or p-values given?

  • How strong is the association between intervention and outcome?
  • How precise is the estimate of the risk?
  • Does it clearly mention the main finding(s) and does the data support them?
  • Does it mention the clinical significance of the result?
  • Is adverse event or lack of it mentioned?
  • Are all relevant outcomes assessed?
  • Was the sample size adequate to detect a clinically/socially significant result?
  • Are the results presented in a way to help in health policy decisions?
  • Is there measurement error?
  • Is measurement error an important source of bias?

Confounding Factors:

A confounder has a triangular relationship with both the exposure and the outcome. However, it is not on the causal pathway. It makes it appear as if there is a direct relationship between the exposure and the outcome or it might even mask an association that would otherwise have been present [ 9 ].

6- What important potential confounders are considered?

  • -Are potential confounders examined and controlled for?
  • -Is confounding an important source of bias?

7- What is the statistical method in the study?

  • -Are the statistical methods described appropriate to compare participants for primary and secondary outcomes?
  • -Are statistical methods specified in sufficient detail (if I had access to the raw data, could I reproduce the analysis)?
  • -Were the tests appropriate for the data?
  • -Are confidence intervals or p-values given?
  • -Are results presented as absolute risk reduction as well as relative risk reduction?

Interpretation of p-value:

The p-value refers to the probability of obtaining a result at least as extreme as the one observed if chance alone were operating (i.e., if the null hypothesis were true). A p-value of less than 1 in 20 (p<0.05) is conventionally regarded as statistically significant.

  • When the p-value is less than the significance level, which is usually 0.05, we reject the null hypothesis and the result is considered to be statistically significant. Conversely, when the p-value is greater than 0.05, we conclude that the result is not statistically significant and we fail to reject the null hypothesis.
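As a concrete illustration of this decision rule, the sketch below computes a p-value for two invented groups using an independent-samples t-test.

```python
from scipy import stats

# Invented outcome measurements for an intervention and a control group.
intervention = [12.1, 11.4, 13.0, 12.8, 11.9, 12.5, 13.3, 12.0]
control      = [11.0, 10.8, 11.6, 11.2, 10.5, 11.9, 11.1, 10.9]

t_stat, p_value = stats.ttest_ind(intervention, control)

# Reject the null hypothesis of no difference when p < 0.05.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("statistically significant" if p_value < 0.05 else "not statistically significant")
```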

Confidence interval:

Multiple repetitions of the same trial would not yield exactly the same results every time; however, on average the results would fall within a certain range. A 95% confidence interval is constructed so that, if the study were repeated many times, 95% of such intervals would contain the true size of effect.
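The sketch below computes a 95% confidence interval for the mean of a small invented sample, using the t distribution.

```python
import numpy as np
from scipy import stats

# 95% confidence interval for the mean of a small sample (invented data).
sample = np.array([12.1, 11.4, 13.0, 12.8, 11.9, 12.5, 13.3, 12.0])

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```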

8- Statistical results:

  • -Do statistical tests answer the research question?

Are statistical tests performed and comparisons made (data searching)?

Correct statistical analysis of results is crucial to the reliability of the conclusions drawn from the research paper. Depending on the study design and the sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study.

It is important to identify if this is appropriate for the study [ 9 ].

  • -Was the sample size adequate to detect a clinically/socially significant result?
  • -Are the results presented in a way to help in health policy decisions?

Clinical significance:

Statistical significance as shown by the p-value is not the same as clinical significance. Statistical significance judges whether treatment effects are explicable as chance findings, whereas clinical significance assesses whether treatment effects are worthwhile in real life. Small improvements that are statistically significant might not result in any meaningful clinical improvement. The following questions should always be kept in mind:

  • -If the results are statistically significant, do they also have clinical significance?
  • -If the results are not statistically significant, was the sample size sufficiently large to detect a meaningful difference or effect?
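The point can be illustrated numerically: in the sketch below, a difference of 0.2 points on a hypothetical 100-point scale is statistically significant with 50,000 patients per group, yet clinically trivial. The figures are invented for demonstration.

```python
from scipy import stats

# Summary statistics for two hypothetical groups on a 100-point scale:
# a 0.2-point difference is clinically trivial, but with 50,000 patients per
# group it is still "statistically significant".
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=50.0, std1=10.0, nobs1=50_000,
    mean2=50.2, std2=10.0, nobs2=50_000,
)
print(f"p = {p_value:.4f}")  # well below 0.05 despite a negligible effect
```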

9- What conclusions did the authors reach about the study question?

Conclusions should ensure that recommendations stated are suitable for the results attained within the capacity of the study. The authors should also concentrate on the limitations in the study and their effects on the outcomes and the proposed suggestions for future studies [ 10 ].

  • -Are the questions posed in the study adequately addressed?
  • -Are the conclusions justified by the data?
  • -Do the authors extrapolate beyond the data?
  • -Are shortcomings of the study addressed and constructive suggestions given for future research?
  • -Bibliography/References:

Do the citations follow one of the Council of Biological Editors’ (CBE) standard formats?

10- Are ethical issues considered?

If a study involves human subjects, human tissues, or animals, was approval from appropriate institutional or governmental entities obtained? [ 10 , 11 ].

Critical appraisal of RCTs: Factors to look for:

  • Allocation (randomization, stratification, confounders).
  • Follow up of participants (intention to treat).
  • Data collection (bias).
  • Sample size (power calculation).
  • Presentation of results (clear, precise).
  • Applicability to local population.

[ Table/Fig-2 ] summarizes the guidelines for Consolidated Standards of Reporting Trials CONSORT [ 12 ].

[Table/Fig-2]:

Summary of the CONSORT guidelines.

Critical appraisal of systematic reviews: systematic reviews provide an overview of all primary studies on a topic and try to obtain an overall picture of the results.

In a systematic review, all the primary studies identified are critically appraised and only the best ones are selected. A meta-analysis (i.e., a statistical analysis) of the results from selected studies may be included. Factors to look for:

  • Literature search (did it include published and unpublished materials as well as non-English language studies? Was personal contact with experts sought?).
  • Quality-control of studies included (type of study; scoring system used to rate studies; analysis performed by at least two experts).
  • Homogeneity of studies.

[ Table/Fig-3 ] summarizes the guidelines for Preferred Reporting Items for Systematic reviews and Meta-Analyses PRISMA [ 13 ].

[Table/Fig-3]:

Summary of PRISMA guidelines.

Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and providing an indication of its relevance to the profession. It is a skill set, developed throughout a professional career, that facilitates this and, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry. By following a systematic approach, such evidence can be considered and applied to clinical practice.

Financial or other Competing Interests
