• Open access
  • Published: 27 November 2020

Designing process evaluations using case study to explore the context of complex interventions evaluated in trials

  • Aileen Grant,
  • Carol Bugge &
  • Mary Wells

Trials volume 21, Article number: 982 (2020)


Background

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail, and whether they can be transferred to other settings and populations. However, historically, context has not been sufficiently explored and reported, resulting in poor uptake of trial results. Therefore, suitable methodologies are needed to guide the investigation of context. Case study is one appropriate methodology, but there is little guidance about what case study design can offer the study of context in trials. We address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design.

Main body

In this paper, we define context and the relationship between complex interventions and context, and describe case study design methodology. A well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention; the trial design; the case; the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework. We describe each of these in detail and illustrate them with examples from recently published process evaluations.

Conclusions

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation. We provide a comprehensive overview of the issues for process evaluation design to consider when using a case study design.

Trial registration

DQIP: ClinicalTrials.gov NCT01425502. OPAL: ISRCTN57746448.


Contribution to the literature

  • We illustrate how case study methodology can explore the complex, dynamic and uncertain relationship between context and interventions within trials.

  • We depict different case study designs and illustrate that there is not one formula and that the design needs to be tailored to the context and trial design.

  • Case study can support comparisons between intervention and control arms and between cases within arms to uncover and explain differences in detail.

  • We argue that case study can illustrate how components have evolved and been redefined through implementation.

  • Key issues for consideration in case study design within process evaluations are presented and illustrated with examples.

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail and whether they can be transferred to other settings and populations. However, historically, not all trials have had a process evaluation component, nor have they sufficiently reported aspects of context, resulting in poor uptake of trial findings [ 1 ]. Considerations of context are often absent from published process evaluations, with few studies acknowledging, taking account of or describing context during implementation, or assessing the impact of context on implementation [ 2 , 3 ]. At present, evidence from trials is not being used in a timely manner [ 4 , 5 ], and this can negatively impact on patient benefit and experience [ 6 ]. It takes on average 17 years for knowledge from research to be implemented into practice [ 7 ]. Suitable methodologies are therefore needed that allow for context to be exposed; one appropriate methodological approach is case study [ 8 , 9 ].

In 2015, the Medical Research Council (MRC) published guidance for process evaluations [ 10 ]. This was a key milestone in legitimising process evaluations, as well as in providing tools, methods and a framework for conducting them. Nevertheless, as with all guidance, there is a need for reflection, challenge and refinement. There have been a number of critiques of the MRC guidance, including that interventions should be considered as events in systems [ 11 , 12 , 13 , 14 ]; that there is a need for better use, critique and development of theories [ 15 , 16 , 17 ]; and that more guidance is needed on integrating qualitative and quantitative data [ 18 , 19 ]. Although the MRC process evaluation guidance does consider appropriate qualitative and quantitative methods, it does not mention case study design or what it can offer the study of context in trials.

Case study methodology is ideally suited to real-world, sustainable intervention development and evaluation because it can explore and examine contemporary complex phenomena, in depth, in numerous contexts and using multiple sources of data [ 8 ]. Case study design can capture the complexity of the case, the relationship between the intervention and the context, and how the intervention worked (or not) [ 8 ]. There are a number of case study textbooks within the social sciences [ 8 , 9 , 20 ], but there are no case study textbooks, and a paucity of useful texts, on how to design, conduct and report case study within the health arena. Few examples exist within the trial design and evaluation literature [ 3 , 21 ]. Therefore, guidance to enable well-designed process evaluations using case study methodology is required.

We aim to address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design. First, we define context and describe the relationship between complex health interventions and context.

What is context?

While there is growing recognition that context interacts with the intervention to impact on the intervention’s effectiveness [ 22 ], context is still poorly defined and conceptualised. There are a number of different definitions in the literature, but as Bate et al. explained, ‘almost universally, we find context to be an overworked word in everyday dialogue but a massively understudied and misunderstood concept’ [ 23 ]. Ovretveit defines context as ‘everything the intervention is not’ [ 24 ]. This last definition is used by the MRC framework for process evaluations [ 25 ]; however, the problem with this definition is that it is highly dependent on how the intervention is defined. We have found Pfadenhauer et al.’s definition useful:

Context is conceptualised as a set of characteristics and circumstances that consist of active and unique factors that surround the implementation. As such it is not a backdrop for implementation but interacts, influences, modifies and facilitates or constrains the intervention and its implementation. Context is usually considered in relation to an intervention or object, with which it actively interacts. A boundary between the concepts of context and setting is discernible: setting refers to the physical, specific location in which the intervention is put into practice. Context is much more versatile, embracing not only the setting but also roles, interactions and relationships [ 22 ].

Traditionally, context has been conceptualised in terms of barriers and facilitators, but what is a barrier in one context may be a facilitator in another, so it is the relationship and dynamics between the intervention and context which are the most important [ 26 ]. There is a need for empirical research to really understand how different contextual factors relate to each other and to the intervention. At present, research studies often list common contextual factors, but without a depth of meaning and understanding, such as government or health board policies, organisational structures, professional and patient attitudes, behaviours and beliefs [ 27 ]. The case study methodology is well placed to understand the relationship between context and intervention where these boundaries may not be clearly evident. It offers a means of unpicking the contextual conditions which are pertinent to effective implementation.

The relationship between complex health interventions and context

Health interventions are generally made up of a number of different components and are considered complex due to the influence of context on their implementation and outcomes [ 3 , 28 ]. Complex interventions are often reliant on the engagement of practitioners and patients, so their attitudes, behaviours, beliefs and cultures influence whether and how an intervention is effective or not. Interventions are context-sensitive; they interact with the environment in which they are implemented. In fact, many argue that interventions are a product of their context, and indeed, outcomes are likely to be a product of the intervention and its context [ 3 , 29 ]. Within a trial, there is also the influence of the research context, so the observed outcome could be due to the intervention alone, elements of the context within which the intervention is being delivered, elements of the research process, or a combination of all three. Therefore, it can be difficult and unhelpful to separate the intervention from the context within which it was evaluated, because the intervention and context are likely to have evolved together over time. As a result, the same intervention can look and behave differently in different contexts, so it is important this is known, understood and reported [ 3 ]. Finally, the intervention context is dynamic; the people, organisations and systems change over time [ 3 ], which requires practitioners and patients to respond, and they may do this by adapting the intervention or contextual factors. So, to enable researchers to replicate successful interventions, or to explain why an intervention was not successful, it is not enough to describe the components of the intervention; they need to be described in relation to their context and resources [ 3 , 28 ].

What is a case study?

Case study methodology aims to provide an in-depth, holistic, balanced, detailed and complete picture of complex contemporary phenomena in their natural context [ 8 , 9 , 20 ]. In this case, the phenomenon is the implementation of complex interventions in a trial. Case study methodology takes the view that phenomena can be more than the sum of their parts and have to be understood as a whole [ 30 ]. It is differentiated from a clinical case study by its analytical focus [ 20 ].

The methodology is particularly useful when linked to trials because some features of the design naturally fill the gaps in knowledge generated by trials. Given the methodological focus on understanding phenomena in the round, case study methodology is typified by the use of multiple sources of data, which are more commonly qualitatively guided [ 31 ]. Unlike realist evaluation, case study methodology is not epistemologically specific: it can be used with different epistemologies [ 32 ] and with different theories, such as Normalisation Process Theory (which explores how staff work together to implement a new intervention) or the Consolidated Framework for Implementation Research (which provides a menu of constructs associated with effective implementation) [ 33 , 34 , 35 ]. Realist evaluation can be used to explore the relationship between context, mechanism and outcome, but case study differs from realist evaluation in its focus on a holistic and in-depth understanding of the relationship between an intervention and the contemporary context in which it was implemented [ 36 ]. Case study enables researchers to choose epistemologies and theories which suit the nature of the enquiry and their theoretical preferences.

Designing a process evaluation using case study

An important part of any study is the research design. Due to their varied philosophical positions, the seminal authors in the field of case study have different epistemic views as to how a case study should be conducted [ 8 , 9 ]. Stake takes an interpretative approach (interested in how people make sense of their world), and Yin has more positivistic leanings, arguing for objectivity, validity and generalisability [ 8 , 9 ].

Regardless of the philosophical background, a well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention, the trial design, the case, and the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework [ 8 , 9 , 20 , 31 , 33 ]. We now discuss these critical components in turn, with reference to two process evaluations that used case study design, the DQIP and OPAL studies [ 21 , 37 , 38 , 39 , 40 , 41 ].

The purpose of a process evaluation is to evaluate and explain the relationship between the intervention and its components, the context and the outcomes. It can help inform judgements about validity by exploring the intervention components and their relationship with one another (construct validity), the connections between intervention and outcomes (internal validity) and the relationship between intervention and context (external validity). It can also distinguish between implementation failure (where the intervention is poorly delivered) and intervention failure (where the intervention design is flawed) [ 42 , 43 ]. By using a case study to explicitly understand the relationship between context and the intervention during implementation, the process evaluation can explain the intervention effects and the potential for generalisability and optimisation into routine practice [ 44 ].

The DQIP process evaluation aimed to qualitatively explore how patients and GP practices responded to an intervention designed to reduce high-risk prescribing of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or antiplatelet agents (see Table  1 ) and quantitatively examine how change in high-risk prescribing was associated with practice characteristics and implementation processes. The OPAL process evaluation (see Table  2 ) aimed to quantitatively understand the factors which influenced the effectiveness of a pelvic floor muscle training intervention for women with urinary incontinence and qualitatively explore the participants’ experiences of treatment and adherence.

Defining the intervention and exploring the theories or assumptions underpinning the intervention design

Process evaluations should also explore the utility of the theories or assumptions underpinning intervention design [ 49 ]. Not all interventions are underpinned by a formal theory, but all are based on assumptions as to how the intervention is expected to work. These can be depicted as a logic model or theory of change [ 25 ]. Capturing how the intervention and context evolve requires the intervention and its expected mechanisms to be clearly defined at the outset [ 50 ]. Hawe and colleagues recommend defining interventions by function (what processes make the intervention work) rather than form (what is delivered) [ 51 ]. However, in some cases, it may be useful to know whether some of the components are redundant in certain contexts or whether there is a synergistic effect between all the intervention components.

The DQIP trial delivered two interventions: one was delivered to professionals with high fidelity, and the professionals then delivered the other intervention to patients by form rather than function, allowing adaptations to the local context as appropriate. The assumptions underpinning intervention delivery were prespecified in a logic model published in the process evaluation protocol [ 52 ].

Case study is well placed to challenge or reinforce the theoretical assumptions, or to redefine these based on the relationship between the intervention and context. Yin advocates the use of theoretical propositions, which direct attention to specific aspects of the study for investigation [ 8 ]; these can be based on the underlying assumptions and tested during the course of the process evaluation. In case studies, using an epistemic position more aligned with Yin can enable research questions to be designed which seek to expose patterns of unanticipated as well as expected relationships [ 9 ]. The OPAL trial was more closely aligned with Yin: the research team predefined some of their theoretical assumptions, based on how the intervention was expected to work. The relevant parts of the data analysis then drew on data to support or refute the theoretical propositions. This was particularly useful for the trial, as the prespecified theoretical propositions linked to the mechanisms of action through which the intervention was anticipated to have an effect (or not).

Tailoring to the trial design

Process evaluations need to be tailored to the trial, the intervention and the outcomes being measured [ 45 ]. For example, in a stepped wedge design (where the intervention is delivered in a phased manner), researchers should try to ensure process data are captured at relevant time points; in a two-arm or multiple-arm trial, data should be collected from the control group(s) as well as the intervention group(s). In the DQIP trial, a stepped wedge trial, at least one process evaluation case was sampled per cohort. Trials often continue to measure outcomes after delivery of the intervention has ceased, so researchers should also consider capturing ‘follow-up’ data on contextual factors, which may continue to influence the outcome measure. The OPAL trial had two active treatment arms, so process data were collected from both arms. In addition, as the trial was interested in long-term adherence, the trial and the process evaluation collected data from participants for 2 years after the intervention was initially delivered, providing 24 months of follow-up data, in line with the primary outcome for the trial.

Defining the case

Case studies can include single or multiple cases in their design. Single case studies usually sample typical or unique cases; their advantage is the depth and richness that can be achieved over a long period of time. The advantage of multiple case study design is that cases can be compared to generate a greater depth of analysis. Multiple case study sampling may be carried out in order to test for replication or contradiction [ 8 ]. Given that trials are often conducted over a number of sites, a multiple case study design is more sensible for process evaluations, as there is likely to be variation in implementation between sites. Case definition may occur at a variety of levels but is most appropriate if it reflects the trial design. For example, a case in an individual patient level trial is likely to be defined as a person/patient (e.g. a woman with urinary incontinence in the OPAL trial), whereas in a cluster trial, a case is likely to be a cluster, such as an organisation (e.g. a general practice in the DQIP trial). Of course, the process evaluation could explore cases with less distinct boundaries, such as communities or relationships; however, the clarity with which these cases are defined is important, in order to scope the nature of the data that will be generated.

Carefully sampled cases are critical to a good case study, as sampling helps inform the quality of the inferences that can be made from the data [ 53 ]. In both qualitative and quantitative research, how and how many participants to sample must be decided when planning the study. Quantitative sampling techniques generally aim to achieve a random sample. Qualitative research generally uses purposive samples to achieve data saturation, which occurs when the incoming data produce little or no new information to address the research questions. The term data saturation has evolved from theoretical saturation in conventional grounded theory studies; however, its relevance to other types of studies is contentious, as the term seems to be widely used but poorly justified [ 54 ]. Empirical evidence suggests that for in-depth interview studies, thematic saturation occurs at around 12 interviews, but typically more would be needed for a heterogeneous sample or for higher degrees of saturation [ 55 , 56 ]. Both the DQIP and OPAL case studies were large: OPAL was designed to interview each of the 40 individual cases four times, and DQIP to interview the lead DQIP general practitioner (GP) twice (to capture change over time), plus another GP and the practice manager, from each of the 10 organisational cases. Despite the plethora of mixed methods research textbooks, there is very little guidance on sampling, as discussions typically link to method (e.g. interviews) rather than paradigm (e.g. case study).

Purposive sampling can improve the generalisability of the process evaluation by sampling for greater contextual diversity. The typical or average case is often not the richest source of information. Outliers can often reveal more important insights, because they may reflect the implementation of the intervention using different processes. Cases can be selected from a number of criteria, which are not mutually exclusive, to enable a rich and detailed picture to be built across sites [ 53 ]. To avoid the Hawthorne effect, it is recommended that process evaluations sample from both intervention and control sites, which enables comparison and explanation. There is always a trade-off between breadth and depth in sampling, so it is important to note that often quantity does not mean quality and that carefully sampled cases can provide powerful illustrative examples of how the intervention worked in practice, the relationship between the intervention and context and how and why they evolved together. The qualitative components of both DQIP and OPAL process evaluations aimed for maximum variation sampling. Please see Table  1 for further information on how DQIP’s sampling frame was important for providing contextual information on processes influencing effective implementation of the intervention.

Conceptual and theoretical framework

A conceptual or theoretical framework helps to frame data collection and analysis [ 57 ]. Theories can also underpin propositions, which can be tested in the process evaluation. Process evaluations produce intervention-dependent knowledge, and theories help make the research findings more generalisable by providing a common language [ 16 ]. There are a number of mid-range theories which have been designed to be used with process evaluation [ 34 , 35 , 58 ]. The choice of the appropriate conceptual or theoretical framework is, however, dependent on the philosophical and professional background of the researcher. The two examples within this paper used our own framework for the design of process evaluations, which proposes a number of candidate processes which can be explored, for example, recruitment, delivery, response, maintenance and context [ 45 ]. This framework predates the MRC guidance on process evaluations, and both the DQIP and OPAL process evaluations were designed before the MRC guidance was published. The DQIP process evaluation explored all candidates in the framework, whereas the OPAL process evaluation selected four candidates, illustrating that process evaluations can be selective in what they explore based on the purpose, research questions and resources. Furthermore, as Kislov and colleagues argue, we also have a responsibility to critique the theoretical framework underpinning the evaluation and refine theories to advance knowledge [ 59 ].

Data collection

An important consideration is what data to collect or measure, and when. Case study methodology supports a range of data collection methods, both qualitative and quantitative, to best answer the research questions. As the aim of the case study is to gain an in-depth understanding of phenomena in context, methods are more commonly qualitative or mixed method in nature. Qualitative methods such as interviews, focus groups and observation offer rich descriptions of the setting, of the delivery of the intervention in each site and arm, and of how the intervention was perceived by the professionals delivering it and the patients receiving it. Quantitative methods can measure recruitment, fidelity and dose, and establish which characteristics are associated with adoption, delivery and effectiveness. To ensure an understanding of the complexity of the relationship between the intervention and context, the case study should rely on multiple sources of data and triangulate these to confirm and corroborate the findings [ 8 ]. Process evaluations might consider using routine data collected in the trial across all sites and additional qualitative data across carefully sampled sites for a more nuanced picture within reasonable resource constraints. Mixed methods allow researchers to ask more complex questions and collect richer data than can be collected by one method alone [ 60 ]. The use of multiple sources of data allows data triangulation, which increases a study’s internal validity and also provides a more in-depth and holistic depiction of the case [ 20 ]. For example, in the DQIP process evaluation, the quantitative component used routinely collected data from all sites participating in the trial, and cases were purposively sampled for a more in-depth qualitative exploration [ 21 , 38 , 39 ].

The timing of data collection is crucial to study design, especially within a process evaluation where data collection can potentially influence the trial outcome. Process evaluations are generally in parallel or retrospective to the trial. The advantage of a retrospective design is that the evaluation itself is less likely to influence the trial outcome. However, the disadvantages include recall bias, lack of sensitivity to nuances and an inability to iteratively explore the relationship between intervention and outcome as it develops. To capture the dynamic relationship between intervention and context, the process evaluation needs to be parallel and longitudinal to the trial. Longitudinal methodological design is rare, but it is needed to capture the dynamic nature of implementation [ 40 ]. How the intervention is delivered is likely to change over time as it interacts with context. For example, as professionals deliver the intervention, they become more familiar with it, and it becomes more embedded into systems. The OPAL process evaluation was a longitudinal, mixed methods process evaluation where the quantitative component had been predefined and built into trial data collection systems. Data collection in both the qualitative and quantitative components mirrored the trial data collection points, which were longitudinal to capture adherence and contextual changes over time.

There is a lot of attention in the recent literature towards a systems approach to understanding interventions in context, which suggests interventions are ‘events within systems’ [ 61 , 62 ]. This framing highlights the dynamic nature of context, suggesting that interventions are an attempt to change systems dynamics. This conceptualisation would suggest that the study design should collect contextual data before and after implementation to assess the effect of the intervention on the context and vice versa.

Data analysis

Designing a rigorous analysis plan is particularly important for multiple case studies, where researchers must decide whether their approach to analysis is case or variable based. Case-based analysis is the most common, and analytic strategies must be clearly articulated for both within-case and across-case analysis. A multiple case study design can consist of multiple cases, where each case is analysed at the case level, or of multiple embedded cases, where data from all the cases are pulled together for analysis at some level. For example, OPAL analysis was at the case level, but all the cases for the intervention and control arms were pulled together at the arm level for more in-depth analysis and comparison. For Yin, analytical strategies rely on theoretical propositions, but for Stake, analysis works from the data to develop theory. In OPAL and DQIP, case summaries were written to summarise the cases and detail within-case analysis. Each of the studies structured these differently based on the phenomena of interest and the analytic technique. DQIP applied an approach more akin to Stake [ 9 ], with the cases summarised around inductive themes, whereas OPAL applied a Yin-type [ 8 ] approach using theoretical propositions around which the case summaries were structured. As the data for each case had been collected through longitudinal interviews, the case summaries were able to capture changes over time. It is beyond the scope of this paper to discuss different analytic techniques; however, to ensure the holistic examination of the intervention(s) in context, it is important to clearly articulate and demonstrate how data are integrated and synthesised [ 31 ].

Conclusion

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation [ 38 ]. Case study can enable comparisons within and across intervention and control arms and enable the evolving relationship between intervention and context to be captured holistically rather than considering processes in isolation. Utilising a longitudinal design can enable the dynamic relationship between context and intervention to be captured in real time. This information is fundamental to holistically explaining what intervention was implemented, understanding how and why the intervention worked or not and informing the transferability of the intervention into routine clinical practice.

Case study designs are not prescriptive, but process evaluations using case study should consider the purpose, trial design, the theories or assumptions underpinning the intervention, and the conceptual and theoretical frameworks informing the evaluation. We have discussed each of these considerations in turn, providing a comprehensive overview of issues for process evaluations using a case study design. There is no single or best way to conduct a process evaluation or a case study, but researchers need to make informed choices about the process evaluation design. Although this paper focuses on process evaluations, we recognise that case study design could also be useful during intervention development and feasibility trials. Elements of this paper are also applicable to other study designs involving trials.

Availability of data and materials

No data and materials were used.

Abbreviations

DQIP: Data-driven Quality Improvement in Primary Care

MRC: Medical Research Council

NSAIDs: Nonsteroidal anti-inflammatory drugs

OPAL: Optimizing Pelvic Floor Muscle Exercises to Achieve Long-term benefits

References

Blencowe NB. Systematic review of intervention design and delivery in pragmatic and explanatory surgical randomized clinical trials. Br J Surg. 2015;102:1037–47.


Dixon-Woods M. The problem of context in quality improvement. In: Perspectives on context. The Health Foundation; 2014.

Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13(1):95.


Grant A, Sullivan F, Dowell J. An ethnographic exploration of influences on prescribing in general practice: why is there variation in prescribing practices? Implement Sci. 2013;8(1):72.

Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med. 2007;49(3):355–63.


Ward V, House AF, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy. 2009;14(3):156–64.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

Yin R. Case study research and applications: design and methods. Los Angeles: Sage Publications Inc; 2018.


Stake R. The art of case study research. Thousand Oaks, California: Sage Publications Ltd; 1995.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, et al. Process evaluation of complex interventions: Medical Research Council guidance. Br Med J. 2015;350.

Hawe P. Minimal, negligible and negligent interventions. Soc Sci Med. 2015;138:265–8.

Moore GF, Evans RE, Hawkins J, Littlecott H, Melendez-Torres GJ, Bonell C, Murphy S. From complex social interventions to interventions in complex social systems: future directions and unresolved questions for intervention development and evaluation. Evaluation. 2018;25(1):23–45.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, Greaves F, Harper L, Hawe P, Moore L, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4.

Moore G, Cambon L, Michie S, Arwidson P, Ninot G, Ferron C, Potvin L, Kellou N, Charlesworth J, Alla F, et al. Population health intervention research: the place of theories. Trials. 2019;20(1):285.

Kislov R. Engaging with theory: from theoretically informed to theoretically informative improvement research. BMJ Qual Saf. 2019;28(3):177–9.

Boulton R, Sandall J, Sevdalis N. The cultural politics of ‘Implementation Science’. J Med Humanit. 2020;41(3):379–94. https://doi.org/10.1007/s10912-020-09607-9 .

Cheng KKF, Metcalfe A. Qualitative methods and process evaluation in clinical trials context: where to head to? Int J Qual Methods. 2018;17(1):1609406918774212.


Richards DA, Bazeley P, Borglin G, Craig P, Emsley R, Frost J, Hill J, Horwood J, Hutchings HA, Jinks C, et al. Integrating quantitative and qualitative data and findings when undertaking randomised controlled trials. BMJ Open. 2019;9(11):e032081.

Thomas G. How to do your case study. 2nd ed. London: Sage Publications Ltd; 2016.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: case study evaluation of adoption and maintenance of a complex intervention to reduce high-risk primary care prescribing. BMJ Open. 2017;7(3).

Pfadenhauer L, Rohwer A, Burns J, Booth A, Lysdahl KB, Hofmann B, Gerhardus A, Mozygemba K, Tummers M, Wahlster P, et al. Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the Context and Implementation of Complex Interventions (CICI) framework: Integrate-HTA; 2016.

Bate P, Robert G, Fulop N, Ovretveit J, Dixon-Woods M. Perspectives on context. London: The Health Foundation; 2014.

Ovretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011;20.

Medical Research Council: Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance. 2015.

May CR, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11(1):141.

Bate P. Context is everything. In: Perspectives on context. London: The Health Foundation; 2014.

Horton TJ, Illingworth JH, Warburton WHP. Overcoming challenges in codifying and replicating complex health care interventions. Health Aff. 2018;37(2):191–7.

O'Connor AM, Tugwell P, Wells GA, Elmslie T, Jolly E, Hollingworth G, McPherson R, Bunn H, Graham I, Drake E. A decision aid for women considering hormone therapy after menopause: decision support framework and evaluation. Patient Educ Couns. 1998;33:267–79.

Creswell J, Poth C. Qualitative inquiry and research design. 4th ed. Thousand Oaks, California: Sage Publications; 2018.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Takahashi ARW, Araujo L. Case study research: opening up research opportunities. RAUSP Manage J. 2020;55(1):100–11.

Tight M. Understanding case study research, small-scale research with meaning. London: Sage Publications; 2017.

May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalisation process theory. Sociology. 2009;43:535.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice. A consolidated framework for advancing implementation science. Implement Sci. 2009;4.

Pawson R, Tilley N. Realist evaluation. London: Sage; 1997.

Dreischulte T, Donnan P, Grant A, Hapca A, McCowan C, Guthrie B. Safer prescribing - a trial of education, informatics & financial incentives. N Engl J Med. 2016;374:1053–64.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: active and less active ingredients of a multi-component complex intervention to reduce high-risk primary care prescribing. Implement Sci. 2017;12(1):4.

Dreischulte T, Grant A, Hapca A, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: quantitative examination of variation between practices in recruitment, implementation and effectiveness. BMJ Open. 2018;8(1):e017133.

Grant A, Dean S, Hay-Smith J, Hagen S, McClurg D, Taylor A, Kovandzic M, Bugge C. Effectiveness and cost-effectiveness randomised controlled trial of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL (Optimising Pelvic Floor Exercises to Achieve Long-term benefits) trial mixed methods longitudinal qualitative case study and process evaluation. BMJ Open. 2019;9(2):e024152.

Hagen S, McClurg D, Bugge C, Hay-Smith J, Dean SG, Elders A, Glazener C, Abdel-fattah M, Agur WI, Booth J, et al. Effectiveness and cost-effectiveness of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL randomised trial. BMJ Open. 2019;9(2):e024153.

Steckler A, Linnan L. Process evaluation for public health interventions and research. San Francisco: Jossey-Bass; 2002.

Durlak JA. Why programme implementation is so important. J Prev Intervent Commun. 1998;17(2):5–18.

Bonell C, Oakley A, Hargreaves J, Strange V, Rees R. Assessment of generalisability in trials of health interventions: suggested framework and systematic review. Br Med J. 2006;333(7563):346–9.


Grant A, Treweek S, Dreischulte T, Foy R, Guthrie B. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials. 2013;14(1):15.

Yin R. Case study research: design and methods. London: Sage Publications; 2003.

Bugge C, Hay-Smith J, Grant A, Taylor A, Hagen S, McClurg D, Dean S. A 24 month longitudinal qualitative study of women’s experience of electromyography biofeedback pelvic floor muscle training (PFMT) and PFMT alone for urinary incontinence: adherence, outcome and context. ICS Gothenburg 2019. https://www.ics.org/2019/abstract/473 . Accessed 10 Sep 2020.

Hagen S, Elders A, Stratton S, Sergenson N, Bugge C, Dean S, Hay-Smith J, Kilonzo M, Dimitrova M, Abdel-Fattah M, Agur W, Booth J, Glazener C, Guerrero K, McDonald A, Norrie J, Williams LR, McClurg D. Effectiveness of pelvic floor muscle training with and without electromyographic biofeedback for urinary incontinence in women: multicentre randomised controlled trial. BMJ. 2020;371:m3719. https://doi.org/10.1136/bmj.m3719 .

Cook TD. Emergent principles for the design, implementation, and analysis of cluster-based experiments in social science. Ann Am Acad Pol Soc Sci. 2005;599(1):176–98.

Hoffmann T, Glasziou P, Boutron I, Milne R, Perera R, Moher D. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. Br Med J. 2014;348.

Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? Br Med J. 2004;328(7455):1561–3.

Grant A, Dreischulte T, Treweek S, Guthrie B. Study protocol of a mixed-methods evaluation of a cluster randomised trial to improve the safety of NSAID and antiplatelet prescribing: Data-driven Quality Improvement in Primary Care. Trials. 2012;13:154.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12(2):219–45.

Thorne S. The great saturation debate: what the “S word” means and doesn’t mean in qualitative research reporting. Can J Nurs Res. 2020;52(1):3–5.

Guest G, Bunce A, Johnson L. How many interviews are enough?: an experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076.


Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.

Rycroft-Malone J. The PARIHS framework: a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297–304.

Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):103.

Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. Thousand Oaks: Sage Publications Ltd; 2007.

Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

Craig P, Ruggiero E, Frohlich KL, Mykhalovskiy E, White M. Taking account of context in population health intervention research: guidance for producers, users and funders of research: National Institute for Health Research; 2018. https://www.ncbi.nlm.nih.gov/books/NBK498645/pdf/Bookshelf_NBK498645.pdf .


Acknowledgements

We would like to thank Professor Shaun Treweek for the discussions about context in trials.

No funding was received for this work.

Author information

Authors and affiliations

School of Nursing, Midwifery and Paramedic Practice, Robert Gordon University, Garthdee Road, Aberdeen, AB10 7QB, UK

Aileen Grant

Faculty of Health Sciences and Sport, University of Stirling, Pathfoot Building, Stirling, FK9 4LA, UK

Carol Bugge

Department of Surgery and Cancer, Imperial College London, Charing Cross Campus, London, W6 8RP, UK

Mary Wells

Contributions

AG, CB and MW conceptualised the study. AG wrote the paper. CB and MW commented on the drafts. All authors have approved the final manuscript.

Corresponding author

Correspondence to Aileen Grant .

Ethics declarations

Ethics approval and consent to participate

Ethics approval and consent to participate is not applicable, as no participants were included.

Consent for publication

Consent for publication is not required as no participants were included.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Grant, A., Bugge, C. & Wells, M. Designing process evaluations using case study to explore the context of complex interventions evaluated in trials. Trials 21 , 982 (2020). https://doi.org/10.1186/s13063-020-04880-4


Received : 09 April 2020

Accepted : 06 November 2020

Published : 27 November 2020

DOI : https://doi.org/10.1186/s13063-020-04880-4


Keywords

  • Process evaluation
  • Case study design

ISSN: 1745-6215


  • Open access
  • Published: 10 November 2020

Case study research for better evaluations of complex interventions: rationale and challenges

  • Sara Paparini   ORCID: orcid.org/0000-0002-1909-2481 1 ,
  • Judith Green 2 ,
  • Chrysanthi Papoutsi 1 ,
  • Jamie Murdoch 3 ,
  • Mark Petticrew 4 ,
  • Trish Greenhalgh 1 ,
  • Benjamin Hanckel 5 &
  • Sara Shaw 1  

BMC Medicine volume  18 , Article number:  301 ( 2020 ) Cite this article

17k Accesses

40 Citations

35 Altmetric

Metrics details

The need for better methods for evaluation in health research has been widely recognised. The ‘complexity turn’ has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might link interventions and outcomes. We argue that case study research—currently denigrated as poor evidence—is an under-utilised resource for not only providing evidence about context and transferability, but also for helping strengthen causal inferences when pathways between intervention and effects are likely to be non-linear.

Case study research, as an overall approach, is based on in-depth explorations of complex phenomena in their natural, or real-life, settings. Empirical case studies typically enable dynamic understanding of complex challenges and provide evidence about causal mechanisms and the necessary and sufficient conditions (contexts) for intervention implementation and effects. This is essential evidence not just for researchers concerned about internal and external validity, but also research users in policy and practice who need to know what the likely effects of complex programmes or interventions will be in their settings. The health sciences have much to learn from scholarship on case study methodology in the social sciences. However, there are multiple challenges in fully exploiting the potential learning from case study research. First are misconceptions that case study research can only provide exploratory or descriptive evidence. Second, there is little consensus about what a case study is, and considerable diversity in how empirical case studies are conducted and reported. Finally, as case study researchers typically (and appropriately) focus on thick description (that captures contextual detail), it can be challenging to identify the key messages related to intervention evaluation from case study reports.

Whilst the diversity of published case studies in health services and public health research is rich and productive, we recommend further clarity and specific methodological guidance for those reporting case study research for evaluation audiences.


The need for methodological development to address the most urgent challenges in health research has been well-documented. Many of the most pressing questions for public health research, where the focus is on system-level determinants [ 1 , 2 ], and for health services research, where provisions typically vary across sites and are provided through interlocking networks of services [ 3 ], require methodological approaches that can attend to complexity. The need for methodological advance has arisen, in part, as a result of the diminishing returns from randomised controlled trials (RCTs) where they have been used to answer questions about the effects of interventions in complex systems [ 4 , 5 , 6 ]. In conditions of complexity, there is limited value in maintaining the current orientation to experimental trial designs in the health sciences as providing ‘gold standard’ evidence of effect.

There are increasing calls for methodological pluralism [ 7 , 8 ], with the recognition that complex intervention and context are not easily or usefully separated (as is often the situation when using trial design), and that system interruptions may have effects that are not reducible to linear causal pathways between intervention and outcome. These calls are reflected in a shifting and contested discourse of trial design, seen with the emergence of realist [ 9 ], adaptive and hybrid (types 1, 2 and 3) [ 10 , 11 ] trials that blend studies of effectiveness with a close consideration of the contexts of implementation. Similarly, process evaluation has now become a core component of complex healthcare intervention trials, reflected in MRC guidance on how to explore implementation, causal mechanisms and context [ 12 ].

Evidence about the context of an intervention is crucial for questions of external validity. As Woolcock [ 4 ] notes, even if RCT designs are accepted as robust for maximising internal validity, questions of transferability (how well the intervention works in different contexts) and generalisability (how well the intervention can be scaled up) remain unanswered [ 5 , 13 ]. For research evidence to have impact on policy and systems organisation, and thus to improve population and patient health, there is an urgent need for better methods for strengthening external validity, including a better understanding of the relationship between intervention and context [ 14 ].

Policymakers, healthcare commissioners and other research users require credible evidence of relevance to their settings and populations [ 15 ], to perform what Rosengarten and Savransky [ 16 ] call ‘careful abstraction’ to the locales that matter for them. They also require robust evidence for understanding complex causal pathways. Case study research, currently under-utilised in public health and health services evaluation, can offer considerable potential for strengthening faith in both external and internal validity. For example, in an empirical case study of how the policy of free bus travel had specific health effects in London, UK, a quasi-experimental evaluation (led by JG) identified how important aspects of context (a good public transport system) and intervention (that it was universal) were necessary conditions for the observed effects, thus providing useful, actionable evidence for decision-makers in other contexts [ 17 ].

The overall approach of case study research is based on the in-depth exploration of complex phenomena in their natural, or ‘real-life’, settings. Empirical case studies typically enable dynamic understanding of complex challenges rather than restricting the focus on narrow problem delineations and simple fixes. Case study research is a diverse and somewhat contested field, with multiple definitions and perspectives grounded in different ways of viewing the world, and involving different combinations of methods. In this paper, we raise awareness of such plurality and highlight the contribution that case study research can make to the evaluation of complex system-level interventions. We review some of the challenges in exploiting the current evidence base from empirical case studies and conclude by recommending that further guidance and minimum reporting criteria for evaluation using case studies, appropriate for audiences in the health sciences, can enhance the take-up of evidence from case study research.

Case study research offers evidence about context, causal inference in complex systems and implementation

Well-conducted and described empirical case studies provide evidence on context, complexity and mechanisms for understanding how, where and why interventions have their observed effects. Recognition of the importance of context for understanding the relationships between interventions and outcomes is hardly new. In 1943, Canguilhem berated an over-reliance on experimental designs for determining universal physiological laws: ‘As if one could determine a phenomenon’s essence apart from its conditions! As if conditions were a mask or frame which changed neither the face nor the picture!’ ([ 18 ] p126). More recently, a concern with context has been expressed in health systems and public health research as part of what has been called the ‘complexity turn’ [ 1 ]: a recognition that many of the most enduring challenges for developing an evidence base require a consideration of system-level effects [ 1 ] and the conceptualisation of interventions as interruptions in systems [ 19 ].

The case study approach is widely recognised as offering an invaluable resource for understanding the dynamic and evolving influence of context on complex, system-level interventions [ 20 , 21 , 22 , 23 ]. Empirically, case studies can directly inform assessments of where, when, how and for whom interventions might be successfully implemented, by helping to specify the necessary and sufficient conditions under which interventions might have effects and to consolidate learning on how interdependencies, emergence and unpredictability can be managed to achieve and sustain desired effects. Case study research has the potential to address four objectives for improving research and reporting of context recently set out by guidance on taking account of context in population health research [ 24 ], that is to (1) improve the appropriateness of intervention development for specific contexts, (2) improve understanding of ‘how’ interventions work, (3) better understand how and why impacts vary across contexts and (4) ensure reports of intervention studies are most useful for decision-makers and researchers.

However, evaluations of complex healthcare interventions have arguably not exploited the full potential of case study research and can learn much from other disciplines. For evaluative research, exploratory case studies have had a traditional role of providing data on ‘process’, or initial ‘hypothesis-generating’ scoping, but might also have an increasing salience for explanatory aims. Across the social and political sciences, different kinds of case studies are undertaken to meet diverse aims (description, exploration or explanation) and across different scales (from small N qualitative studies that aim to elucidate processes, or provide thick description, to more systematic techniques designed for medium-to-large N cases).

Case studies with explanatory aims vary in terms of their positioning within mixed-methods projects, with designs including (but not restricted to) (1) single N of 1 studies of interventions in specific contexts, where the overall design is a case study that may incorporate one or more (randomised or not) comparisons over time and between variables within the case; (2) a series of cases conducted or synthesised to provide explanation from variations between cases; and (3) case studies of particular settings within RCT or quasi-experimental designs to explore variation in effects or implementation.

Detailed qualitative research (typically done as ‘case studies’ within process evaluations) provides evidence for the plausibility of mechanisms [ 25 ], offering theoretical generalisations for how interventions may function under different conditions. Although RCT designs reduce many threats to internal validity, the mechanisms of effect remain opaque, particularly when the causal pathways between ‘intervention’ and ‘effect’ are long and potentially non-linear: case study research has a more fundamental role here, in providing detailed observational evidence for causal claims [ 26 ] as well as producing a rich, nuanced picture of tensions and multiple perspectives [ 8 ].

Longitudinal or cross-case analysis may be best suited for evidence generation in system-level evaluative research. Turner [ 27 ], for instance, reflecting on the complex processes in major system change, has argued for the need for methods that integrate learning across cases, to develop theoretical knowledge that would enable inferences beyond the single case, and to develop generalisable theory about organisational and structural change in health systems. Qualitative Comparative Analysis (QCA) [ 28 ] is one such formal method for deriving causal claims, using set theory mathematics to integrate data from empirical case studies to answer questions about the configurations of causal pathways linking conditions to outcomes [ 29 , 30 ].
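To make the set-theoretic logic behind QCA concrete, the sketch below computes Ragin-style fuzzy-set consistency for the claim "condition X is sufficient for outcome Y" across a handful of cases. All data, variable names (`good_transport`, `health_effect`) and the four-case scenario are invented for illustration, loosely echoing the bus-travel example above; this is a minimal sketch of one QCA measure, not an implementation of the full QCA workflow:

```python
def sufficiency_consistency(condition, outcome):
    """Fuzzy-set consistency of 'condition is sufficient for outcome'.

    Computed as sum over cases of min(x, y), divided by the sum of x
    (Ragin's consistency measure). Values near 1.0 indicate that the
    condition set is (almost) a subset of the outcome set, i.e. cases
    with the condition consistently show the outcome.
    """
    numerator = sum(min(x, y) for x, y in zip(condition, outcome))
    denominator = sum(condition)
    return numerator / denominator


# Hypothetical membership scores for four cases (0 = fully out, 1 = fully in)
good_transport = [1.0, 0.8, 0.6, 0.2]  # condition: supportive context present
health_effect = [0.9, 0.8, 0.7, 0.9]   # outcome: intervention effect observed

print(round(sufficiency_consistency(good_transport, health_effect), 3))  # → 0.962
```

In a real analysis, consistency scores like this would be computed for each configuration of conditions in a truth table, then minimised to derive the causal pathways linking conditions to outcomes.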

Nonetheless, the single N case study, too, provides opportunities for theoretical development [ 31 ], and theoretical generalisation or analytical refinement [ 32 ]. How ‘the case’ and ‘context’ are conceptualised is crucial here. Findings from the single case may seem to be confined to its intrinsic particularities in a specific and distinct context [ 33 ]. However, if such context is viewed as exemplifying wider social and political forces, the single case can be ‘telling’, rather than ‘typical’, and offer insight into a wider issue [ 34 ]. Internal comparisons within the case can offer rich possibilities for logical inferences about causation [ 17 ]. Further, case studies of any size can be used for theory testing through refutation [ 22 ]. The potential lies, then, in utilising the strengths and plurality of case study to support theory-driven research within different methodological paradigms.

Evaluation research in health has much to learn from a range of social sciences where case study methodology has been used to develop various kinds of causal inference. For instance, Gerring [ 35 ] expands on the within-case variations utilised to make causal claims. For Gerring [ 35 ], case studies come into their own with regard to invariant or strong causal claims (such as X is a necessary and/or sufficient condition for Y) rather than for probabilistic causal claims. For the latter (where experimental methods might have an advantage in estimating effect sizes), case studies offer evidence on mechanisms: from observations of X affecting Y, from process tracing or from pattern matching. Case studies also support the study of emergent causation, that is, the multiple interacting properties that account for particular and unexpected outcomes in complex systems, such as in healthcare [ 8 ].

Finally, efficacy (or beliefs about efficacy) is not the only contributor to intervention uptake, with a range of organisational and policy contingencies affecting whether an intervention is likely to be rolled out in practice. Case study research is, therefore, invaluable for learning about contextual contingencies and identifying the conditions necessary for interventions to become normalised (i.e. implemented routinely) in practice [ 36 ].

The challenges in exploiting evidence from case study research

At present, there are significant challenges in exploiting the benefits of case study research in evaluative health research, which relate to status, definition and reporting. Case study research has been marginalised at the bottom of an evidence hierarchy, seen to offer little by way of explanatory power, if nonetheless useful for adding descriptive data on process or providing useful illustrations for policymakers [ 37 ]. This is an opportune moment to revisit this low status. As health researchers are increasingly charged with evaluating ‘natural experiments’—the use of face masks in the response to the COVID-19 pandemic being a recent example [ 38 ]—rather than interventions that take place in settings that can be controlled, research approaches that strengthen causal inference without requiring randomisation become more relevant.

A second challenge for improving the use of case study evidence in evaluative health research is that, as we have seen, what is meant by ‘case study’ varies widely, not only across but also within disciplines. There is indeed little consensus amongst methodologists as to how to define ‘a case study’. Definitions focus, variously, on small sample size or lack of control over the intervention (e.g. [ 39 ] p194), on in-depth study and context [ 40 , 41 ], on the logic of inference used [ 35 ] or on distinct research strategies which incorporate a number of methods to address questions of ‘how’ and ‘why’ [ 42 ]. Moreover, definitions developed for specific disciplines do not capture the range of ways in which case study research is carried out across disciplines. Multiple definitions of case study reflect the richness and diversity of the approach. However, evidence suggests that a lack of consensus across methodologists results in some of the limitations of published reports of empirical case studies [ 43 , 44 ]. Hyett and colleagues [ 43 ], for instance, reviewing reports in qualitative journals, found little match between methodological definitions of case study research and how authors used the term.

This raises the third challenge we identify that case study reports are typically not written in ways that are accessible or useful for the evaluation research community and policymakers. Case studies may not appear in journals widely read by those in the health sciences, either because space constraints preclude the reporting of rich, thick descriptions, or because of the reported lack of willingness of some biomedical journals to publish research that uses qualitative methods [ 45 ], signalling the persistence of the aforementioned evidence hierarchy. Where they do, however, the term ‘case study’ is used to indicate, interchangeably, a qualitative study, an N of 1 sample, or a multi-method, in-depth analysis of one example from a population of phenomena. Definitions of what constitutes the ‘case’ are frequently lacking and appear to be used as a synonym for the settings in which the research is conducted. Despite offering insights for evaluation, the primary aims may not have been evaluative, so the implications may not be explicitly drawn out. Indeed, some case study reports might properly be aiming for thick description without necessarily seeking to inform about context or causality.

Acknowledging plurality and developing guidance

We recognise that definitional and methodological plurality is not only inevitable, but also a necessary and creative reflection of the very different epistemological and disciplinary origins of health researchers, and of the aims they have in doing and reporting case study research. Indeed, to provide some clarity, Thomas [ 46 ] has suggested a subject/purpose/approach/process typology for classifying case studies by aim (e.g. evaluative or exploratory), sample rationale and selection, and methods of data generation. We also recognise that the diversity of methods used in case study research, and the necessary focus on narrative reporting, does not lend itself to the straightforward development of formal quality or reporting criteria.

Existing checklists for reporting case study research from the social sciences (for example, Lincoln and Guba’s [ 47 ] and Stake’s [ 33 ]) are primarily orientated to the quality of the narrative produced, and the extent to which it encapsulates thick description, rather than to the more pragmatic question of implications for intervention effects. Those designed for clinical settings, such as the CARE (CAse REports) guidelines, provide specific reporting guidance for medical case reports about single patients or small groups of patients [ 48 ], not for case study research.

The Design of Case Study Research in Health Care (DESCARTE) model [ 44 ] suggests a series of questions to be asked of a case study researcher (including clarity about the philosophy underpinning their research), study design (with a focus on case definition) and analysis (to improve process). The model resembles toolkits for enhancing the quality and robustness of qualitative and mixed-methods research reporting, and it is usefully open-ended and non-prescriptive. However, even if it does include some reflections on context, the model does not fully address aspects of context, logic and causal inference that are perhaps most relevant for evaluative research in health.

Hence, for evaluative research that aims to report empirical findings in ways that are pragmatically useful for health policy and practice, this may be an opportune time to consider how best to navigate plurality around what is (minimally) important to report when publishing empirical case studies, especially with regard to the complex relationships between context and interventions, information that case study research is well placed to provide.

The conventional scientific quest for certainty, predictability and linear causality (maximised in RCT designs) has to be augmented by the study of uncertainty, unpredictability and emergent causality [ 8 ] in complex systems. This will require methodological pluralism, and openness to broadening the evidence base to better understand both causality in, and the transferability of, system change interventions [ 14 , 20 , 23 , 25 ]. Case study research evidence is essential, yet it is currently under-exploited in the health sciences. If evaluative health research is to move beyond the current impasse on methods for understanding interventions as interruptions in complex systems, we need to consider in more detail how researchers can conduct and report empirical case studies that do aim to elucidate the contextual factors which interact with interventions to produce particular effects. To this end, supported by the UK’s Medical Research Council, we are embracing the challenge to develop guidance for case study researchers studying complex interventions. Following a meta-narrative review of the literature, we are planning a Delphi study to inform guidance that will, at minimum, cover the value of case study research for evaluating the interrelationship between context and complex system-level interventions; approaches to situating and defining ‘the case’, and to generalising from case studies; and specific guidance on conducting, analysing and reporting case study research. Our hope is that such guidance can support researchers evaluating interventions in complex systems to better exploit the diversity and richness of case study research.

Availability of data and materials

Not applicable (this article is based on existing academic publications).

Abbreviations

QCA: Qualitative comparative analysis

QED: Quasi-experimental design

RCT: Randomised controlled trial

Diez Roux AV. Complex systems thinking and current impasses in health disparities research. Am J Public Health. 2011;101(9):1627–34.


Ogilvie D, Mitchell R, Mutrie N, Petticrew M, Platt S. Evaluating health effects of transport interventions: methodologic case study. Am J Prev Med. 2006;31:118–26.

Walshe C. The evaluation of complex interventions in palliative care: an exploration of the potential of case study research strategies. Palliat Med. 2011;25(8):774–81.

Woolcock M. Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation. 2013;19:229–48.

Cartwright N. Are RCTs the gold standard? BioSocieties. 2007;2(1):11–20.

Deaton A, Cartwright N. Understanding and misunderstanding randomized controlled trials. Soc Sci Med. 2018;210:2–21.

Salway S, Green J. Towards a critical complex systems approach to public health. Crit Public Health. 2017;27(5):523–4.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Bonell C, Warren E, Fletcher A. Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al. Trials. 2016;17:478.

Pallmann P, Bedding AW, Choodari-Oskooei B. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29.

Curran G, Bauer M, Mittman B, Pyne J, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26. https://doi.org/10.1097/MLR.0b013e3182408812 .

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. Available from: https://www.bmj.com/content/350/bmj.h1258

Evans RE, Craig P, Hoddinott P, Littlecott H, Moore L, Murphy S, et al. When and how do ‘effective’ interventions need to be adapted and/or re-evaluated in new contexts? The need for guidance. J Epidemiol Community Health. 2019;73(6):481–2.

Shoveller J. A critical examination of representations of context within research on population health interventions. Crit Public Health. 2016;26(5):487–500.

Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials. 2009;10(1):37.

Rosengarten M, Savransky M. A careful biomedicine? Generalization and abstraction in RCTs. Crit Public Health. 2019;29(2):181–91.

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406.

Canguilhem G. The normal and the pathological. New York: Zone Books; 1991 (first published 1949).


Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

King G, Keohane RO, Verba S. Designing social inquiry: scientific inference in qualitative research. Princeton: Princeton University Press; 1994.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

Yin R. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34(5 Pt 2):1209.


Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016 [cited 2020 Jun 30];4(16). Available from: https://www.journalslibrary.nihr.ac.uk/hsdr/hsdr04160#/abstract .

Craig P, Di Ruggiero E, Frohlich KL, Mykhalovskiy E, White M, et al. Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton: NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32.

Mahoney J. Strategies of causal inference in small-N analysis. Sociol Methods Res. 2000;4:387–424.

Turner S. Major system change: a management and organisational research perspective. In: Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016;4(16). https://doi.org/10.3310/hsdr04160

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225.

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis. Cambridge: Cambridge University Press; 2012.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12:219–45.

Tsoukas H. Craving for generality and small-N studies: a Wittgensteinian approach towards the epistemology of the particular in organization and management studies. Sage Handb Organ Res Methods. 2009:285–301.

Stake RE. The art of case study research. London: Sage Publications Ltd; 1995.

Mitchell JC. Typicality and the case study. In: Ellen RF, editor. Ethnographic research: a guide to general conduct. London: Academic Press; 1984. p. 238–41.

Gerring J. What is a case study and what is it good for? Am Polit Sci Rev. 2004;98(2):341–54.

May C, Mort M, Williams T, Mair F, Gask L. Health technology assessment in its local contexts: studies of telehealthcare. Soc Sci Med. 2003;57:697–710.

McGill E. Trading quality for relevance: non-health decision-makers’ use of evidence on the social determinants of health. BMJ Open. 2015;5(4):e007053.

Greenhalgh T. We can’t be 100% sure face masks work – but that shouldn’t stop us wearing them. The Guardian. 2020. Available from: https://www.theguardian.com/commentisfree/2020/jun/05/face-masks-coronavirus

Hammersley M. So, what are case studies? In: What’s wrong with ethnography? New York: Routledge; 1992.

Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Med Res Methodol. 2011;11(1):100.

Luck L, Jackson D, Usher K. Case study: a bridge across the paradigms. Nurs Inq. 2006;13(2):103–9.

Yin RK. Case study research and applications: design and methods: Sage; 2017.

Hyett N, Kenny A, Dickson-Swift V. Methodology or method? A critical review of qualitative case study reports. Int J Qual Stud Health Well-being. 2014;9:23606.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;352:i563.

Thomas G. A typology for the case study in social science following a review of definition, discourse, and structure. Qual Inq. 2011;17(6):511–21.

Lincoln YS, Guba EG. Judging the quality of case study reports. Int J Qual Stud Educ. 1990;3(1):53–9.

Riley DS, Barber MS, Kienle GS, Aronson JK, Schoen-Angerer T, Tugwell P, et al. CARE guidelines for case reports: explanation and elaboration document. J Clin Epidemiol. 2017;89:218–35.


Acknowledgements

Not applicable

Funding

This work was funded by the Medical Research Council - MRC Award MR/S014632/1 HCS: Case study, Context and Complex interventions (TRIPLE C). SP was additionally funded by the University of Oxford's Higher Education Innovation Fund (HEIF).

Author information

Authors and affiliations

Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Sara Paparini, Chrysanthi Papoutsi, Trish Greenhalgh & Sara Shaw

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green

School of Health Sciences, University of East Anglia, Norwich, UK

Jamie Murdoch

Public Health, Environments and Society, London School of Hygiene & Tropical Medicin, London, UK

Mark Petticrew

Institute for Culture and Society, Western Sydney University, Penrith, Australia

Benjamin Hanckel


Contributions

JG, MP, SP, JM, TG, CP and SS drafted the initial paper; all authors contributed to the drafting of the final version, and read and approved the final manuscript.

Corresponding author

Correspondence to Sara Paparini .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Paparini, S., Green, J., Papoutsi, C. et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med 18 , 301 (2020). https://doi.org/10.1186/s12916-020-01777-6

Download citation

Received : 03 July 2020

Accepted : 07 September 2020

Published : 10 November 2020

DOI : https://doi.org/10.1186/s12916-020-01777-6


Keywords

  • Qualitative
  • Case studies
  • Mixed-method
  • Public health
  • Health services research
  • Interventions

BMC Medicine

ISSN: 1741-7015




What Is a Case Study? | Definition, Examples & Methods

Published on May 8, 2019 by Shona McCombes . Revised on November 20, 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organization, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating and understanding different aspects of a research problem.

Table of contents

  • When to do a case study
  • Step 1: Select a case
  • Step 2: Build a theoretical framework
  • Step 3: Collect your data
  • Step 4: Describe and analyze the case
  • Other interesting articles

When to do a case study

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation. They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.


Step 1: Select a case

Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Tip: If your research is more practical in nature and aims to simultaneously investigate an issue as you solve it, consider conducting action research instead.

Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.

Example of an outlying case study: In the 1960s, the town of Roseto, Pennsylvania, was discovered to have extremely low rates of heart disease compared to the US average. It became an important case study for understanding previously neglected causes of heart disease.

However, you can also choose a more common or representative case to exemplify a particular category, experience or phenomenon.

Example of a representative case study: In the 1920s, two sociologists used Muncie, Indiana, as a case study of a typical American city that supposedly exemplified the changing culture of the US at the time.

Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework. This means identifying key concepts and theories to guide your analysis and interpretation.

Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.

Example of a mixed-methods case study: For a case study of a wind farm development in a rural area, you could collect quantitative data on employment rates and business revenue, collect qualitative data on local people’s perceptions and experiences, and analyze local and national media coverage of the development.

The aim is to gain as thorough an understanding as possible of the case and its context.

Step 4: Describe and analyze the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results and discussion.

Others are written in a more narrative style, aiming to explore the case from various angles and analyze its meanings and implications (for example, by using textual analysis or discourse analysis ).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Cite this Scribbr article


McCombes, S. (2023, November 20). What Is a Case Study? | Definition, Examples & Methods. Scribbr. Retrieved March 30, 2024, from https://www.scribbr.com/methodology/case-study/


15.7 Evaluation: Presentation and Analysis of Case Study

Learning outcomes

By the end of this section, you will be able to:

  • Revise writing to follow the genre conventions of case studies.
  • Evaluate the effectiveness and quality of a case study report.

Case studies follow a structure of background and context, methods, findings, and analysis. Body paragraphs should have main points and concrete details. In addition, case studies are written in formal language with precise wording and with a specific purpose and audience (generally other professionals in the field) in mind. Case studies also adhere to the conventions of the discipline’s formatting guide (APA Documentation and Format in this study). Compare your case study with the following rubric as a final check.


This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Authors: Michelle Bachelor Robinson, Maria Jerskey, featuring Toby Fulwiler
  • Publisher/website: OpenStax
  • Book title: Writing Guide with Handbook
  • Publication date: Dec 21, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Section URL: https://openstax.org/books/writing-guide/pages/15-7-evaluation-presentation-and-analysis-of-case-study

© Dec 19, 2023 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


Using case studies to do program evaluation

  • Using case studies to do program evaluation (PDF, 79.49 KB)

This paper, authored by Edith D. Balbach for the California Department of Health Services, is designed to help evaluators decide whether to use a case study evaluation approach.

It also offers guidance on how to conduct a case study evaluation.

This resource was suggested to BetterEvaluation by Benita Williams.

  • Using a Case Study as an Evaluation Tool 3
  • When to Use a Case Study 4
  • How to Do a Case Study 6
  • Unit Selection 6
  • Data Collection 7
  • Data Analysis and Interpretation 12

Balbach, E. D., California Department of Health Services. (1999). Using case studies to do program evaluation. Retrieved from: http://www.case.edu/affil/healthpromotion/ProgramEvaluation.pdf


Case Study Evaluation: Past, Present and Future Challenges: Volume 15

Table of contents

  • Case Study Evaluation: Past, Present and Future Challenges
  • Advances in Program Evaluation
  • Copyright Page
  • List of Contributors
  • Introduction
  • Case Study, Methodology and Educational Evaluation: A Personal View

This chapter gives one version of the recent history of evaluation case study. It looks back over the emergence of case study as a sociological method, developed in the early years of the 20th century and celebrated and elaborated by the Chicago School of urban sociology at the University of Chicago throughout the 1920s and 1930s. Some of the basic methods, including constant comparison, were generated at that time. Only partly influenced by this methodological movement, an alliance between an Illinois-based team in the United States and a team at the University of East Anglia in the United Kingdom recast the case method as a key tool for the evaluation of social and educational programmes.

Letters from a Headmaster ☆ Originally published in Simons, H. (Ed.) (1980). Towards a Science of the Singular: Essays about Case Study in Educational Research and Evaluation. Occasional Papers No. 10. Norwich, UK: Centre for Applied Research, University of East Anglia.

Storytelling and Educational Understanding ☆ Previously published in Occasional Papers #12, Evaluation Centre, University of Western Michigan, 1978.

The full ‘storytelling’ paper was written in 1978 and was influential in its time. It is reprinted here, introduced by an Author's reflection on it in 2014. The chapter describes the author’s early disenchantment with traditional approaches to educational research.

He regards educational research as, at best, a misnomer, since little of it is preceded by a search. Entitled educational researchers often fancy themselves as scientists at work. But those whom they attempt to describe are often artists at work. Statistical methodologies enable educational researchers to measure something, but their measurements can neither capture nor explain splendid teaching.

Since such a tiny fraction of what is published in educational research journals influences school practitioners, professional researchers should risk trying alternative approaches to uncovering what is going on in schools.

Storytelling is posited as a possible key to producing insights that inform and ultimately improve educational practice. The chapter advocates openness to broad inquiry into the culture of the educational setting.

Case Study as Antidote to the Literal

Much programme and policy evaluation yields to the pressure to report on the productivity of programmes and is perforce compliant with the conditions of contract. Too often the view of these evaluations is limited to a literal reading of the analytical challenge. If we are evaluating X we look critically at X1, X2 and X3. There might be cause for embracing adjoining data sources such as W1 and Y1. This ignores frequent realities: that an evaluation specification is only an approximate starting point for an unpredictable journey into comprehensive understanding; that the specification represents only that which is wanted by the sponsor, and not all that may be needed; and that the contractual specification too often insists on privileging the questions and concerns of a few. Case study evaluation provides an alternative that allows for the less-than-literal in the form of analysis of contingencies: how people, phenomena and events may be related in dynamic ways, how context and action have only a blurred dividing line and how what defines the case as a case may only emerge late in the study.

Thinking about Case Studies in 3-D: Researching the NHS Clinical Commissioning Landscape in England

What is our unit of analysis and by implication what are the boundaries of our cases? This is a question we grapple with at the start of every new project. We observe that case studies are often referred to in an unreflective manner and are often conflated with geographical location. Neat units of analysis and clearly bounded cases usually do not reflect the messiness encountered during qualitative fieldwork. Others have puzzled over these questions. We briefly discuss work to problematise the use of households as units of analysis in the context of apartheid South Africa and then consider work of other anthropologists engaged in multi-site ethnography. We have found the notion of ‘following’ chains, paths and threads across sites to be particularly insightful.

We present two examples from our work studying commissioning in the English National Health Service (NHS) to illustrate our struggles with case studies. The first is a study of Practice-based Commissioning groups and the second is a study of the early workings of Clinical Commissioning Groups. In both instances we show how ideas of what constituted our unit of analysis and the boundaries of our cases became less clear as our research progressed. We also discuss pressures we experienced to add more case studies to our projects. These examples illustrate the primacy for us of understanding interactions between place, local history and rapidly developing policy initiatives. Understanding cases in this way can be challenging in a context where research funders hold different views of what constitutes a case.

The Case for Evaluating Process and Worth: Evaluation of a Programme for Carers and People with Dementia

A case study methodology was applied as a major component of a mixed-methods approach to the evaluation of a mobile dementia education and support service in the Bega Valley Shire, New South Wales, Australia. In-depth interviews with people with dementia (PWD), their carers, programme staff, family members and service providers were used, together with document analysis, including analysis of client case notes and the client database.

The strengths of the case study approach included: (i) simultaneous evaluation of programme process and worth, (ii) eliciting the theory of change and addressing the problem of attribution, (iii) demonstrating the impact of the programme on earlier steps identified along the causal pathway, (iv) understanding the complexity of confounding factors, (v) eliciting the critical role of the social, cultural and political context, (vi) understanding the importance of influences contributing to differences in programme impact for different participants and (vii) providing insight into how programme participants experience the value of the programme, including unintended benefits.

The broader case, the collective experience of dementia and, as part of this experience, the impact of a mobile programme of support and education in a predominantly rural area, grew from the investigation of the programme experience of ‘individual cases’ of carers and PWD. Investigation of living conditions, relationships and service interactions through observation, along with greater depth in the interviews with service providers and family members, would have provided valuable perspectives and thicker description, strengthening both understanding of the case and the evaluation itself.

The Collapse of “Primary Care” in Medical Education: A Case Study of Michigan’s Community/University Health Partnerships Project

This chapter describes a case study of a social change project in medical education (primary care), in which the critical interpretive evaluation methodology I sought to use came up against the “positivist” approach preferred by senior figures in the medical school who commissioned the evaluation.

I describe the background to the study and justify the evaluation approach and methods employed in the case study – drawing on interviews, document analysis, survey research, participant observation, literature reviews, and critical incidents – one of which was the decision by the medical school hierarchy to restrict my contact with the lay community in my official evaluation duties. The use of critical ethnography also embraced wider questions about circuits of power and the social and political contexts within which the “social change” effort occurred.

Central to my analysis is John Gaventa’s theory of power as “the internalization of values that inhibit consciousness and participation while encouraging powerlessness and dependency.” Gaventa argued, essentially, that the evocation of power has as much to do with preventing decisions as with bringing them about. My chosen case illustrated all three dimensions of power that Gaventa originally uncovered in his portrait of self-interested Appalachian coal mine owners: (1) communities were largely excluded from decision making power; (2) issues were avoided or suppressed; and (3) the interests of the oppressed went largely unrecognized.

The account is auto-ethnographic, hence the study is limited by my abilities, biases, and subject positions. I reflect on these in the chapter.

The study not only illustrates the unique contribution of case study as a research methodology but also its low status in the positivist paradigm adhered to by many doctors. Indeed, the tension between the potential of case study to illuminate the complexities of community engagement through thick description and the rejection of this very method as inherently “flawed” suggests that medical education may be doomed to its neoliberal fate for some time to come.

‘Lead’ Standard Evaluation

This is a personal narrative, but I trust not a self-regarding one. For more years than I care to remember I have been working in the field of curriculum (or ‘program’) evaluation. The field by any standards is dispersed and fragmented, with variously ascribed purposes, roles, implicit values, political contexts, and social research methods. Attempts to organize this territory into an ‘evaluation theory tree’ (e.g. Alkin, M., & Christie, C. (2003). An evaluation theory tree. In M. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influences (pp. 12–65). Thousand Oaks, CA: Sage) have identified broad types or ‘branches’, but the migration of specific characteristics (like ‘case study’) or individual practitioners across the boundaries has tended to undermine the analysis at the level of detail, and there is no suggestion that it represents a cladistic taxonomy. There is, however, general agreement that the roots of evaluation practice tap into a variety of cultural sources, being grounded bureaucratically in (potentially conflicting) doctrines of accountability and methodologically in discipline-based or pragmatically eclectic formats for systematic social enquiry.

In general, this diversity is not treated as problematic. The professional evaluation community has increasingly taken the view (‘let all the flowers grow’) that evaluation models can be deemed appropriate across a wide spectrum, with their appropriateness determined by the nature of the task and its context, including in relation to hybrid studies using mixed models or displaying what Geertz (Geertz, C. (1980/1993). Blurred genres: The refiguration of social thought. The American Scholar, 49(2), 165–179) called ‘blurred genres’. However, from time to time historic tribal rivalries re-emerge as particular practitioners feel the need to defend their modus operandi (and thereby their livelihood) against paradigm shifts, or as governments and other sponsors of program evaluation seek for ideological reasons to prioritize certain types of study at the expense of others. The latter possibility poses a potential threat that needs to be taken seriously by evaluators within the broad tradition showcased in this volume: interpretive qualitative case studies of educational programs that combine naturalistic, often ‘thick’, description (Geertz, C. (1973). Thick description: Towards an interpretive theory of culture. In The interpretation of culture (pp. 3–30). New York, NY: Basic Books) with a values-orientated analysis of their implications. Such studies are more likely to seek inspiration from anthropology or critical discourse analysis than from the randomised controlled trials familiar in medical research or laboratory practice in the physical sciences, despite the impressive rigour of the latter in appropriate contexts. It is the risk of ideological allegiance that I address in this chapter.

Freedom from the Rubric

Twice-Told Tales: How Public Inquiry Could Inform ‘N of 1’ Case Study Research

This chapter considers the usefulness and validity of public inquiries as a source of data and preliminary interpretation for case study research. Using two contrasting examples – the Bristol Inquiry into excess deaths in a children’s cardiac surgery unit and the Woolf Inquiry into a breakdown of governance at the London School of Economics (LSE) – I show how academics can draw fruitfully on, and develop further analysis from, the raw datasets, published summaries and formal judgements of public inquiries.

Academic analysis of public inquiries can take two broad forms, corresponding to the two main approaches to individual case study defined by Stake: instrumental (selecting the public inquiry on the basis of pre-defined theoretical features and using the material to develop and test theoretical propositions) and intrinsic (selecting the public inquiry on the basis of the particular topic addressed and using the material to explore questions about what was going on and why).

The advantages of a public inquiry as a data source for case study research typically include a clear and uncontested focus of inquiry; the breadth and richness of the dataset collected; the exceptional level of support available for the tasks of transcribing, indexing, collating, summarising and so on; and the expert interpretations and insights of the inquiry’s chair (with which the researcher may or may not agree). A significant disadvantage is that whilst the dataset collected for a public inquiry is typically ‘rich’, it has usually been collected under far from ideal research conditions. Hence, while public inquiries provide a potentially rich resource for researchers, those who seek to use public inquiry data for research must justify their choice on both ethical and scientific grounds.

Evaluation as the Co-Construction of Knowledge: Case Studies of Place-Based Leadership and Public Service Innovation

This chapter introduces the notion of the ‘Innovation Story’ as a methodological approach to public policy evaluation, which builds in greater opportunity for learning and reflexivity.

The Innovation Story is an adaptation of the case study approach and draws on participatory action research traditions. It is a structured narrative that describes a particular public policy innovation in the personalised contexts in which it is experienced by innovators. Its construction involves a discursive process through which involved actors tell their story, explain it to others, listen to their questions and co-construct knowledge of change together.

The approach was employed to elaborate five case studies of place-based leadership and public service innovation in the United Kingdom, The Netherlands and Mexico. The key findings are that spaces in which civic leaders come together from different ‘realms’ of leadership in a locality (community, business, professional managers and political leaders) can become innovation zones that foster inventive behaviour. Much depends on the quality of civic leadership, and its capacity to foster genuine dialogue and co-responsibility. This involves the evaluation seeking out influential ideas from below the level of strategic management, and documenting leadership activities of those who are skilled at ‘boundary crossing’ – for example, communicating between sectors.

The evaluator can be a key player in this process, as a convenor of safe spaces for actors to come together to discuss and deliberate before returning to practice. Our approach therefore argues for a particular awareness of the political nature of policy evaluation in terms of negotiating these spaces, and the need for politically engaged evaluators who are skilled in facilitating collective learning processes.

Evaluation Noir: The Other Side of the Experience

What are the boundaries of a case study, and what should new evaluators do when these boundaries are breached? How does a new evaluator interpret the breakdown of communication, and how can new evaluators protect themselves when the evaluation fails? This chapter discusses the journey of an evaluator new to the field of qualitative evaluative inquiry. Integrating the perspective of a senior evaluator, the authors reflect on three key experiences that informed the new evaluator. The authors hope to provide a rare insight into case study practice, in which emotional issues turn out to be just as complex as the methodology used.

About the Editors


  • Jill Russell
  • Trisha Greenhalgh
  • Saville Kushner


Monitoring and Evaluation Approaches

Monitoring and evaluation (M&E) are two essential components of project management that help organizations assess the progress and effectiveness of their programs and projects. Evaluation approaches have often been developed to address specific evaluation questions or challenges, and each refers to an integrated package of methods and processes.

Table of contents

  • Results-based monitoring and evaluation approach
  • Participatory monitoring and evaluation approach
  • Theory-based evaluation approach
  • Utilisation-focused evaluation approach
  • M&E for learning
  • Gender-responsive evaluation
  • Case study evaluation approach
  • Process monitoring and evaluation approach
  • Impact evaluation approach
  • Evaluation approaches versus evaluation methods
  • Conclusion on monitoring and evaluation approaches

Results-based monitoring and evaluation approach

This approach involves setting specific, measurable, achievable, relevant and time-bound (SMART) indicators for a project and tracking progress against them, emphasizing the measurement of outcomes and impact rather than just activities. Results-based M&E involves collecting and analyzing data to assess the impact of programs, identify areas for improvement and show where resources should be focused, helping organizations ensure that projects meet established goals and that operations remain efficient, effective and accountable.
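The core logic of results-based tracking, comparing measurements of an indicator against a baseline and a time-bound target, can be sketched in a few lines of code. This is a purely illustrative example: the `Indicator` class, the field names and the figures are all invented, not taken from any M&E toolkit.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A SMART indicator: a measurable value with a baseline and a target."""
    name: str
    baseline: float
    target: float
    measurements: list = field(default_factory=list)  # (period, value) pairs

    def record(self, period: str, value: float) -> None:
        """Log one measurement for a reporting period (the 'monitoring' step)."""
        self.measurements.append((period, value))

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered by the latest measurement."""
        if not self.measurements:
            return 0.0
        latest = self.measurements[-1][1]
        return (latest - self.baseline) / (self.target - self.baseline)

# Example with invented figures: an outcome indicator, not an activity count
coverage = Indicator("households with safe water (%)", baseline=40.0, target=80.0)
coverage.record("2023-Q4", 50.0)
coverage.record("2024-Q2", 70.0)
print(f"{coverage.name}: {coverage.progress():.0%} of target distance covered")
```

A dashboard built on this idea would report, for each indicator, how far the latest measurement has moved from baseline towards target, keeping attention on outcomes rather than activities.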

Participatory monitoring and evaluation approach

This approach involves stakeholders, including beneficiaries, in the monitoring and evaluation process. It helps ensure that the evaluation is sensitive to the needs of those the project is intended to benefit, provides insight into the progress of the program or project, and helps identify problems that need immediate attention. By engaging all stakeholders, participatory approaches bring a wider perspective and enable more effective feedback, so that progress and impact are better understood and decisions better support the desired outcomes.

Theory-based evaluation approach

This approach examines the underlying theory of change on which a project is based to determine whether the assumptions about how the project will work are valid. By focusing on the theories of change that drive program implementation and outcomes, it helps identify what changes are likely to occur, how they can be measured and where gaps in the program’s effectiveness lie. Theory-based evaluation considers both qualitative and quantitative data and is useful for understanding the complex relationships between program activities and outcomes, and for checking that programs are achieving their intended goals and objectives.

Utilisation-focused evaluation approach

The utilisation-focused evaluation approach is user-oriented: it focuses on the use of evaluation results by the intended users and stakeholders. Users are actively involved throughout the process, from planning to implementation to reporting, enabling them to assess how the results affect their decision-making and practice, identify areas for improvement, and determine the most effective ways to use the findings to achieve their desired outcomes.

M&E for learning

Monitoring and evaluation (M&E) for learning prioritizes learning and program improvement rather than solely accountability and reporting to external stakeholders. It is an iterative process of continuous monitoring, feedback and reflection that enables learning and adaptation. By engaging stakeholders in the evaluation process, M&E for learning can identify strengths, weaknesses and areas for improvement, and use this information to guide program design and implementation. Ultimately, the goal is to create a culture of continuous learning within organizations, where learning and adaptation are integrated into every aspect of program design and implementation.

Gender-responsive evaluation

A gender-responsive evaluation examines the impacts of a project, policy or program on women, men and gender-diverse populations. It assesses how different gender groups are affected and helps ensure that the project meets its objectives in a way that is equitable and beneficial to all genders. Such evaluations also provide useful information on how different gender groups interact with and participate in projects or policies, which can help identify potential inequities in access or outcomes.

Case study evaluation approach

The case study evaluation approach allows researchers to examine the impact of a program from multiple perspectives, including the behavior of participants and the effectiveness of interventions, and to develop a comprehensive picture of the program’s strengths and weaknesses, identify areas for improvement and make recommendations for future action. It is particularly useful for programs involving multiple stakeholders, since it allows examination of both individual and collective outcomes, and for assessing effectiveness over time, since researchers can compare different interventions and track changes in program outcomes.

Process monitoring and evaluation approach

This approach focuses on how a project is implemented rather than on its outcomes. It can help identify problems in implementation, such as delays or budget overruns, and generate recommendations for improvement. Process monitoring and evaluation involves regularly collecting, analyzing and interpreting data on project activities. Monitoring is the continuous collection of information to track the progress of a program or project over time; evaluation is the periodic assessment of its effectiveness and impact. Together they provide a comprehensive understanding of a program’s strengths and weaknesses, enabling decision-makers to make informed improvements.

Impact evaluation approach

This approach assesses the causal impact of a project on its beneficiaries or the wider community. It helps determine whether a project has achieved its intended outcomes and whether the benefits outweigh the costs. By identifying the changes that have occurred as a result of a program and measuring its effect on the target population, impact evaluation highlights areas for improvement, assesses cost-effectiveness, and shows what should change if goals and objectives have not been met. It is a valuable tool for organizations to assess the success of their programs and interventions.

Evaluation Approaches versus Evaluation Methods


Evaluation approaches and evaluation methods are both used to assess the effectiveness and impact of programs, policies, or interventions. However, they refer to different aspects of the evaluation process.

Evaluation approaches refer to the overall framework or perspective that guides the evaluation. They define the philosophical, theoretical, and methodological principles that underpin the evaluation.

Evaluation methods, on the other hand, are the specific techniques and tools used to collect and analyze data to evaluate the program. Methods can be quantitative (e.g., surveys, experiments, statistical analysis) or qualitative (e.g., interviews, focus groups, content analysis), and may vary depending on the evaluation approach used.

In summary, evaluation approaches define the overall framework and principles that guide the evaluation, while evaluation methods are the specific techniques and tools used to collect and analyze data to evaluate the program.

Conclusion on monitoring and evaluation approaches

An effective monitoring and evaluation approach can help to identify whether an organization’s goals are being achieved in a timely manner.

Overall, organizations can use one or more of these approaches to monitoring and evaluation, depending on the needs of their project and the resources available to them. Although there are many different types of monitoring and evaluation approaches available, they all share the same goal – to understand the impact of an organization’s programs and projects on its stakeholders.




Case study evaluation

Volume 311(7002); 1995 Aug 12

Case study evaluations, using one or more qualitative methods, have been used to investigate important practical and policy questions in health care. This paper describes the features of a well designed case study and gives examples showing how qualitative methods are used in evaluations of health services and health policy.


How to co-design a prototype of a clinical practice tool: a framework with practical guidance and a case study

Volume 33, Issue 4

  • Matthew Woodward 1 ,
  • Mary Dixon-Woods 1 ,
  • Wendy Randall 2 ,
  • Caroline Walker 2 ,
  • Chloe Hughes 2 ,
  • Sarah Blackwell 2 ,
  • Louise Dewick 3 ,
  • Rachna Bahl 3 , 4 ,
  • Tim Draycott 3 , 5 ,
  • Cathy Winter 6 ,
  • Akbar Ansari 1 ,
  • Alison Powell 1 ,
  • Janet Willars 1 ,
  • Imogen A F Brown 1 ,
  • Annabelle Olsson 1 ,
  • Natalie Richards 1 ,
  • Joann Leeding 1 ,
  • Lisa Hinton 1 ,
  • Jenni Burt 1 ,
  • Giulia Maistrello 7 ,
  • Charlotte Davies 7 ,
  • Thiscovery Authorship Group ,
  • ABC Contributor Group ,
  • Jan W van der Scheer 1
  • 1 THIS Institute (The Healthcare Improvement Studies Institute), Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK
  • 2 The Royal College of Midwives, London, UK
  • 3 Royal College of Obstetricians and Gynaecologists, London, UK
  • 4 University Hospitals Bristol and Weston NHS Foundation Trust, Bristol, UK
  • 5 North Bristol NHS Trust, Westbury on Trym, UK
  • 6 PROMPT Maternity Foundation, Bristol, UK
  • 7 RAND Europe, Cambridge, UK
  • Correspondence to Dr Jan W van der Scheer, THIS Institute (The Healthcare Improvement Studies Institute), University of Cambridge, Cambridge, CB1 8RN, UK; jan.vanderscheer{at}thisinstitute.cam.ac.uk

Clinical tools for use in practice—such as medicine reconciliation charts, diagnosis support tools and track-and-trigger charts—are endemic in healthcare, but relatively little attention is given to how to optimise their design. User-centred design approaches and co-design principles offer potential for improving usability and acceptability of clinical tools, but limited practical guidance is currently available. We propose a framework (FRamework for co-dESign of Clinical practice tOols or ‘FRESCO’) offering practical guidance based on user-centred methods and co-design principles, organised in five steps: (1) establish a multidisciplinary advisory group; (2) develop initial drafts of the prototype; (3) conduct think-aloud usability evaluations; (4) test in clinical simulations; (5) generate a final prototype informed by workshops. We applied the framework in a case study to support co-design of a prototype track-and-trigger chart for detecting and responding to possible fetal deterioration during labour. This started with establishing an advisory group of 22 members with varied expertise. Two initial draft prototypes were developed—one based on a version produced by national bodies, and the other with similar content but designed using human factors principles. Think-aloud usability evaluations of these prototypes were conducted with 15 professionals, and the findings used to inform co-design of an improved draft prototype. This was tested with 52 maternity professionals from five maternity units through clinical simulations. Analysis of these simulations and six workshops were used to co-design the final prototype to the point of readiness for large-scale testing. By codifying existing methods and principles into a single framework, FRESCO supported mobilisation of the expertise and ingenuity of diverse stakeholders to co-design a prototype track-and-trigger chart in an area of pressing service need. Subject to further evaluation, the framework has potential for application beyond the area of clinical practice in which it was applied.

  • Healthcare quality improvement
  • Human factors
  • Obstetrics and gynecology
  • Quality improvement methodologies
  • Trigger tools

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/ .

https://doi.org/10.1136/bmjqs-2023-016196


Key messages

Much research and debate focuses on the validity and reliability of clinical tools for practice, but far less attention has been given to how to optimise their design and usability.

We propose a framework (FRamework for co-dESign of Clinical practice tOols or ‘FRESCO’) offering practical guidance for developing prototype clinical tools, drawing on user-centred design methods and co-design principles.

FRESCO successfully supported co-design of a prototype chart for detecting and responding to possible fetal deterioration during labour.

By codifying existing methods and principles into a single framework, FRESCO has potential to facilitate pragmatic, flexible and inclusive co-design of clinical practice tools, but will require further evaluation.

Clinical practice tools—ranging from medicine reconciliation charts through to diagnosis support tools and track-and-trigger charts—are endemic in healthcare. 1–3 While much research and debate focus on the validity and reliability of such tools, 4 far less attention has been given to how to optimise their design. 5–8 Yet, features of design, including usability, 9–11 are among the most important influences on effective implementation. 3 7 12 It is now clear that merely meeting technical specifications is insufficient. 6 Critical to the effective deployment, implementation, and impact of clinical practice tools is early and continued engagement with end-users and broader stakeholders so that their priorities are addressed through design processes. 6 13 Currently, however, thinking about how to optimise design of clinical practice tools either does not happen at all, or is deferred until far too late in the process of tool development, leading to a high level of waste associated with non-adoption or poor implementation. 14 15 Though a range of design methods is available and widely used in other industries, 8 their use in development of clinical practice tools has been strikingly limited. 5 6 12 In this article, we propose that practical, action-oriented guidance could help to address this problem.

User-centred design is among the most well established of the various approaches that can support better usability of clinical tools, 16 and is already a staple in the development of medical devices. 6 9 10 17 18 Seeking to enhance usability of products and systems through a focus on user needs and perspectives, 16 user-centred methods are distinguished by their systematic and typically iterative approach to optimising design through consideration of contexts of use, usability goals, user characteristics, environment, tasks, and workflow. 16 19–21 By taking into account human capacities and limitations such as effects of stress on cognition, influence of fatigue, overload through multitasking, and limited memory capacity, 9 19 21 user-centred approaches have potential to enable systematic consideration of safety, effectiveness, and efficiency when designing clinical practice tools. 22–24

A user-centred approach to development of clinical practice tools is valuably complemented by co-design principles. 6 18 25 Such principles encourage developers and users—including, for example, healthcare professionals, patients, human factors engineers, and graphic designers—to nurture collective creativity and to work in partnership. 25–27 Application of these principles to development of clinical practice tools could strengthen or expand user-centred approaches, 6 18 in particular by emphasising the need for early and continued engagement of end-users and broader stakeholders throughout the design process, 6 17 18 28 29 and mobilisation of multiple forms of expertise. 6 17 One established methodological framework for co-design describes involvement of users and developers across pre-design, generative, evaluative, and post-design phases ( table 1 ). 28 Some evidence has already demonstrated the usefulness of this approach to developing products and systems for healthcare. 29 30

The methodological framework for co-design by Sanders and Stappers 28

One example of a pressing need for improving usability and design processes is found in track-and-trigger charts for detecting and responding to patient deterioration. 12 31 32 These widely used charts are based on the principle that there may be periods during which clinical deterioration is detectable by ‘tracking’ a predefined set of clinical parameters over time, with specific thresholds ‘triggering’ action. 33 Track-and-trigger charts are particularly likely to benefit from user-centred design, since they seek to support clinical decision-making and action in often pressurised situations where clarity around responding to a potentially deteriorating situation is essential, and where human capacities and limitations (eg, memory capacity, effects of stress on cognition) are key influences on patient safety. 12 34 Despite their potential value, track-and-trigger charts have been challenged by issues in acceptability, adoption, and use. 35–37 These issues are likely to be linked to suboptimal design, 12 34 including inadequate user involvement prior to implementation. 32 37 38
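The 'tracking' and 'triggering' logic described above can be expressed as a short illustrative sketch. The parameter names and thresholds below are hypothetical, chosen only to show the principle; they are not drawn from any real chart.

```python
# Illustrative sketch of the track-and-trigger principle: a predefined set of
# clinical parameters is 'tracked' over time, and a value crossing a threshold
# 'triggers' action. Parameter names and thresholds are hypothetical examples.

def check_triggers(observations, thresholds):
    """Return names of parameters whose latest value falls outside its limits."""
    triggered = []
    for name, series in observations.items():
        low, high = thresholds[name]
        if not low <= series[-1] <= high:  # latest observation breaches a limit
            triggered.append(name)
    return triggered

obs = {"heart_rate": [72, 80, 130], "resp_rate": [14, 15, 16]}
limits = {"heart_rate": (50, 110), "resp_rate": (8, 25)}
print(check_triggers(obs, limits))  # → ['heart_rate']
```

In a real chart, each triggered parameter would map to a specific escalation action; the design challenge addressed in this article is making that mapping usable under pressure.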

Despite burgeoning literature on both user-centred design (and variants, including human-centred design) and co-design, 17 18 22–24 26–30 39 practical guidance for combined use of these methods and principles in the development of clinical practice tools remains limited. In this article, we address this gap. We propose a five-step framework with recommended actions for each step, and we describe a case study of its application in developing a prototype track-and-trigger chart.

The FRamework for co-dESign of Clinical practice tOols (FRESCO) we propose seeks to codify existing user-centred design methods and co-design principles into a single framework to guide the development of clinical practice tools to the point of readiness for large-scale testing ( table 2 ). FRESCO recognises that development of tools usually benefits from iterative prototyping. Accordingly, it includes user-centred methods for formal prototype testing, 11 13 19 40 and application of the co-design principle of using prototypes as tools for discovery, understanding, and learning. 28 41 42

The FRamework for co-dESign of Clinical practice tOols (FRESCO)

Using five steps outlined in table 2 , FRESCO aims to facilitate a process of collective creativity through structured co-design activities, 17 18 29 39 with involvement of users, developers, and other stakeholders in roles of design partners, informants, or testers. 18 This process is informed throughout by findings from systematic user-centred evaluations (see steps 2–4 in table 2 ).

The first step is to establish a multidisciplinary advisory group that offers voice to a diversity of experience and expertise throughout the process (see step 1). Following a pre-design phase of co-design (see steps 1 and 2), FRESCO facilitates proceeding through a generative phase (including gathering ideas from users based on concept prototypes produced by developers, see steps 2 and 3) to an evaluative co-design phase (including testing of co-designed prototypes, see steps 4 and 5). The movements from pre-design to generative to evaluative co-design phases align with, and are informed by, key user-centred design techniques such as heuristic evaluation (see step 2), think-aloud usability evaluations (see step 3) and simulations (see step 4). The last step of FRESCO aligns with completing the evaluative phase of co-design (see step 5), working towards a final prototype ready for further testing in real-life settings as part of the post-design phase (see table 1 ).

We used FRESCO in a case study, aiming to co-design a track-and-trigger chart for detection and response to suspected intrapartum fetal deterioration ( Box 1 ).

Case study: Avoiding Brain Injury in Childbirth (ABC) programme

In 2021, the UK’s Department of Health and Social Care commissioned the Avoiding Brain Injury in Childbirth (ABC) programme, a collaboration between the Royal College of Midwives (RCM), Royal College of Obstetricians and Gynaecologists (RCOG), and The Healthcare Improvement Studies Institute at the University of Cambridge. Colleagues from these institutions formed the ABC programme team.

A key aim of the ABC programme was to co-design a standardised approach for detecting and responding to possible fetal deterioration during labour, including a track-and-trigger chart. The need for this work had been identified as critical and urgent because problems in intrapartum monitoring and response remain major and persistent hazards in maternity care, contributing to poor outcomes at birth and to clinical negligence claims.

Current approaches to fetal monitoring during labour focus primarily on assessment of fetal heart rate features, which can be done either using intermittent auscultation (for lower-risk labours) or electronic fetal monitoring with cardiotocography (for higher-risk labours). A key innovation of the ABC programme was to combine monitoring of fetal heart rate features with other evidence-based intrapartum risk factors into a track-and-trigger tool, informed by earlier work of a task force of the RCM and RCOG. The intention of the ABC programme was to co-design an improved prototype tool, ready for deployment in a future national programme of testing, implementation, and evaluation.

Below, we explain how each step of the framework guided the Avoiding Brain Injury in Childbirth (ABC) programme’s co-design of a prototype chart for detecting and responding to suspected fetal deterioration during labour ( figure 1 ).


Step 1: establish a multidisciplinary advisory group

Optimising a tool for detecting and responding to possible fetal deterioration during labour requires access to a range of expertise and experience, including scientific knowledge, clinical expertise, lived experience of labour and using maternity services, graphic design, human factors/ergonomics, and social science. We identified individuals with one or more of these forms of expertise or experience using intentional outreach and inclusive methods of recruitment. 43 We sought to be purposeful in ensuring diversity as well as addressing the potential for power imbalances. 43 44 For example, we included a mix of seniority among the maternity professionals and ensured that service user representation included multiple viewpoints. As detailed in online supplemental file 1 , the group included the following:

Supplemental material

twelve maternity professionals (six midwives and six obstetricians),

five maternity service users (representing a range of maternity experiences and experience of advocating for improvement and inclusion of under-represented voices), and

five other specialists (human factors engineer with expertise in user-centred design, graphic designer, consensus-building specialist, and two specialists in facilitating patient and public involvement [PPI]).

As part of the pre-design phase (including preparation of the group for the co-design process), 28 roles and responsibilities across different stages of work were explicitly allocated to support efficient and effective decision-making. 45 46 This, for example, meant that not all advisory group members needed to be involved in all activities of all steps, as further detailed in figure 1 and across steps 2–5 below.

The group’s specialists in human factors engineering, consensus-building, and PPI facilitated or led exchanges, meetings, workshops, and other co-design activities. 47 48 The PPI facilitators were particularly important in ensuring that everyone’s voices could be heard during meetings, 43 44 as well as facilitating separate activities for maternity service users, in the interests of addressing potential power imbalances. The activities of the advisory group that were part of the generative and evaluative co-design phases 28 are further detailed across steps 2–5 below.

Step 2: develop initial drafts of the prototype

In accordance with the pre-design and generative phases of co-design, 28 41 we set out to understand stakeholders’ experiences of the work system under consideration. 19–21 49 This included developing two alternative prototypes of the track-and-trigger tool to explore which design elements worked best for maternity professionals. 13 49

Prior to the programme, a task force from two national bodies ( Box 1 ) had developed an initial draft prototype (‘Design 1’, see figure 1 and online supplemental file 2 ). It included the evidence-based clinical information required for detecting and responding to possible fetal deterioration, but focused more on clinical content than on design. A human factors engineer evaluated the draft prototype against usability heuristics, 34 50 51 while remaining alert to the contexts of use 20 such as intended or expected users, tasks, physical environment, social and organisational milieu, and technical and environmental constraints. 19–21

A second prototype (‘Design 2’, see figure 1 and online supplemental file 2 ) was then developed with support of the graphic designer, based on the clinical information and heuristic evaluation of Design 1 as well as the factors identified in analysis of the context of use description of the tool (see overview in online supplemental file 3 ). 20 Design 2 applied established user-interface design principles (detailed in online supplemental file 3 ), such as the need to be attentive to limitations of memory and attention while executing a task, 51–53 and the need for consistent use of colour coding and for grouping-related information together. 34 50–54

Though the clinical content of both designs was the same, they used alternative page formats, colours, font types and sizes, ways of recording observations, visuals indicating actions, and information structures (see details in online supplemental file 3 ). These alternatives were designed to enable comparison and to prompt discussions with participants about preferences in the subsequent think-aloud usability evaluation (see step 3 below). 55

Step 3: conduct think-aloud usability evaluations

As part of the generative co-design phase, 13 28 41 we conducted think-aloud formative usability sessions with 15 maternity professionals of varied backgrounds ( online supplemental file 4 ). 10 11 19 Participants were asked to work with designs 1 and 2 (see figure 1 ); the sessions aimed to understand processes of cognition and to identify usability flaws (and their causes) with a small group of representative end-users. 11 19 46

In advance of the session, participants received print-outs of the two draft prototypes, examples of the drafts with recorded observations, and clinical scenarios. Each session took place during a video call hosted on an online platform, and was facilitated by a moderator (trained interviewer or human factors engineer). Sessions were organised so that designs 1 and 2 were covered in a sequence counterbalanced across participants to mitigate order effects. The moderator started with an exercise to encourage the participant to think aloud in describing their experiences while interacting with the prototype, 10 11 19 with prompts used to elicit experiences of particular design elements. Following this think-aloud exercise, the moderator used a semistructured interview guide (see online supplemental file 4 ) to ask about preferences for elements of the two designs, elicit further views on design aspects that could be improved, and explore use of the chart in practice. 6 10 29 56
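The counterbalancing of design order mentioned above can be sketched simply. The alternating assignment below is an illustrative assumption, not the study's actual allocation procedure.

```python
# Hypothetical sketch of counterbalancing: alternate which design each
# participant sees first, so that order effects cancel out across the sample.

def counterbalance(participants, conditions=("Design 1", "Design 2")):
    """Assign each participant an order, alternating which condition comes first."""
    schedule = {}
    for i, person in enumerate(participants):
        first = conditions[i % 2]
        second = conditions[(i + 1) % 2]
        schedule[person] = (first, second)
    return schedule

orders = counterbalance(["P01", "P02", "P03"])
# P01 sees Design 1 first, P02 sees Design 2 first, and so on.
```

With an odd number of participants the two orders are not perfectly balanced, which is one reason small-sample usability studies treat counterbalancing as mitigation rather than elimination of order effects.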

The sessions took about one hour each. They were audio-recorded and transcribed verbatim. Analysis focused on preferences for elements of the two designs and identification of design principles to guide future prototype iterations. 56–58 Participants preferred the detail contained within Design 1, but found Design 2 easier to complete and interpret ( table 3 ). These findings reflected the tension between high data density and information overload. 54 They highlight that a particular consideration in developing clinical practice tools is striking a balance between (1) including sufficient information to support task completion and (2) preventing high data density that can increase search times and mental workload, particularly if information is poorly structured. 59 Further qualitative analysis identified five requirements to inform further prototype iterations and future implementation ( table 4 ), such as the need for optimising flow of information. 54

Examples of analysis on the number of participants who preferred specific design elements of Design 1 versus Design 2 (see online supplemental file 1 for details on design elements)

Identified requirements to inform further prototype iterations based on qualitative analysis of the think-aloud exercises and follow-up interviews

The analysis informed a set of co-design activities with advisory group maternity professionals for the next prototype iteration. 13 41 42 46 This included structured email exchange and online meetings facilitated by the human factors engineer or consensus facilitator to reach a professional consensus on which elements of designs 1 and 2 to incorporate in an improved draft prototype (‘Design 3’, see figure 1 ). Design 3 combined these elements—guided by the heuristic evaluation of step 2, the think-aloud evaluation findings of step 3 and clinical expertise—to feature:

selective use of colour to indicate trigger values and trend lines used for recording observations,

showing a ‘start of labour risk assessment’ on the same page as the intrapartum risk factors recorded during labour,

an A3-sized format to improve legibility while accommodating the content detail preferred by participants, and

implementation of a simplified action diagram for escalation.

Step 4: test the prototype in clinical simulations

The evaluative phase of co-design included clinical simulation, 13 28 41 which is increasingly valued for its ability to support quality improvement in health systems. 60 61 Simulations have a role in both user-centred 7 40 60 61 and co-design approaches. 28 41 One key advantage of clinical simulation is that rare but potentially catastrophic events or conditions can be reproduced. 62 Design 3 (see figure 1 ) was tested in close approximations of real-life settings, since this is critical to safety checks, understanding how a tool might be used in practice and identifying how to improve work systems. 61

We conducted clinical simulations involving 52 maternity professionals from five different National Health Service maternity units ( online supplemental file 5 ). These units were recruited through convenience sampling based on availability. The simulations were designed as quality improvement activities (see Ethics approval below) guided by the ‘TEACH Sim’ framework, 63 focusing on: (1) testing usability of the Design 3 prototype, (2) comparing care delivered with the prototype against usual care using the unit’s existing documentation, and (3) informing the next iteration of the prototype. TEACH Sim helped to specify the simulation’s objectives, audience, scenario script, equipment, actors, and team composition. 63

As simulation is especially well suited for conducting controlled tests exposing one group but not the other to a new prototype, 60 64 we employed the same clinical scenario twice in each round of simulation: first with a team using usual care and the second time with a different team using the Design 3 prototype ( figure 1 ). Facilitated by an experienced midwife from the advisory group, simulations in two units took place in situ, that is, in the participants’ own clinical settings where care is routinely performed. 60 61 Due to clinical pressures, simulations in other units took place in a dedicated simulation laboratory or a clinical teaching setting.

Simulations were audio and video-recorded, with one camera fixed above the desk to capture participants making recordings on intrapartum tools. A trained ethnographer 65 used a fieldnote form to record observations on aspects such as teamwork, professional roles and boundaries, communication, and social atmosphere during the simulation, with a focus on use of the intrapartum tools.

Each simulation was followed by an audio-recorded, verbatim-transcribed debrief 7 66 67 and focus group 6 10 65 session with the participants, facilitated by the ethnographer and an advisory group midwife using a topic guide ( online supplemental file 5 ). The debriefing and focus group discussions with the teams involved in the simulation aided learning through reflection on experiences of the scenario, contextual and environmental issues, safety concerns, and the acceptability and usability of the usual care and prototype tools, as well as by identifying opportunities for better teamwork, equipment set-ups, escalation systems, and design of tools. 61 67 The focus group discussions also helped generate further ideas to improve the draft prototype. 6 10 65

Following the focus group, participants completed the Ottawa Acceptability of Decision Rules Instrument. 68 This validated survey instrument further complemented assessment of reported usability 10 and enabled comparison between the groups providing usual care and care with the prototype. 68 69

Analysis 57 70 focused on four areas: (1) recording errors and corrections made on the prototype charts, (2) whether triggers during the simulation safety checks consistently led to the required actions for safe care, (3) the role of the intrapartum tools in communication (both within the team and with those in labour and their birth partners), and (4) suggestions for improving the usability of the draft prototype. Data analysis used narrative summaries and observational checklists to code the behaviour of simulation participants, based on video recordings or direct observations of the sociotechnical system during the simulations. 70 Quantitative usability analysis assessed use errors and corrections on the charts. 19 58 71
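As an illustration of the kind of quantitative use-error tally described above (all counts below are invented for demonstration; the study's actual coding scheme is in its observational checklists):

```python
# Illustrative tally of use errors per recorded chart entry, of the kind used
# to compare usual-care documentation with the prototype. Numbers are invented.

def error_rate(charts):
    """charts: list of (use_errors, chart_entries) tuples, one per simulation."""
    errors = sum(e for e, _ in charts)
    entries = sum(n for _, n in charts)
    return errors / entries if entries else 0.0

usual_care = [(3, 40), (2, 35)]   # (use errors, chart entries) per simulation
prototype = [(1, 42), (1, 38)]
print(round(error_rate(usual_care), 3))  # → 0.067
print(round(error_rate(prototype), 3))   # → 0.025
```

Pooling errors over total entries, rather than averaging per-simulation rates, weights each chart entry equally; either choice is defensible, but it should be stated when reporting results.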

The findings of the simulations ( table 5 ) informed meetings with maternity professionals from the advisory group, facilitated by the human factors engineer and consensus specialist. One key discussion point was the need to support transfer across settings, that is, from low-risk settings where intermittent auscultation is used to higher-risk settings where cardiotocography is used. The group reached a consensus on a single prototype (‘Design 4’, see figure 1 ). Design 4 required users to refer to a second page for actions (compared with the original single-page format—see online supplemental files 2 and 3 ). The group viewed this as an acceptable trade-off given that the single prototype would support transfer across settings.

Examples and key findings of the analysis of the simulations

Step 5: generating a final prototype using co-design workshops

Step 4 identified a need for further input to (1) improve use of the prototype in terms of communication with the person in labour and (2) finalise the action diagram. To complete the evaluative co-design phase, 28 this was addressed with the advisory group through workshops, which have shown potential as practical and effective ways to finalise a prototype. 13 17 18 39 41 46 To address potential power imbalances, workshops were organised with subgroups ( figure 1 ). Facilitators supported agenda-setting, procedures, and consensus rules, 72 73 and were mindful of power dynamics. 44

The PPI facilitators introduced Design 4 to the five service user representatives and gained feedback on it during three discussion workshops ( figure 1 ), exploring in particular how the prototype might impact communication with those in labour. The maternity professionals and human factors engineer joined the discussions upon invitation by the PPI facilitators or service users. These workshops led to the inclusion of an additional item—‘is the woman concerned?’—in the final prototype, as this was a key proposal made by the representatives.

To address the identified use difficulties with the flow chart actions, an alternative grid format (vs the original flow chart format) was developed. 10 46 The human factors engineer facilitated three workshops with maternity professionals from the advisory group, in which the flow chart and grid formats were used alongside each other with reference to written clinical scenarios ( figure 1 ). 10 They reached a consensus that the grid layout provided better usability—through its better conveyance of the data 34 54 —and should be implemented in the final prototype. The final prototype (see online supplemental file 6 ) was prepared by the human factors engineer and graphic designer, reviewed by the advisory group and considered ready for use in large-scale testing.

Clinical practice tools have not routinely benefited from systematic combination of user-centred design methods and co-design principles applied to their development, 6 17 18 despite the availability of well-established techniques with a good track record in improving design and usability in a range of clinical applications. 11 13 16 29 30 One likely reason for this is the limited practical guidance about how to deploy these approaches in a pragmatic yet systematic manner for development of clinical practice tools. The framework (FRESCO) proposed in this article codifies existing user-centred design methods and co-design principles into practical guidance for enabling mobilisation of multiple forms of expertise for development of clinical practice tools. Our case study illustrates application of the framework in an area of pressing need, leading to a viable track-and-trigger prototype tool ready for large-scale testing. The study also helps to address the call for better reporting of healthcare improvement activities that align with principles of co-design. 18 39 74

FRESCO builds on an established co-design framework ( table 1 ), 28–30 including use of pre-design, generative, and evaluative phases that can inform future post-design implementation phases with the produced prototype. One of its contributions is in sensitising developers of clinical practice tools to systematic consideration of the needs and priorities of users—through application of principles of collective creativity and inclusivity central to co-design into a series of actionable steps 25–27 —while employing a user-centred design approach that supports safety, effectiveness, and efficiency. 22–24 The case study also illustrates that employment of FRESCO is consistent with a design process moving from medium to high structural restrictiveness. 55 The generative phase started with various concept prototypes that encouraged the co-design group to explore alternative ideas, which helped prevent the risk of premature closure around one solution. During the evaluative phase, the best elements of the concept prototypes were then integrated through iterative cycles into a single prototype, using high structural restrictiveness to increase decision-making precision. 55

Findings of the case study suggest that FRESCO supports inclusive ways of co-designing prototype clinical practice tools and enabled improvements based on voices that are often under-represented in development of clinical practice tools. As an example, a novel prompt—‘is the woman concerned?’—was included in the prototype to help ensure optimal communication with those in labour and their birth partners, following input of service user representatives. This helped address the imperative to include patient/family concern in track-and-trigger systems 75 as well as the broader concern to listen better to families and involve them in their own care. 76 The in situ simulations helped to understand how the prompt could be best used in practice. Key to achieving co-design in this way is commitment to inclusion, facilitation that focuses on hearing everyone’s voices and managing power dynamics through, for example, organising separate activities for service users when needed. These findings suggest that FRESCO can contribute to the need for effective ways of co-design with patients, as called for in models for co-creation of healthcare services. 77

Strengths and limitations of the framework

While FRESCO helped develop a prototype track-and-trigger tool, further evaluation will be needed to determine clinical and service user experience, efficiency, implementability, sustainability of change, impact on clinical outcomes, and any unintended consequences. 78 79 Piloting and large-scale, national testing will be important in supporting this. Further examples of use cases outside of this context would help to refine and test the framework, for example to: (1) determine whether clinical practice tools produced using the framework offer advantages over others, (2) establish the resourcing needed for minimal and optimal execution of each step, and (3) assess the extent to which steps may need to be adapted for use in lower-resourced settings. There is also a need to generate learning on how to sustain engagement and involvement of users in the design process.

Although the resource implications of using FRESCO are significant, so too are the costs of developing the technical components of clinical practice tools. 14 Moreover, deploying suboptimally designed tools introduces multiple risks and potential for waste. 3 5 12 71 Ultimately, FRESCO could help to prevent the characteristic dysfunctions associated with exclusively bottom-up or top-down innovation for quality improvement, 80 such as lack of access to specific expertise common in locally led, bottom-up approaches, 15 and risk of perverse incentives associated with top-down approaches. 81 For example, using the framework as a practical guide to developing a prototype clinical practice tool could help prevent suboptimal implementation owing to inadequate or absent exploration of usability or acceptability, 7 38 78 82 83 or waiting until the end of the development cycle when the sunk costs may limit improvement. 7 12 83

Limitations of the case study

The pandemic conditions in which the case study was conducted imposed some limitations, including the need to adapt established in-person think-aloud methods and the conduct of observations. These adaptations did, however, highlight the flexibility inherent in our proposed framework. Ongoing pressures caused by the pandemic also required the use of convenience sampling of units for the simulations, and the use of clinical simulation laboratories instead of in situ settings in some units, so representativeness was difficult to determine.

The proposed framework (FRESCO), combining user-centred design methods and co-design principles, was successfully deployed to develop a prototype clinical practice tool for detecting and responding to possible fetal deterioration during labour. By codifying existing methods and principles into a single actionable framework, FRESCO has potential to facilitate pragmatic, flexible, and inclusive co-design of clinical practice tools using methods that can be standardised, replicated, and potentially scaled when needed, but will require further evaluation. Future work can also help identify the kinds of applications the framework works best for and where its limits lie.

Ethics statements

Patient consent for publication.

Not required.

Ethics approval

This study involves human participants. The usability evaluations received ethics approval from the University of Cambridge Psychology Research Ethics Committee (PRE.2021.067). The UK’s Health Research Authority decision tool ( http://www.hra-decisiontools.org.uk/research/ ) showed that ethics approval was not required for the simulations, as they were classified as quality improvement activities. In both cases, all participants provided written informed consent and were invited to join the ABC authorship group.

Acknowledgments

For recruitment and communications support, we thank the Avoiding Brain Injury in Childbirth (ABC) communications team including members from THIS Institute, RCOG and RCM. We are grateful for the many and varied contributions from colleagues across the ABC programme team and external to the team, including the graphic designers (Dan Gould Design, Soapbox) and video agency (Hobson Curtis).


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1





Study Notes

Case Studies

Last updated 22 Mar 2021


Case studies are very detailed investigations of an individual or a small group of people, usually concerning an unusual phenomenon or a biographical event of interest to a research field. Because the sample is so small, researchers can conduct an in-depth analysis of the individual or group.

Evaluation of case studies:

- Case studies create opportunities for a rich yield of data, and the depth of analysis can in turn bring high levels of validity (i.e. providing an accurate and exhaustive measure of what the study is hoping to measure).

- Studying abnormal cases can give insight into how a process works when it is functioning correctly, such as the effect of brain damage on memory (e.g. the case study of patient KF, whose short-term memory was impaired following a motorcycle accident while his long-term memory remained intact, suggesting there might be separate physical stores in the brain for short-term and long-term memory).

- The detail collected on a single case may lead to interesting findings that conflict with current theories, and stimulate new paths for research.

- There is little control over a number of variables involved in a case study, so it is difficult to confidently establish any causal relationships between variables.

- Case studies concern unusual cases by nature, so they tend to have poor reliability, as replicating them exactly is unlikely.

- Due to the small sample size, it is unlikely that findings from a case study alone can be generalised to a whole population.

- The case study’s researcher may become so involved with the study that they exhibit bias in their interpretation and presentation of the data, making it difficult to distinguish what is truly objective or factual.


March 26, 2024

MSU researchers create a new health equity evaluation tool for Genesee County and the city of Flint

Community-based organizations, nonprofits, policymakers and local residents will benefit from the first Health Equity Report Card , or HERC, for Genesee County and the city of Flint. The online tool helps people understand the overall landscape of community health by comparing 50 health-related indicators from 26 public sources. 

“There’s a big need for comprehensive data that’s useful and easy to understand, especially in public health,” said Heatherlun Uphold , assistant professor in the Charles Stewart Mott Department of Public Health and the Department of Translational Neuroscience in the College of Human Medicine at Michigan State University. “Increasing access to health information is an important step to improving health equity so everyone can have the opportunity for optimal health.”

Data included in the report card is specific to Genesee County and the city of Flint. It can be used to inform grant proposals, provide direction for new initiatives and guide funders who want to reinvest in existing efforts or expand their initiatives to address areas where there are high rates of disparity.

“The report card is an essential resource to tackle the disparities that are ever-present in our community,” said Athena McKay, executive director of Flint Innovative Solutions . “It identifies cracks and craters of missing information for our Black and Brown residents. It is a tremendous tool for advocating for policy change, program design and resourcing community health impact.”

Information in the HERC is organized by location (Flint, Genesee County, Michigan and the United States) and then by race (Black and white). Each indicator is organized into six categories: health services and access, socioeconomic status, physical health, mental health, maternal and child health, and health outcomes.
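The organization described above (location, then race, then one of six indicator categories) can be sketched as a simple data record. This is an illustrative sketch only: the field names, validation, and example record are assumptions for demonstration, not the HERC's actual schema.

```python
# Illustrative sketch of how one HERC indicator record might be organized
# (by location, race, and category, as described above). Field names and
# the example record are assumptions, not the report card's actual schema.
from dataclasses import dataclass

CATEGORIES = {
    "health services and access", "socioeconomic status", "physical health",
    "mental health", "maternal and child health", "health outcomes",
}
LOCATIONS = {"Flint", "Genesee County", "Michigan", "United States"}

@dataclass
class Indicator:
    name: str
    category: str   # one of the six categories
    location: str   # one of the four geographies
    race: str       # "Black" or "white", as reported
    value: float    # placeholder; real values come from the 26 public sources

    def __post_init__(self) -> None:
        assert self.category in CATEGORIES, f"unknown category: {self.category}"
        assert self.location in LOCATIONS, f"unknown location: {self.location}"

record = Indicator("life expectancy", "health outcomes", "Flint", "Black", 0.0)
print(record.name, record.location)  # → life expectancy Flint
```

Validating categories and locations at construction time is one way such a tool could keep indicators comparable across the four geographies.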

“The social determinants of health contribute to the differences in the results shown in this report card,” Uphold explained. “This data shows results that are influenced by where people live, work, worship, play and age. We look at access to food, health care, education and more. These variables affect health outcomes and quality of life.”

“As a community provider, having easily digestible data that identifies measurable problems is paramount in strategic planning,” said Kristen Senters Young, director of Women’s Specialty and Prevention Services and the Flint, Saginaw and Port Huron Odyssey Houses. “Whereas we may not understand the intricacies behind the data, we do understand the difference between receiving a letter grade on a report card. I can easily spot and understand an F. There is also the added benefit of being able to cut and paste a narrative description of the problem into a grant application or presentation slide.”

“In a community where trust is lacking, good information about that community comes at a premium,” said HERC community partner, Patrick McNeal, director of the North Flint Neighborhood Action Council . “That is why I believe that the Health Equity Report Card is vital for our community. It removes the barriers to accessing good information in a time of misinformation and disinformation. It seeks to make it easy enough for a resident to understand while assisting organizations that deal with our area’s disparities by having quality data.”

The HERC also identifies areas where more publicly accessible data is needed. “For example, the mental health category lists four indicators, but we know that more information is necessary to tell the full story of mental health needs in our community,” Uphold added.

“The findings in the Health Equity Report Card help identify areas with high disparity and those in need of intervention, like homicides by firearm, diabetes and life expectancy,” she said. “We hope to help other communities create similar report cards that focus on their important issues.” 

The project is funded by the Michigan Health Endowment Fund and the Blue Cross Blue Shield of Michigan Foundation . It is supported by partners from the Genesee County Health Department , the Greater Flint Health Coalition , Community Foundation of Greater Flint and the Greater Flint Taskforce on Racial Inequalities , among others.

By: Dalin Clark


  • Open access
  • Published: 26 March 2024

The effect of “typical case discussion and scenario simulation” on the critical thinking of midwifery students: Evidence from China

  • Yuji Wang 1 ,
  • Yijuan Peng 1 &
  • Yan Huang 1

BMC Medical Education volume  24 , Article number:  340 ( 2024 ) Cite this article


Assessment ability lies at the core of midwives’ capacity to judge and treat clinical problems effectively. Under the traditional “teacher-led and content-based” teaching method, in which teachers impart a large amount of knowledge while students engage in little active thinking or practice, the clinical assessment ability of midwifery students in China is mostly at a medium or low level. Improving the clinical assessment ability of midwifery students, especially their critical thinking, is therefore highly important in practical midwifery education. We implemented a new teaching program, “typical case discussion and scenario simulation”, in the Midwifery Health Assessment course: guided by typical cases, students were organized to participate actively in case discussions to promote active thinking and were encouraged to practice through scenario simulation. In this study, we aimed to evaluate the effect of this strategy on the critical thinking ability of midwifery students.

A total of 104 midwifery students in grades 16–19 at the West China School of Nursing, Sichuan University, were included as participants through convenience sampling. All the students completed the Midwifery Health Assessment course in the third year of university. Students in grades 16 and 17 were assigned to the control group, which received routine teaching in the Midwifery Health Assessment, while students in grades 18 and 19 were assigned to the experimental group, for which the “typical case discussion and scenario simulation” teaching mode was employed. The Critical Thinking Disposition Inventory-Chinese Version (CTDI-CV) and Midwifery Health Assessment Course Satisfaction Questionnaire were administered after the intervention.

After the intervention, the critical thinking ability of the experimental group was greater than that of the control group (284.81 ± 27.98 and 300.94 ± 31.67, p  = 0.008). Furthermore, the experimental group exhibited higher scores on the four dimensions of Open-Mindedness (40.56 ± 5.60 and 43.59 ± 4.90, p  = 0.005), Analyticity (42.83 ± 5.17 and 45.42 ± 5.72, p  = 0.020), Systematicity (38.79 ± 4.70 and 41.88 ± 6.11, p  = 0.006), and Critical Thinking Self-Confidence (41.35 ± 5.92 and 43.83 ± 5.89, p  = 0.039) than did the control group. The course satisfaction exhibited by the experimental group was greater than that exhibited by the control group (84.81 ± 8.49 and 90.19 ± 8.41, p  = 0.002).

The “typical case discussion and scenario simulation” class mode can improve the critical thinking ability of midwifery students and enhance their curriculum satisfaction. This approach carries a certain degree of promotional significance in medical education.

Typical case discussion and scenario simulation can improve midwifery students’ critical thinking ability.

Typical case discussion and scenario simulation can enhance students’ learning interest and guide students to learn independently.

Midwifery students were satisfied with the new teaching mode.


Maternal and neonatal health are important indicators of the level of development of a country’s economy, culture and health care. The positive impact of quality midwifery education on maternal and newborn health is acknowledged in the framework for action on strengthening quality midwifery education issued by the World Health Organization (WHO) [ 1 ]. Extensive evidence has shown that skilled midwifery care is crucial for reducing preventable maternal and neonatal mortality [ 2 , 3 , 4 ]. Clinical practice places high demands on the clinical thinking ability of midwives, which refers to the process by which medical personnel analyze and integrate data with professional medical knowledge in the context of diagnosis and treatment, and discover and solve problems through logical reasoning [ 5 ]. Critical thinking is a thoughtful process that is purposeful, disciplined, and self-directed and that aims to improve decisions and subsequent actions [ 6 ]. In 1986, the American Association of Colleges of Nursing formulated the “Higher Education Standards for Nursing Specialty”, which emphasize that critical thinking is the primary core competence that nursing graduates should possess [ 7 ]. Many studies have shown that critical thinking can help nurses detect, analyze and solve problems creatively in clinical work and is a key factor in their ability to make correct clinical decisions [ 8 , 9 , 10 ].

However, the traditional teaching method used for midwifery students in China is “teacher-led and content-based”, and it involves efficiently and conveniently imparting a large amount of knowledge to students over a short period. Students have long failed to engage in active thinking and active practice, and the cultivation of critical thinking has long been ignored [ 5 ]. As a result, the critical thinking ability of midwifery students in China is mostly at a medium or low level [ 5 ]. Therefore, it is necessary to develop a new teaching mode to improve the critical thinking ability of midwifery students.

In 2014, Professor Xuexin Zhang of Fudan University, Shanghai, China, proposed a novel teaching method: the divided class mode. The basic idea of this approach is to divide the class time into two parts. The teachers explain the theoretical knowledge in the first lesson, and the students discuss that knowledge in the second lesson. This approach emphasizes the guiding role of teachers and encourages and empowers students to take responsibility for their studies [ 11 ]. Research has shown that the divided class mode can improve students’ enthusiasm and initiative as well as teaching effectiveness [ 12 ].

The problem-originated clinical medical curriculum mode of teaching was first established at McMaster University in Canada in 1965. This model is based on typical clinical cases and a problem-oriented heuristic teaching model [ 13 ]. The process of teaching used in this approach is guided by typical cases with the goal of helping students combine theoretical knowledge and practical skills. This approach can enhance the enthusiasm and initiative of students by establishing an active learning atmosphere. Students are encouraged to discuss and analyze typical cases to promote their ability to digest and absorb theoretical knowledge. Research has shown that the problem-originated clinical medical curriculum teaching mode can enhance students’ confidence and improve their autonomous learning and exploration ability. Scenario simulation teaching can provide students with real scenarios, allowing them to practice and apply their knowledge in a safe environment [ 14 ], which can effectively improve their knowledge and clinical skills and enhance their self-confidence [ 15 , 16 ].

Based on the teaching concept of divided classes, our research team established a new teaching model of “typical case discussion and scenario simulation”. Half of the class time is allocated for students to discuss typical cases and carry out scenario simulations to promote their active thinking and active practice. The Midwifery Health Assessment is the final professional core course that midwifery students must take in our school before clinical practice. All students must complete the course in Grade 3. Teaching this course is important for cultivating the critical thinking and clinical assessment ability of midwifery students. Therefore, our team adopted the new teaching mode of "typical case discussion and scenario simulation" in the teaching of this course. This study explored the teaching mode’s ability to improve the critical thinking ability of midwifery students.

Study design

The study employed a quasi-experimental design.

Participants

A convenience sample of 104 third-year midwifery students who were enrolled in the Midwifery Health Assessment course volunteered to participate in this research at a large public university in Sichuan Province from February 2019 to June 2022 (grades 16 to 19). All the students completed the course in the third year of university. Students in grades 16 and 17 were assigned to the control group, which received the traditional teaching mode. Students in grades 18 and 19 were assigned to the experimental group, in which the “typical case discussion and scenario simulation” class mode was used. The exclusion criteria for midwifery students were as follows: (1) dropped out of school during the study, (2) took continuous leave from school for more than two weeks, or (3) were unable to complete the questionnaire. Questionnaires in which every item received the same answer were eliminated. No significant differences in students’ scores in their previous professional courses (Midwifery) were observed between the two groups. Textbooks, teachers, and teaching hours were the same for both groups.

Development of the “typical case discussion and scenario simulation” class mode

This study is based on the implementation of the new century higher education teaching reform project at Sichuan University. With the support of Sichuan University, we first established a “typical case discussion and scenario simulation” class mode team. The author of this paper was the head of the teaching reform project and served as a consultant, and the first author is responsible for supervising the implementation of the project. Second, the teaching team discussed and developed a standard process for the “typical case discussion and scenario simulation” class mode. Third, the entire team received intensive training in the standard process for the “typical case discussion and scenario simulation” class mode.

Implementation of the “typical case discussion and scenario simulation” class mode

Phase I (before class)

Before class, in accordance with the requirements for evaluating different periods of pregnancy, the teacher conceptualized typical cases and then discussed those cases with the teaching team and made any necessary modifications. After the completion of the discussion, the modified cases were released to the students through the class group. To ensure students’ interest, they were guided through the task of discovering and solving relevant problems using an autonomous learning approach.

Phase II (the first week)

Typical case discussion period. The Midwifery Health Assessment course was taught by 5 teachers and covered 5 health assessment periods, namely, the pregnancy preparation, pregnancy, delivery, puerperium and neonatal periods. The health assessment course focused on each period over 2 consecutive teaching weeks, and 2 lessons were taught per week. The first week focused on the discussion of typical cases. In the first lesson, teachers introduced typical cases, taught key knowledge or difficult evaluation content pertaining to the different periods, and explored the relevant knowledge framework. In the second lesson, teachers organized group discussions, case analyses and intergroup communications for the typical cases. They were also responsible for coordinating and encouraging students to participate actively in the discussion. After the discussion, teachers and students reviewed the definitions, treatments and evaluation points associated with the typical cases. The teachers also encouraged students to internalize knowledge by engaging in a process of summary and reflection to achieve the purpose of combining theory with practice.

Phase III (the second week)

Scenario simulation practice period. The second week focused on the scenario simulation practice period. In the first lesson, teachers reviewed the focus of assessment during the different periods and answered students’ questions. In the second lesson, students performed typical case assessment simulations in subgroups. After the simulation, the teachers commented on and summarized the students’ simulation evaluation and reviewed the evaluation points of typical cases to improve the students’ evaluation ability.

The organizational structure and implementation of the “typical case discussion and scenario simulation” class mode are shown in Fig. 1.

Fig. 1 “Typical case discussion and scenario simulation” teaching mode diagram

A demographic questionnaire designed for this purpose was used to collect relevant information from participants, including age, gender, single-child status, family location, experience with typical case discussion or scenario simulation and scores in previous professional courses (Midwifery).

The Critical Thinking Disposition Inventory-Chinese Version (CTDI-CV), developed by Peng et al., was used to evaluate the critical thinking ability of midwifery students [ 17 ]. The scale contains 70 items across seven dimensions: open-mindedness, truth-seeking, analyticity, systematicity, critical thinking self-confidence, inquisitiveness, and cognitive maturity. Each dimension contains 10 items, and each item is scored on a 6-point Likert scale, with 1 indicating “extremely agree” and 6 “extremely disagree”. The scale includes 30 positively worded items, which are scored from 6 (“extremely agree”) to 1 (“extremely disagree”), and 40 negatively worded items, which are scored from 1 (“extremely agree”) to 6 (“extremely disagree”). A total score of less than 210 indicates a negative critical thinking disposition, scores between 211 and 279 indicate an ambiguous disposition, scores of 280 or higher indicate a positive disposition, and scores of 350 or higher indicate strong performance. Each dimension score ranges from 10 to 60 points; a score of 30 points or fewer indicates a negative trait disposition, scores between 31 and 39 points indicate an ambiguous trait disposition, scores of 40 points or higher indicate a positive trait disposition, and scores of 50 points or higher indicate an extremely positive trait disposition. The Cronbach’s α coefficient of the scale was 0.90, indicating good internal consistency. The higher an individual’s score on this measure is, the better that individual’s critical thinking ability.
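The scoring rules described above (reverse-scoring of the 30 positively worded items and band cut-offs on the 70-item total) can be sketched as follows. The item flags, example responses, and the handling of the boundary score 210 are illustrative assumptions, not the actual inventory.

```python
# Sketch of CTDI-CV-style scoring as described in the text. Item ordering
# and the example responses are illustrative assumptions.

def score_item(response: int, positive: bool) -> int:
    """response: 1 = 'extremely agree' ... 6 = 'extremely disagree'.
    Positively worded items are reverse-scored so agreement earns more points."""
    assert 1 <= response <= 6
    return 7 - response if positive else response

def classify_total(total: int) -> str:
    # Band cut-offs for the 70-item total, per the scale description.
    if total >= 350:
        return "strong critical thinking"
    if total >= 280:
        return "positive critical thinking"
    if total > 210:
        return "ambiguous"
    return "negative critical thinking"

# A respondent who answers 'extremely agree' (1) to every item:
positive_flags = [True] * 30 + [False] * 40
responses = [1] * 70
total = sum(score_item(r, p) for r, p in zip(responses, positive_flags))
print(total, classify_total(total))  # → 220 ambiguous (30*6 + 40*1 = 220)
```

The same reverse-scoring logic applies per dimension (10 items, 10–60 points) with the trait cut-offs given above.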

The evaluation of teaching results was based on a questionnaire used to assess undergraduate course satisfaction, and the researchers deleted and modified items in the questionnaire to suit the context of the “typical case discussion and scenario simulation” teaching mode. Two rounds of discussion were held within the study group to form the final version of the Midwifery Health Assessment satisfaction questionnaire. The questionnaire evaluates the effect of teaching in terms of three dimensions, namely, curriculum content, curriculum teaching and curriculum evaluation. The questionnaire contains 21 items, each of which is scored on a 5-point Likert scale, with 1 indicating “extremely disagree” and 5 representing “extremely agree”. The higher the score is, the better the teaching effect.

Data collection and statistical analysis

We input the survey data into the “Wenjuanxing” platform ( https://www.wjx.cn/ ), which specializes in questionnaire services. At the beginning of the study, an electronic questionnaire was distributed to the students in the control group via student WeChat and QQ groups for data collection. After the intervention, an electronic questionnaire was distributed to the students in the experimental group for data collection in the final class of the Midwifery Health Assessment course. All the data were collected by the first author (Yuji Wang). When students had questions about the survey items, the first author (Yuji Wang) immediately explained the items in detail. To ensure the integrity of the questionnaire, the platform required all the items to be answered before submission.

Statistical Package for the Social Sciences version 26.0 (SPSS 26.0) was used for data analysis. The Shapiro‒Wilk test was used to assess the normality of the data. Normally distributed measurement data are expressed as the mean ± standard deviation (x̄ ± s), and independent-samples t tests were used for comparisons between groups. Categorical data are presented as numbers of cases (%), and the chi-square test was used for comparisons. A p value < 0.05 was considered statistically significant.

Ethical considerations

The study was funded by the New Century Teaching Reform Project of Sichuan University and passed the relevant ethical review. Oral informed consent was obtained from all individual participants in the study.

Characteristics of the participants

A total of 104 third-year midwifery students were enrolled from February 2019 to June 2022, and 98.1% (102/104) of these students completed the survey. Two invalid questionnaires that featured the same answer for every item were eliminated. A total of 100 participants were ultimately included in the analysis. Among the participants, 48 students were assigned to the control group, and 52 students were assigned to the experimental group. The age of the students ranged from 19 to 22 years; the mean age of the control group was 20.50 years (SD = 0.61), and the mean age of the experimental group was 20.63 years (SD = 0.65). Of the 100 students who participated in the study, the majority (96.0%) were women. No significant differences were observed between the intervention and control groups in terms of students’ demographic information (i.e., age, gender, status as an only child, or family location), experience with scenario simulation or typical case discussion, or scores in previous Midwifery courses (Table 1).

Examining the differences in critical thinking ability between the two groups

This study aimed to evaluate the effect of the new “typical case discussion and scenario simulation” teaching mode on the critical thinking ability of midwifery students. Independent-samples t tests were used to examine between-group differences in critical thinking ability (Table 2). The total critical thinking score of the experimental group was higher than that of the control group (300.94 ± 31.67 vs. 284.81 ± 27.98, p = 0.008). Differences in four dimensions (control vs. experimental) were also statistically significant: Open-Mindedness (40.56 ± 5.60 vs. 43.59 ± 4.90, p = 0.005), Analyticity (42.83 ± 5.17 vs. 45.42 ± 5.72, p = 0.020), Systematicity (38.79 ± 4.70 vs. 41.88 ± 6.11, p = 0.006), and Critical Thinking Self-Confidence (41.35 ± 5.92 vs. 43.83 ± 5.89, p = 0.039).
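As a quick plausibility check (not the authors' code), the reported comparison of total scores can be reproduced from the published summary statistics alone using SciPy's `ttest_ind_from_stats`:

```python
from scipy import stats

# Total CTDI-CV score: control (n=48) vs. experimental (n=52), mean and SD
t, p = stats.ttest_ind_from_stats(
    mean1=284.81, std1=27.98, nobs1=48,   # control group
    mean2=300.94, std2=31.67, nobs2=52,   # experimental group
)
# t is negative because the control mean is lower; p matches the reported 0.008
print(f"t = {t:.2f}, p = {p:.3f}")
```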

Examining the differences in curriculum satisfaction between the two groups

Independent-samples t tests were also used to evaluate the effect of the new “typical case discussion and scenario simulation” teaching mode on course satisfaction (Table 3). Overall curriculum satisfaction was higher in the experimental group than in the control group (90.19 ± 8.41 vs. 84.81 ± 8.49, p = 0.002), as were the scores on all three dimensions (control vs. experimental): curriculum content (20.83 ± 1.96 vs. 22.17 ± 2.23, p = 0.002), curriculum teaching (34.16 ± 3.89 vs. 36.59 ± 3.66, p = 0.002), and curriculum evaluation (29.81 ± 3.27 vs. 31.42 ± 3.19, p = 0.015).

Midwifery is practical, intensive work. To ensure maternal and child safety, midwives must make decisions and act quickly; they therefore need both critical thinking ability and clinical decision-making ability [ 18 ]. In addition, the Australian Nursing and Midwifery Accreditation Council (ANMAC) regulates the educational requirements of programs leading to registration as a midwife; under these standards, education providers must incorporate learning activities into curricula that encourage the development and application of critical thinking and reflective practice [ 19 ]. Cultivating the critical thinking ability of midwifery students is thus an urgent problem that must be solved. However, under the traditional “teacher-led, content-based” teaching method, the critical thinking ability of midwifery students in China is mostly at a medium or low level. To improve it, our research team established a new teaching model, the “typical case discussion and scenario simulation” class mode, and applied it to the core midwifery course Midwifery Health Assessment. This study investigated the implementation of this novel, systematic and structured teaching model for midwifery students and provides evidence on how to improve the critical thinking ability of midwives.

The results showed that the total CTDI-CV score of the experimental group was higher than that of the control group, indicating that the “typical case discussion and scenario simulation” class mode had a positive effect on the cultivation of students’ critical thinking ability, a conclusion similar to the findings of Holdsworth et al. [ 20 ], Lapkin et al. [ 21 ] and Demirören et al. [ 22 ]. Several factors may explain these results. The core aim of the typical case discussion teaching mode is to raise questions based on typical clinical cases and to provide heuristic teaching [ 23 ]; asking questions grounded in specific clinical cases enables students to engage in targeted learning. Moreover, scenario simulation allows students to draw on inner experiences and emotions and to participate actively in curriculum practice, which strengthens their ability to remember and understand knowledge [ 24 ]. In the divided class mode, half of the class time was allocated to the students. This method emphasizes the guiding role of teachers while encouraging and empowering students to take responsibility for their learning, so that they think, communicate and discuss actively [ 22 , 23 ]. Furthermore, this approach created opportunities for students to analyze and consider problems independently and gave them sufficient time to internalize and absorb knowledge and deepen their understanding of it, which can increase their confidence in addressing such problems and improve their critical thinking ability [ 12 , 25 , 26 ].

In addition, the results showed that, except for Truth-Seeking and Systematicity, the other five dimensions were all positive, similar to the results reported by Atakro et al. and Sun et al. [ 27 , 28 ]. After the intervention, the Systematicity score became positive, suggesting that the new teaching mode can help students deal with problems in an organized and purposeful way. However, Truth-Seeking still did not become positive. This disposition reflects intellectual honesty, i.e., the courage to ask questions and the commitment to pursue knowledge honestly and objectively even when the findings do not support one’s self-interest [ 29 ]. Studies have linked this weakness to the traditional teaching mode [ 30 ], which focuses on knowledge infusion, helps students memorize as much as possible in a short time, and does not guide students to seek knowledge with sincerity and objectivity. Therefore, future educational practice should focus on cultivating students’ truth-seeking and systematizing abilities.

Student evaluative feedback is an important way to test the effectiveness of a teaching mode, and understanding students’ evaluations of classroom teaching is key to promoting teaching reform and improving teaching quality. We therefore distributed a satisfaction questionnaire on the Midwifery Health Assessment curriculum taught with the “typical case discussion and scenario simulation” class mode, investigating curriculum satisfaction in terms of three dimensions (curriculum content, curriculum teaching and curriculum evaluation). The satisfaction scores for each dimension increased significantly, suggesting that the new teaching method can enrich the teaching content, diversify the teaching mode and improve students’ evaluations of the curriculum.

In summary, the “typical case discussion and scenario simulation” class mode takes typical cases as its main content and deepens students’ understanding of that content through group discussion and scenario simulation. It respects students’ agency in curriculum learning and encourages them to detect, analyze and solve problems, with the goal of improving their critical thinking ability; it can also enhance curriculum satisfaction. We recommend that these methods continue to be used in future curriculum teaching.

This study has several limitations. First, the representativeness of the sample may be limited because the participants were recruited from specific universities in China. Second, we used historical controls, which are less rigorous than concurrent controls. Third, online self-report surveys are susceptible to response bias, although we included quality control measures during data collection. Fourth, we did not administer the same critical thinking instrument, the CTDI-CV, to both groups before the intervention; instead, we used grades in previous professional midwifery courses as a substitute baseline comparison. This may not be a sufficient substitute, although the comparison is still informative because those grades involved some evaluation of critical thinking. In light of these limitations, future multicenter, concurrently controlled studies should be conducted. Nonetheless, this study also has several strengths. First, no change of teachers or learning materials occurred after the Midwifery Health Assessment course began, ensuring that the experimental and control groups had the same teaching materials, teachers and teaching hours. In addition, to ensure the quality of the research, the first author participated in the entirety of the course teaching.

The “typical case discussion and scenario simulation” class mode can improve the critical thinking of midwifery students, which helps to ensure maternal and child safety. Students were highly satisfied with the new teaching mode, and the approach is worth promoting more widely, although it also places higher demands on both teachers and students.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

1. World Health Organisation. Strengthening quality midwifery education for Universal Health Coverage 2030. 2019. https://www.who.int/maternal_child_adolescent/topics/quality-of-care/midwifery/strengthening-midwifery-education/en/ (accessed 21.01.20).

2. Akombi BJ, Renzaho AM. Perinatal mortality in Sub-Saharan Africa: a meta-analysis of demographic and health surveys. Ann Glob Health. 2019;85(1):106.

3. Campbell OM, Graham WJ. Strategies for reducing maternal mortality: getting on with what works. Lancet. 2006;368(9543):1284–99.

4. Gage AD, Carnes F, Blossom J, Aluvaala J, Amatya A, Mahat K, Malata A, Roder-DeWan S, Twum-Danso N, Yahya T, et al. In low- and middle-income countries, is delivery in high-quality obstetric facilities geographically feasible? Health Aff (Millwood). 2019;38(9):1576–84.

5. Xing C, Zhou Y, Li M, Wu Q, Zhou Q, Wang Q, Liu X. The effects of CPBL + SBAR teaching mode among the nursing students. Nurse Educ Today. 2021;100:104828.

6. Carter AG, Creedy DK, Sidebotham M. Critical thinking evaluation in reflective writing: development and testing of Carter assessment of critical thinking in midwifery (Reflection). Midwifery. 2017;54:73–80.

7. Yeh SL, Lin CT, Wang LH, Lin CC, Ma CT, Han CY. The outcomes of an interprofessional simulation program for new graduate nurses. Int J Environ Res Public Health. 2022;19(21):13839.

8. Chang MJ, Chang YJ, Kuo SH, Yang YH, Chou FH. Relationships between critical thinking ability and nursing competence in clinical nurses. J Clin Nurs. 2011;20(21–22):3224–32.

9. Shoulders B, Follett C, Eason J. Enhancing critical thinking in clinical practice: implications for critical and acute care nurses. Dimens Crit Care Nurs. 2014;33(4):207–14.

10. Jalalpour H, Jahani S, Asadizaker M, Sharhani A, Heybar H. The impact of critical thinking training using critical thinking cards on clinical decision-making of CCU nurses. J Family Med Prim Care. 2021;10(10):3650–6.

11. Xuexin Z. PAD class: a new attempt in university teaching reform. Fudan Educ Forum. 2014;12(5):5–10 [in Chinese].

12. Zhai J, Dai L, Peng C, Dong B, Jia Y, Yang C. Application of the presentation-assimilation-discussion class in oral pathology teaching. J Dent Educ. 2022;86(1):4–11.

13. Colliver JA. Effectiveness of problem-based learning curricula: research and theory. Acad Med. 2000;75(3):259–66.

14. Bryant K, Aebersold ML, Jeffries PR, Kardong-Edgren S. Innovations in simulation: nursing leaders’ exchange of best practices. Clin Simul Nurs. 2020;41:33-40.e31.

15. Cicero MX, Whitfill T, Walsh B, Diaz MC, Arteaga G, Scherzer DJ, Goldberg S, Madhok M, Bowen A, Paesano G, et al. 60 seconds to survival: a multisite study of a screen-based simulation to improve prehospital providers disaster triage skills. AEM Educ Train. 2018;2(2):100–6.

16. Lee J, Lee H, Kim S, Choi M, Ko IS, Bae J, Kim SH. Debriefing methods and learning outcomes in simulation nursing education: a systematic review and meta-analysis. Nurse Educ Today. 2020;87:104345.

17. Peng G, Wang J, Chen M, Chen H, Bai S, Li J, Li Y, Cai J, Wang L, Yin. Validity and reliability of the Chinese critical thinking disposition inventory. Chin J Nurs. 2004;39(09):7–10 [in Chinese].

18. Papathanasiou IV, Kleisiaris CF, Fradelos EC, Kakou K, Kourkouta L. Critical thinking: the development of an essential skill for nursing students. Acta Inform Med. 2014;22(4):283–6.

19. Australian Nursing and Midwifery Accreditation Council. Midwife accreditation standards 2014. Canberra: ANMAC; 2014. https://anmac.org.au/document/midwife-accreditation-standards-2014 .

20. Holdsworth C, Skinner EH, Delany CM. Using simulation pedagogy to teach clinical education skills: a randomized trial. Physiother Theory Pract. 2016;32(4):284–95.

21. Lapkin S, Fernandez R, Levett-Jones T, Bellchambers H. The effectiveness of using human patient simulation manikins in the teaching of clinical reasoning skills to undergraduate nursing students: a systematic review. JBI Libr Syst Rev. 2010;8(16):661–94.

22. Demirören M, Turan S, Öztuna D. Medical students’ self-efficacy in problem-based learning and its relationship with self-regulated learning. Med Educ Online. 2016;21:30049.

23. Spaulding WB, Neufeld VR. Regionalization of medical education at McMaster University. Br Med J. 1973;3(5871):95–8.

24. Rossler KL, Kimble LP. Capturing readiness to learn and collaboration as explored with an interprofessional simulation scenario: a mixed-methods research study. Nurse Educ Today. 2016;36:348–53.

25. Yang YL, Luo L, Qian Y, Yang F. Cultivation of undergraduates’ self-regulated learning ability in medical genetics based on PAD class. Yi Chuan. 2020;42(11):1133–9.

26. Felton A, Wright N. Simulation in mental health nurse education: the development, implementation and evaluation of an educational innovation. Nurse Educ Pract. 2017;26:46–52.

27. Atakro CA, Armah E, Menlah A, Garti I, Addo SB, Adatara P, Boni GS. Clinical placement experiences by undergraduate nursing students in selected teaching hospitals in Ghana. BMC Nurs. 2019;18:1.

28. Sun Y, Yin Y, Wang J, Ding Z, Wang D, Zhang Y, Zhang J, Wang Y. Critical thinking abilities among newly graduated nurses: a cross-sectional survey study in China. Nurs Open. 2023;10(3):1383–92.

29. Wangensteen S, Johansson IS, Björkström ME, Nordström G. Critical thinking dispositions among newly graduated nurses. J Adv Nurs. 2010;66(10):2170–81.

30. Salsali M, Tajvidi M, Ghiyasvandian S. Critical thinking dispositions of nursing students in Asian and non-Asian countries: a literature review. Glob J Health Sci. 2013;5(6):172–8.


Acknowledgements

Not applicable.

The study was supported by Sichuan University’s New Century Education and Teaching Reform Project (SCU9316).

Author information

Yuji Wang and Yijuan Peng are co-first authors.

Authors and Affiliations

Department of Nursing, West China Second University Hospital, Sichuan University/West China School of Nursing, Sichuan University/Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), No. 20 Third Section, Renmin South Road, Chengdu, Sichuan Province, 610041, China

Yuji Wang, Yijuan Peng & Yan Huang


Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Yuji Wang, Yijuan Peng and Yan Huang. The first draft of the manuscript was written by Yuji Wang and Yijuan Peng, and all authors commented on previous versions of the manuscript.

Corresponding author

Correspondence to Yan Huang .

Ethics declarations

Ethics approval and consent to participate

This study was supported by Sichuan University and approved by the Ethics Review Committee of West China School of Nursing, Sichuan University (approval number 2021220). Because this was teaching research posing no harm to participants, only oral informed consent was obtained from the participants, including teachers and midwifery students; this consent procedure was also approved by the Ethics Review Committee. We confirm that all methods were performed in accordance with the relevant guidelines and regulations.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Wang, Y., Peng, Y. & Huang, Y. The effect of “typical case discussion and scenario simulation” on the critical thinking of midwifery students: Evidence from China. BMC Med Educ 24 , 340 (2024). https://doi.org/10.1186/s12909-024-05127-5


Received : 19 November 2022

Accepted : 02 February 2024

Published : 26 March 2024

DOI : https://doi.org/10.1186/s12909-024-05127-5


  • Medical education
  • Critical thinking
  • Nurse midwives

BMC Medical Education

ISSN: 1472-6920


ORIGINAL RESEARCH article

Factors influencing marine ecological environment governance toward sustainability: a case study of Zhejiang Province (provisionally accepted)

  • 1 Ningbo University, China

The final, formatted version of the article will be published soon.

At present, the marine ecological environment faces enormous pressure from human activities, and coordinated governance by multiple entities is urgently needed to ensure that the marine ecological environment can continue to meet the needs of sustainable development. Marine ecological environmental governance plays multiple roles in the sustainable development of the ocean. Most existing studies have explored this field from the perspective of the government and the public, while failing to adequately account for the factors influencing enterprises’ participation in marine ecological environmental governance. This paper provides empirical research on those influencing factors. Drawing on the existing literature and an empirical survey (213 middle managers from 68 coastal enterprises in Zhejiang, China), this study extracts eight core factors that influence corporate participation in marine ecosystem governance and analyzes them with the Fuzzy Decision-Making Trial and Evaluation Laboratory (Fuzzy DEMATEL) approach. Experts with Chinese backgrounds further elucidated the complex interdependencies among the factors, on the basis of which the key influencing factors were identified. The empirical results indicate that government attention and support, legal and regulatory requirements, and cost-benefit accounting have a positive net effect on corporate participation in marine ecosystem management; when these factors improve, they drive improvements in the other factors (corporate capital capability, corporate social responsibility, government enforcement and appraisal, the attention of corporate leaders, and corporate internal management systems). Additionally, interviews with Chinese business people support the robustness of the findings and suggest that policymakers cannot ignore government enforcement and assessment efforts.
Overall, the study findings can help advance corporate participation in marine environmental governance.
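For illustration, the core computation underlying the DEMATEL analysis named above can be sketched as follows. This is a crisp (non-fuzzy) sketch with an invented direct-influence matrix, not the study's data; the fuzzy variant additionally uses triangular fuzzy scores and defuzzification before this step.

```python
import numpy as np

# Hypothetical direct-influence matrix for 4 factors, pairwise scores 0-4
A = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [0, 1, 0, 3],
    [1, 0, 1, 0],
], dtype=float)

n = A.shape[0]
# Normalize by the largest row/column sum
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s

# Total-relation matrix T = D (I - D)^-1
T = D @ np.linalg.inv(np.eye(n) - D)

r = T.sum(axis=1)      # influence each factor gives
c = T.sum(axis=0)      # influence each factor receives
prominence = r + c     # overall importance of the factor
relation = r - c       # > 0: net cause (driver); < 0: net effect
for i in range(n):
    role = "cause" if relation[i] > 0 else "effect"
    print(f"F{i+1}: prominence={prominence[i]:.2f}, relation={relation[i]:+.2f} ({role})")
```

Factors with a positive net relation (r − c) correspond to the "positive net effect" drivers the abstract describes.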

Keywords: sustainable development, corporate involvement, marine ecosystem, influencing factors, decision-making

Received: 28 Dec 2023; Accepted: 29 Mar 2024.

Copyright: © 2024 Chen, He and He. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mr. Yuejun He, Ningbo University, Ningbo, China


Coupling coordination between agriculture and tourism in the Qinba Mountain area: a case study of Shanyang County, Shaanxi Province

  • Published: 29 March 2024

Cite this article

  • Ruilong Gao 1 &
  • Siyu Zheng   ORCID: orcid.org/0009-0002-8395-9817 1  

The integrated development of agriculture and tourism is one of the important ways to promote the revitalization of rural industries. This paper measures the coupling and coordination level of agricultural and tourism development in Shanyang County, Qinba Mountain area, over the period 2011–2020, by constructing an index system and a coupling coordination model. The study reveals the following findings: (1) From 2011 to 2020, the evaluation index of agricultural development in Shanyang County fluctuated significantly, with an overall downward trend. In contrast, the evaluation index of tourism development grew rapidly and surpassed agriculture from 2018 onwards. (2) There is a significant coupling relationship between the agricultural and tourism industry systems in Shanyang County. The coupling coordination has slowly increased, evolving from near-incoordination to weak coordination and then to a primary level of coordination. Therefore, Shanyang County should attach importance to the fundamental role of agriculture in the coordinated development of agriculture and tourism, pursue joint regional development, and extend the benefits to the whole population.
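The coupling coordination model mentioned above is conventionally computed as a coupling degree C = 2√(U₁U₂)/(U₁+U₂), a comprehensive evaluation T = αU₁ + βU₂, and a coordination degree D = √(CT). A minimal sketch under that standard formulation (the index values below are illustrative, not Shanyang County's data):

```python
import math

def coupling_coordination(u1: float, u2: float,
                          alpha: float = 0.5, beta: float = 0.5) -> float:
    """u1, u2: normalized development indices of the two systems, in (0, 1]."""
    c = 2 * math.sqrt(u1 * u2) / (u1 + u2)   # coupling degree C
    t = alpha * u1 + beta * u2               # comprehensive evaluation index T
    return math.sqrt(c * t)                  # coupling coordination degree D

# Example: hypothetical agriculture index 0.35 vs. tourism index 0.55
d = coupling_coordination(0.35, 0.55)
print(f"D = {d:.3f}")  # prints D = 0.662
```

D values are then typically binned into grades (e.g. near-incoordination, weak coordination, primary coordination), which is how trajectories like the one described above are classified.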


Data availability statement

The data that support the findings of this study are openly available in China's economic and social big data research platform at https://data.cnki.net/yearBook?type=type&code=A .


Zheng, S., & Lin, G. (2017). Coordination of leisure agriculture, rural tourism and new countryside construction in Fujian Province. Fujian Journal of Agricultural Sciences, 32 (3), 324–331.

Zhong, Y., Tang, L., & Hu, P. (2020). The mechanism and empirical analysis of the integration of agriculture and tourism to promote the optimization and upgrading of rural industrial structure: A case study of national demonstration counties of leisure agriculture and rural tourism. Chinese Rural Economy, 7 , 80–98.

Zhou, G. (2018). The coupling between ECO-Agriculture and Eco-Tourism in Jiangsu Province. Chinese Journal of Agricultural Resources and Regional Planning, 39 (04), 226–231.

Zhou, P., Shen, Y., & Li, A. (2021). Can agri-tourism integration promote High-quality agricultural development? An empirical test based on provincial panel data. Macroeconomics, 10 , 117–130.

Zhou, W., Liu, Q., & Li, Q. (2023). The coupling of rural revitalization and rural tourism in former deep poverty areas—Taking Didigu Village, Ebian Yi Autonomous County as an example. Journal of Resources and Ecology , 14 (5).

Zu, J., Hao, J., Chen, L., Zhang, Y., Wang, J., Kang, L., & Guo, J. (2018). Analysis on trinity connotation and approach to protect quantity, quality and ecology of cultivated land. Journal of China Agricultural University, 23 (07), 84–95.

Download references

Acknowledgements

Funding was provided by Research Project on Major Theoretical and Practical Problems in the Social Sciences in Shanxi Province (Grant No. 2022ZD0804).

Author information

Authors and Affiliations

Xi’an University of Architecture and Technology, Xi’an, China

Ruilong Gao & Siyu Zheng

Corresponding author

Correspondence to Siyu Zheng .

Ethics declarations

Conflict of interest

Authors have no conflict of interest to declare.

About this article

Gao, R., Zheng, S. Coupling coordination between agriculture and tourism in the Qinba Mountain area: a case study of Shanyang County, Shanxi Province. Environ Dev Sustain (2024). https://doi.org/10.1007/s10668-024-04747-7

Received: 25 June 2023

Accepted: 06 March 2024

Published: 29 March 2024

Keywords

  • Agriculture
  • Coupling coordination degree
  • Shanyang County

Midwife-led birthing centres in Bangladesh, Pakistan and Uganda: an economic evaluation of case study sites

Volume 9, Issue 3

  • Emily J Callander 1 (http://orcid.org/0000-0001-7233-6804),
  • Vanessa Scarf 1,
  • Andrea Nove 2,
  • Caroline Homer 3 (http://orcid.org/0000-0002-7454-3011),
  • Alayna Carrandi 4 (http://orcid.org/0000-0002-5711-0417),
  • Abu Sayeed Abdullah 5,
  • Sheila Clow 6,
  • Abdul Halim 5,
  • Scovia Nalugo Mbalinda 7,
  • Rose Chalo Nabirye 8,
  • AKM Fazlur Rahman 5,
  • Saad Ibrahim Rasheed 9,
  • Arslan Munir Turk 9,
  • Oliva Bazirete 2, 10,
  • Sabera Turkmani 1, 3,
  • Mandy Forrester 11,
  • Shree Mandke 11,
  • Sally Pairman 11,
  • Martin Boyce 2
  • 1 Faculty of Health, University of Technology Sydney, Sydney, New South Wales, Australia
  • 2 Novametrics Ltd, Duffield, UK
  • 3 Burnet Institute, Melbourne, Victoria, Australia
  • 4 Monash University School of Public Health and Preventive Medicine, Melbourne, Victoria, Australia
  • 5 Centre for Injury Prevention and Research, Dhaka, Bangladesh
  • 6 University of Cape Town, Cape Town, South Africa
  • 7 Makerere University, Kampala, Uganda
  • 8 Busitema University, Tororo, Uganda
  • 9 Research and Development Solutions, Islamabad, Pakistan
  • 10 University of Rwanda, Kigali, Rwanda
  • 11 International Confederation Of Midwives, The Hague, The Netherlands
  • Correspondence to Professor Emily J Callander; Emily.callander@uts.edu.au

Introduction Achieving the Sustainable Development Goals to reduce maternal and neonatal mortality rates will require the expansion and strengthening of quality maternal health services. Midwife-led birth centres (MLBCs) are an alternative to hospital-based care for low-risk pregnancies where the lead professional at the time of birth is a trained midwife. These have been used in many countries to improve birth outcomes.

Methods The cost analysis used primary data collected from four MLBCs in each of Bangladesh, Pakistan and Uganda (n=12 MLBC sites). A modelled cost-effectiveness analysis was conducted to compare the incremental cost-effectiveness ratio (ICER), measured as incremental cost per disability-adjusted life-year (DALY) averted, of MLBCs with standard care in each country. Results were presented in 2022 US dollars.

Results Cost per birth in MLBCs varied greatly within and between countries, from US$21 per birth at site 3 in Bangladesh to US$2374 at site 2 in Uganda. Midwife salary and facility operation costs were the primary drivers of costs in most MLBCs. Six of the 12 MLBCs produced better health outcomes at a lower cost than standard care (ie, dominated standard care), and three produced better health outcomes at a higher cost, with ICERs ranging from US$571/DALY averted to US$55 942/DALY averted.

Conclusion MLBCs appear to be able to produce better health outcomes at lower cost or be highly cost-effective compared with standard care. Costs do vary across sites and settings, and so further exploration of costs and cost-effectiveness as a part of implementation and establishment activities should be a priority.

  • Maternal health
  • Health economics

Data availability statement

No data are available. Ethics approval prohibits data sharing.

https://doi.org/10.1136/bmjgh-2023-013643

WHAT IS ALREADY KNOWN ON THIS TOPIC

Midwife-led birth centres (MLBCs) have promising clinical evidence to support their implementation in low-income and middle-income countries, but there is an absence of evidence for costs and cost-effectiveness of implementing MLBCs relative to standard care.

WHAT THIS STUDY ADDS

This economic evaluation is the first study to quantify the real-world operation costs of MLBCs outside of high-income country settings. Our findings from Bangladesh, Pakistan and Uganda showed MLBCs can be cost saving or cost-effective relative to standard care, and thus appear to be broadly consistent with results from high-income country settings.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

Our methodology, including a codesigned data collection tool with country researchers, highlighted the importance of close collaboration with local health service teams to identify the context of expenditure. The implementation of MLBCs in low-income and middle-income countries could be cost saving and cost-effective at small or larger scales, once contextual factors are considered.

Introduction

The United Nations has set targets within the Sustainable Development Goals (SDGs) to reduce maternal and neonatal mortality. 1 Also featured in the SDGs is universal access to healthcare—ensuring all people, regardless of location, have access to affordable and appropriate healthcare. 1 Achieving these dual goals is a challenge for all countries, particularly low-income and middle-income countries (LMICs), where maternal and neonatal mortality is highest, 2 3 as this will generally require improving service access and quality, alongside expanding services.

Increasing and promoting facility-based birth has been the main strategy for reducing maternal and neonatal mortality in many LMICs. 4 However, increased rates of births in a facility do not directly translate to reduced mortality if the facilities provide poor-quality care. 5 Regional and global disparities in maternity care across wealth quintiles and geographical locations, 5 alongside service challenges regarding funding and resources (including staffing and training), 6 pose significant hurdles to upscaling access to safe, high-quality maternity care.

High-income countries have increasingly taken a medicalised approach to maternity care. 7 8 While this approach sees low mortality rates, 8 there is a concern that the pendulum has swung too far. High rates of medical intervention during childbirth, such as caesarean birth and labour inductions, have led to short-term and long-term harms 9–12 and high and rapidly increasing costs per birth, 10 12 13 which may be becoming unaffordable even in high-income countries. 13 While many lessons can be learnt from models of care in high-income countries, 8 these may not represent the most effective and efficient path forward to achieving the SDGs in LMICs.

Midwife-led birth centres (MLBCs), where the lead healthcare professional at the time of birth is a midwife, are often seen as an alternative to hospital-based care for low-risk pregnancies and have been used in many countries. 14 This model of care has been associated with increased rates of maternity service utilisation and reported satisfaction among women, strengthened networks of care and reduced rates of unnecessary interventions during childbirth. 14 As such, MLBCs may offer an appropriate option for providing maternity care in LMICs for women with uncomplicated pregnancies. There is, however, an absence of evidence about the costs associated with the establishment and operation of MLBCs and estimates around their cost-effectiveness relative to standard care in LMICs.

The objective of this study was to identify the costs of operating MLBCs in real-world LMIC settings, and to estimate their cost-effectiveness relative to standard care. We used a case study approach, with four MLBC sites in each of Bangladesh, Pakistan and Uganda (12 sites in total), to collect data on costs and outcomes of MLBCs and conduct a modelled cost-effectiveness analysis. The purpose of the study was to inform decision-making about the expansion of this model of care. The decision-making questions were as follows: (1) what would it cost to operate additional MLBCs in LMICs and (2) what would be the cost-effectiveness of additional MLBCs in LMICs?

Study setting and location

Bangladesh, Pakistan and Uganda were selected to participate in this study, based on the findings of a global literature review and survey 15 and consultation with the project’s advisory group. The advisory group consisted of experts in MLBCs from high-income, middle-income and low-income contexts and representatives of the International Confederation of Midwives, WHO, United Nations Population Fund, Bill & Melinda Gates Foundation and World Bank. The main inclusion criteria were as follows: (a) the country was classed by the World Bank in 2022 as low-income, lower-middle-income or upper-middle-income; (b) there was evidence from the literature and the survey that the country had at least four MLBCs that were either in the public sector or well integrated within the national health system; (c) there was good research capacity within the country; and (d) data were expected to be available for this economic analysis. Each country that met the inclusion criteria was invited to participate through the national Ministry of Health and through the International Confederation of Midwives member association(s). National research teams were recruited by the International Confederation of Midwives, and these teams identified four MLBC sites per country for inclusion. Site selection was based on a combination of representativeness and feasibility and informed by a desk review of the literature and consultation with the national Ministry of Health, the International Confederation of Midwives member association, the national research team, the site manager(s) and other relevant stakeholders.

Study population

For the purposes of this study, we adopted the following definition of an MLBC: a dedicated space offering childbirth care, in which midwives take primary clinical responsibility for birthing care. Antenatal and postpartum care may also have been provided, but this was not essential for classification as an MLBC. Most of the 12 MLBC sites (n=10 sites), including all the Ugandan MLBCs, were freestanding, that is, on a site separate from a health facility to which the MLBC could refer women if needed. The remaining MLBCs (n=2 sites) were on the same site as a referral facility ( online supplemental appendix 1 ). Most MLBCs (n=8 sites) were in the private sector (including for-profit and not-for-profit), two were public–private partnerships (ie, public-sector facilities supported by non-governmental organisations) and two were in the public sector.

The comparison was current ‘standard care’ in each country. This could have included a combination of hospital-based birth and home birth. As the decision-making question was concerned with expansion of MLBCs within the local setting, this heterogeneity in comparison was considered appropriate.

Study design

We conducted a cost analysis of MLBCs using primary data collection from routine data captured by the MLBCs at each of the four sites within each of the three countries. Data were collected between October and December 2022. The data collection tool covered costs of operating the MLBC and outcomes of women, and was codesigned with study teams from each country to ensure data availability. Data items related to costs of facility operation included utilities, staff salaries, staff training and equipment purchase and hire (online supplemental appendix 2). The included costs represent the annual costs of operating an MLBC. Facility purchase costs were considered sunk costs and not included. Data related to health outcomes included transfer to other facilities, caesarean birth at other facilities, morbidity (eg, incidence of haemorrhage, third- or fourth-degree tears or other serious morbidities), maternal mortality, stillbirth, neonatal mortality and any costs paid by the women or their families.

A modelled cost-effectiveness analysis was then conducted, comparing MLBCs with standard care. This took the form of a decision analysis tree with 1000 hypothetical women (online supplemental appendix 3). Women entered the model immediately prior to birth. In the MLBC arm, they were then either transferred or gave birth at the facility. All women who were transferred then had either a vaginal birth or a caesarean birth, and then had either no morbidities or morbidities. All women who gave birth in an MLBC had a vaginal birth and then had either no morbidities or morbidities. Data for the MLBC arm were collected from primary data from study sites. For current standard care within each country, rates of caesarean birth, stillbirth and neonatal death were obtained from the UNICEF Data Warehouse. 16 Rates of maternal mortality were obtained from WHO modelled estimates, 2 and maternal morbidity rates were obtained from the literature (online supplemental appendix 4). 17 18
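The branch logic described above can be sketched as a simple expected-value roll-up over a hypothetical cohort. This is an illustrative reconstruction, not the authors' model: the function name and every probability and unit cost below are invented placeholders.

```python
def expected_outcomes(n_women, p_transfer, p_caesarean_if_transferred,
                      p_morbidity, cost_mlbc, cost_transfer, cost_caesarean):
    """Roll a hypothetical cohort through the MLBC arm of the decision tree:
    transfer (or not) -> mode of birth -> morbidity (or not)."""
    transferred = n_women * p_transfer                 # moved to a referral facility
    caesareans = transferred * p_caesarean_if_transferred
    morbidities = n_women * p_morbidity                # expected morbidity count
    total_cost = (n_women * cost_mlbc                  # per-woman MLBC operating cost
                  + transferred * cost_transfer
                  + caesareans * cost_caesarean)
    return {"transferred": transferred, "caesareans": caesareans,
            "morbidities": morbidities, "total_cost": total_cost}

# Placeholder inputs (chosen to be exactly representable as binary floats)
result = expected_outcomes(1000, p_transfer=0.125,
                           p_caesarean_if_transferred=0.5, p_morbidity=0.25,
                           cost_mlbc=100, cost_transfer=30, cost_caesarean=250)
# result["total_cost"] == 119375.0
```

The same tree is evaluated for the standard-care arm with that arm's rates, and the two arms are then compared on total costs and total DALYs.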

Per-woman costs for the operation of the MLBC were added to women in the MLBC arm, based on reported costs from the primary data collection. A cost for transfer was obtained from the primary data collection and applied to those who were transferred from the MLBC to another facility after the onset of labour. For women in both arms who had a caesarean birth in a non-MLBC institution (MLBCs do not offer caesarean sections, because this procedure is not within the scope of practice of a midwife), costs per caesarean birth were sought for each country from the literature. Costs were separated into costs paid by the health service and out-of-pocket costs incurred by women (online supplemental appendix 5).

Disability-adjusted life-years (DALYs) were allocated based on morbidity rates for the women and mortality rates for women and newborns. Categories were no maternal morbidity, maternal morbidity, maternal mortality and stillbirth or neonatal death (online supplemental appendix 6). The disability weight for maternal morbidity was calculated as the average of the Global Burden of Disease Study 19 weights for maternal haemorrhage, pregnancy-related sepsis, hypertensive disorders, obstructed labour, rectal fistula and vesicovaginal fistula.
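The averaging step for the maternal-morbidity weight can be sketched as below. The numeric weights here are placeholders for illustration only, not the Global Burden of Disease values the study used.

```python
# Hypothetical disability weights per condition; the study drew the real
# values from the Global Burden of Disease Study.
morbidity_weights = {
    "maternal haemorrhage": 0.114,
    "pregnancy-related sepsis": 0.133,
    "hypertensive disorders": 0.368,
    "obstructed labour": 0.324,
    "rectal fistula": 0.501,
    "vesicovaginal fistula": 0.342,
}

# Single average weight applied to the "maternal morbidity" model branch
avg_weight = sum(morbidity_weights.values()) / len(morbidity_weights)
```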

Patient and public involvement in research

The cost data collection tool was codesigned by the research team and the national researchers. The national researchers engaged with each of the MLBC sites to identify typical annual expenditures. After initial data collection, a series of meetings were held between the analysis team and the national research teams to validate the data provided.

Time horizon and discount rate

We adopted a health funder perspective. The time horizon for the cost and cost-effectiveness analysis was 1 year, and as such no discounting was required. The short 1-year time horizon is considered conservative and underestimates the value of health outcomes produced over a lifetime; however, due to the absence of primary data collection for health outcomes this was considered necessary to avoid introducing additional uncertainty.

Currency, price date and conversion

All costs are presented in 2022 US dollars. Costs were inflated to 2022 dollars based on published inflation rates and converted from original currency to US dollars based on the average exchange rates for the 2022 calendar year. 20
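The two-step adjustment described above (inflate local-currency costs to 2022 prices, then convert at the 2022 average exchange rate) can be sketched as follows; the inflation and exchange rates below are illustrative placeholders, not the published figures the study used.

```python
def to_2022_usd(cost_local, annual_inflation, years_to_2022, fx_local_per_usd):
    """Inflate a local-currency cost to 2022 prices, then convert to USD
    at the average 2022 exchange rate (local units per US dollar)."""
    inflated = cost_local * (1 + annual_inflation) ** years_to_2022
    return inflated / fx_local_per_usd

# e.g. a cost recorded two years before 2022 in a currency trading at
# 95 units per US dollar (both rates made up for illustration)
usd = to_2022_usd(100_000, annual_inflation=0.06, years_to_2022=2,
                  fx_local_per_usd=95.0)
```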

Reporting followed the Consolidated Health Economic Evaluation Reporting Standards 2022 ( online supplemental appendix 7 ). 21 A detailed reflexivity statement exploring the authorship of this piece is presented in online supplemental appendix 8 .

Data analysis

Data for each facility were presented separately based on the primary data collected. Where ranges were reported, a midpoint was selected. Staff salary costs were calculated by multiplying the reported full-time salary by the number of full-time equivalent staff. An approximate average midwife salary of US$200 per month, identified by the country liaison researchers in discussion with the research team, was applied to Uganda due to the variability of costs reported by sites. Similarly, for site 3 in Pakistan, an average midwife salary from the other three sites was applied.

Costs of MLBCs were presented as total annual costs for the facility, and these were also divided by the number of births to present a cost per birth for each facility. For the cost analysis, the total health service and total user costs were identified and summed to present a total cost for each model of care. Total DALYs lost were also summed for each model of care. An incremental cost-effectiveness ratio (ICER) was identified by dividing the difference in the total costs of MLBCs and standard care by the difference in DALYs lost from MLBCs and standard care. All results were presented separately for each site and were designed to describe the costs and cost-effectiveness of MLBCs compared with standard care based on that site's operation. All analyses were conducted using Microsoft Excel.
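The ICER computation described above reduces to a difference in costs over a difference in DALYs lost, with the dominance cases handled separately. A minimal sketch (the figures in the examples are illustrative, not study results):

```python
def icer(cost_mlbc, cost_standard, dalys_lost_mlbc, dalys_lost_standard):
    """US$ per DALY averted by MLBCs relative to standard care."""
    d_cost = cost_mlbc - cost_standard
    dalys_averted = dalys_lost_standard - dalys_lost_mlbc
    if dalys_averted > 0 and d_cost <= 0:
        return "dominant"    # better outcomes at lower (or equal) cost
    if dalys_averted <= 0 and d_cost >= 0:
        return "dominated"   # no better outcomes at higher (or equal) cost
    return d_cost / dalys_averted

icer(250_000, 300_000, 40, 60)   # "dominant"
icer(350_000, 300_000, 40, 60)   # 2500.0, ie US$2500 per DALY averted
```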

Uncertainty analysis

We conducted one-way sensitivity analysis based on cost data reported as zero in the study countries. This included facility, midwife salary, medical officer salary, recruitment and training, and transport costs.
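The substitution step in this one-way analysis can be sketched as follows, with invented cost items and country averages standing in for the study data:

```python
# Hypothetical annual cost items for one site; "facility" was reported as zero
site_costs = {"facility": 0, "midwife_salary": 24_000,
              "other_staff": 8_000, "transport": 500}
country_avg = {"facility": 6_000, "midwife_salary": 20_000,
               "other_staff": 9_000, "transport": 700}

def one_way_total(site, averages, item):
    """Total annual cost with one zero-reported item replaced by the
    country average (one item varied at a time)."""
    adjusted = dict(site)
    if adjusted[item] == 0:
        adjusted[item] = averages[item]
    return sum(adjusted.values())

base_total = sum(site_costs.values())                                 # 32500
adjusted_total = one_way_total(site_costs, country_avg, "facility")   # 38500
```

The adjusted totals then feed back into the ICER calculation to check whether the conclusions change.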

Results

The annual number of births for the four selected MLBC sites in Bangladesh ranged from 101 to 2189 per year. Total annual costs ranged from US$5068 (site 2; 101 births per year) to US$117 662 (site 4; 337 births per year) (online supplemental appendix 9). Total costs per birth were highest at site 4 (US$349 per birth) and lowest at site 3 (US$21 per birth). Costs were mostly driven by staff salaries and facility operation costs. Facility operation costs per woman were generally higher in smaller facilities, as were the midwife salary costs per woman. Site 2, which had 101 births per year, only reported midwife salary costs, with no costs for other staff.

In the modelled cost-effectiveness analysis of MLBCs compared with standard care in Bangladesh, total costs of care (including costs associated with transfers and caesarean births in other facilities) for MLBCs ranged from US$23 439 to US$469 100 (table 1). Costs for standard care were US$314 754 for 1000 women. Sites 1, 2 and 3 had better health outcomes (fewer total DALYs lost) than standard care and produced these outcomes at a lower cost. Site 4 produced comparable health outcomes to standard care at a higher cost, largely because the site is extremely remote and higher salaries are needed to recruit and retain staff.

Table 1 Modelled cost-effectiveness of midwife-led birth centre sites compared with current standard care, Bangladesh, hypothetical cohort of 1000 women

The annual number of births for MLBC sites in Pakistan ranged from 95 to 5183 per year. Total annual costs ranged from US$4907 (site 3; 95 births per year) to US$288 649 (site 1; 544 births per year) (online supplemental appendix 10). Total costs per birth were highest at site 1 (US$531 per birth) and lowest at site 4 (US$34 per birth). Costs were mostly driven by facility operation, equipment purchase and other staff costs. Midwife staffing costs ranged from US$6 per woman at site 2 to US$42 per woman at site 1; however, this was less than the amount spent on other staff at sites 1, 2 and 4.

In the modelled cost-effectiveness analysis of MLBCs compared with standard care, total costs of care for MLBCs in Pakistan ranged from US$36 519 to US$693 521 ( table 2 ). Costs for standard care were US$176 057 for 1000 women. Sites 2 and 4 produced better outcomes at lower cost than standard care. Site 3 produced poorer outcomes and was more costly than standard care. Based on costs and outcomes of site 1, MLBCs would cost an additional US$7392 per DALY averted.

Table 2 Modelled cost-effectiveness of midwife-led birth centre sites compared with current standard care, Pakistan, hypothetical cohort of 1000 women

The annual number of births for MLBC sites in Uganda ranged from 12 to 1242 per year. Total annual costs ranged from US$7922 (site 4; 64 births per year) to US$348 000 (site 3; 1242 births per year) (online supplemental appendix 11). Total costs per birth were highest at site 2 (US$2374 per birth), although this site cannot be considered typical: it is in a remote area and is supported by wealthy donors prepared to pay for equipment and four full-time midwives, even though there were only 12 births in the past 12 months. Total costs per birth were lowest at site 4 (US$124 per birth). Costs were mostly driven by facility operation costs, midwife salaries and other staff salaries. Midwife staffing costs ranged from US$10 per woman at site 3, with 1242 births per annum, to US$800 per woman at site 2, with just 12 births. Other staff salary costs ranged from US$23 per woman (site 1) to US$232 per woman (site 3). Sites 1 and 3 did not report any equipment costs.

In the modelled cost-effectiveness analysis for Uganda, total costs of care ranged from US$147 273 (site 4) to US$2 458 750 (site 2) (table 3). Costs for standard care were US$277 012 for 1000 women. In terms of cost-effectiveness, the MLBCs at sites 1, 2 and 3 would lead to better health outcomes than standard care. Site 1 delivered better outcomes at a lower cost than standard care, site 3 had a small ICER of US$571 per DALY averted, and site 2 had a larger ICER of US$55 942 per DALY averted. Site 4 demonstrated lower costs but poorer health outcomes compared with standard care.

Table 3 Modelled cost-effectiveness of midwife-led birth centre sites compared with current standard care, Uganda, hypothetical cohort of 1000 women

Cross-country comparison

There was no discernible pattern between facility size and cost per birth, with both larger and smaller facilities reporting low costs per birth (online supplemental appendix 12). Midwife salary costs and facility costs were generally the largest contributors to overall costs across the sites and countries (figure 1). Sites 3 and 4 in Bangladesh and site 3 in Uganda were notable exceptions, with most costs being attributable to other staff salaries. In the modelled cost-effectiveness analysis, all public and public–private partnership MLBCs produced better health outcomes and were less costly than standard care (figure 2). In total, 9 of the 12 sites (75%) produced better health outcomes than standard care, as measured by DALYs, and half (6 of the 12 MLBCs) produced better health outcomes and were cost saving.


Figure 1 Proportion of total costs attributable to facility costs, midwife salaries, other staff salaries, recruitment and training for midwife-led birth centres in Bangladesh, Pakistan and Uganda in 2022 US dollars. BGD, Bangladesh; PAK, Pakistan; UGA, Uganda.

Figure 2 Incremental cost-effectiveness of midwife-led birth centres compared with standard care, Bangladesh, Pakistan and Uganda in 2022 US dollars. BGD, Bangladesh; PAK, Pakistan; UGA, Uganda.

Results of the sensitivity analysis are presented in online supplemental appendix 13 . Replacing data reported as having zero costs with country averages did not substantially change the ICERs produced.

Discussion

Using a case study approach, this economic evaluation identified the range of reported costs of operating MLBCs in 12 sites in Bangladesh, Pakistan and Uganda, and estimated their cost-effectiveness relative to standard care. Costs of operating MLBCs within the countries varied greatly. Midwife salaries and annual facility operation costs were consistent cost drivers in all countries. In the modelled cost-effectiveness analysis, 6 of the 12 MLBCs were ‘dominant’, producing both better health outcomes and lower costs compared with standard care. Two of the remaining sites had an ICER of less than US$8000 per DALY averted, meaning it would cost less than US$8000 to avert one additional DALY using MLBCs.

Our study is the first to quantify the costs of MLBCs outside of high-income country settings, making identification of comparable studies difficult due to differences in health system characteristics. Nonetheless, our findings are consistent with a retrospective cohort study of more than 364 000 births in Australia between 2001 and 2012, which also found that MLBCs resulted in lower costs than other models of care. 22 The Birthplace in England study found similar results. 23 Other studies have also found that expanding access to midwife-led care can substantially reduce maternal and neonatal mortality and morbidity rates and improve maternal and newborn health and well-being. 24 25 Our findings show MLBCs can be cost saving or cost-effective relative to other models of care, and thus appear to be broadly consistent with results from other settings. We did note that costs of operation and cost-effectiveness varied widely between and within countries, and cost-effectiveness appears to depend on unique local site characteristics rather than on general characteristics such as size or rurality. Our methodology also highlighted the importance of close collaboration with local health service teams to identify the context of expenditure. We identified MLBCs that demonstrated better health outcomes and cost savings in all three countries; in private, public and public–private partnership models; in rural and urban settings; and in freestanding MLBCs as well as those co-located with or alongside referral facilities.

MLBCs can help meet the growing demand for facility-based birth for low-risk women and might be particularly beneficial in LMICs where universal access to higher-level facility-based care is limited. 26 Shifting the main strategy for reducing maternal and neonatal mortality in many LMICs from increasing the rate of deliveries within medical facilities 4 to focusing on the quality of healthcare may translate into better maternal and neonatal health outcomes. 5 24 Clinical findings showing that care provided in MLBCs is as safe and effective as that in obstetric units, and results in less intervention, justify the expansion of this model of care so that scarce resources can be used more effectively. 27 This study provides evidence for MLBCs in LMICs as an effective, evidence-based strategy to improve the quality, costs and experiences of maternity care. Further, there did not seem to be a clear scale-efficiency effect, indicating that MLBCs could be cost-effective at small or larger scales in LMIC settings, once contextual factors are considered.

Strengths and limitations

A key strength of this study was that data were collected from a range of sites and countries in a real-world setting, allowing us to identify variation in costs and outcomes. We codesigned our data collection tool with country researchers to comprehensively capture the range of operating costs. Nonetheless, our study was limited by the inability of some sites to identify certain areas of expenditure, particularly equipment costs and midwife salaries. Furthermore, as all facilities were already established, we were unable to identify set-up costs. The results of this study must therefore be interpreted with caution, as they rely on the accuracy of the data reported by a small number of sites. A key recommendation from this study is investment in prospective implementation analysis of the costs and outcomes produced when new MLBCs are established in LMICs, as well as investment in standardised data capture tools for identifying costs and outcomes.
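A standardised cost-capture tool of the kind recommended above could be as simple as a fixed record of cost categories per site. The sketch below is illustrative only: the field names and figures are hypothetical, loosely based on the cost categories discussed in the text (midwife salaries, equipment, facility operations), and do not reproduce the study's actual instrument.

```python
from dataclasses import dataclass

# Hypothetical standardised cost record for one MLBC site. Categories
# mirror those discussed in the text (salaries, equipment, facility
# operations); names and values are illustrative, not the study's tool.

@dataclass
class SiteCosts:
    site_id: str
    midwife_salaries: float = 0.0     # annual salaries, converted to USD
    equipment: float = 0.0            # annualised equipment costs
    consumables: float = 0.0          # drugs, supplies
    facility_operations: float = 0.0  # rent, utilities, maintenance
    births: int = 0                   # annual births attended

    def total(self) -> float:
        """Total annual operating cost across all categories."""
        return (self.midwife_salaries + self.equipment
                + self.consumables + self.facility_operations)

    def cost_per_birth(self) -> float:
        """Average operating cost per birth; the unit used to compare sites."""
        if self.births == 0:
            raise ValueError("no births recorded for this site")
        return self.total() / self.births

site = SiteCosts("site-01", midwife_salaries=24_000, equipment=3_000,
                 consumables=5_000, facility_operations=8_000, births=400)
print(round(site.cost_per_birth(), 2))  # 100.0
```

Capturing every site's expenditure against the same fixed categories is what makes between-site and between-country comparisons possible; the missing-category problem noted above arises precisely when a site cannot populate one of these fields.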

Our analysis was also unable to capture some additional benefits of MLBCs beyond mortality and morbidity, particularly women’s experiences of and satisfaction with care, which are key to capturing the full value associated with midwife-led care. 28 A key component of all MLBCs, and of midwife-led care more broadly, is the woman-centred philosophy: continuity of care during pregnancy and after birth, and involvement of women in all decisions regarding perinatal care. 14 22 29–31 MLBCs seek to promote normal, physiological childbirth by recognising, respecting and safeguarding normal birth processes through individualised care, 32 in contrast to the typical hospital approach to labour, which is much more time-oriented and standardised and in which midwives are not infrequently pressured to accelerate the process through unnecessary medical intervention. 33 Consequently, women who give birth in an MLBC report feeling supported in their ability to participate in decision-making, greater autonomy, and thus greater acceptance of and satisfaction with perinatal care in this setting. 22 30–32 34 35 These additional benefits could not be captured in our study but are important to recognise when considering the value of MLBCs.

MLBCs offer a potentially cost-effective model of care for providing safe and high-quality care to women giving birth in LMICs. However, the cost of operating an MLBC varies greatly, and this does affect cost-effectiveness. Further research, including prospective evaluation of implementation of new MLBCs, is recommended to confirm the results produced in our study.

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval

This study involves human participants and ethical approval was obtained from the following ethics committees: Alfred Hospital Ethics Committee (Australia; Reference 381/22); the Centre for Injury Prevention and Research, Bangladesh (CIPRB) Institutional Review Board (Bangladesh; Reference CIPRB/ERC/2022/11); Research and Development Solutions (RADS) Institutional Review Board (Pakistan; Reference IRB00010843) and Mulago Hospital Research Ethics Committee (Uganda; Reference MHREC-2022-77). Participants gave informed consent to participate in the study before taking part.


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

Handling editor Lei Si

Twitter @VScarf, @LaneCarrandi, @ICM_CE

Contributors EJC led the overall study design, with input from VS, AN, CH and MB. Researchers from Bangladesh (ASA, AH and AKMFR) Uganda (SNM and RCN) and Pakistan (SIR and AMT) led the development of the data collection tools specifically designed for this study. These in-country researchers undertook all the data collection and assisted with data verification and analysis. EJC and VS undertook the analysis. All authors (EJC, VS, AN, CH, AC, ASA, SC, AH, SNM, RCN, AKMFR, SIR, AMT, OB, ST, MF, SM, SP and MB) contributed to the interpretation of data for the work. EJC led the drafting of the manuscript, with input from all authors. All authors (EJC, VS, AN, CH, AC, ASA, SC, AH, SNM, RCN, AKMFR, SIR, AMT, OB, ST, MF, SM, SP and MB) have read and approved the final manuscript version. All authors (EJC, VS, AN, CH, AC, ASA, SC, AH, SNM, RCN, AKMFR, SIR, AMT, OB, ST, MF, SM, SP and MB) agree to be accountable for all aspects of the work. EC acts as guarantor.

Funding The study was funded by a grant from the Bill & Melinda Gates Foundation (award number INV—033046).

Disclaimer The funding body was not involved in the study design or writing of this manuscript.

Competing interests None declared.

Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.

Provenance and peer review Not commissioned; internally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
