Statistical Sampling Case Study
To learn about sampling techniques in social science research, students tackle a real-world research problem by discussing a hypothetical case.
- To enable students to understand the benefits and drawbacks of various sampling techniques.
- To provide students with experience designing sampling methods to address a real-world research problem.
Class: Sociology 128: Models of Social Science Research
Introduction/Background: This course introduces sociology students to concepts and strategies in social science research. In week five, students learn about nonrandom and random sampling techniques (snowball sampling, simple random sampling, etc.). In discussion section later that week, students apply this knowledge to a hypothetical case study where a researcher aims to study the experiences of homeless people in the United States.
Before Class:
Students learned about the pros and cons of various sampling techniques in lecture.
During Class:
- In discussion section, students received a handout about various types of sampling techniques, as well as a hypothetical research scenario about researching a population of homeless people in New York City. The instructor and students reviewed the sampling techniques they had learned, including what types of social science research questions each technique would enable researchers to answer.
- Students were then broken up into groups of 2-3. Guided by four questions on the handout, students analyzed the research problem and discussed within their groups the pros and cons of various sampling techniques. The instructor moved between the groups and provided feedback as students were deciding how to answer each question.
- Once each group determined how they would approach the problem, they shared out their choice with the class. The class discussed the benefits and drawbacks to each group's choices.
Students left class with a deep understanding of a hypothetical research scenario and the various considerations they would have to take into account when deciding how to sample a population. They would later use this knowledge in developing their group projects at the end of the semester.
The research scenario prompt is attached.
Submitted by Matthew Clair, Teaching Fellow, Harvard Department of Sociology
Case Study Sampling
- Case Study Sampling Definition
Case study sampling involves decisions such as choosing a sampling strategy, determining the number of case studies, and defining the unit of analysis.
- Overview of Case Study Sampling
The case study is a form of qualitative analysis. It is a very popular method, involving careful and complete examination of an individual, a community, an institution, or a similar unit, which the researcher studies in depth. Single-case, multi-case, and snowball designs are the sampling frames used for case studies.
What you'll learn: single-case study, multi-case study, and the snowball (network) technique.
When a study examines only one unit, it is known as a single-case study. Complexities arise when other sub-units are embedded within the case being studied, which makes choosing participants (units) for a single-case design far more complex. Researchers may nonetheless prefer a single-case study with a lone unit of analysis over other sampling modes, owing to the convenience of the process.
Problems may also arise when the course of the study changes after the researcher has already oriented it in a particular direction. Considering other dimensions of the study, or gaining new information or opinions, can affect the design of the whole study in a one-unit analysis. In these cases, the researcher may either accept the new twist and continue the research, or change the sampling method and recruit additional participants with similar characteristics.
The strategy for selecting individuals for research depends on the study's purposes: illuminating, interpreting, and understanding. A variety of sampling techniques is used to select case studies.
Haphazard, accidental, or convenient sampling techniques can be used, but they do not produce effective samples; therefore, they are not usually recommended.
Quota sampling is an improvement on haphazard or accidental sampling. It is preferable when researchers must interview a group of participants in which each individual has distinct characteristics.
Stratified sampling is generally of use in quantitative studies. It can be used for single-case designs, wherein participants of very different characteristics are clustered under one group.
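To make the stratified idea concrete, here is a minimal Python sketch, not taken from the original text; the roster, field names, and borough labels are invented for illustration. It draws a fixed number of units at random from each stratum:

```python
import random

def stratified_sample(population, strata_key, per_stratum, seed=None):
    """Draw up to `per_stratum` units at random from each stratum.

    `population` is a list of dicts; `strata_key` names the field that
    defines the strata (here, a hypothetical "borough" field).
    """
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(unit[strata_key], []).append(unit)
    sample = []
    for members in strata.values():
        k = min(per_stratum, len(members))  # stratum may be small
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical roster of shelter residents grouped by borough.
roster = [{"id": i, "borough": b}
          for i, b in enumerate(["Bronx", "Queens", "Brooklyn"] * 4)]
chosen = stratified_sample(roster, "borough", per_stratum=2, seed=1)
```

Within each stratum the draw is a simple random sample, so every stratum is guaranteed representation even when the strata differ sharply in character.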
Judgmental sampling is generally utilized in qualitative research. This involves sampling participants for specific or specialized situations. The three circumstances where the judgmental sampling technique is helpful are
- when researchers wish to select unique cases that are really informative,
- when researchers wish to select members from specialized populations, and
- when researchers wish to identify particular types of cases for in-depth investigation.
The purpose is to understand the particular cases in depth, not to generalize the findings. Because generalization is not the goal, the selection of individuals can be done non-randomly.
A researcher who wants to select individuals from various strata while still maintaining a non-biased stance in the selection can use the snowball, or network, technique. The word non-biased is important: it is up to the researcher why, how, and where the snowball is rolled. Glesne and Peshkin note that a researcher using this technique first makes an initial contact, whose recommendations are then used to reach further participants. As the snowball enlarges with every turn, the number of participants included in the study grows. Neuman (2009) described the snowball technique as a multi-stage technique, since it commences with just one or two individual participants and slowly spreads out as cases are successively linked to the initial case.
Once the initial snowball is rolled, the researcher must remember when to stop, because the snowball gets bigger and heavier until it is too late. Bear in mind that each person in the sample is directly or indirectly related to the original sample, and several people may have named the same individuals. A researcher should stop when no new names appear, which indicates a closed network, or when the network simply becomes too large. For example, Kim (1996) used the snowball technique to study friendship and student-teacher relationships among students. She began by interviewing one final-year student and ended up interviewing 36 students: each student introduced others, each group grew independently, and the groups eventually met at one point. This procedure helps the interviewer collect samples quickly.
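The snowball procedure just described (start from one or two seeds, follow referrals, stop when no new names appear or the network grows too large) can be sketched in a few lines of Python. Everything here is illustrative: the referral map stands in for real interviews, and the names are invented.

```python
from collections import deque

def snowball_sample(seeds, referrals, max_size=50):
    """Breadth-first snowball sampling.

    `referrals` maps each participant to the names they volunteer.
    Stops when the queue empties (a closed network) or when the
    sample reaches `max_size` (the network is too large to pursue).
    """
    sampled, queue = [], deque(seeds)
    seen = set(seeds)
    while queue and len(sampled) < max_size:
        person = queue.popleft()
        sampled.append(person)
        for name in referrals.get(person, []):
            if name not in seen:  # several people may name the same person
                seen.add(name)
                queue.append(name)
    return sampled

# Hypothetical referral network, starting from a single seed.
refs = {"ana": ["ben", "cai"], "ben": ["cai", "dee"], "cai": [], "dee": ["ana"]}
wave = snowball_sample(["ana"], refs)
```

The `seen` set is what enforces the observation above that several people may name the same person: duplicates are counted once, and the sample stops growing when no new names remain.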

Purposeful sampling for qualitative data collection and analysis in mixed method implementation research
Lawrence A. Palinkas
1 School of Social Work, University of Southern California, Los Angeles, CA 90089-0411
Sarah M. Horwitz
2 Department of Child and Adolescent Psychiatry, New York University, New York, NY
Carla A. Green
3 Center for Health Research, Kaiser Permanente Northwest, Portland, OR
Jennifer P. Wisdom
4 George Washington University, Washington DC
Naihua Duan
5 New York State Neuropsychiatric Institute and Department of Psychiatry, Columbia University, New York, NY
Kimberly Hoagwood
Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.
Recently there have been several calls for the use of mixed method designs in implementation research ( Proctor et al., 2009 ; Landsverk et al., 2012 ; Palinkas et al. 2011 ; Aarons et al., 2012). This has been precipitated by the realization that the challenges of implementing evidence-based and other innovative practices, treatments, interventions and programs are sufficiently complex that a single methodological approach is often inadequate. This is particularly true of efforts to implement evidence-based practices (EBPs) in statewide systems where relationships among key stakeholders extend both vertically (from state to local organizations) and horizontally (between organizations located in different parts of a state). As in other areas of research, mixed method designs are viewed as preferable in implementation research because they provide a better understanding of research issues than either qualitative or quantitative approaches alone ( Palinkas et al., 2011 ). In such designs, qualitative methods are used to explore and obtain depth of understanding as to the reasons for success or failure to implement evidence-based practice or to identify strategies for facilitating implementation while quantitative methods are used to test and confirm hypotheses based on an existing conceptual model and obtain breadth of understanding of predictors of successful implementation ( Teddlie & Tashakkori, 2003 ).
Sampling strategies for quantitative methods used in mixed methods designs in implementation research are generally well-established and based on probability theory. In contrast, sampling strategies for qualitative methods in implementation studies are less explicit and often less evident. Although the samples for qualitative inquiry are generally assumed to be selected purposefully to yield cases that are “information rich” (Patton, 2001), there are no clear guidelines for conducting purposeful sampling in mixed methods implementation studies, particularly when studies have more than one specific objective. Moreover, it is not entirely clear what forms of purposeful sampling are most appropriate for the challenges of using both quantitative and qualitative methods in the mixed methods designs used in implementation research. Such a consideration requires a determination of the objectives of each methodology and the potential impact of selecting one strategy to achieve one objective on the selection of other strategies to achieve additional objectives.
In this paper, we present different approaches to the use of purposeful sampling strategies in implementation research. We begin with a review of the principles and practice of purposeful sampling in implementation research, a summary of the types and categories of purposeful sampling strategies, and a set of recommendations for matching the appropriate single strategy or multistage strategy to study aims and quantitative method designs.
Principles of Purposeful Sampling
Purposeful sampling is a technique widely used in qualitative research for the identification and selection of information-rich cases for the most effective use of limited resources ( Patton, 2002 ). This involves identifying and selecting individuals or groups of individuals that are especially knowledgeable about or experienced with a phenomenon of interest ( Cresswell & Plano Clark, 2011 ). In addition to knowledge and experience, Bernard (2002) and Spradley (1979) note the importance of availability and willingness to participate, and the ability to communicate experiences and opinions in an articulate, expressive, and reflective manner. In contrast, probabilistic or random sampling is used to ensure the generalizability of findings by minimizing the potential for bias in selection and to control for the potential influence of known and unknown confounders.
As Morse and Niehaus (2009) observe, whether the methodology employed is quantitative or qualitative, sampling methods are intended to maximize efficiency and validity. Nevertheless, sampling must be consistent with the aims and assumptions inherent in the use of either method. Qualitative methods are, for the most part, intended to achieve depth of understanding while quantitative methods are intended to achieve breadth of understanding ( Patton, 2002 ). Qualitative methods place primary emphasis on saturation (i.e., obtaining a comprehensive understanding by continuing to sample until no new substantive information is acquired) ( Miles & Huberman, 1994 ). Quantitative methods place primary emphasis on generalizability (i.e., ensuring that the knowledge gained is representative of the population from which the sample was drawn). Each methodology, in turn, has different expectations and standards for determining the number of participants required to achieve its aims. Quantitative methods rely on established formulae for avoiding Type I and Type II errors, while qualitative methods often rely on precedents for determining the number of participants based on the type of analysis proposed (e.g., 3-6 participants interviewed multiple times in a phenomenological study versus 20-30 participants interviewed once or twice in a grounded theory study), the level of detail required, and an emphasis on homogeneity (requiring smaller samples) versus heterogeneity (requiring larger samples) ( Guest, Bunce & Johnson, 2006 ; Morse & Niehaus, 2009 ; Padgett, 2008 ).
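The saturation criterion above can be operationalized, very roughly, as a stopping rule: keep sampling until several consecutive interviews contribute no new codes. The sketch below is an illustration only; in practice saturation is a qualitative judgment, and the themes and the patience threshold here are invented.

```python
def sample_until_saturated(interview_stream, patience=3):
    """Stop sampling once `patience` consecutive interviews yield no
    new codes. Each interview is modeled, crudely, as a set of themes.
    Returns the interviews collected and the cumulative set of codes.
    """
    seen, collected, no_new = set(), [], 0
    for themes in interview_stream:
        collected.append(themes)
        if themes - seen:          # at least one new code appeared
            seen |= themes
            no_new = 0
        else:                      # nothing substantively new
            no_new += 1
            if no_new >= patience:
                break
    return collected, seen

# Hypothetical coded interviews; codes stop appearing after the third.
interviews = [{"a", "b"}, {"b"}, {"a", "c"}, {"c"}, {"a"}, {"b"}, {"c"}]
collected, codes = sample_until_saturated(interviews, patience=3)
```

This mirrors the depth-versus-breadth contrast: the rule cares only about whether understanding is still deepening, not about whether the sample is representative of any population.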
Types of purposeful sampling designs
There exist numerous purposeful sampling designs. Examples include the selection of extreme or deviant (outlier) cases, for the purpose of learning from unusual manifestations of the phenomenon of interest; the selection of cases with maximum variation, for the purpose of documenting unique or diverse variations that have emerged in adapting to different conditions and of identifying important common patterns that cut across variations; and the selection of homogeneous cases, for the purpose of reducing variation, simplifying analysis, and facilitating group interviewing. A list of some of these strategies and examples of their use in implementation research is provided in Table 1 .
Purposeful sampling strategies in implementation research
Embedded in each strategy is the ability to compare and contrast, to identify similarities and differences in the phenomenon of interest. Nevertheless, some of these strategies (e.g., maximum variation sampling, extreme case sampling, intensity sampling, and purposeful random sampling) are used to identify and expand the range of variation or differences, similar to the use of quantitative measures to describe the variability or dispersion of values for a particular variable or variables, while other strategies (e.g., homogeneous sampling, typical case sampling, criterion sampling, and snowball sampling) are used to narrow the range of variation and focus on similarities. The latter are similar to the use of quantitative central tendency measures (e.g., mean, median, and mode). Moreover, certain strategies, like stratified purposeful sampling or opportunistic or emergent sampling, are designed to achieve both goals. As Patton (2002 , p. 240) explains, “the purpose of a stratified purposeful sample is to capture major variations rather than to identify a common core, although the latter may also emerge in the analysis. Each of the strata would constitute a fairly homogeneous sample.”
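The contrast between strategies that expand variation and those that narrow it can be made concrete with a toy maximum variation sampler: greedily add the case most dissimilar, on a handful of categorical features, from those already chosen. This is a hypothetical sketch, not a procedure from the paper; real maximum variation sampling is a researcher's judgment, and the cases and features below are invented.

```python
def maximum_variation_sample(cases, features, k):
    """Greedy maximum variation sampling over categorical features.

    Dissimilarity is the count of differing feature values; each round
    adds the case whose nearest already-chosen neighbor is farthest away.
    """
    def distance(a, b):
        return sum(a[f] != b[f] for f in features)

    chosen = [cases[0]]
    remaining = list(cases[1:])
    while remaining and len(chosen) < k:
        best = max(remaining,
                   key=lambda c: min(distance(c, s) for s in chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Hypothetical sites: two near-duplicates and one very different case.
cases = [
    {"id": 1, "role": "clinician", "region": "urban"},
    {"id": 2, "role": "clinician", "region": "urban"},
    {"id": 3, "role": "director",  "region": "rural"},
]
picked = maximum_variation_sample(cases, ["role", "region"], k=2)
```

Inverting the objective (picking the case *closest* to those already chosen) would give a crude homogeneous sampler, which is exactly the narrowing-of-variation strategy described above.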
Challenges to use of purposeful sampling
Despite its wide use, there are numerous challenges in identifying and applying the appropriate purposeful sampling strategy in any study. First, the range of variation in the population from which a purposive sample is to be taken is often not known at the outset of a study. To set as the goal the sampling of information-rich informants who cover the range of variation assumes one knows that range of variation. Consequently, an iterative approach of sampling and re-sampling to draw an appropriate sample is usually recommended to make certain that theoretical saturation occurs ( Miles & Huberman, 1994 ). However, that saturation may be determined a priori on the basis of an existing theory or conceptual framework, or it may emerge from the data themselves, as in a grounded theory approach ( Glaser & Strauss, 1967 ). Second, there is a not insignificant number of researchers in the qualitative methods field who resist or refuse systematic sampling of any kind and reject the limiting nature of such realist, systematic, or positivist approaches. This includes critics of interventions and "bottom up" case studies and critiques. However, even those who equate purposeful sampling with systematic sampling must offer a rationale for selecting study participants that is linked with the aims of the investigation (i.e., why recruit these individuals for this particular study? What qualifies them to address the aims of the study?). While systematic sampling may be associated with a post-positivist tradition of qualitative data collection and analysis, such sampling is not inherently limited to such analyses, and the need for such sampling is not inherently limited to post-positivist qualitative approaches ( Patton, 2002 ).
Purposeful Sampling in Implementation Research
Characteristics of implementation research.
In implementation research, quantitative and qualitative methods often play important roles, either simultaneously or sequentially, for the purpose of answering the same question through convergence of results from different sources, answering related questions in a complementary fashion, using one set of methods to expand or explain the results obtained from use of the other set of methods, using one set of methods to develop questionnaires or conceptual models that inform the use of the other set, and using one set of methods to identify the sample for analysis using the other set of methods ( Palinkas et al., 2011 ). A review of mixed method designs in implementation research conducted by Palinkas and colleagues (2011) revealed seven different sequential and simultaneous structural arrangements, five different functions of mixed methods, and three different ways of linking quantitative and qualitative data together. However, this review did not consider the sampling strategies involved in the types of quantitative and qualitative methods common to implementation research, nor did it consider the consequences of the sampling strategy selected for one method or set of methods on the choice of sampling strategy for the other method or set of methods. For instance, one of the most significant challenges to sampling in sequential mixed method designs lies in the limitations the initial method may place on sampling for the subsequent method. As Morse and Niehaus (2009) observe, when the initial method is qualitative, the sample selected may be too small and lack the randomization necessary to fulfill the assumptions for a subsequent quantitative analysis. On the other hand, when the initial method is quantitative, the sample selected may be too large for each individual to be included in qualitative inquiry and lack the purposeful selection needed to reduce the sample size to one more appropriate for qualitative research.
The fact that potential participants were recruited and selected at random does not necessarily make them information rich.
A re-examination of the 22 studies and an additional 6 studies published since 2009 revealed that only 5 studies ( Aarons & Palinkas, 2007 ; Bachman et al., 2009 ; Palinkas et al., 2011 ; Palinkas et al., 2012 ; Slade et al., 2003) made a specific reference to purposeful sampling. An additional three studies ( Henke et al., 2008 ; Proctor et al., 2007 ; Swain et al., 2010 ) did not make explicit reference to purposeful sampling but did provide a rationale for sample selection. The remaining 20 studies provided no description of the sampling strategy used to identify participants for qualitative data collection and analysis; however, a rationale could be inferred from a description of who was recruited and selected for participation. Of the 28 studies, 3 used more than one sampling strategy. Twenty-one of the 28 studies (75%) used some form of criterion sampling. In most instances, the criterion used is related to the individual's role, either in the research project (i.e., trainer, team leader) or in the agency (program director, clinical supervisor, clinician); in other words, a criterion of inclusion in a certain category (criterion-i), in contrast to cases that are external to a specific criterion (criterion-e). For instance, in a series of studies based on the National Implementing Evidence-Based Practices Project, data collection included semi-structured interviews with consultant trainers and program leaders at each study site ( Brunette et al., 2008 ; Marshall et al., 2008 ; Marty et al., 2007; Rapp et al., 2010 ; Woltmann et al., 2008 ). Six studies used some form of maximum variation sampling to ensure representativeness and diversity of organizations and individual practitioners. Two studies used intensity sampling to make contrasts.
Aarons and Palinkas (2007) , for example, purposefully selected 15 child welfare case managers representing those having the most positive and those having the most negative views of SafeCare, an evidence-based prevention intervention, based on results of a web-based quantitative survey asking about the perceived value and usefulness of SafeCare. Kramer and Burns (2008) recruited and interviewed clinicians providing usual care and clinicians who dropped out of a study prior to consent to contrast with clinicians who provided the intervention under investigation. One study ( Hoagwood et al., 2007 ) used a typical case approach to identify participants for a qualitative assessment of the challenges faced in implementing a trauma-focused intervention for youth. One study ( Green & Aarons, 2011 ) used a combined snowball sampling/criterion-i strategy by asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment. County mental health directors, agency directors, and program managers were recruited to represent the policy interests of implementation, while clinicians, administrative support staff, and consumers were recruited to represent the direct practice perspectives of EBP implementation.
Table 2 below provides a description of the use of different purposeful sampling strategies in mixed methods implementation studies. Criterion-i sampling was most frequently used in mixed methods implementation studies that employed a simultaneous design where the qualitative method was secondary to the quantitative method, or studies that employed a simultaneous structure where the qualitative and quantitative methods were assigned equal priority. These mixed method designs were used to complement the depth of understanding afforded by the qualitative methods with the breadth of understanding afforded by the quantitative methods (n = 13), to explain or elaborate upon the findings of one set of methods (usually quantitative) with the findings from the other set of methods (n = 10), or to seek convergence through triangulation of results or quantifying qualitative data (n = 8). The process of mixing methods in the large majority (n = 18) of these studies involved embedding the qualitative study within the larger quantitative study. In one study ( Gioia & Dziadosz, 2008 ), criterion sampling was used in a simultaneous design where quantitative and qualitative data were merged together in a complementary fashion, and in two studies (Aarons et al., 2012; Zazelli et al., 2008 ), quantitative and qualitative data were connected together, one in a sequential design for the purpose of developing a conceptual model ( Zazelli et al., 2008 ) and one in a simultaneous design for the purpose of complementing one another (Aarons et al., 2012). Three of the six studies that used maximum variation sampling used a simultaneous structure with quantitative methods taking priority over qualitative methods and a process of embedding the qualitative methods in a larger quantitative study ( Henke et al., 2008 ; Palinkas et al., 2010; Slade et al., 2008 ).
Two of the six studies used maximum variation sampling in a sequential design ( Aarons et al., 2009 ; Zazelli et al., 2008 ) and one in a simultaneous design (Henke et al., 2010) for the purpose of development, and three used it in a simultaneous design for complementarity ( Bachman et al., 2009 ; Henke et al., 2008; Palinkas, Ell, Hansen, Cabassa, & Wells, 2011 ). The two studies relying upon intensity sampling used a simultaneous structure for the purpose of either convergence or expansion, and both studies involved a qualitative study embedded in a larger quantitative study ( Aarons & Palinkas, 2007 ; Kramer & Burns, 2008 ). The single typical case study involved a simultaneous design where the qualitative study was embedded in a larger quantitative study for the purpose of complementarity ( Hoagwood et al., 2007 ). The snowball/maximum variation study involved a sequential design where the qualitative study was merged into the quantitative data for the purpose of convergence and conceptual model development ( Green & Aarons, 2011 ). Although not used in any of the 28 implementation studies examined here, another common sequential sampling strategy is using criteria sampling of the larger quantitative sample to produce a second-stage qualitative sample in a manner similar to maximum variation sampling, except that the former narrows the range of variation while the latter expands the range.
Purposeful sampling strategies and mixed method designs in implementation research
Criterion-i sampling as a purposeful sampling strategy shares many characteristics with random probability sampling, despite having different aims and different procedures for identifying and selecting potential participants. In both instances, study participants are drawn from agencies, organizations or systems involved in the implementation process. Individuals are selected based on the assumption that they possess knowledge and experience with the phenomenon of interest (i.e., the implementation of an EBP) and thus will be able to provide information that is both detailed (depth) and generalizable (breadth). Participants for a qualitative study, usually service providers, consumers, agency directors, or state policy-makers, are drawn from the larger sample of participants in the quantitative study. They are selected from the larger sample because they meet the same criteria, in this case, playing a specific role in the organization and/or implementation process. To some extent, they are assumed to be “representative” of that role, although implementation studies rarely explain the rationale for selecting only some and not all of the available role representatives (i.e., recruiting 15 providers from an agency for semi-structured interviews out of an available sample of 25 providers). From the perspective of qualitative methodology, participants who meet or exceed a specific criterion or criteria possess intimate (or, at the very least, greater) knowledge of the phenomenon of interest by virtue of their experience, making them information-rich cases.
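A criterion-i selection of the kind described (drawing qualitative participants from the larger quantitative sample because they play a specific role) can be sketched as a filter followed by an optional random draw, mirroring the "15 of 25 providers" situation above. This is an illustrative sketch only; the records and field names are invented.

```python
import random

def criterion_i_sample(participants, criterion, k=None, seed=None):
    """Criterion-i sampling: keep only participants meeting the
    inclusion criterion (a predicate on each record), then optionally
    draw k of them at random when not every role representative can
    be interviewed.
    """
    eligible = [p for p in participants if criterion(p)]
    if k is None or k >= len(eligible):
        return eligible
    return random.Random(seed).sample(eligible, k)

# Hypothetical quantitative-study sample of 25 agency staff.
survey = [{"id": i, "role": "provider" if i % 3 else "director"}
          for i in range(25)]
providers = criterion_i_sample(survey, lambda p: p["role"] == "provider",
                               k=15, seed=7)
```

The sketch also makes the criticism in the next paragraph easy to see: the predicate guarantees everyone sampled shares the role, but the random draw among eligibles does nothing to pick out the most knowledgeable or articulate of them.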
However, criterion sampling may not be the most appropriate strategy for implementation research because, by attempting to capture both breadth and depth of understanding, it may be inadequate to the task of accomplishing either. Although qualitative methods are often contrasted with quantitative methods on the basis of depth versus breadth, they actually require elements of both to provide a comprehensive understanding of the phenomenon of interest. Ideally, the goal of achieving theoretical saturation by providing as much detail as possible involves selecting individuals or cases that ensure both that all aspects of the phenomenon are included in the examination and that each aspect is thoroughly examined. This goal therefore requires an approach that sequentially or simultaneously expands and narrows the field of view. By selecting only individuals who meet a specific criterion defined on the basis of their role in the implementation process, or who have had a specific experience (e.g., engaged only in an implementation defined as successful or only in one defined as unsuccessful), one may fail to capture the experiences or activities of other groups playing other roles in the process. For instance, a focus only on practitioners may miss the insights, experiences, and activities of consumers, family members, agency directors, administrative staff, or state policy leaders, thus limiting the breadth of understanding of the implementation process. On the other hand, selecting participants solely on the basis of their role as practitioner, consumer, director, or staff may fail to identify those with the greatest experience, the most knowledge, or the greatest ability to communicate what they know and have experienced, thus limiting the depth of understanding of the implementation process.
To address the potential limitations of criterion sampling, other purposeful sampling strategies should be considered and possibly adopted in implementation research (Figure 1). For instance, strategies placing greater emphasis on breadth and variation, such as maximum variation, extreme case, and confirming and disconfirming case sampling, are better suited to an examination of differences, while strategies placing greater emphasis on depth and similarity, such as homogeneous, snowball, and typical case sampling, are better suited to an examination of commonalities, even though both types of strategy attend to both differences and similarities. Alternatives to criterion sampling may also be better matched to the specific functions of mixed methods. For instance, using qualitative methods for the purpose of complementarity may require a sampling strategy that emphasizes similarity if it is to achieve depth of understanding, or to explore and develop hypotheses that complement a quantitative probability sampling strategy achieving breadth of understanding and testing hypotheses (Kemper et al., 2003). Similarly, mixed methods that address related questions for the purpose of expanding or explaining results or developing new measures or conceptual models may require a purposeful sampling strategy aiming for similarity that complements probability sampling aiming for variation or dispersion. A narrowly focused purposeful sampling strategy for qualitative analysis that “complements” a more broadly focused probability sample for quantitative analysis may help to achieve a balance between inference quality/trustworthiness (internal validity) and generalizability/transferability (external validity). A single method that takes only a broad view may achieve external validity at the expense of internal validity (Kemper et al., 2003).
On the other hand, the aim of convergence (answering the same question with either method) may suggest use of a purposeful sampling strategy that aims for breadth that parallels the quantitative probability sampling strategy.
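As an illustration of one of the depth-oriented strategies named above, a snowball sample can be sketched as a wave-by-wave traversal of referrals. The referral network and parameters here are hypothetical, chosen only to show the mechanics:

```python
import random

def snowball_sample(seeds, referrals, waves=2, per_person=2, seed=0):
    """Grow a sample wave by wave from each participant's referrals."""
    rng = random.Random(seed)
    sampled, frontier = set(seeds), list(seeds)
    for _ in range(waves):
        nxt = []
        for person in frontier:
            # Only follow referrals to people not yet in the sample.
            candidates = [p for p in referrals.get(person, [])
                          if p not in sampled]
            for ref in rng.sample(candidates, min(per_person, len(candidates))):
                sampled.add(ref)
                nxt.append(ref)
        frontier = nxt
    return sampled

# Hypothetical referral network among service providers
network = {"a": ["b", "c", "d"], "b": ["e"], "c": ["a", "f"], "d": ["g"]}
sample = snowball_sample(["a"], network)
```

Because each wave recruits through the previous wave's contacts, the resulting sample clusters around people who know each other, which is precisely why the strategy favors depth and similarity over breadth.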

Figure 1. Purposeful and Random Sampling Strategies for Mixed Method Implementation Studies
- (1) Priority and sequencing of Qualitative (QUAL) and Quantitative (QUAN) components can be reversed.
- (2) Refers to the emphasis of the sampling strategy.

Furthermore, the specific nature of implementation research suggests that a multistage purposeful sampling strategy be used. Three different multistage sampling strategies are illustrated below. Several qualitative methodologists recommend sampling for variation (breadth) before sampling for commonalities (depth) (Glaser, 1978; Bernard, 2002) (Multistage I). Also known as a “funnel approach,” this strategy is often recommended when conducting semi-structured interviews (Spradley, 1979) or focus groups (Morgan, 1997). It begins with a broad view of the topic and then narrows the conversation to very specific components of the topic. However, as noted earlier, the lack of a clear understanding of the nature of the range may require an iterative approach in which each stage of data analysis helps to determine subsequent means of data collection and analysis (Denzin, 1978; Patton, 2001) (Multistage II). Similarly, multistage purposeful sampling designs such as opportunistic or emergent sampling allow the option of adding to a sample to take advantage of unforeseen opportunities after data collection has begun (Patton, 2001, p. 240) (Multistage III). Multistage I models generally involve two stages, while a Multistage II model requires a minimum of three stages, alternating between sampling for variation and sampling for similarity. A Multistage III model begins with sampling for variation and ends with sampling for similarity, but may involve one or more intervening stages of sampling for variation or similarity as the need or opportunity arises.
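The funnel logic of a two-stage Multistage I design can be sketched in code. This is only an illustration, with hypothetical agency records and a made-up fidelity score: stage one samples for maximum variation (breadth), stage two narrows to a homogeneous subset around a typical value (depth):

```python
def maximum_variation(records, key, k):
    """Stage one: spread selection across the full range of `key` (breadth)."""
    ordered = sorted(records, key=lambda r: r[key])
    step = max(1, len(ordered) // k)
    return ordered[::step][:k]

def homogeneous(records, key, target, k):
    """Stage two: narrow to the k cases closest to a typical value (depth)."""
    return sorted(records, key=lambda r: abs(r[key] - target))[:k]

# Hypothetical agencies scored on implementation fidelity (0-100)
agencies = [{"id": i, "fidelity": (i * 13) % 101} for i in range(300)]

stage1 = maximum_variation(agencies, "fidelity", 30)   # broad view first
typical = sorted(a["fidelity"] for a in stage1)[len(stage1) // 2]
stage2 = homogeneous(stage1, "fidelity", typical, 8)   # then narrow
```

An iterative Multistage II design would repeat this alternation, letting the analysis of each stage's data determine the target and breadth of the next draw.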
Multistage purposeful sampling is also consistent with the use of hybrid designs to simultaneously examine intervention effectiveness and implementation. An extension of the concept of “practical clinical trials” (Tunis, Stryer, & Clancy, 2003), effectiveness-implementation hybrid designs provide benefits such as more rapid translational gains in clinical intervention uptake, more effective implementation strategies, and more useful information for researchers and decision makers (Curran et al., 2012). Such designs may give equal priority to the testing of clinical treatments and implementation strategies (Hybrid Type 2), or give priority to the testing of treatment effectiveness (Hybrid Type 1) or of the implementation strategy (Hybrid Type 3). Curran and colleagues (2012) suggest that evaluation of an intervention’s effectiveness will involve quantitative measures, while evaluation of the implementation process will involve mixed methods. When conducting a Hybrid Type 1 design (a process evaluation of implementation in the context of a clinical effectiveness trial), the qualitative data can be used to inform the findings of the effectiveness trial. Thus, an effectiveness trial that finds substantial variation might purposefully select participants using a broader strategy, such as sampling for disconfirming cases, to account for the variation. For instance, group randomized trials require knowledge of the contexts and circumstances that are similar and different across sites in order to account for inevitable site differences in interventions and to assist local implementations of an intervention (Bloom & Michalopoulos, 2013; Raudenbush & Liu, 2000). Alternatively, a narrow strategy may be used to account for a lack of variation. In either instance, the choice of purposeful sampling strategy is determined by the outcomes of the quantitative analysis, which is based on a probability sampling strategy.
In Hybrid Type 2 and Type 3 designs, where the implementation process is given equal or greater priority than the effectiveness trial, the purposeful sampling strategy must be first and foremost consistent with the aims of the implementation study, which may be to understand variation, central tendencies, or both. In all three instances, the sampling strategy employed for the implementation study may vary based on the priority assigned to that study relative to the effectiveness trial. For instance, purposeful sampling for a Hybrid Type 1 design may give higher priority to variation and comparison in order to understand the parameters of implementation processes or context as a contribution to understanding effectiveness outcomes (i.e., using qualitative data to expand upon or explain the results of the effectiveness trial). In effect, these process measures can be seen as modifiers of innovation/EBP outcomes. In contrast, purposeful sampling for a Hybrid Type 3 design may give higher priority to similarity and depth in order to understand the core features of successful outcomes only.
Finally, multistage sampling strategies may be more consistent with innovations in experimental design that offer alternatives to the classic randomized controlled trial (RCT) in community-based settings, with greater feasibility, acceptability, and external validity. While RCT designs provide the highest level of evidence, “in many clinical and community settings, and especially in studies with underserved populations and low resource settings, randomization may not be feasible or acceptable” (Glasgow et al., 2005, p. 554). Randomized trials are also “relatively poor in assessing the benefit from complex public health or medical interventions that account for individual preferences for or against certain interventions, differential adherence or attrition, or varying dosage or tailoring of an intervention to individual needs” (Brown et al., 2009, p. 2). Several alternatives to the randomized design have been proposed, such as “interrupted time series,” “multiple baseline across settings,” and “regression-discontinuity” designs. Optimal designs represent one such alternative to the classic RCT and are addressed in detail by Duan and colleagues (this issue). Like purposeful sampling, optimal designs are intended to capture information-rich cases, usually identified as the individuals most likely to benefit from the experimental intervention. The goal here is not to identify the typical or average patient, but patients who represent one end of the variation, in an extreme case, intensity, or criterion sampling strategy. Hence, a sampling strategy that begins by sampling for variation at the first stage and then samples for homogeneity within a specific parameter of that variation (i.e., one end or the other of the distribution) at the second stage would seem the best approach for identifying an “optimal” sample for the clinical trial.
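A rough sketch of the second stage of that approach, keeping one extreme of the observed variation, might look like the following. The patient records and severity scores are invented for illustration:

```python
def extreme_case_sample(records, key, k, tail="high"):
    """Select the k cases at one end of the distribution of `key`."""
    ordered = sorted(records, key=lambda r: r[key], reverse=(tail == "high"))
    return ordered[:k]

# Hypothetical patients scored for symptom severity (0-100)
patients = [{"id": i, "severity": (i * 37) % 101} for i in range(500)]

# After a first stage has mapped the variation, the second stage keeps
# one extreme of it: here, the patients with the highest severity,
# i.e., those assumed most likely to benefit from the intervention.
optimal = extreme_case_sample(patients, "severity", 25, tail="high")
```

The same function with `tail="low"` would select the opposite end of the distribution, so the choice of tail encodes the substantive assumption about who is "information-rich" for the trial.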
Another alternative to the classic RCT is the set of adaptive designs proposed by Brown and colleagues (Brown et al., 2006; Brown et al., 2008; Brown et al., 2009). Adaptive designs are a sequence of trials that draw on the results of existing studies to determine the next stage of evaluation research. They use cumulative knowledge of current treatment successes or failures to change qualities of the ongoing trial. An adaptive intervention modifies what an individual subject (or community, for a group-based trial) receives in response to his or her preferences or initial responses to an intervention. Consistent with multistage sampling in qualitative research, the design is somewhat iterative in nature, in the sense that information gained from analysis of data collected at the first stage influences the nature of the data collected, and the way they are collected, at subsequent stages (Denzin, 1978). Furthermore, many of these adaptive designs may benefit from a multistage purposeful sampling strategy at early phases of the clinical trial to identify the range of variation and core characteristics of study participants. This information can then be used to identify the optimal dose of treatment, limit sample size, randomize participants into different enrollment procedures, determine who should be eligible for random assignment (as in the optimal design) so as to maximize treatment adherence and minimize dropout, or identify incentives and motives that may encourage participation in the trial itself.
Alternatives to the classic RCT design may also be desirable in studies that adopt a community-based participatory research framework (Minkler & Wallerstein, 2003), considered an important tool in conducting implementation research (Palinkas & Soydan, 2012). Such frameworks suggest that identification and recruitment of potential study participants will place greater emphasis on the priorities and “local knowledge” of community partners than on the need to sample for variation or uniformity. In this instance, the first stage of sampling may approximate the strategy of sampling politically important cases (Patton, 2002), followed by other sampling strategies intended to maximize variation in stakeholder opinions or experience.
On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research. First, many mixed methods studies in health services research and implementation science do not clearly identify or provide a rationale for the sampling procedure for either the quantitative or the qualitative components of the study (Wisdom et al., 2011), so a primary recommendation is for researchers to clearly describe their sampling strategies and provide the rationale for each strategy.
Second, use of a single stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative. Kemper and colleagues (2003) identify seven such principles: 1) the sampling strategy should stem logically from the conceptual framework as well as the research questions being addressed by the study; 2) the sample should be able to generate a thorough database on the type of phenomenon under study; 3) the sample should at least allow the possibility of drawing clear inferences and credible explanations from the data; 4) the sampling strategy must be ethical; 5) the sampling plan should be feasible; 6) the sampling plan should allow the researcher to transfer/generalize the conclusions of the study to other settings or populations; and 7) the sampling scheme should be as efficient as practical.
Third, the field of implementation research is at a stage where qualitative methods are intended primarily to explore the barriers and facilitators of EBP implementation and to develop new conceptual models of implementation process and outcomes. This is especially important in state implementation research, where fiscal necessities are driving policy reforms for which knowledge about EBP implementation barriers and facilitators is urgently needed. Thus, a multistage strategy for purposeful sampling should begin with a broad view emphasizing variation or dispersion and then move to a narrow view emphasizing similarity or central tendencies. Such a strategy is necessary for finding the optimal balance between internal and external validity.
Fourth, if we assume that probability sampling will be the preferred strategy for the quantitative components of most implementation research, the selection of a single-stage or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample: either for answering the same question (in which case a strategy emphasizing variation and dispersion is preferred) or for answering related questions (in which case a strategy emphasizing similarity and central tendencies is preferred).
Fifth, it should be kept in mind that all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity and difference, of both centrality and dispersion, because both elements are essential to the task of generating new knowledge through the processes of comparison and contrast. Selecting a strategy that emphasizes one does not mean that it cannot be used for the other. That said, our analysis has assumed at least some degree of concordance between the breadth of understanding associated with quantitative probability sampling and purposeful sampling strategies that emphasize variation on the one hand, and between depth of understanding and purposeful sampling strategies that emphasize similarity on the other. While there may be some merit to that assumption, depth of understanding requires an understanding of both variation and common elements.
Finally, it should also be kept in mind that quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy. Each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements. Nevertheless, the promise of mixed methods, like the promise of implementation science, lies in its ability to move beyond the confines of existing methodological approaches and develop innovative solutions to important and complex problems. For states engaged in EBP implementation, the need for these solutions is urgent.

Multistage Purposeful Sampling Strategies
Acknowledgments
This study was funded through a grant from the National Institute of Mental Health (P30-MH090322: K. Hoagwood, PI).
- Edited by: Albert J. Mills, Gabrielle Durepos & Elden Wiebe
- In: Encyclopedia of Case Study Research
- Chapter DOI: https://doi.org/10.4135/9781412957397
- Subject: Anthropology, Business and Management, Criminology and Criminal Justice, Communication and Media Studies, Economics, Education, Geography, Health, Marketing, Nursing, Political Science and International Relations, Psychology, Social Policy and Public Policy, Social Work, Sociology
Sampling in case study research involves decisions that the researchers make regarding sampling strategies, the number of case studies, and the definition of the unit of analysis. It is central to theory-building and ...

Q: What would be the right sampling technique for research involving multiple case studies?
Hi. This is my first attempt at conducting research. My topic reads: The effects of supporting employees' personal development and growth on the organization's performance. It involves multiple case studies (about four companies), but I'm struggling with the right sampling technique to use. Kindly help.
Asked on 22 Oct, 2020
You have already chosen the companies. So, we assume the question is about choosing a sample of employees from each of them. First off, this is a case study, which itself implies that the findings are not expected to be generalizable across the board but are indicative, which, in turn, allows some leeway in choosing the samples.
That said, much depends on the size of each company and the numbers you are prepared to handle. If the companies are small, perhaps you can eliminate sampling altogether and interview everyone. If that is not feasible, you need to decide which categories of employees to include: by management level, for example, or by years of service with the company. If a company has multiple locations, you may want to ensure that all locations are represented.
These considerations call for stratified sampling, with the numbers drawn in proportion to the size of each stratum. For instance, if one location has many employees and another has only a few, you would sample from each in proportion. The topic also implies a qualitative rather than a quantitative approach, which again means that a rigorous sampling exercise may not be required.
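By way of illustration only (the company numbers below are invented), proportional allocation across strata can be computed like this:

```python
def proportional_allocation(strata_sizes, total_n):
    """Split a total sample size across strata in proportion to their sizes."""
    grand = sum(strata_sizes.values())
    return {name: round(total_n * size / grand)
            for name, size in strata_sizes.items()}

# Hypothetical company with three locations of very different sizes
locations = {"head_office": 120, "plant": 60, "branch": 20}
allocation = proportional_allocation(locations, 20)
# → {'head_office': 12, 'plant': 6, 'branch': 2}
```

You would then draw the allocated number of employees at random within each location; note that rounding can make the allocations sum to slightly more or less than the intended total, so check and adjust.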
Given the limited information on your proposed research, any (more) specific recommendations will be out of place at best and misleading at worst. However, we hope you will think along the lines suggested to come up with the most suitable sampling technique. All the best!
For more insights on some of the key points discussed above, you may find the following resources useful:
- Which method can be used to obtain quantified data by comparing three groups within an organization though with a different sample size in each group?
- What kind of research method should I use for my thesis: qualitative or quantitative?
- Types of qualitative research methods
[With inputs from Yateendra Joshi]

Answered by Editage Insights on 23 Oct, 2020


What Is a Case Study? | Definition, Examples & Methods
Published on May 8, 2019 by Shona McCombes. Revised on January 30, 2023.
A case study is a detailed study of a specific subject, such as a person, group, place, event, organization, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.
A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating, and understanding different aspects of a research problem.
Table of contents
- When to do a case study
- Step 1: Select a case
- Step 2: Build a theoretical framework
- Step 3: Collect your data
- Step 4: Describe and analyze the case
A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.
Case studies are often a good choice in a thesis or dissertation . They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.
You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.
Step 1: Select a case

Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:
- Provide new or unexpected insights into the subject
- Challenge or complicate existing assumptions and theories
- Propose practical courses of action to resolve a problem
- Open up new directions for future research
Unlike quantitative or experimental research , a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.
However, you can also choose a more common or representative case to exemplify a particular category, experience or phenomenon.
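The contrast between random sampling and deliberate case selection can be made concrete with a short sketch. The firms and revenue figures below are hypothetical illustrations, not data from the article; the sketch only shows the logic of drawing a representative sample versus purposively picking an outlier case:

```python
import random

# Hypothetical candidate pool: annual revenue (in $M) for six firms.
firms = {"A": 12, "B": 14, "C": 11, "D": 13, "E": 95, "F": 12}

# A quantitative design would typically draw a random, representative sample.
random.seed(0)
random_sample = random.sample(sorted(firms), k=3)

# A case study may instead deliberately select the deviant case:
# here, the firm whose revenue differs most from the group mean.
mean = sum(firms.values()) / len(firms)
outlier_case = max(firms, key=lambda f: abs(firms[f] - mean))

print(random_sample)  # three randomly chosen firms
print(outlier_case)   # "E", the outlier worth studying in depth
```

The point is not the arithmetic but the selection criterion: the random draw ignores what makes a case interesting, while the purposive pick is driven by a substantive property of the case.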
Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:
- Exemplify a theory by showing how it explains the case under investigation
- Expand on a theory by uncovering new concepts and ideas that need to be incorporated
- Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions
To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework . This means identifying key concepts and theories to guide your analysis and interpretation.
Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.
The aim is to gain as thorough an understanding as possible of the case and its context.
Step 4: Describe and analyze the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.
How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results, and discussion.
Others are written in a more narrative style, aiming to explore the case from various angles and analyze its meanings and implications (for example, by using textual analysis or discourse analysis ).
In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.

Five Fantastic Product Sampling Case Studies

Many companies are hesitant to do product sampling for the first time, and it's clear why. The costs are real: how much money are you spending on the samples? What if, after the sampling is done, no one actually buys the product? On the other hand, what is the cost of not sampling? Luckily, many companies have sampled with great results. We've compiled the following product sampling case studies that focus on what has worked and why.
Target Your Market Precisely: Mr. Drop Coffee
This article in Inc. profiles Gordon Grade and his Mr. Drop coffee. During finals week at NYU, the company handed out thousands of free samples of coffee to harried college students who were deprived of sleep as they prepared for finals. Is a tired college student interested in a caffeine boost? Yes. Are they likely to remember the product and consider buying it? It's likely.
One of the things we can learn from Grade's product sampling is that he defined his target market. Who wants coffee more than a fatigued college student? Grade introduced his samples not to the general public of Manhattan, but in a particular area near the campus where the participants likely felt like they needed the product. This case study is a great example of finding your audience and catering to them.

Use Social Media And Track Your Results: Texas Pete Hot Sauce
Depending on your company's offerings, a social media giveaway may be the perfect product sampling method. Not only can users "opt in" to receiving the product, but their online friends will be able to see their activity and interaction with your company. In 2009, Texas Pete Hot Sauce conducted a giveaway on Facebook to great success. Not only did their samples fly off their digital shelves, but they were able to send recipients coupons that contained a special code. When customers redeemed these coupons, Texas Pete Hot Sauce could measure the redemption rate.
Collecting data is valuable for any kind of marketing campaign, especially one that involves direct contact with customers like a social media giveaway. Product sampling is not just about giving your customers an experience of your product. It's also a technique to collect valuable information that has the capacity to inform all of your marketing campaigns. Before any samples are designed or distributed, consider how you will implement a mechanism to map your results.
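One simple tracking mechanism is the coded coupon itself: each redeemed code is logged at checkout, and the ratio of redemptions to coupons distributed gives the campaign's redemption rate. The sketch below is a minimal illustration; the coupon codes and counts are hypothetical, not Texas Pete data:

```python
# Minimal sketch of tracking a sampling campaign via coded coupons.
# All codes and figures here are hypothetical illustrations.

coupons_sent = 5000

# Each redemption is logged by its unique coupon code at checkout.
redeemed_codes = {"TP-0001", "TP-0042", "TP-0999", "TP-1337"}

def redemption_rate(sent: int, redeemed: set) -> float:
    """Share of distributed coupons that were actually redeemed."""
    return len(redeemed) / sent

rate = redemption_rate(coupons_sent, redeemed_codes)
print(f"Redemption rate: {rate:.2%}")  # 4 of 5000 -> 0.08%
```

Because each code is unique, the same log can also tie a redemption back to where and when the sample was handed out, which is exactly the kind of information that can inform later campaigns.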
Reward Existing Customers: Sephora
Sephora is well-known in some sales circles for including free product samples with every online purchase. This is a brilliant strategy for a couple of reasons. First, it encourages online sales because customers feel that they are getting a unique "freebie." Additionally, loyal customers have the experience of being rewarded for their business. Finally, because the sales are online and customers get to choose their own samples, Sephora can track whether those customers go on to buy the product during their next online shopping excursion.
There are many rewards programs on the market these days, and they enjoy varying levels of success. Sending product samples to existing customers does not place an extra burden on customers by making them carry a card or remember a code. It also builds product sampling into the existing sales strategy instead of throwing it in as an add-on.
Joining Dual Audiences: Snack Factory
This Upserve article chronicles Snack Factory, the maker of Pretzel Crisps, and their effective use of product sampling. This unique product is neither pretzel nor chip, but a marriage of the two. Customers may be unlikely to try a product they've never heard of before, but the typical American customer has tried a pretzel or a chip. By focusing on the existing markets for chips and for pretzels, Snack Factory was able to offer samples that built on these audiences' existing knowledge of such snack foods.
The article reports that almost 25% of individuals who sampled the Pretzel Crisps during this promotion ended up purchasing the product.
The Heart of the Matter: Making Sampling Work for You
As with any marketing strategy, the key to product sampling is to use a strategy that works for your company. Studying the successes (and failures) of other companies can help inform your approach. That being said, as an entrepreneur, you already have a lot on your plate. Sonas Marketing works with businesses like yours to launch engagement marketing strategies for events and promotions. We apply our years of experience to making your product sampling promotion a success.
