Chapter 10: Single-Subject Research

Single-Subject Research Designs

Learning Objectives

  • Describe the basic elements of a single-subject research design.
  • Design simple single-subject studies using reversal and multiple-baseline designs.
  • Explain how single-subject research designs address the issue of internal validity.
  • Interpret the results of simple single-subject studies based on the visual inspection of graphed data.

General Features of Single-Subject Designs

Before looking at any specific single-subject research designs, it will be helpful to consider some features that are common to most of them. Many of these features are illustrated in Figure 10.2, which shows the results of a generic single-subject study. First, the dependent variable (represented on the y-axis of the graph) is measured repeatedly over time (represented by the x-axis) at regular intervals. Second, the study is divided into distinct phases, and the participant is tested under one condition per phase. The conditions are often designated by capital letters: A, B, C, and so on. Thus Figure 10.2 represents a design in which the participant was tested first in one condition (A), then tested in another condition (B), and finally retested in the original condition (A). (This is called a reversal design and will be discussed in more detail shortly.)

Figure 10.2. A subject was tested under condition A, then under condition B, then under condition A again.
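To make this structure concrete, the following minimal sketch (with invented numbers, not the book’s data) organizes a generic ABA record the way Figure 10.2 does: one dependent variable measured at regular intervals, with dashed lines marking where the condition changes.

    import matplotlib.pyplot as plt

    # Hypothetical observations for a generic ABA study: five sessions per phase.
    sessions = list(range(1, 16))
    behaviour = [3, 4, 3, 4, 3, 8, 9, 10, 9, 10, 4, 3, 4, 3, 3]

    plt.plot(sessions, behaviour, marker="o")
    for boundary in (5.5, 10.5):                    # dashed lines mark the phase changes
        plt.axvline(boundary, linestyle="--", color="grey")
    for x, label in zip((3, 8, 13), ("A", "B", "A")):
        plt.text(x, max(behaviour) + 0.5, label, ha="center")
    plt.xlabel("Observation (session)")
    plt.ylabel("Dependent variable")
    plt.title("Generic ABA (reversal) single-subject design")
    plt.show()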

Another important aspect of single-subject research is that the change from one condition to the next does not usually occur after a fixed amount of time or number of observations. Instead, it depends on the participant’s behaviour. Specifically, the researcher waits until the participant’s behaviour in one condition becomes fairly consistent from observation to observation before changing conditions. This is sometimes referred to as the steady state strategy  (Sidman, 1960) [1] . The idea is that when the dependent variable has reached a steady state, then any change across conditions will be relatively easy to detect. Recall that we encountered this same principle when discussing experimental research more generally. The effect of an independent variable is easier to detect when the “noise” in the data is minimized.
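In practice the steady state judgment is usually made by eye, but a simple stability rule illustrates the logic. The sketch below is only an illustration; the choice of five observations and a 10% tolerance is an arbitrary assumption, not a criterion given in the text.

    def is_steady(observations, k=5, tolerance=0.10):
        """Return True when the last k observations span a range no larger
        than `tolerance` (a proportion) of their mean."""
        if len(observations) < k:
            return False
        recent = observations[-k:]
        mean = sum(recent) / k
        spread = max(recent) - min(recent)
        if mean == 0:
            return spread == 0
        return spread / abs(mean) <= tolerance

    baseline = [22, 30, 26, 25, 24, 25, 26, 25]   # hypothetical % of intervals on task
    print(is_steady(baseline))                    # True: responding has stabilized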

Reversal Designs

The most basic single-subject research design is the reversal design, also called the ABA design. During the first phase, A, a baseline is established for the dependent variable. This is the level of responding before any treatment is introduced, and therefore the baseline phase is a kind of control condition. When steady state responding is reached, phase B begins as the researcher introduces the treatment. There may be a period of adjustment to the treatment during which the behaviour of interest becomes more variable and begins to increase or decrease. Again, the researcher waits until that dependent variable reaches a steady state so that it is clear whether and how much it has changed. Finally, the researcher removes the treatment and again waits until the dependent variable reaches a steady state. This basic reversal design can also be extended with the reintroduction of the treatment (ABAB), another return to baseline (ABABA), and so on.

The study by Hall and his colleagues was an ABAB reversal design. Figure 10.3 approximates the data for Robbie. The percentage of time he spent studying (the dependent variable) was low during the first baseline phase, increased during the first treatment phase until it leveled off, decreased during the second baseline phase, and again increased during the second treatment phase.

Figure 10.3. A graph showing the results of a study with an ABAB reversal design (long description below).

Why is the reversal—the removal of the treatment—considered to be necessary in this type of design? Why use an ABA design, for example, rather than a simpler AB design? Notice that an AB design is essentially an interrupted time-series design applied to an individual participant. Recall that one problem with that design is that if the dependent variable changes after the treatment is introduced, it is not always clear that the treatment was responsible for the change. It is possible that something else changed at around the same time and that this extraneous variable is responsible for the change in the dependent variable. But if the dependent variable changes with the introduction of the treatment and then changes  back  with the removal of the treatment (assuming that the treatment does not create a permanent effect), it is much clearer that the treatment (and removal of the treatment) is the cause. In other words, the reversal greatly increases the internal validity of the study.

There are close relatives of the basic reversal design that allow for the evaluation of more than one treatment. In a multiple-treatment reversal design, a baseline phase is followed by separate phases in which different treatments are introduced. For example, a researcher might establish a baseline of studying behaviour for a disruptive student (A), then introduce a treatment involving positive attention from the teacher (B), and then switch to a treatment involving mild punishment for not studying (C). The participant could then be returned to a baseline phase before reintroducing each treatment—perhaps in the reverse order as a way of controlling for carryover effects. This particular multiple-treatment reversal design could also be referred to as an ABCACB design.

In an alternating treatments design, two or more treatments are alternated relatively quickly on a regular schedule. For example, positive attention for studying could be used one day and mild punishment for not studying the next, and so on. Or one treatment could be implemented in the morning and another in the afternoon. The alternating treatments design can be a quick and effective way of comparing treatments, but only when the treatments are fast acting.

Multiple-Baseline Designs

There are two potential problems with the reversal design—both of which have to do with the removal of the treatment. One is that if a treatment is working, it may be unethical to remove it. For example, if a treatment seemed to reduce the incidence of self-injury in a developmentally disabled child, it would be unethical to remove that treatment just to show that the incidence of self-injury increases. The second problem is that the dependent variable may not return to baseline when the treatment is removed. For example, when positive attention for studying is removed, a student might continue to study at an increased rate. This could mean that the positive attention had a lasting effect on the student’s studying, which of course would be good. But it could also mean that the positive attention was not really the cause of the increased studying in the first place. Perhaps something else happened at about the same time as the treatment—for example, the student’s parents might have started rewarding him for good grades.

One solution to these problems is to use a multiple-baseline design, which is represented in Figure 10.4. In one version of the design, a baseline is established for each of several participants, and the treatment is then introduced for each one. In essence, each participant is tested in an AB design. The key to this design is that the treatment is introduced at a different time for each participant. The idea is that if the dependent variable changes when the treatment is introduced for one participant, it might be a coincidence. But if the dependent variable changes when the treatment is introduced for multiple participants—especially when the treatment is introduced at different times for the different participants—then it is extremely unlikely to be a coincidence.

Figure 10.4. Three graphs depicting the results of a multiple-baseline study (long description below).
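A minimal sketch of this logic, with invented numbers and hypothetical participant labels, is shown below: the same treatment begins at a different session for each participant, and each participant’s baseline is compared only with that same participant’s treatment phase.

    # Hypothetical weekly scores; treatment starts at a different session for each person.
    data = {
        "participant_1": {"treatment_starts": 4, "scores": [9, 8, 9, 10, 4, 3, 3, 2, 2, 1]},
        "participant_2": {"treatment_starts": 6, "scores": [8, 9, 8, 9, 9, 8, 3, 2, 2, 1]},
        "participant_3": {"treatment_starts": 8, "scores": [10, 9, 9, 10, 9, 10, 9, 9, 3, 2]},
    }

    for name, record in data.items():
        start = record["treatment_starts"]            # index of the first treatment session
        baseline = record["scores"][:start]
        treatment = record["scores"][start:]
        print(f"{name}: baseline mean = {sum(baseline) / len(baseline):.1f}, "
              f"treatment mean = {sum(treatment) / len(treatment):.1f}")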

As an example, consider a study by Scott Ross and Robert Horner (Ross & Horner, 2009) [2] . They were interested in how a school-wide bullying prevention program affected the bullying behaviour of particular problem students. At each of three different schools, the researchers studied two students who had regularly engaged in bullying. During the baseline phase, they observed the students for 10-minute periods each day during lunch recess and counted the number of aggressive behaviours they exhibited toward their peers. (The researchers used handheld computers to help record the data.) After 2 weeks, they implemented the program at one school. After 2 more weeks, they implemented it at the second school. And after 2 more weeks, they implemented it at the third school. They found that the number of aggressive behaviours exhibited by each student dropped shortly after the program was implemented at his or her school. Notice that if the researchers had only studied one school or if they had introduced the treatment at the same time at all three schools, then it would be unclear whether the reduction in aggressive behaviours was due to the bullying program or something else that happened at about the same time it was introduced (e.g., a holiday, a television program, a change in the weather). But with their multiple-baseline design, this kind of coincidence would have to happen three separate times—a very unlikely occurrence—to explain their results.
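To attach rough, purely illustrative numbers to that intuition (these are not figures from Ross and Horner): if the probability that some unrelated event happens to reduce a student’s aggression during any particular two-week window were, say, 0.10, then the probability of such coincidences lining up with all three staggered introductions would be about 0.10 × 0.10 × 0.10 = 0.001, assuming the schools are independent of one another.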

In another version of the multiple-baseline design, multiple baselines are established for the same participant but for different dependent variables, and the treatment is introduced at a different time for each dependent variable. Imagine, for example, a study on the effect of setting clear goals on the productivity of an office worker who has two primary tasks: making sales calls and writing reports. Baselines for both tasks could be established. For example, the researcher could measure the number of sales calls made and reports written by the worker each week for several weeks. Then the goal-setting treatment could be introduced for one of these tasks, and at a later time the same treatment could be introduced for the other task. The logic is the same as before. If productivity increases on one task after the treatment is introduced, it is unclear whether the treatment caused the increase. But if productivity increases on both tasks after the treatment is introduced—especially when the treatment is introduced at two different times—then it seems much clearer that the treatment was responsible.

In yet a third version of the multiple-baseline design, multiple baselines are established for the same participant but in different settings. For example, a baseline might be established for the amount of time a child spends reading during his free time at school and during his free time at home. Then a treatment such as positive attention might be introduced first at school and later at home. Again, if the dependent variable changes after the treatment is introduced in each setting, then this gives the researcher confidence that the treatment is, in fact, responsible for the change.

Data Analysis in Single-Subject Research

In addition to its focus on individual participants, single-subject research differs from group research in the way the data are typically analyzed. As we have seen throughout the book, group research involves combining data across participants. Group data are described using statistics such as means, standard deviations, Pearson’s r, and so on to detect general patterns. Finally, inferential statistics are used to help decide whether the result for the sample is likely to generalize to the population. Single-subject research, by contrast, relies heavily on a very different approach called visual inspection. This means plotting individual participants’ data as shown throughout this chapter, looking carefully at those data, and making judgments about whether and to what extent the independent variable had an effect on the dependent variable. Inferential statistics are typically not used.

In visually inspecting their data, single-subject researchers take several factors into account. One of them is changes in the level of the dependent variable from condition to condition. If the dependent variable is much higher or much lower in one condition than another, this suggests that the treatment had an effect. A second factor is trend, which refers to gradual increases or decreases in the dependent variable across observations. If the dependent variable begins increasing or decreasing with a change in conditions, then again this suggests that the treatment had an effect. It can be especially telling when a trend changes directions—for example, when an unwanted behaviour is increasing during baseline but then begins to decrease with the introduction of the treatment. A third factor is latency, which is the time it takes for the dependent variable to begin changing after a change in conditions. In general, if a change in the dependent variable begins shortly after a change in conditions, this suggests that the treatment was responsible.
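Although these judgments are made visually, each factor has a rough numerical counterpart. The sketch below uses invented data and my own helper definitions (for instance, treating latency as the index of the first treatment observation that falls outside the baseline range); it illustrates the ideas rather than any standard formula.

    import numpy as np

    baseline = np.array([12, 13, 12, 14, 13])        # hypothetical data
    treatment = np.array([13, 9, 7, 5, 4, 3])

    # Level: difference in central tendency between conditions.
    level_change = treatment.mean() - baseline.mean()

    # Trend: slope of a straight line fitted to each phase (units per observation).
    baseline_slope = np.polyfit(np.arange(len(baseline)), baseline, 1)[0]
    treatment_slope = np.polyfit(np.arange(len(treatment)), treatment, 1)[0]

    # Latency: how many treatment observations pass before the data first fall
    # outside the range seen during baseline.
    outside = np.where((treatment < baseline.min()) | (treatment > baseline.max()))[0]
    latency = int(outside[0]) if outside.size else None

    print(level_change, baseline_slope, treatment_slope, latency)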

In the top panel of Figure 10.5, there are fairly obvious changes in the level and trend of the dependent variable from condition to condition. Furthermore, the latencies of these changes are short; the change happens immediately. This pattern of results strongly suggests that the treatment was responsible for the changes in the dependent variable. In the bottom panel of Figure 10.5, however, the changes in level are fairly small. And although there appears to be an increasing trend in the treatment condition, it looks as though it might be a continuation of a trend that had already begun during baseline. This pattern of results strongly suggests that the treatment was not responsible for any changes in the dependent variable—at least not to the extent that single-subject researchers typically hope to see.

Figure 10.5. Results of a single-subject study showing level, trend, and latency (long description below).

The results of single-subject research can also be analyzed using statistical procedures—and this is becoming more common. There are many different approaches, and single-subject researchers continue to debate which are the most useful. One approach parallels what is typically done in group research. The mean and standard deviation of each participant’s responses under each condition are computed and compared, and inferential statistical tests such as the t test or analysis of variance are applied (Fisch, 2001) [3] . (Note that averaging across participants is less common.) Another approach is to compute the percentage of nonoverlapping data (PND) for each participant (Scruggs & Mastropieri, 2001) [4] . This is the percentage of responses in the treatment condition that are more extreme than the most extreme response in a relevant control condition. In the study by Hall and his colleagues, for example, all measures of Robbie’s study time in the first treatment condition were greater than the highest measure in the first baseline, for a PND of 100%. The greater the percentage of nonoverlapping data, the stronger the treatment effect. Still, formal statistical approaches to data analysis in single-subject research are generally considered a supplement to visual inspection, not a replacement for it.
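PND itself is straightforward to compute. The sketch below uses invented study-time percentages rather than Robbie’s actual data; the increase_expected argument simply records whether the treatment is supposed to raise or lower the behaviour.

    def pnd(baseline, treatment, increase_expected=True):
        """Percentage of treatment observations more extreme than the most
        extreme baseline observation."""
        if increase_expected:
            nonoverlapping = [x for x in treatment if x > max(baseline)]
        else:
            nonoverlapping = [x for x in treatment if x < min(baseline)]
        return 100 * len(nonoverlapping) / len(treatment)

    study_baseline = [20, 25, 30, 25, 20]         # hypothetical % of intervals spent studying
    study_treatment = [55, 70, 75, 80, 85]
    print(pnd(study_baseline, study_treatment))   # 100.0, as in the example above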

Key Takeaways

  • Single-subject research designs typically involve measuring the dependent variable repeatedly over time and changing conditions (e.g., from baseline to treatment) when the dependent variable has reached a steady state. This approach allows the researcher to see whether changes in the independent variable are causing changes in the dependent variable.
  • In a reversal design, the participant is tested in a baseline condition, then tested in a treatment condition, and then returned to baseline. If the dependent variable changes with the introduction of the treatment and then changes back with the return to baseline, this provides strong evidence of a treatment effect.
  • In a multiple-baseline design, baselines are established for different participants, different dependent variables, or different settings—and the treatment is introduced at a different time on each baseline. If the introduction of the treatment is followed by a change in the dependent variable on each baseline, this provides strong evidence of a treatment effect.
  • Single-subject researchers typically analyze their data by graphing them and making judgments about whether the independent variable is affecting the dependent variable based on level, trend, and latency.
Exercises

  • Practice: Design a simple single-subject study (using either a reversal or a multiple-baseline design) to answer one of the following questions:
      • Does positive attention from a parent increase a child’s toothbrushing behaviour?
      • Does self-testing while studying improve a student’s performance on weekly spelling tests?
      • Does regular exercise help relieve depression?
  • Practice: Create a graph that displays the hypothetical results for the study you designed in Exercise 1. Write a paragraph in which you describe what the results show. Be sure to comment on level, trend, and latency.

Long Descriptions

Figure 10.3 long description: Line graph showing the results of a study with an ABAB reversal design. The dependent variable was low during the first baseline phase; increased during the first treatment; decreased during the second baseline, but was still higher than during the first baseline; and was highest during the second treatment phase. [Return to Figure 10.3]

Figure 10.4 long description: Three line graphs showing the results of a generic multiple-baseline study, in which different baselines are established and treatment is introduced to participants at different times.

For Baseline 1, treatment is introduced one-quarter of the way into the study. The dependent variable ranges between 12 and 16 units during the baseline, but drops down to 10 units with treatment and mostly decreases until the end of the study, ranging between 4 and 10 units.

For Baseline 2, treatment is introduced halfway through the study. The dependent variable ranges between 10 and 15 units during the baseline, then has a sharp decrease to 7 units when treatment is introduced. However, the dependent variable increases to 12 units soon after the drop and ranges between 8 and 10 units until the end of the study.

For Baseline 3, treatment is introduced three-quarters of the way into the study. The dependent variable ranges between 12 and 16 units for the most part during the baseline, with one drop down to 10 units. When treatment is introduced, the dependent variable drops down to 10 units and then ranges between 8 and 9 units until the end of the study. [Return to Figure 10.4]

Figure 10.5 long description: Two graphs showing the results of a generic single-subject study with an ABA design. In the first graph, under condition A, level is high and the trend is increasing. Under condition B, level is much lower than under condition A and the trend is decreasing. Under condition A again, level is about as high as the first time and the trend is increasing. For each change, latency is short, suggesting that the treatment is the reason for the change.

In the second graph, under condition A, level is relatively low and the trend is increasing. Under condition B, level is a little higher than during condition A and the trend is increasing slightly. Under condition A again, level is a little lower than during condition B and the trend is decreasing slightly. It is difficult to determine the latency of these changes, since each change is rather minute, which suggests that the treatment is ineffective. [Return to Figure 10.5]

  • Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. Boston, MA: Authors Cooperative.
  • Ross, S. W., & Horner, R. H. (2009). Bully prevention in positive behaviour support. Journal of Applied Behaviour Analysis, 42, 747–759.
  • Fisch, G. S. (2001). Evaluating data from behavioural analysis: Visual inspection or statistical models. Behavioural Processes, 54, 137–154.
  • Scruggs, T. E., & Mastropieri, M. A. (2001). How to summarize single-participant research: Ideas and applications. Exceptionality, 9, 227–244.

Glossary

Steady state strategy: The researcher waits until the participant’s behaviour in one condition becomes fairly consistent from observation to observation before changing conditions. This way, any change across conditions will be easy to detect.

Reversal design: A study method in which the researcher gathers data on a baseline state, introduces the treatment and continues observation until a steady state is reached, and finally removes the treatment and observes the participant until they return to a steady state.

Baseline: The level of responding before any treatment is introduced; it therefore acts as a kind of control condition.

Multiple-treatment reversal design: A design in which a baseline phase is followed by separate phases in which different treatments are introduced.

Alternating treatments design: A design in which two or more treatments are alternated relatively quickly on a regular schedule.

Multiple-baseline design: A design in which a baseline is established for several participants and the treatment is then introduced to each participant at a different time.

Visual inspection: The plotting of individual participants’ data, examining the data, and making judgements about whether and to what extent the independent variable had an effect on the dependent variable.

Level: Whether the data are higher or lower, based on a visual inspection of the data; a change in level implies that the introduced treatment had an effect.

Trend: Gradual increases or decreases in the dependent variable across observations.

Latency: The time it takes for the dependent variable to begin changing after a change in conditions.

Percentage of nonoverlapping data (PND): The percentage of responses in the treatment condition that are more extreme than the most extreme response in a relevant control condition.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

IResearchNet

Single-Case Experimental Design

Single-case experimental design, a versatile research methodology within psychology, holds particular significance in the field of school psychology. This article provides an overview of single-case experimental design, covering its definition, historical development, and key concepts. It delves into various types of single-case designs, including AB, ABA, and Multiple Baseline designs, illustrating their applications within school psychology. The article also explores data collection, analysis methods, and common challenges associated with this methodology. By highlighting its value in empirical research, this article underscores the enduring relevance of single-case experimental design in advancing the understanding and practice of school psychology.

Introduction

Single-case experimental design, a research methodology of profound importance in the realm of psychology, is characterized by its unique approach to investigating behavioral and psychological phenomena. Within this article, we will embark on a journey to explore the intricate facets of this research methodology and unravel its multifaceted applications, with a particular focus on its relevance in school psychology.

Single-case experimental design, often referred to as “N of 1” research, is a methodology that centers on the in-depth examination of individual subjects or cases. Unlike traditional group-based designs, this approach allows researchers to closely study and understand the nuances of a single participant’s behavior, responses, and reactions over time. The precision and depth of insight offered by single-case experimental design have made it an invaluable tool in the field of psychology, facilitating both clinical and experimental research endeavors.

One of the most compelling aspects of this research methodology lies in its applicability to school psychology. In educational settings, understanding the unique needs and challenges of individual students is paramount, and single-case experimental design offers a tailored and systematic way to address these issues. Whether it involves assessing the effectiveness of an intervention for a specific learning disability or studying the impact of a behavior modification program for a student with special needs, single-case experimental design equips school psychologists with a powerful tool to make data-driven decisions and individualized educational plans.

Throughout this article, we will delve into the foundations of single-case experimental design, exploring its historical evolution, key concepts, and core terminology. We will also discuss the various types of single-case designs, including AB, ABA, and Multiple Baseline designs, illustrating their practical applications within the context of school psychology. Furthermore, the article will shed light on the data collection methods and the statistical techniques used for analysis, as well as the ethical considerations and challenges that researchers encounter in single-case experiments.

In sum, this article aims to provide an in-depth understanding of single-case experimental design and its pivotal role in advancing knowledge in psychology, particularly within the field of school psychology. As we embark on this exploration, it is evident that single-case experimental design serves as a bridge between rigorous scientific inquiry and the real-world needs of individuals, making it an indispensable asset in enhancing the quality of psychological research and practice.

Understanding Single-Case Experimental Design

Single-Case Experimental Design (SCED), often referred to as “N of 1” research, is a research methodology employed in psychology to investigate behavioral and psychological phenomena with an emphasis on the individual subject as the primary unit of analysis. The primary purpose of SCED is to meticulously study the behavior, responses, and changes within a single participant over time. Unlike traditional group-based research, SCED is tailored to the unique characteristics and needs of individual cases, enabling a more in-depth understanding of the variables under investigation.

The historical background of SCED can be traced back to the early 20th century when researchers like B.F. Skinner pioneered the development of operant conditioning and experimental analysis of behavior. Skinner’s work laid the groundwork for single-case experiments by emphasizing the importance of understanding the functional relations between behavior and environmental variables. Over the decades, SCED has evolved and gained prominence in various fields within psychology, notably in clinical and school psychology. Its relevance in school psychology is particularly noteworthy, as it offers a systematic and data-driven approach to address the diverse learning and behavioral needs of students. School psychologists use SCED to design and assess individualized interventions, evaluate the effectiveness of specific teaching strategies, and make informed decisions about special education programs.

Understanding SCED involves familiarity with key concepts and terminology that underpin the methodology. These terms include:

  • Baseline: The initial phase of data collection where the participant’s behavior is measured before any intervention is introduced. Baseline data serve as a point of reference for assessing the impact of subsequent interventions.
  • Intervention: The phase in which a specific treatment, manipulation, or condition is introduced to the participant. The goal of the intervention is to bring about a change in the target behavior.
  • Dependent Variables: These are the behaviors or responses under investigation. They are the outcomes that researchers aim to measure and analyze for changes across different phases of the experiment.

Reliability and validity are critical considerations in SCED. Reliability refers to the consistency and stability of measurement. In SCED, it is crucial to ensure that data collection procedures are reliable, as any variability can affect the interpretation of results. Validity pertains to the accuracy and truthfulness of the data. Researchers must establish that the dependent variable measurements are valid and accurately reflect the behavior of interest. When these principles are applied in SCED, it enhances the scientific rigor and credibility of the research findings, which is essential in both clinical and school psychology contexts.

This foundation of key concepts and terminology serves as the basis for designing, conducting, and interpreting single-case experiments, ensuring that the methodology maintains high standards of precision and integrity in the pursuit of understanding individual behavior and psychological processes.

Types of Single-Case Experimental Designs

The AB Design is one of the fundamental single-case experimental designs, characterized by its simplicity and effectiveness. In an AB Design, the researcher observes and measures a single subject’s behavior during two distinct phases: the baseline (A) phase and the intervention (B) phase. During the baseline phase, the researcher collects data on the subject’s behavior without any intervention or treatment. This baseline data serve as a reference point to understand the natural or typical behavior of the individual. Following the baseline phase, the intervention or treatment is introduced, and data on the subject’s behavior are collected again. The AB Design allows for the comparison of baseline data with intervention data, enabling researchers to determine whether the introduced intervention had a noticeable impact on the individual’s behavior.

AB Designs find extensive application in school psychology. For instance, consider a scenario where a school psychologist wishes to assess the effectiveness of a time-management training program for a student with attention deficit hyperactivity disorder (ADHD). During the baseline phase, the psychologist observes the student’s on-task behavior in the absence of any specific time-management training. Subsequently, during the intervention phase, the psychologist implements the time-management program and measures the student’s on-task behavior again. By comparing the baseline and intervention data, the psychologist can evaluate the program’s efficacy in improving the student’s behavior.

The ABA Design is another prominent single-case experimental design characterized by the inclusion of a reversal (A) phase. In this design, the researcher initially collects baseline data (Phase A), introduces the intervention (Phase B), and then returns to the baseline conditions (Phase A). The ABA Design is significant because it provides an opportunity to assess the reversibility of the effects of the intervention. If the behavior returns to baseline levels during the second A phase, it suggests a strong relationship between the intervention and the observed changes in behavior.

In school psychology, the ABA Design offers valuable insights into the effectiveness of interventions for students with diverse needs. For instance, a school psychologist may use the ABA Design to evaluate a behavior modification program for a student with autism spectrum disorder (ASD). During the first baseline phase (A), the psychologist observes the student’s behavior patterns. Subsequently, in the intervention phase (B), a behavior modification program is implemented. If the student’s behavior shows positive changes, this suggests that the program is effective. Finally, during the second baseline phase (A), the psychologist can determine if the changes are reversible, which informs decisions regarding the program’s ongoing use or modification.

The Multiple Baseline Design is a versatile single-case experimental design that addresses challenges such as ethical concerns or logistical constraints that might limit the use of reversal designs. In this design, researchers stagger the introduction of the intervention across multiple behaviors, settings, or individuals. Each baseline and intervention phase is implemented at different times for each behavior, allowing researchers to establish a cause-and-effect relationship by demonstrating that the intervention corresponds with changes in the specific behavior under investigation.

Within school psychology, Multiple Baseline Designs offer particular utility when assessing interventions for students in complex or sensitive situations. For example, a school psychologist working with a student who displays challenging behaviors may choose to implement a Multiple Baseline Design. The psychologist can introduce a behavior intervention plan (BIP) for different target behaviors, such as aggression, noncompliance, and self-injury, at different times. By measuring and analyzing changes in behavior across these multiple behaviors, the psychologist can assess the effectiveness of the BIP and make informed decisions about its implementation across various behavioral concerns. This design is particularly valuable when ethical considerations prevent the reversal of an effective intervention, as it allows researchers to demonstrate the intervention’s impact without removing a beneficial treatment.

Conducting and Analyzing Single-Case Experiments

In single-case experiments, data collection and measurement are pivotal components that underpin the scientific rigor of the research. Data are typically collected through direct observation, self-reports, or the use of various measuring instruments, depending on the specific behavior or variable under investigation. To ensure reliability and validity, researchers meticulously define and operationalize the target behavior, specifying how it will be measured. This may involve the use of checklists, rating scales, video recordings, or other data collection tools. In school psychology research, systematic data collection is imperative to make informed decisions about interventions and individualized education plans (IEPs). It provides school psychologists with empirical evidence to track the progress of students, assess the effectiveness of interventions, and adapt strategies based on the collected data.
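As a purely hypothetical illustration of such a record, the snippet below stores one row per observation session, tagged with its phase, and converts raw counts into a rate that can be tracked across phases; all field names and values are invented.

    # One row per observation session, with the phase recorded alongside the counts.
    observations = [
        {"session": 1, "phase": "baseline",     "target_behaviours": 7, "minutes_observed": 10},
        {"session": 2, "phase": "baseline",     "target_behaviours": 6, "minutes_observed": 10},
        {"session": 3, "phase": "intervention", "target_behaviours": 2, "minutes_observed": 10},
        {"session": 4, "phase": "intervention", "target_behaviours": 1, "minutes_observed": 10},
    ]

    for row in observations:
        rate = row["target_behaviours"] / row["minutes_observed"]   # rate per minute
        print(row["session"], row["phase"], f"{rate:.1f} per minute")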

Visual analysis is a core element of interpreting data in single-case experiments. Researchers plot the data in graphs, creating visual representations of the behavior across different phases. By visually inspecting the data, researchers can identify patterns, trends, and changes in behavior. Visual analysis is particularly well-suited for detecting whether an intervention has had a noticeable effect.

In addition to visual analysis, statistical methods are occasionally employed in single-case experiments to enhance the rigor of analysis. These methods include effect size calculations and phase change calculations. Effect size measures, such as Cohen’s d or Tau-U, quantify the magnitude of change between the baseline and intervention phases, providing a quantitative understanding of the treatment’s impact. Phase change calculations determine the statistical significance of behavior change across different phases, aiding in the determination of whether the intervention had a meaningful effect.
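As a rough sketch of how two such supplements can be computed, the code below implements Cohen’s d between phases and a simplified nonoverlap Tau (i.e., Tau-U without its baseline-trend correction); the data and helper names are invented.

    import numpy as np

    def cohens_d(baseline, treatment):
        """Standardized mean difference between phases (pooled SD)."""
        a, b = np.asarray(baseline, float), np.asarray(treatment, float)
        pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                      / (len(a) + len(b) - 2))
        return (b.mean() - a.mean()) / np.sqrt(pooled_var)

    def tau_nonoverlap(baseline, treatment):
        """Share of improving (baseline, treatment) pairs minus deteriorating pairs,
        assuming higher scores are better; ties count as neither."""
        pos = sum(t > b for b in baseline for t in treatment)
        neg = sum(t < b for b in baseline for t in treatment)
        return (pos - neg) / (len(baseline) * len(treatment))

    baseline, treatment = [20, 25, 30, 25, 20], [55, 70, 75, 80, 85]
    print(cohens_d(baseline, treatment), tau_nonoverlap(baseline, treatment))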

Visual analysis and statistical methods complement each other, enabling researchers in school psychology to draw more robust conclusions about the efficacy of interventions. These methods are valuable in making data-driven decisions regarding students’ educational and behavioral progress.

Single-case experimental designs are not without their challenges and limitations. Researchers must grapple with issues such as the potential for confounding variables, limited generalizability to other cases, and the need for careful control of extraneous factors. In school psychology, these challenges are compounded by the dynamic and diverse nature of educational settings, making it essential for researchers to adapt the methodology to specific contexts and populations.

Moreover, ethical considerations loom large in school psychology research. Researchers must adhere to strict ethical guidelines when conducting single-case experiments involving students. Informed consent, confidentiality, and the well-being of the participants are paramount. Ethical considerations are especially critical when conducting research with vulnerable populations, such as students with disabilities or those in special education programs. The ethical conduct of research in school psychology is pivotal to maintaining trust and ensuring the welfare of students and their families.

In conclusion, the application of single-case experimental design in school psychology research is a powerful approach for addressing individualized educational and behavioral needs. By emphasizing systematic data collection, employing visual analysis and statistical methods, and navigating the inherent challenges and ethical considerations, researchers can contribute to the advancement of knowledge in this field while ensuring the well-being and progress of the students they serve.

In conclusion, this article has provided a comprehensive exploration of Single-Case Experimental Design (SCED) and its vital role within the domain of school psychology. Key takeaways from this article underscore the significance of SCED as a versatile and invaluable research methodology:

First and foremost, SCED is a methodological cornerstone for investigating individual behavior and psychological phenomena. Through meticulous observation and data collection, it enables researchers to gain deep insights into the idiosyncratic needs and responses of students in educational settings.

The significance of SCED in school psychology is pronounced. It empowers school psychologists to design and assess tailored interventions, evaluate the effectiveness of educational programs, and make data-driven decisions that enhance the quality of education for students with diverse needs. Whether it’s tracking progress, assessing the efficacy of behavioral interventions, or individualizing education plans, SCED plays an instrumental role in achieving these goals.

Furthermore, the article has illuminated three primary types of single-case experimental designs: AB, ABA, and Multiple Baseline. These designs offer the flexibility to investigate the effects of interventions and assess their reversibility when required. Such methods have a direct and tangible impact on the daily practices of school psychologists, allowing them to optimize support and educational strategies.

The importance of systematic data collection and measurement, the role of visual analysis and statistical methods in data interpretation, and the acknowledgment of ethical considerations in school psychology research have been underscored. These aspects collectively serve as the foundation of SCED, ensuring the integrity and reliability of research outcomes.

As we look toward the future, the potential developments in SCED are promising. Advances in technology, such as wearable devices and digital data collection tools, offer new possibilities for precise and efficient data gathering. Additionally, the integration of SCED with other research methodologies, such as mixed-methods research, holds the potential to provide a more comprehensive understanding of students’ educational experiences.

In summary, Single-Case Experimental Design is a pivotal research methodology that bridges the gap between rigorous scientific inquiry and the real-world needs of students in school psychology. Its power lies in its capacity to assess, refine, and individualize interventions and educational plans. The continued application and refinement of SCED in school psychology research promise to contribute significantly to the advancement of knowledge and the enhancement of educational outcomes for students of all backgrounds and abilities. As we move forward, the integration of SCED with emerging technologies and research paradigms will continue to shape the landscape of school psychology research, leading to more effective and tailored interventions for the benefit of students and the field as a whole.

References:

  • Barlow, D. H., & Nock, M. K. (2009). Why can’t we be more idiographic in our research? Perspectives on Psychological Science, 4(1), 19-21.
  • Cook, B. G., & Schirmer, B. R. (2003). What is N of 1 research? Exceptionality, 11(1), 65-76.
  • Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson.
  • Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. Oxford University Press.
  • Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case intervention research design standards. Remedial and Special Education, 31(3), 205-214.
  • Levin, J. R., Ferron, J. M., Kratochwill, T. R., Forster, J. L., Rodgers, M. S., Maczuga, S. A., & Chinn, S. (2016). A randomized controlled trial evaluation of a research synthesis and research proposal process aimed at improving graduate students’ research competency. Journal of Educational Psychology, 108(5), 680-701.
  • Morgan, D. L., & Morgan, R. K. (2009). Single-participant research design: Bringing science to managed care. Psychotherapy Research, 19(4-5), 577-587.
  • Ottenbacher, K. J., & Maas, F. (1999). The effect of statistical methodology on the single subject design: An empirical investigation. Journal of Behavioral Education, 9(2), 111-130.
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
  • Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. Basic Books.
  • Vannest, K. J., Parker, R. I., Gonen, O., Adigüzel, T., & Bovaird, J. A. (2016). Single case research: web-based calculators for SCR analysis. Behavior Research Methods, 48(1), 97-103.
  • Wilczynski, S. M., & Christian, L. (2008). Applying single-subject design for students with disabilities in inclusive settings. Pearson.
  • Wong, C., Odom, S. L., Hume, K. A., Cox, A. W., Fettig, A., Kucharczyk, S., & Schultz, T. R. (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45(7), 1951-1966.
  • Kratochwill, T. R., & Levin, J. R. (2018). Single-case research design and analysis: New directions for psychology and education. Routledge.
  • Hall, R. V., & Fox, L. (2015). The need for N of 1 research in special education. Exceptionality, 23(4), 225-233.
  • Shadish, W. R., & Sullivan, K. J. (2011). Characteristics of single-case designs used to assess intervention effects in 2008. Behavior Research Methods, 43(4), 971-980.
  • Campbell, D. T., & Stanley, J. C. (2015). Experimental and quasi-experimental designs for research. Ravenio Books.
  • Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). Oxford University Press.
  • Therrien, W. J., & Bulawski, J. (2019). The use of single-case experimental designs in school psychology research: A systematic review. Journal of School Psychology, 73, 92-112.
  • Gavidia-Payne, S., Little, E., & Schell, G. (2018). Single-case experimental design: Applications in developmental and behavioral science. Routledge.

Research Design in Business and Management, pp. 141–170

Single Case Research Design

Stefan Hunziker & Michael Blankenagel

First Online: 10 November 2021

This chapter addresses the peculiarities, characteristics, and major fallacies of single case research designs. A single case study research design is a collective term for an in-depth analysis of a small non-random sample. The focus of this design is on depth of analysis, and this characteristic distinguishes case study research from other research designs that treat the individual case as a rather insignificant and interchangeable aspect of a population or sample. Researchers will also find relevant information on how to write a single case research design paper and learn about typical methodologies used for this research design. The chapter closes by referring to overlapping and adjacent research designs.

Hunziker, S., Blankenagel, M. (2021). Single Case Research Design. In: Research Design in Business and Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-34357-6_8

The benefits of single-subject research designs and multi-methodological approaches for neuroscience research

1. Introduction

The scientific method is neither singular nor fixed; it is an evolving, plural set of processes. It develops and improves through time as methodology rises to meet new challenges (Lakatos, 1978 ; Hull, 1988 ; Kuhn and Hacking, 2012 ). “It would be wrong to assume that one must stay with a research programme until it has exhausted all its heuristic power, that one must not introduce a rival programme before everybody agrees that the point of degeneration has probably been reached” (Lakatos, 1978 ). These insights apply not least to experimental design approaches.

For better and for worse, no experimental design comes without limitation. We must accept that the realities of the world cannot be simplistically verified against universal standard procedures; we are free instead to explore how the progressive evolution of experimental design enables new advancement. This paper proposes support for a shift of focus in the methodology of experimental research in neuroscience toward an increased utilization of single-subject experimental designs. I will highlight several supports for this suggestion. Most importantly, single-subject methods can complement group methodologies in two ways: by addressing important points of internal validity and by enabling the inductive process characteristic of quality early research. The power of these approaches has already been somewhat established by key historical neuroscience experiments. Additionally, the individuated nature of subject matter in behavioral neuroscience makes the single-subject approach particularly powerful, and single-subject phases in a research program can decrease time and resource costs in relation to scientific gains.

2. Complementary research designs

Though the completely randomized group design is considered by many to be the gold standard of evidence (Meldrum, 2000 ), its limitations as well as ethical and logistical execution difficulties have been noted: e.g., blindness to group heterogeneity, problematic application to individual cases, and experimental weakness in the context of other often-neglected aspects of study design such as group size, randomization, and bias (Kravitz et al., 2004 ; Grossman and Mackenzie, 2005 ; Williams, 2010 ; Button et al., 2013 ). Thus, the concept of a “gold standard” results not from the uniform superiority of a method, but from an implicit valuing of its relative strengths compared to other designs, all things being equal (even though such things as context, randomization, group size, bias, heterogeneity, etc. are rarely equal). There is an alternative to this approach. Utilizing a wider array of methods across studies can help compensate for the limitations of each and provide flexibility in the face of unequal contexts. In a multi-methodological approach, different experimental designs can be evaluated in terms of complementarity rather than absolute strength. If one experimental design is limited in a particular way, adding another approach that is stronger in that aspect (but perhaps limited in another) can provide a more complete picture. This tactic also implicitly acknowledges that scientific rigor does not proceed only from the single study; replication, systematic replication, and convergent evidence may proceed from a progression of methods.

I suggest adding greater utilization of single-subject design to the already traditionally utilized between-subject and within-subject group designs in neuroscience to achieve this complementarity. The advantages and limitations of these designs are somewhat symmetrical. Overall, single-subject experiments carry with them more finely-focused internal validity because the same subject (together with their array of individual characteristics) serves in both the experimental and control conditions. Unlike in typical within-subject group comparisons, the repetition of comparisons in single-subject designs controls for other confounding variables, rendering n = 1 into a true experiment. While an unreplicated single-subject experiment by itself cannot establish external validity, systematic replication of single-subject experiments over the relevant range of individual differences can. On the other hand, group designs cannot demonstrate an effect on an individual level, but within-individual group studies can characterize the generality of effects across large populations in a single properly sampled study, and may be particularly suited to analyzing combined effects of multiple variables (Kazdin, 1981 ). Single-subject and group approaches can also be hybridized to fit a study's goals (Kazdin, 2011 ). In the following sections, I will describe aspects of each approach that illustrate how the addition of single-subject methodology to neuroscience could be of use. I do not mean to exhaustively describe either methodology, which would be outside the scope of this paper.

2.1. Group designs

Group experimental designs 1 interrogate the effect of an independent variable (IV) by applying that variable to a group of people, other organisms, or other biological units (e.g., neurons) and usually, though not always, comparing an aggregated population measure to that of one or more control groups. These designs require data from multiple individuals (people, animals, cells, etc.). Group experiments with between-group comparisons often assign these individuals to conditions (experimental or control) randomly. Other group experiments (such as randomized block designs) assign individuals to conditions systematically to explicitly balance the groups according to particular pre-considered individual factors. In both cases, the assumption is that if alternative variables influence the dependent variable (DV), they are unlikely to do so differentially across groups. Group experiments with within-subject comparisons expose each individual to both experimental and control conditions at different times and compare the grouped measures between conditions; this approach ensures that the condition groups are identical, since the same individuals are included in both conditions.
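To make the assignment distinction concrete, the following minimal Python sketch contrasts simple random assignment with a block-balanced assignment on a single covariate. The subject records, the "sex" blocking factor, and the function names are illustrative assumptions, not drawn from any of the cited studies.

# Minimal sketch: simple random assignment vs. randomized block assignment.
# Subject records and the blocking factor ("sex") are hypothetical.
import random

subjects = [{"id": i, "sex": "F" if i % 2 == 0 else "M"} for i in range(20)]

def random_assignment(subjects, seed=0):
    """Shuffle all subjects and split them in half, ignoring covariates."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def block_assignment(subjects, block_key="sex", seed=0):
    """Shuffle within each block so the covariate is balanced across groups."""
    rng = random.Random(seed)
    experimental, control = [], []
    blocks = {}
    for s in subjects:
        blocks.setdefault(s[block_key], []).append(s)
    for members in blocks.values():
        rng.shuffle(members)
        half = len(members) // 2
        experimental.extend(members[:half])
        control.extend(members[half:])
    return experimental, control

exp_r, ctl_r = random_assignment(subjects)
exp_b, ctl_b = block_assignment(subjects)
print("simple random:  F in each group:",
      sum(s["sex"] == "F" for s in exp_r), "vs", sum(s["sex"] == "F" for s in ctl_r))
print("block-balanced: F in each group:",
      sum(s["sex"] == "F" for s in exp_b), "vs", sum(s["sex"] == "F" for s in ctl_b))

With the blocked version, each group receives an equal share of each block by construction, whereas simple randomization leaves that balance to chance.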

Because they involve multiple individuals, some group designs can provide important information about the generality of an effect across the included population, especially in the case of within-subject group designs. Unfortunately, some often-misused aspects of group designs tend to temper this advantage. For example, restricted inclusion criteria are often necessary to produce clear results. When the desired generality involves only such a restricted population (e.g., only acute stroke patients, or only layer IV glutamatergic cortical neurons), this practice carries no disadvantage. However, if the study aims to identify more widely applicable processes, stringent inclusion criteria can produce cleaner but overly conditional results, limiting external validity (Henrich et al., 2010). Further, the analysis approach taken in many group designs, which narrowly examines changes in the central tendency (such as the mean) of groups, can limit the assessment of generality within the sampled population, since averaging washes out heterogeneity of effects. Other aspects of rigor in group designs can also affect external validity (e.g., Kravitz et al., 2004; Grossman and Mackenzie, 2005; Williams, 2010; Button et al., 2013).

Another limitation of group design logic is the practical difficulty of balancing individual differences between groups. In the case of between-group comparisons, these difficulties arise from selection bias, mortality (attrition), and the like. Even well-controlled studies can produce probabilistically imbalanced groups, especially at the small sample sizes often used in neuroscience research (Button et al., 2013). Deliberately balanced groups or post-hoc statistical control may help, but the former compromises true randomization and the latter is weaker than true experimental control. Within-subject group comparisons implement both experimental and control conditions for each individual in a group and therefore better control for individual differences; however, these designs still do not experimentally establish effects within the individual, since a single manipulation of experimental conditions can be confounded with other changes at the individual level.
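As a rough illustration of the balance problem at small n, the following sketch (all numbers hypothetical) estimates how often simple randomization splits a binary trait carried by half the sample unevenly across two groups of eight.

# Minimal simulation: how often does simple randomization leave a binary
# covariate (a trait carried by half the sample) differing by 4 or more
# carriers between two groups of 8? Numbers are illustrative only.
import random

def imbalance_rate(n_per_group=8, n_sims=10_000, threshold=4, seed=0):
    rng = random.Random(seed)
    count = 0
    for _ in range(n_sims):
        # Half the sample carries the trait; randomization ignores it.
        pool = [1] * n_per_group + [0] * n_per_group
        rng.shuffle(pool)
        group_a = pool[:n_per_group]
        group_b = pool[n_per_group:]
        if abs(sum(group_a) - sum(group_b)) >= threshold:
            count += 1
    return count / n_sims

print(f"P(groups differ by >= 4 trait carriers): {imbalance_rate():.2f}")

Even with nothing systematically wrong, a nontrivial fraction of runs under these illustrative numbers produces groups that differ noticeably on the covariate.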

The typical focus on parameters such as the mean in the analysis of group designs can also threaten internal as well as external validity, particularly when the experimental question concerns biological or behavioral variables that are highly individually contextualized or developmentally variant. 2 This problem stems from the fact that aggregate measures across populations do not necessarily reflect any of the underlying individuals (e.g., Williams, 2010); for example, average functional brain maps tend not to apply to individual brains (Brett et al., 2002; Dworetsky et al., 2021; Fedorenko, 2021; Hanson, 2022). The problem is particularly amplified in the study of human behavior and the brain sciences, both of which tend to be highly idiosyncratic. In these cases, aggregated measures can mask key heterogeneity, including contradictory effects of IVs. This can complicate the application of results to individuals: an issue especially relevant in clinical research (Sidman, 1960; Williams, 2010). Relatedly, population-based effect-size estimates provide scant information with which to estimate effects and relevance for an individual. Post-hoc statistical analysis may help to tease out these issues, but verification still requires new experimentation. True generality of a scientific insight requires not only that effects occur with reasonable replicability across individuals, but also that a reasonable range of conditions that would alter the effect can be predicted: a point that is difficult to discern in group studies. Thus, while group designs carry advantages insofar as they can characterize effects across a whole population in a single experiment, those advantages can be and often are subverted. Perhaps counter-intuitively, single-subject approaches can be ideal for methodically discovering the common processes that underlie diversity within a population, which has made them particularly powerful in producing generalizable results (see next section).
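The masking problem can be shown with a toy example (values invented for illustration): two subgroups show clear but opposite changes, yet the group mean suggests no effect at all.

# Minimal sketch: a group mean change near zero can mask two subgroups with
# opposite individual-level effects. Numbers are purely illustrative.
import statistics

# Simulated pre->post change scores for 10 hypothetical individuals:
# half respond positively, half negatively, to the same manipulation.
changes = [+5, +4, +6, +5, +4, -5, -6, -4, -5, -4]

print("mean change:", statistics.mean(changes))              # 0: "no effect"
print("responders:", [c for c in changes if c > 0])          # clear increase
print("counter-responders:", [c for c in changes if c < 0])  # clear decrease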

2.2. Single-subject designs

Single-subject designs compare experimental to control conditions repeatedly over time within the same individual. Like group designs with within-subject comparisons, single-subject designs control for individual differences, which remain constant. However, single-subject designs take individual-level control further. Because other confounding changes may coincide with a single change in the IV, single-subject designs require multiple implementations of the same manipulation so that the comparison can be repeated within the individual, controlling for the coincidental confounds of any single condition change. Additionally, single-subject designs measure multiple data points over time within each condition before any experimental change occurs, in order to assess pre-existing variation and trends for comparison with the subsequent condition. Of course, a single-subject experiment without inter-individual replication has no established generality; systematic replications across relevant individual characteristics and contexts are generally required to establish external validity. However, the typical group design often requires similar replication to establish the same validity, and unlike group designs, single-subject studies are capable of rigorously interrogating even the rarest of effects.
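The following sketch shows, in simulated form, the kind of data structure this logic produces: repeated A/B condition changes for one subject, with several observations per condition. The generative parameters (baseline level, treatment effect, noise) are arbitrary illustrative choices, not a model of any particular preparation.

# Minimal sketch of single-subject reversal (ABAB...) data: repeated
# condition changes, with several observations within each condition
# before the next change. Values are simulated for illustration only.
import random

def simulate_reversal(n_reversals=3, points_per_phase=8,
                      baseline=10.0, effect=6.0, noise=1.0, seed=0):
    rng = random.Random(seed)
    phases = []
    for i in range(2 * n_reversals):          # A, B, A, B, ...
        condition = "A" if i % 2 == 0 else "B"
        level = baseline + (effect if condition == "B" else 0.0)
        data = [rng.gauss(level, noise) for _ in range(points_per_phase)]
        phases.append((condition, data))
    return phases

for condition, data in simulate_reversal():
    mean = sum(data) / len(data)
    print(f"phase {condition}: mean = {mean:.1f}  (n = {len(data)})")

Each A/B pair constitutes another within-individual replication of the comparison, which is what carries the design's internal validity.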

Because single-subject experiments deal well with individual effects, they are often used in clinical and closely applied disciplines, e.g., education (Alnahdi, 2015), rehabilitation and therapy (Tankersley et al., 2006), speech and language (Byiers et al., 2012), implementation science (Miller et al., 2020), neuropsychology (Perdices and Tate, 2009), biomedicine (Janosky et al., 2009), and behavior analysis (Perone, 1991). However, the single-subject design is not limited to clinical applications or to the study of rare effects; it can also be used to study generalizable individual processes via systematic replication. Serial replications often enable detailed distillation of both common and uncommon relevant factors across individuals, making the approach particularly powerful for identifying generalizable processes that account for within-population diversity (although this process can be challenging even at the single-subject level; see Kazdin, 1981). Single-subject methodology has historically established some of the most generalizable findings in psychology, including the principles of Pavlovian and operant conditioning (Iversen, 2013). Establishing this generalizability requires a research program rather than a single study; however, since each replication (and comparisons between replications) can add information about important contextual variables, systematic progression toward generality can be more efficient than in one-shot group studies.

Single-subject designs are sometimes confused with within-subject group comparisons or n-of-1 case studies, neither of which usually includes multiple implementations of each condition for any one individual. N-of-1 case studies sometimes make no manipulation at all or may make a single comparison (as with an embedded AB design or a pre-post observation), which can at best serve as a quasi-experiment (Kazdin and Tuma, 1982). A single-subject design, in contrast, includes many repeated condition changes and collects multiple data points within each condition (as in the ABABABAB design, among many others; see Perone, 1991). As with group designs, the quality of evidence in a single-subject experiment increases with the number of instances in which the experimental condition is compared to a control condition; the more comparisons occur, the less likely it is that an alternative explanation will have tracked with the manipulation. A strong single-subject design will require a minimum of three IV implementations for the same individual (i.e., ABABAB, with multiple data points for each A and each B), and a robust effect will require many more.

Because single-subject designs implement conditions across time, they are susceptible to some important limitations, including sequence, maturation, and exposure effects. The need to consider within-condition stability, serial dependence in the data, reversibility, carryover effects, and long experimental time courses can also complicate these designs. Still, many manipulations common in neuroscience research are amenable to these designs despite such challenges (Soto, 2020). Phenomena that are not reversible (such as skill acquisition) can also be studied with single-subject approaches such as the within-subject multiple baseline. Multiple-baseline experiments across behaviors, across cell populations, or across homotopic brain regions may be reasonable if independence can be established (Soto, 2020). A variety of single-subject methods are available to address the particular strengths and limitations of single-subject methodology; the reader is encouraged to explore the range of designs that cannot be enumerated within the scope of the current paper (Horner and Baer, 1978; Hains and Baer, 1989; Perone, 1991; Holcombe et al., 1994; Edgington, 1996; Kratochwill et al., 2010; Ward-Horner and Sturmey, 2010).
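As a structural illustration of the within-subject multiple-baseline idea mentioned above, the sketch below simulates one subject with three assumed-independent baselines (e.g., three behaviors or recording sites) in which the same manipulation is introduced at staggered times; all values and parameters are invented for illustration.

# Minimal sketch of a within-subject multiple-baseline structure: the same
# intervention is introduced at staggered times across several (assumed
# independent) baselines. Data are simulated for illustration only.
import random

def multiple_baseline(n_sessions=30, onsets=(10, 17, 24),
                      baseline=5.0, effect=4.0, noise=0.8, seed=1):
    rng = random.Random(seed)
    series = []
    for onset in onsets:                      # one series per baseline
        values = [rng.gauss(baseline + (effect if t >= onset else 0.0), noise)
                  for t in range(n_sessions)]
        series.append((onset, values))
    return series

for onset, values in multiple_baseline():
    pre = sum(values[:onset]) / onset
    post = sum(values[onset:]) / (len(values) - onset)
    print(f"onset at session {onset}: pre-mean = {pre:.1f}, post-mean = {post:.1f}")

The staggered onsets are the point of the design: if each series changes only when its own intervention begins, a general time-linked confound becomes an unlikely explanation.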

2.3. A note about statistical methods

Issues relating to statistical analysis are commonly, and erroneously, conflated with group experimental design per se. Problems with the frequentist statistical approach commonly used in group designs have undermined the efficacy of many group analyses; these limitations have been treated thoroughly elsewhere [e.g., the generic problems of null-hypothesis significance testing (NHST) (Branch, 2014), the use of frequentist statistics contrary to their design and best use (Moen et al., 2016; Wasserstein and Lazar, 2016), and the inappropriate reliance on p-values (Wasserstein and Lazar, 2016)]. I do not expand on these issues in my summary of group design because such critiques need not apply to all between-group comparisons. The use and applicability of particular analysis techniques are separable from the experimental utility of group designs in general, which are not limited to inferential statistics. Group experiments can also be analyzed using alternative, less problematic statistical approaches, such as the probability of replication statistic (P-rep; Killeen, 2015) and Bayesian approaches (Berry and Stangl, 2018). Well-considered statistical best practices for various forms of group analysis (e.g., Moen et al., 2016) can help a researcher address these limitations.
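As one concrete, deliberately simplified illustration of a non-NHST-style summary for a two-group comparison, the sketch below reports a posterior probability that the treatment mean exceeds the control mean under flat priors and plug-in variances. It is a toy approximation for illustration only, not the specific machinery recommended by Berry and Stangl (2018) or Killeen (2015), and the data values are invented.

# Minimal sketch of a Bayesian-style summary for a two-group comparison:
# under flat priors and plug-in (known-variance) assumptions, the posterior
# for the difference in means is approximately Normal, so we can report the
# posterior probability that treatment exceeds control instead of a p-value.
import math
from statistics import NormalDist, mean, variance

treatment = [12.1, 13.4, 11.8, 14.0, 12.9, 13.6]   # hypothetical data
control   = [10.2, 11.1,  9.8, 10.9, 11.4, 10.5]

def posterior_difference(a, b):
    diff = mean(a) - mean(b)
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    # Approximate posterior: Normal(diff, se); report P(difference > 0).
    return diff, se, 1.0 - NormalDist(diff, se).cdf(0.0)

diff, se, p_positive = posterior_difference(treatment, control)
print(f"posterior mean difference = {diff:.2f} (sd = {se:.2f}), "
      f"P(difference > 0) = {p_positive:.3f}")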

The conflation of statistical methods with group designs has also led to the misconception that single-subject designs cannot be analyzed statistically. Most scientists are simply less familiar with the statistical analyses appropriate for single-subject designs and the serially dependent data sets they produce. While pronounced effects uncovered in single-subject experiments can often be detected clearly with appropriate visual analysis, rigorous statistical methods applicable to single-subject designs are also available (e.g., Parker and Brossart, 2003; Scruggs and Mastropieri, 2013).
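For instance, one simple non-overlap summary discussed in that literature, the percentage of non-overlapping data (PND; Scruggs and Mastropieri, 2013), can be computed in a few lines. The sketch below assumes a treatment expected to increase the measured behavior and uses invented data.

# Minimal sketch: percentage of non-overlapping data (PND), a simple
# non-overlap summary for a baseline (A) vs. treatment (B) comparison
# (see Scruggs and Mastropieri, 2013). Assumes the treatment is expected
# to increase the measure; data values are hypothetical.
def pnd(baseline, treatment):
    """Percent of treatment points exceeding the highest baseline point."""
    ceiling = max(baseline)
    exceeding = sum(1 for x in treatment if x > ceiling)
    return 100.0 * exceeding / len(treatment)

baseline_phase  = [12, 10, 11, 13, 12, 11]
treatment_phase = [14, 16, 13, 17, 18, 16]

print(f"PND = {pnd(baseline_phase, treatment_phase):.0f}%")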

3. Single-subject design and the inductive process

The advantages highlighted above suggest not only compatibility between single-subject and group approaches, but a potential advantage conferred by an order of operations between methods. Early in the research process, inductive inference based on single-subject manipulations is ideal for generating likely and testable abstractions (Russell, 1962). Using single-subject approaches for this inductive phase requires fewer resources than fully powered group approaches and can be more rigorous than small-n group pilots. An effect can be isolated in one individual and then systematically replicated across relevant individual differences and contexts until it fails to replicate, at which point explanatory variables can be adjusted until replicated results are again produced. The altered experiment can then be compared with previous experiments to form a more general understanding that can be tested in a new series of experiments. After sufficient systematic replication, hybrid and group designs can assess the extent to which these inductively and contextually informed abstractions generalize across the widest relevant populations.

4. Precedent of within-subject methods

Although within-subject group experiments are common in human neuroscience and psychology (e.g., Greenwald, 1976; Crockett and Fehr, 2014), full-fledged single-subject designs are virtually unknown in many subfields. Still, high-impact neuroscience experiments have occasionally implemented within-subject reversals, either implicitly or deliberately, demonstrating the power of these approaches to advance the science. To name just a few examples, the classic voltage-clamp work of Hodgkin et al. (1952) on the squid giant axon involved multiple parametric IV implementations in single neurons. The discovery of circadian rhythms in humans likewise involved systematic single-subject experiments comparing circadian patterns under various light intensities, light-dark schedules, and control contexts, which allowed investigators to establish that outside entrainment overrode the cycle-altering effects of different light intensities (Aschoff, 1965). This fruitful precedent of single-subject-like experiments at the very foundation of neuroscience, together with the well-established efficacy of single-subject designs in other fields, implies that wider adoption of the full methodology can succeed.

5. Single-subject design and individuality in neuroscience

As suggested earlier in this paper, individual variation dominates the scene in behavioral and brain sciences and constitutes a basic part of the evolutionary selection processes that shaped them. In human neuroscience, individual developmental and experience-dependent variation are of particular importance. Human brains are so individuated that functional units across individuals cannot be discerned via typical anatomical landmarks, and even between-group designs often need to utilize individuated or normalized measures (Brett et al., 2002; Dworetsky et al., 2021; Fedorenko, 2021; Hanson, 2022). A shift toward including rigorous single-subject research therefore holds particular promise for the field. For example, systematically replicated individual analyses of functional brain networks and their dynamics may more easily lead to generalizable ideas about how they develop and change, and these purportedly general processes could in turn be tested across individual contexts.

6. Time and resource logistics

Group methodology often requires great time and resources in order to produce properly powered experiments. This can lead to problems with rigor, particularly in contexts of limited funding and publish-or-perish job demands (Bernard, 2016; Button, 2016). Especially in early stages of research, single-subject methodology enables experimenters to investigate effects more critically and rigorously for each subject, to more quickly answer and refine questions in individuals first before systematically exploring the generality of findings or the importance of context, and to do so in a cost-effective way. Thus, both cost and rigor could be served by conscientiously adding single-subject methodology to the neuroscience toolbelt.

7. Suggestions for neuroscience subfields that could benefit

Cognitive, behavioral, social, and developmental neuroscience each deal with individual variation in which later states often depend on earlier ones, and each seeks to identify generalizable processes that produce variant outcomes: a task for which the single-subject, multi-method approach is ideal. Neurology and clinical neuroscience also stand to benefit from a more rigorous tool for investigating clinical cases or rare phenomena. While I do not mean to suggest that the method's utility is limited to these subfields, the potential benefit there seems particularly pronounced.

8. Discussion

In summary, greater utilization of single-subject research in human neuroscience can complement current methods by balancing the progression toward internal and then external validity and enabling a low-cost and flexible inductive process that can strengthen subsequent between-group studies. These methods have already been incidentally utilized in important neuroscience research, and they could be an even more powerful, thorough, cost-efficient, rigorous, and deliberate ingredient of an ideal approach to studying the generalizable processes that account for the highly individuated human brain and the behavior that it enables.

Author contributions

AB conceived of and wrote this manuscript.

Acknowledgments

The author would like to thank Daniele Ortu, Ph.D. for helpful comments.

Funding Statement

AB was funded by the Beatrice H. Barrett endowment for research on neuro-operant relations.

1 This discussion intentionally excludes assignment to groups based on non-manipulable variables because of the qualitative difference between correlational approaches and true experimental approaches that manipulate the IV. The former carries a very different set of considerations that are outside the scope of this paper.

2 If the biological process under investigation actually occurs at the population level (e.g., natural selection), the population parameter precisely applies to the question at hand. However, group comparisons are more often used to study processes that function at the individual level.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

• Alnahdi G. H. (2015). Single-subject designs in special education: advantages and limitations. J. Res. Special Educ. Needs 15, 257–265. doi: 10.1111/1471-3802.12039
• Aschoff J. (1965). Circadian rhythms in man: a self-sustained oscillator with an inherent frequency underlies human 24-hour periodicity. Science 148, 1427–1432. doi: 10.1126/science.148.3676.1427
• Bernard C. (2016). Scientific rigor or rigor mortis? Eneuro 3, 5–48. doi: 10.1523/ENEURO.0176-16.2016
• Berry D. A., Stangl D. (2018). Bayesian Biostatistics. Boca Raton, FL: CRC Press.
• Branch M. (2014). Malignant side effects of null-hypothesis significance testing. Theory Psychol. 24, 256–277. doi: 10.1177/0959354314525282
• Brett M., Johnsrude I. S., Owen A. M. (2002). The problem of functional localization in the human brain. Nat. Rev. Neurosci. 3, 243–249. doi: 10.1038/nrn756
• Button K. S. (2016). Statistical rigor and the perils of chance. Eneuro 3, e30. doi: 10.1523/ENEURO.0030-16.2016
• Button K. S., Ioannidis J. P., Mokrysz C., Nosek B. A., Flint J., Robinson E. S., et al. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 14, 365–376. doi: 10.1038/nrn3475
• Byiers B. J., Reichle J., Symons F. J. (2012). Single-subject experimental design for evidence-based practice. Am. J. Speech Lang. Pathol. 21, 397–414. doi: 10.1044/1058-0360(2012/11-0036)
• Crockett M. J., Fehr E. (2014). Social brains on drugs: tools for neuromodulation in social neuroscience. Soc. Cogn. Affect. Neurosci. 9, 250–254. doi: 10.1093/scan/nst113
• Dworetsky A., Seitzman B. A., Adeyemo B., Nielsen A. N., Hatoum A. S., Smith D. M., et al. (2021). Two common and distinct forms of variation in human functional brain networks. bioRxiv 2021.09.17.460799. doi: 10.1101/2021.09.17.460799
• Edgington E. S. (1996). Randomized single-subject experimental designs. Behav. Res. Ther. 34, 567–574. doi: 10.1016/0005-7967(96)00012-5
• Fedorenko E. (2021). The early origins and the growing popularity of the individual-subject analytic approach in human neuroscience. Curr. Opin. Behav. Sci. 40, 105–112. doi: 10.1016/j.cobeha.2021.02.023
• Greenwald A. G. (1976). Within-subjects designs: To use or not to use? Psychol. Bull. 83, 314. doi: 10.1037/0033-2909.83.2.314
• Grossman J., Mackenzie F. J. (2005). The randomized controlled trial: gold standard, or merely standard? Perspect. Biol. Med. 48, 516–534. doi: 10.1353/pbm.2005.0092
• Hains A. H., Baer D. M. (1989). Interaction effects in multielement designs: Inevitable, desirable, and ignorable. J. Appl. Behav. Analy. 22, 57–69. doi: 10.1901/jaba.1989.22-57
• Hanson S. J. (2022). The failure of blobology: FMRI misinterpretation, maleficience and muddle. Front. Hum. Neurosci. 16, 205. doi: 10.3389/fnhum.2022.870091
• Henrich J., Heine S. J., Norenzayan A. (2010). The weirdest people in the world? Behav. Brain Sci. 33, 61–83. doi: 10.1017/S0140525X0999152X
• Hodgkin A. L., Huxley A. F., Katz B. (1952). Measurement of current-voltage relations in the membrane of the giant axon of Loligo. J. Physiol. 116, 424–448. doi: 10.1113/jphysiol.1952.sp004716
• Holcombe A., Wolery M., Gast D. L. (1994). Comparative single-subject research: Description of designs and discussion of problems. Topics Early Childh. Special Educ. 14, 119–145. doi: 10.1177/027112149401400111
• Horner R. D., Baer D. M. (1978). Multiple-probe technique: a variation of the multiple baseline. J. Appl. Behav. Analy. 11, 189–196. doi: 10.1901/jaba.1978.11-189
• Hull D. L. (1988). Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. Chicago: University of Chicago Press. doi: 10.7208/chicago/9780226360492.001.0001
• Iversen I. H. (2013). "Single-case research methods: an overview," in APA Handbook of Behavior Analysis: Methods and Principles, ed. G. J. Madden (New York, NY: American Psychological Association). doi: 10.1037/13937-001
• Janosky J. E., Leininger S. L., Hoerger M. P., Libkuman T. M. (2009). Single Subject Designs in Biomedicine. Cham: Springer Science and Business Media. doi: 10.1007/978-90-481-2444-2
• Kazdin A. E. (1981). External validity and single-case experimentation: Issues and limitations (a response to J. S. Birnbrauer). Analy. Interv. Dev. Disab. 1, 133–143. doi: 10.1016/0270-4684(81)90027-6
• Kazdin A. E. (2011). "Additional design options," in Single-Case Research Designs, ed. A. E. Kazdin (New York: Oxford Press), 227–256.
• Kazdin A. E., Tuma A. H. (1982). Single-Case Research Designs. San Francisco: Jossey Bass.
• Killeen P. R. (2015). P rep, the probability of replicating an effect. Encyclop. Clin. Psychol. 4, 2201–2208. doi: 10.1002/9781118625392.wbecp030
• Kratochwill T. R., Hitchcock J., Horner R. H., Levin J. R., Odom S., Rindskopf D., et al. (2010). Single-case designs technical documentation. What Works Clearinghouse technical paper.
• Kravitz R. L., Duan N., Braslow J. (2004). Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. Milbank Q. 82, 661–687. doi: 10.1111/j.0887-378X.2004.00327.x
• Kuhn T. S., Hacking I. (2012). The Structure of Scientific Revolutions. Chicago; London: The University of Chicago Press. doi: 10.7208/chicago/9780226458144.001.0001
• Lakatos I. (1978). The Methodology of Scientific Research Programmes. Cambridge; New York: Cambridge University Press. doi: 10.1017/CBO9780511621123
• Meldrum M. L. (2000). A brief history of the randomized controlled trial: From oranges and lemons to the gold standard. Hematol. Oncol. Clin. North Am. 14, 745–760. doi: 10.1016/S0889-8588(05)70309-9
• Miller C. J., Smith S. N., Pugatch M. (2020). Experimental and quasi-experimental designs in implementation research. Psychiat. Res. 283, 112452. doi: 10.1016/j.psychres.2019.06.027
• Moen E. L., Fricano-Kugler C. J., Luikart B. W., O'Malley A. J. (2016). Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS ONE 11, e0146721. doi: 10.1371/journal.pone.0146721
• Parker R. I., Brossart D. F. (2003). Evaluating single-case research data: A comparison of seven statistical methods. Behav. Ther. 34, 189–211. doi: 10.1016/S0005-7894(03)80013-8
• Perdices M., Tate R. L. (2009). Single-subject designs as a tool for evidence-based clinical practice: Are they unrecognised and undervalued? Neuropsychol. Rehabilit. 19, 904–927. doi: 10.1080/09602010903040691
• Perone M. (1991). Experimental design in the analysis of free-operant behavior. Techn. Behav. Neur. Sci. 6, 135–171.
• Russell B. (1962). The Scientific Outlook. New York, NY: Norton.
• Scruggs T. E., Mastropieri M. A. (2013). PND at 25: Past, present, and future trends in summarizing single-subject research. Remed. Special Educ. 34, 9–19. doi: 10.1177/0741932512440730
• Sidman M. (1960). Tactics of Scientific Research. New York: Basic Books.
• Soto P. L. (2020). Single-case experimental designs for behavioral neuroscience. J. Exper. Analy. Behav. 114, 447–467. doi: 10.1002/jeab.633
• Tankersley M., McGoey K. E., Dalton D., Rumrill Jr P. D., Balan C. M. (2006). Single subject research methods in rehabilitation. Work 26, 85–92.
• Ward-Horner J., Sturmey P. (2010). Component analyses using single-subject experimental designs: A review. J. Appl. Behav. Analy. 43, 685–704. doi: 10.1901/jaba.2010.43-685
• Wasserstein R. L., Lazar N. A. (2016). The ASA statement on p-values: context, process, and purpose. Am. Statist. 70, 129–133. doi: 10.1080/00031305.2016.1154108
• Williams B. A. (2010). Perils of evidence-based medicine. Perspect. Biol. Med. 53, 106–120. doi: 10.1353/pbm.0.0132
