What Is a Controlled Experiment? | Definitions & Examples

Published on April 19, 2021 by Pritha Bhandari. Revised on June 22, 2023.

In experiments, researchers manipulate independent variables to test their effects on dependent variables. In a controlled experiment, all variables other than the independent variable are controlled or held constant so they don’t influence the dependent variable.

Controlling variables can involve:

  • holding variables at a constant or restricted level (e.g., keeping room temperature fixed).
  • measuring variables to statistically control for them in your analyses.
  • balancing variables across your experiment through randomization (e.g., using a random order of tasks).
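The third strategy, randomizing task order, is straightforward to implement in software. Below is a minimal Python sketch; the task names are invented for illustration and are not part of the article:

```python
import random

def randomized_task_order(tasks, seed=None):
    """Return one participant's random task order, so that order
    effects average out across the sample instead of biasing it."""
    rng = random.Random(seed)  # seeded for reproducibility
    order = list(tasks)
    rng.shuffle(order)
    return order

# Hypothetical tasks in an advertising study.
TASKS = ["view_ad", "state_willingness_to_pay", "demographics_survey"]
orders = [randomized_task_order(TASKS, seed=i) for i in range(100)]
```

Each participant sees every task exactly once, but in an order that varies randomly across the sample.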

Table of contents

  • Why does control matter in experiments?
  • Methods of control
  • Problems with controlled experiments
  • Other interesting articles
  • Frequently asked questions about controlled experiments

Why does control matter in experiments?

Control in experiments is critical for internal validity, which allows you to establish a cause-and-effect relationship between variables. Strong validity also helps you avoid research biases, particularly ones related to issues with generalizability (like sampling bias and selection bias).

Suppose you are testing whether the color used in advertising affects how much people are willing to pay for a fast food meal.

  • Your independent variable is the color used in advertising.
  • Your dependent variable is the price that participants are willing to pay for a standard fast food meal.

Extraneous variables are factors that you’re not interested in studying, but that can still influence the dependent variable. For strong internal validity, you need to remove their effects from your experiment.

In the advertising example, extraneous variables include:

  • design and description of the meal,
  • study environment (e.g., temperature or lighting),
  • participants’ frequency of buying fast food,
  • participants’ familiarity with the specific fast food brand,
  • participants’ socioeconomic status.


Methods of control

You can control some variables by standardizing your data collection procedures. All participants should be tested in the same environment with identical materials. Only the independent variable (e.g., ad color) should be systematically changed between groups.

Other extraneous variables can be controlled through your sampling procedures. Ideally, you’ll select a sample that’s representative of your target population by using relevant inclusion and exclusion criteria (e.g., including participants from a specific income bracket, and not including participants with color blindness).

By measuring extraneous participant variables (e.g., age or gender) that may affect your experimental results, you can also include them in later analyses.
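As a hedged sketch of this idea, each observation can store the measured covariates alongside the outcome, and a crude stratified comparison can then hold a covariate constant. All field names and numbers below are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

def record_trial(pid, condition, willingness_to_pay, age_group):
    """One observation: the outcome plus measured extraneous variables,
    stored so they can be statistically controlled in later analyses."""
    return {"id": pid, "condition": condition,
            "wtp": willingness_to_pay, "age_group": age_group}

def mean_outcome_by(records, covariate, outcome="wtp"):
    """Mean outcome within each (covariate level, condition) stratum,
    a crude form of statistical control by stratification."""
    strata = defaultdict(list)
    for r in records:
        strata[(r[covariate], r["condition"])].append(r[outcome])
    return {k: mean(v) for k, v in strata.items()}

records = [
    record_trial(1, "red", 6.0, "under_30"),
    record_trial(2, "green", 5.0, "under_30"),
    record_trial(3, "red", 8.0, "over_30"),
    record_trial(4, "green", 7.0, "over_30"),
]
by_age = mean_outcome_by(records, "age_group")
```

In a real analysis you would enter such covariates into a regression model; the stratified means here just illustrate why the covariates must be recorded at data collection time.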

After gathering your participants, you’ll need to place them into groups to test different independent variable treatments. The types of groups and method of assigning participants to groups will help you implement control in your experiment.

Control groups

Controlled experiments require control groups. Control groups allow you to test a comparable treatment, no treatment, or a fake treatment (e.g., a placebo to control for a placebo effect), and compare the outcome with your experimental treatment.

You can assess whether it’s your treatment specifically that caused the outcomes, or whether time or any other treatment might have resulted in the same effects.

To test the effect of colors in advertising, each participant is placed in one of two groups:

  • A control group that’s presented with red advertisements for a fast food meal.
  • An experimental group that’s presented with green advertisements for the same fast food meal.

Random assignment

To avoid systematic differences and selection bias between the participants in your control and treatment groups, you should use random assignment.

This helps ensure that any extraneous participant variables are evenly distributed, allowing for a valid comparison between groups.

Random assignment is a hallmark of a “true experiment”—it differentiates true experiments from quasi-experiments.
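Random assignment itself is simple to implement. Here is a minimal Python sketch; the participant IDs and group labels are illustrative, not from the article:

```python
import random

def randomly_assign(participants, groups=("control", "experimental"), seed=None):
    """Shuffle the participants, then deal them round-robin into the
    groups, giving near-equal group sizes with no systematic ordering."""
    rng = random.Random(seed)  # seeded for reproducibility
    pool = list(participants)
    rng.shuffle(pool)
    assignment = {g: [] for g in groups}
    for i, p in enumerate(pool):
        assignment[groups[i % len(groups)]].append(p)
    return assignment

assignment = randomly_assign(range(40), seed=7)
```

Because the shuffle is random, any extraneous participant variable is, on average, balanced across the two groups.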

Masking (blinding)

Masking in experiments means hiding condition assignment from participants or researchers—or, in a double-blind study, from both. It’s often used in clinical studies that test new treatments or drugs and is critical for avoiding several types of research bias.

Sometimes, researchers may unintentionally encourage participants to behave in ways that support their hypotheses, leading to observer bias. In other cases, cues in the study environment may signal the goal of the experiment to participants and influence their responses. These are called demand characteristics. If participants behave a particular way due to awareness of being observed (called a Hawthorne effect), your results could be invalidated.

Using masking means that participants don’t know whether they’re in the control group or the experimental group. This helps you control biases from participants or researchers that could influence your study results.

You use an online survey form to present the advertisements to participants, and you leave the room while each participant completes the survey on the computer so that you can’t tell which condition each participant was in.

Problems with controlled experiments

Although controlled experiments are the strongest way to test causal relationships, they also involve some challenges.

Difficult to control all variables

Especially in research with human participants, it’s impossible to hold all extraneous variables constant, because every individual has different experiences that may influence their perception, attitudes, or behaviors.

But measuring or restricting extraneous variables allows you to limit their influence or statistically control for them in your study.

Risk of low external validity

Controlled experiments have disadvantages when it comes to external validity—the extent to which your results can be generalized to broad populations and settings.

The more controlled your experiment is, the less it resembles real world contexts. That makes it harder to apply your findings outside of a controlled setting.

There’s always a tradeoff between internal and external validity. It’s important to consider your research aims when deciding whether to prioritize control or generalizability in your experiment.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi-square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about controlled experiments

In a controlled experiment, all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation below.

Bhandari, P. (2023, June 22). What Is a Controlled Experiment? | Definitions & Examples. Scribbr. Retrieved March 25, 2024, from https://www.scribbr.com/methodology/controlled-experiment/



8.1 Experimental design: What is it and when should it be used?

Learning objectives

  • Define experiment
  • Identify the core features of true experimental designs
  • Describe the difference between an experimental group and a control group
  • Identify and describe the various types of true experimental designs

Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings from experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies.


Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments brought psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively-defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.

Several kinds of experimental designs exist. In general, designs considered to be true experiments contain three basic key features:

  • random assignment of participants into experimental and control groups
  • a “treatment” (or intervention) provided to the experimental group
  • measurement of the effects of the treatment in a post-test administered to both groups

Some true experiments are more complex. Their designs can also include a pretest and more than two groups, but the three features above are the minimum requirements for a design to be a true experiment.

Experimental and control groups

In a true experiment, the effect of an intervention is tested by comparing two groups: one that is exposed to the intervention (the experimental group, also known as the treatment group) and another that does not receive the intervention (the control group). Importantly, participants in a true experiment need to be randomly assigned to either the control or experimental groups. Random assignment uses a random number generator or some other random process to assign people into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable and that any differences between the experimental and control groups are due to random chance. We will address more of the logic behind random assignment in the next section.

Treatment or intervention

In an experiment, the independent variable is typically the receipt of the intervention being tested—for example, a therapeutic technique, prevention program, or access to some service or support. Though less common in social work research, social science research may also use a stimulus, rather than an intervention, as the independent variable. For example, an electric shock or a reading about death might be used as a stimulus to provoke a response.

In some cases, it may be immoral to withhold treatment completely from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the other group would likely suffer. For these cases, researchers use a control group that receives “treatment as usual.” Experimenters must clearly define what treatment as usual means. For example, a standard treatment in substance abuse recovery is attending Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment may use twelve-step programs in their control group and use their experimental intervention in the experimental group. The results would show whether the experimental intervention worked better than normal treatment, which is useful information.

The dependent variable is usually the intended effect the researcher wants the intervention to have. If the researcher is testing a new therapy for individuals with binge eating disorder, their dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects the intervention to decrease the number of binge eating episodes reported by participants. Thus, they must, at a minimum, measure the number of episodes that occur after the intervention, which is the post-test. In a classic experimental design, participants are also given a pretest to measure the dependent variable before the experimental treatment begins.

Types of experimental design

Let’s put these concepts in chronological order so we can better understand how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. In a common type of experimental design, you will then give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group, but not to your control group. Many interventions last a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your post-test to both groups to observe any changes in your dependent variable. What we’ve just described is known as the classical experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 8.1 visually represents these steps.
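The chronological steps just described can be sketched as code. This is an illustrative sketch only: the pretest, intervention, and posttest functions are placeholders standing in for real measurements, not an actual study protocol:

```python
import random

def classic_experiment(sample, pretest, intervene, posttest, seed=None):
    """Classic experimental design: randomly assign participants,
    pretest both groups, intervene on the experimental group only,
    then posttest both groups."""
    rng = random.Random(seed)
    pool = list(sample)
    rng.shuffle(pool)                      # random assignment
    half = len(pool) // 2
    groups = {"experimental": pool[:half], "control": pool[half:]}
    results = {}
    for name, members in groups.items():
        pre = [pretest(p) for p in members]        # pretest
        if name == "experimental":
            members = [intervene(p) for p in members]  # intervention
        post = [posttest(p) for p in members]      # posttest
        results[name] = {"pretest": pre, "posttest": post}
    return results

# Toy run: every participant starts at 0; the intervention adds 1.
out = classic_experiment([0] * 10,
                         pretest=lambda p: p,
                         intervene=lambda p: p + 1,
                         posttest=lambda p: p,
                         seed=1)
```

Comparing the two groups' posttest scores then isolates the effect of the intervention, since nothing else differs systematically between them.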

Figure 8.1. Steps in classic experimental design: sampling → random assignment → pretest → intervention → posttest.

An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, these were not meant to be interventions or treatments to help depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the post-test period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.

In addition to the classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963). The posttest-only control group design is almost the same as the classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects, in which participants’ scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before you took the real one you sent to colleges, you’ve taken advantage of testing effects to get a better score. Considering the previous example on racism and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the intervention is designed to address depression. That knowledge could cause them to answer differently on the post-test than they otherwise would. In theory, as long as the control and experimental groups have been determined randomly and are therefore comparable, no pretest is needed. However, most researchers prefer to use pretests in case randomization did not result in equivalent groups and to help assess change over time within both the experimental and control groups.

Researchers wishing to account for testing effects but also gather pretest data can use a Solomon four-group design. In the Solomon four-group design, the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and post-test. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the post-test. Table 8.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.

Solomon four-group designs are challenging to implement in the real world because they are time- and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them.

Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. It may be impossible to withhold treatment from a control group or to randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs, which we will discuss in the next section, can be used. However, the differences in rigor from true experimental designs leave their conclusions more open to critique.

Experimental design in macro-level research

You can imagine that social work researchers may be limited in their ability to use random assignment when examining the effects of governmental policy on individuals. For example, it is unlikely that a researcher could randomly assign some states to decriminalize recreational marijuana and others not to in order to assess the effects of the policy change. There are, however, important examples of policy experiments that use random assignment, including the Oregon Medicaid experiment. Because the wait list for Medicaid in Oregon was so long, state officials conducted a lottery to determine who from the wait list would receive Medicaid (Baicker et al., 2013). Researchers used the lottery as a natural experiment that included random assignment: people selected to receive Medicaid were the experimental group, and those who remained on the wait list were the control group. Macro-level experiments come with practical complications, just as other experiments do. For example, the ethical concern with using people on a wait list as a control group exists in macro-level research just as it does in micro-level research.

Key Takeaways

  • True experimental designs require random assignment.
  • Control groups do not receive an intervention, and experimental groups receive an intervention.
  • The basic components of the classic experimental design include a pretest, a posttest, a control group, and an experimental group.
  • Testing effects may cause researchers to use variations on the classic experimental design.
  • Classic experimental design: uses random assignment, an experimental and control group, as well as pre- and posttesting
  • Control group: the group in an experiment that does not receive the intervention
  • Experiment: a method of data collection designed to test hypotheses under controlled conditions
  • Experimental group: the group in an experiment that receives the intervention
  • Posttest: a measurement taken after the intervention
  • Posttest-only control group design: a type of experimental design that uses random assignment and an experimental and control group, but does not use a pretest
  • Pretest: a measurement taken prior to the intervention
  • Random assignment: using a random process to assign people into experimental and control groups
  • Solomon four-group design: uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all
  • Testing effects: when a participant’s scores on a measure change because they have already been exposed to it
  • True experiments: a group of experimental designs that contain independent and dependent variables, pretesting and posttesting, and experimental and control groups

Image attributions

exam scientific experiment by mohamed_hassan CC-0

Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


12.2 Experiments

Learning objectives

  • Define experiment.
  • Distinguish “true” experiments from preexperimental designs.
  • Identify the core features of true experimental designs.
  • Describe the difference between an experimental group and a control group.
  • Identify and describe the various types of true experimental designs.
  • Identify and describe the various types of preexperimental designs.
  • Name the key strengths and weaknesses of experiments.
  • Define internal validity and external validity.

Experiments are an excellent data collection strategy for those wishing to observe the consequences of very specific actions or stimuli. Most commonly a quantitative research method, experiments are used more often by psychologists than sociologists, but understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings based on experimental designs. An experiment is a method of data collection designed to test hypotheses under controlled conditions. Students in my research methods classes often use the term experiment to describe all kinds of empirical research projects, but in social scientific research, the term has a unique meaning and should not be used to describe all research methodologies.

Several kinds of experimental designs exist. In general, designs considered to be “true experiments” contain three key features: independent and dependent variables, pretesting and posttesting, and experimental and control groups. In the classic experiment, the effect of a stimulus is tested by comparing two groups: one that is exposed to the stimulus (the experimental group) and another that does not receive the stimulus (the control group). In other words, the effects of an independent variable upon a dependent variable are tested. Because the researcher’s interest lies in the effects of an independent variable, she must measure participants on the dependent variable before and after the independent variable (or stimulus) is administered. Thus pretesting and posttesting are both important steps in a classic experiment.

One example of experimental research can be found in Shannon K. McCoy and Brenda Major’s study of people’s perceptions of prejudice (McCoy, S. K., & Major, B. (2003). Group identification moderates emotional response to perceived prejudice. Personality and Social Psychology Bulletin, 29, 1005–1017). In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Upon measuring depression scores during the posttest period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.

In addition to the classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally). They are the Solomon four-group design and the posttest-only control group design. In the former, four groups exist. Two groups are treated as they would be in a classic experiment. Another group receives the stimulus and is then given the posttest. The remaining group does not receive the stimulus but is given the posttest. Table 12.2 "Solomon Four-Group Design" illustrates the features of each of the four groups in the Solomon four-group design.

Table 12.2 Solomon Four-Group Design

Finally, the posttest-only control group design is also considered a “true” experimental design though it lacks any pretest. In this design, participants are assigned to either an experimental or a control group. Individuals are then measured on some dependent variable following the administration of an experimental stimulus to the experimental group. In theory, as long as the control and experimental groups have been determined randomly, no pretest is needed.

Time, other resources such as funding, and even one’s topic may limit a researcher’s ability to conduct a true experiment. For researchers in the medical and health sciences, conducting a true experiment could require denying needed treatment to patients, which is a clear ethical violation. Even those whose research may not involve the administration of needed medications or treatments may be limited in their ability to conduct a classic experiment. In social scientific experiments, for example, it might not be equitable or ethical to provide a large financial or other reward only to members of the experimental group. When random assignment of participants into experimental and control groups is not feasible, researchers may turn to a preexperimental design (Campbell & Stanley, 1963). However, this type of design comes with some unique disadvantages, which we’ll describe as we review the preexperimental designs available.

If we wished to measure the impact of some natural disaster, for example, Hurricane Katrina, we might conduct a preexperiment by identifying an experimental group from a community that experienced the hurricane and a control group from a similar community that had not been hit by the hurricane. This study design, called a static group comparison, has the advantage of including a comparison control group that did not experience the stimulus (in this case, the hurricane) but the disadvantage of containing experimental and control groups that were determined by a factor or factors other than random assignment. As you might have guessed from our example, static group comparisons are useful in cases where a researcher cannot control or predict whether, when, or how the stimulus is administered, as in the case of natural disasters.

In cases where the administration of the stimulus is quite costly or otherwise not possible, a one-shot case study design might be used. In this instance, no pretest is administered, nor is a control group present. In our example of the study of the impact of Hurricane Katrina, a researcher using this design would test the impact of Katrina only among a community that was hit by the hurricane and not seek out a comparison group from a community that did not experience the hurricane. Researchers using this design must be extremely cautious about making claims regarding the effect of the stimulus, though the design could be useful for exploratory studies aimed at testing one’s measures or the feasibility of further study.

Finally, if a researcher is unlikely to be able to identify a sample large enough to split into multiple groups, or if he or she simply doesn’t have access to a control group, the researcher might use a one-group pre-/posttest design. In this instance, pre- and posttests are both taken but, as stated, there is no control group to which to compare the experimental group. We might be able to study the impact of Hurricane Katrina using this design if we’d been collecting data on the impacted communities prior to the hurricane. We could then collect similar data after the hurricane. Applying this design involves a bit of serendipity and chance. Without having collected data from impacted communities prior to the hurricane, we would be unable to employ a one-group pre-/posttest design to study Hurricane Katrina’s impact.
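The analysis in a one-group pre-/posttest design reduces to comparing the same group's scores before and after the stimulus. A minimal sketch (the scores are invented for illustration):

```python
from statistics import mean

def pre_post_change(pretest_scores, posttest_scores):
    """Mean change from pretest to posttest. With no control group,
    this change cannot be attributed to the stimulus alone: history,
    maturation, and testing effects all remain uncontrolled."""
    return mean(posttest_scores) - mean(pretest_scores)

change = pre_post_change([10, 12, 11, 13], [14, 15, 13, 16])
```

The docstring's caveat is the design's core weakness: the computed change confounds the stimulus with everything else that happened between the two measurements.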

Table 12.3 "Preexperimental Designs" summarizes each of the preceding examples of preexperimental designs.

Table 12.3 Preexperimental Designs

As implied by the preceding examples where we considered studying the impact of Hurricane Katrina, experiments do not necessarily need to take place in the controlled setting of a lab. In fact, many applied researchers rely on experiments to assess the impact and effectiveness of various programs and policies. You might recall our discussion of the police experiment described in Chapter 2 "Linking Methods With Theory" . It is an excellent example of an applied experiment. Researchers did not “subject” participants to conditions in a lab setting; instead, they applied their stimulus (in this case, arrest) to some subjects in the field and they also had a control group in the field that did not receive the stimulus (and therefore were not arrested).

Finally, a review of some of the strengths and weaknesses of experiments as a method of data collection is in order. A strength of this method, particularly in cases where experiments are conducted in lab settings, is that the researcher has substantial control over the conditions to which participants are subjected. Experiments are also generally easier to replicate than are other methods of data collection. Again, this is particularly true in cases where an experiment has been conducted in a lab setting.

As sociologists, who are especially attentive to how social context shapes social life, are likely to point out, a disadvantage of experiments is that they are rather artificial. How often do real-world social interactions occur in the same way that they do in a lab? Experiments that are conducted in applied settings may not be as subject to artificiality, though their conditions are less easily controlled. Experiments also present a few unique concerns regarding validity. Problems of external validity might arise when the conditions of an experiment don’t adequately represent those of the world outside the boundaries of the experiment. In the case of McCoy and Major’s (2003) research on prejudice described earlier in this section (McCoy, S. K., & Major, B. (2003). Group identification moderates emotional response to perceived prejudice. Personality and Social Psychology Bulletin, 29, 1005–1017), for example, the questions to ask with regard to external validity are these: Can we say with certainty that the stimulus applied to the experimental group resembles the stimuli that people are likely to encounter in their real lives outside of the lab? Will reading an article on prejudice against one’s race in a lab have the same impact that it would outside of the lab? This is not to suggest that experimental research is not or cannot be valid, but experimental researchers must always be aware that external validity problems can occur and be forthcoming in their reports of findings about this potential weakness. Concerns about internal validity also arise in experimental designs. These have to do with our level of confidence about whether the stimulus actually produced the observed effect or whether some other factor, such as other conditions of the experiment or changes in participants over time, may have produced the effect.

In sum, the potential strengths and weaknesses of experiments as a method of data collection in social scientific research include the following:

Table 12.4 Strengths and Weaknesses of Experimental Research

Key Takeaways

  • Experiments are designed to test hypotheses under controlled conditions.
  • True experimental designs differ from preexperimental designs.
  • Preexperimental designs each lack one of the core features of true experimental designs.
  • Experiments enable researchers to have great control over the conditions to which participants are subjected and are typically easier to replicate than other methods of data collection.
  • Experiments come with some degree of artificiality and may run into problems of external or internal validity.
Exercises

  • Taking into consideration your own research topic of interest, how might you conduct an experiment to learn more about your topic? Which experiment type would you use, and why?
  • Do you agree or disagree with the sociological critique that experiments are artificial? Why or why not? How important is this weakness? Do the strengths of experimental research outweigh this drawback?
  • Be a research participant! The Social Psychology Network offers many online opportunities to participate in social psychological experiments. Check them out at http://www.socialpsychology.org/expts.htm.

Experimental Method In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into control and experimental groups .

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective: the researcher’s views and opinions should not affect a study’s results. This objectivity makes the data more valid  and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and  Loftus and Palmer’s car crash study .

  • Strength : It is easier to replicate (i.e., copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength : They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation : The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation : Demand characteristics or experimenter effects may bias the results and become confounding variables .

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables .

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling’s hospital study on obedience .

  • Strength : Behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation : There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term development of children who had been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.

  • Strength : Behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength : It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress .
  • Limitation : They may be more expensive and time-consuming than lab experiments.
  • Limitation : There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

The variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables that are not the independent variable but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants have an equal chance of being assigned to each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
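The allocation procedure can be sketched in a few lines of Python; the participant labels and condition names below are made up for illustration, not part of any particular study:

```python
import random

def randomly_allocate(participants, conditions, seed=None):
    """Shuffle participants, then deal them out to the conditions in turn,
    so every participant has an equal chance of landing in each condition."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# Hypothetical example: 20 participants split across two conditions.
participants = [f"P{i:02d}" for i in range(20)]
groups = randomly_allocate(participants, ["experimental", "control"], seed=42)
```

Because the shuffle happens before the deal, any participant variable (age, ability, mood) is as likely to end up in one condition as the other, which is exactly the bias-limiting property described above.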

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.
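Order effects are commonly reduced by counterbalancing, i.e., varying the order of tasks across participants so practice and fatigue effects are spread evenly rather than piling up on one task. Counterbalancing is not defined in the glossary above, so the following is an illustrative sketch with hypothetical participant and task labels:

```python
from itertools import cycle, permutations

def counterbalance(participants, tasks):
    """Give each participant one ordering of the tasks, cycling through
    all possible orderings so no single task is always done first (when
    practice helps) or last (when fatigue hurts)."""
    orders = cycle(permutations(tasks))
    return {person: next(orders) for person in participants}

# Hypothetical example: 6 participants and 3 tasks, so each of the
# 3! = 6 possible task orderings is used exactly once.
schedule = counterbalance([f"P{i}" for i in range(6)], ["A", "B", "C"])
```

With more participants than orderings, the orderings simply repeat in rotation, keeping each task position roughly balanced across the sample.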



Biology LibreTexts

1.6: Scientific Experiments


  • Suzanne Wakim & Mandeep Grewal
  • Butte College

Seeing Spots

The spots on this child's tongue are an early sign of vitamin C deficiency , which is also called scurvy. This disorder, which may be fatal, is uncommon today because foods high in vitamin C are readily available. They include tomatoes, peppers, and citrus fruits such as oranges, lemons, and limes. However, scurvy was a well-known problem on navy ships in the 1700s. It was said that scurvy caused more deaths in the British fleet than French and Spanish arms. At that time, the cause of scurvy was unknown and vitamins had not yet been discovered. Anecdotal evidence suggested that eating citrus fruits might cure scurvy. However, no one knew for certain until 1747, when a Scottish naval physician named John Lind did an experiment to test the idea. Lind's experiment was one of the first clinical experiments in the history of medicine.

child sticking out a Scorbutic tongue

What Is an Experiment?

An experiment is a special type of scientific investigation that is performed under controlled conditions. Like all investigations, an experiment generates evidence to test a hypothesis. But unlike some other types of investigations, an experiment involves manipulating some factors in a system in order to see how it affects the outcome. Ideally, experiments also involve controlling as many other factors as possible in order to isolate the cause of the experimental results.

An experiment generally tests how one particular variable is affected by another. The affected variable is called the dependent variable  or outcome variable. The variable that affects the dependent variable is called the independent variable; it is also called the manipulated variable because it is the variable the researcher manipulates. Any other variables (control variables) that might also affect the dependent variable are held constant, so that the effects of the independent variable alone are measured.

Lind's Scurvy Experiment

Lind began his scurvy experiment onboard a British ship after it had been at sea for two months and sailors had started showing signs of scurvy. He chose a group of 12 sailors with scurvy and divided the group into 6 pairs. All 12 sailors received the same diet, but each pair also received a different daily supplement to the diet (Table \(\PageIndex{1}\)).

Lind's experiment ended after just five days when the fresh citrus fruits ran out for pair 5. However, the two sailors in this pair had already fully recovered or greatly improved. The sailors in pair 1 (receiving the quart of cider) also showed some improvement, but sailors in the other pairs showed none.

Can you identify the independent and dependent variables in Lind's experiment? The independent variable is the daily supplement received by the pairs. The dependent variable is the improvement/no improvement in scurvy symptoms. Lind's results supported the citrus fruit cure for scurvy, and it was soon adopted by the British navy with good results. However, the fact that scurvy is caused by a vitamin C deficiency was not discovered until almost 200 years later.

Lind's scurvy experiment included just 12 subjects. This is a very small sample by modern scientific standards. The sample in an experiment or other investigation consists of the individuals or events that are actually studied. It rarely includes the entire population because doing so would likely be impractical or even impossible.

There are two types of errors that may occur by studying a sample instead of the entire population: chance error and bias.

  • A chance error occurs if the sample is too small. The smaller the sample is, the greater the chance that it does not fairly represent the whole population. Chance error is mitigated by using a larger sample.
  • Bias occurs if the sample is not selected randomly with respect to a variable in the study. This problem is mitigated by taking care to choose a randomized sample.
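The effect of sample size on chance error can be shown with a short simulation using only Python's standard library. The population and its 40% improvement rate are invented for illustration; the point is that proportions from small samples scatter widely around the true value, while proportions from large samples cluster tightly:

```python
import random
import statistics

def sample_proportions(population, sample_size, trials, seed=0):
    """Draw repeated random samples and record the proportion of 'improved'
    members (1s) in each; the spread of these proportions is the chance error."""
    rng = random.Random(seed)
    props = []
    for _ in range(trials):
        sample = rng.sample(population, sample_size)
        props.append(sum(sample) / sample_size)
    return props

# Hypothetical population of 10,000 in which 40% improve (1 = improved).
population = [1] * 4000 + [0] * 6000
spread_small = statistics.pstdev(sample_proportions(population, 12, 200))
spread_large = statistics.pstdev(sample_proportions(population, 600, 200))
# spread_large is far smaller: large samples track the true 0.40 closely.
```

A sample of 12 (the size of Lind's experiment) routinely produces proportions far from 40%, which is why such small studies are considered weak evidence by modern standards.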

A reliable experiment must be designed to minimize both of these potential sources of error. You can see how the sources of error were addressed in another landmark experiment: Jonas Salk's famous 1954 trial of his newly developed polio vaccine. Salk's massive experiment has been called the "greatest public health experiment in history."

Salk's Polio Vaccine Experiment

Imagine a nationwide epidemic of a contagious flu-like illness that attacks mainly children and often causes paralysis. That's exactly what happened in the U.S. during the first half of the 20th century. Starting in the early 1900s, there were repeated cycles of polio epidemics, and each seemed to be stronger than the one before. Many children ended up on life support in so-called "iron lungs" (see photo below) because their breathing muscles were paralyzed by the disease.

Iron Lung ward-Rancho Los Amigos Hospital in 1953

Polio is caused by a virus, and there is still no cure for this potentially devastating illness. Fortunately, it can now be prevented with vaccines. The first polio vaccine was developed by Jonas Salk in 1952. After testing the vaccine on himself and his family members to assess its safety, Salk undertook a nationwide experiment to test the effectiveness of the vaccine using more than a million schoolchildren as subjects. It's hard to imagine a nationwide trial of an experimental vaccine using children as "guinea pigs." It would never happen today. However, in 1953, polio struck such fear in the hearts of parents that they accepted Salk's word that the vaccine was safe and gladly permitted their children to participate in the study.

Salk's experiment was very well designed. First, it included two very large, random samples of children — 600,000 in the treatment group, called the experimental group , and 600,000 in the untreated group, called the control group . Using very large and randomized samples reduced the potential for chance error and bias in the experiment. Children in the experimental group were injected with the experimental polio vaccine. Children in the control group were injected with a harmless saline (saltwater) solution. The saline injection was a placebo. A placebo is a "fake" treatment that actually has no effect on health. It is included in trials of vaccines and other medical treatments, so subjects will not know in which group (control or experimental) they have been placed. The use of a placebo helps researchers control for the placebo effect . This is a psychologically-based reaction to a treatment that occurs just because the subject is treated, even if the treatment has no real effect.

Experiments in which a placebo is used are generally blind experiments because the subjects are "blind" to their experimental group. This helps prevent bias in the experiment. Often, even the researchers do not know which subjects are in each group. This type of experiment is called a double-blind experiment because both subjects and researchers are "blind" to which subjects are in each group. Salk's vaccine trial was a double-blind experiment, and double-blind experiments are now considered the gold standard of clinical trials of vaccines, therapeutic drugs, and other medical treatments.
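One way to sketch the bookkeeping behind a double-blind design is to label the two identical-looking injections with arbitrary codes and keep the code key sealed until the analysis, so neither the children nor the field workers know who got vaccine and who got placebo. The subject IDs and vial codes below are hypothetical, not Salk's actual procedure:

```python
import random

def blind_assignment(subject_ids, seed=None):
    """Randomly assign each subject a coded vial ('X' or 'Y'). Which code
    means vaccine and which means placebo is recorded in a separate key
    that stays sealed until the data are analyzed."""
    rng = random.Random(seed)
    codes = ["X", "Y"]
    key = dict(zip(rng.sample(codes, 2), ["vaccine", "placebo"]))
    assignments = {sid: rng.choice(codes) for sid in subject_ids}
    return assignments, key

# Hypothetical roster of 1,000 subjects.
assignments, key = blind_assignment([f"S{i:04d}" for i in range(1000)], seed=7)
# Everyone in the field sees only 'X'/'Y'; unblinding means opening the key.
```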

Salk's polio vaccine proved to be highly successful. Analysis of data from his study revealed that the vaccine was 80 to 90 percent effective in preventing polio. Almost overnight, Salk was hailed as a national hero. He appeared on the cover of Time magazine and was invited to the White House. Within a few years, millions of children had received the polio vaccine. By 1961, the incidence of polio in the U.S. had been reduced by 96 percent.
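An effectiveness figure like "80 to 90 percent" is conventionally computed as one minus the relative risk: the attack rate in the vaccinated group divided by the attack rate in the control group. A small sketch with invented numbers (not Salk's actual trial data):

```python
def vaccine_effectiveness(cases_treated, n_treated, cases_control, n_control):
    """Effectiveness = 1 - (attack rate in vaccinated / attack rate in controls)."""
    risk_treated = cases_treated / n_treated
    risk_control = cases_control / n_control
    return 1 - risk_treated / risk_control

# Illustrative numbers only: 30 polio cases among 600,000 vaccinated children
# versus 200 among 600,000 controls corresponds to 85% effectiveness.
effectiveness = vaccine_effectiveness(30, 600_000, 200, 600_000)
```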

Limits on Experimentation

Well-done experiments are generally the most rigorous and reliable scientific investigations. However, their hallmark feature of manipulating variables to test outcomes is not possible, practical, or ethical in all investigations. As a result, many ideas cannot be tested through experimentation. For example, experiments cannot be used to test ideas about what our ancestors ate millions of years ago or how long-term cigarette smoking contributes to lung cancer. In the case of our ancestors, it is impossible to study them directly. Researchers must rely instead on indirect evidence, such as detailed observations of their fossilized teeth. In the case of smoking, it is unethical to expose human subjects to harmful cigarette smoke. Instead, researchers may use large observational studies of people who are already smokers, with nonsmokers as controls, to look for correlations between smoking habits and lung cancer.

Feature: Human Biology in the News

Lind undertook his experiment to test the effects of citrus fruits on scurvy at a time when seamen were dying by the thousands from this nutritional disease as they explored the world. Today's explorers are astronauts in space, and their nutrition is also crucial to the success of their missions. However, maintaining good nutrition in astronauts in space can be challenging. One problem is that astronauts tend to eat less while in space. Not only are they very busy on their missions, but they may also get tired of the space food rations. The environment of space is another problem. Factors such as microgravity and higher radiation exposure can have major effects on human health and require nutritional adjustments to help counteract them. A novel way of studying astronaut nutrition and health is provided by identical twin astronauts Scott and Mark Kelly (Figure \(\PageIndex{3}\)).

Mark and Scott Kelly at the Johnson Space Center, Houston Texas

The Kellys are the first identical twin astronauts, but twin studies are nothing new. Scientists have used identical (monozygotic) twins as research subjects for many decades. Identical twins have the same genes, so any differences between them generally can be attributed to environmental influences rather than genetic causes. Scott Kelly spent almost a full year on the International Space Station (ISS) between 2015 and 2016, while his twin, Mark Kelly, stayed on the ground, serving as a control in the experiment. You may have noticed a lot of media coverage of Scott Kelly's return to Earth in March 2016 because his continuous sojourn in space was the longest of any American astronaut at that time. NASA is learning a great deal about the effects of long-term space travel on the human body by measuring and comparing nutritional indicators and other health data in the twins.

  • How do experiments differ from other types of scientific investigations?
  • Identify the independent and dependent variables in Salk's nationwide polio vaccine trial.
  • Compare and contrast chance error and bias in sampling. How can each type of error be minimized?
  • What is the placebo effect? Explain how Salk's experimental design controlled for it.
  • Fill in the blanks. The _____________ variable is manipulated to see the effects on the ___________ variable.
  • True or False. In studies of identical twins, the independent variable is their genetics.
  • True or False. Experiments cannot be done on humans.
  • True or False. Larger sample sizes are generally better than smaller ones in scientific experiments.
  • Why do you think it was important that the sailors’ diets were all kept the same, other than the daily supplement?
  • Can you think of some factors other than diet that could have potentially been different between the sailors that might have affected the outcome of the experiment?
  • Why do you think the sailors who drank cider had some improvement in their scurvy symptoms?
  • Explain why double-blind experiments are considered to be more rigorous than regular blind experiments.
  • Why are studies using identical twins so useful?
  • Do you think it is necessary to include a placebo (such as an injection with saline in a drug testing experiment) in experiments that use animals? Why or why not?

Explore More

Watch this entertaining TED talk, in which biochemist Kary Mullis talks about the experiment as the basis of modern science.

Check out this video to learn more about conducting scientific experiments:

Attributions

  • Scorbutic tongue by CDC, public domain via Wikimedia Commons
  • Iron lung ward by Food and Drug Administration, public domain via Wikimedia Commons
  • Mark and Scott Kelly by NASA/Robert Markowitz, public domain via Wikimedia Commons
  • Text adapted from Human Biology by CK-12 licensed CC BY-NC 3.0


Social Sci LibreTexts

1.5: Chapter 5 Research Design


  • William Pelz
  • Herkimer College via Lumen Learning

Research design is a comprehensive plan for data collection in an empirical research project. It is a “blueprint” for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: (1) the data collection process, (2) the instrument development process, and (3) the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process (which is often loosely called “research design”) is introduced in this chapter and described in further detail in Chapters 9-12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods, such as laboratory experiments and survey research, are aimed at theory (or hypothesis) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. Quantitative and qualitative methods refer to the type of data being collected (quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth) and analyzed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes, joint use of qualitative and quantitative data may help generate unique insights into a complex social phenomenon that are not available from either type of data alone, and hence mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key Attributes of a Research Design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity , also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in the hypothesized independent variable, and not by variables extraneous to the research context. Causality requires three conditions: (1) covariation of cause and effect (i.e., if the cause happens, then the effect also happens; and if the cause does not happen, the effect does not happen), (2) temporal precedence (the cause must precede the effect in time), and (3) no plausible alternative explanation (or spurious correlation). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect might have influenced the expected cause rather than the reverse. Although higher in internal validity compared to other methods, laboratory experiments are by no means immune to threats to internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity.

External validity or generalizability refers to whether the observed associations can be generalized from the sample to the population (population validity), or to other people, organizations, contexts, or time (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalized to the population of financial firms (population validity) or to other firms within the United States (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalizability than laboratory experiments where artificially contrived treatments and strong control over extraneous variables render the findings less generalizable to real-life settings where treatments and extraneous variables cannot be controlled. The variation in internal and external validity for a wide range of research designs are shown in Figure 5.1.

Figure 5.1: Internal and external validity of various research designs.

Some researchers claim that there is a tradeoff between internal and external validity: higher external validity can come only at the cost of internal validity and vice versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validity. Personally, I prefer research designs that have reasonable degrees of both internal and external validity, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of designs is ultimately a matter of their personal preference and competence, and the level of internal and external validity they desire.

Construct validity examines how well a given measurement scale measures the theoretical construct that it is expected to measure. Many constructs used in social science research, such as empathy, resistance to change, and organizational learning, are difficult to define, much less measure. For instance, construct validity must ensure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypothesis testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable to such analyses. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.

[Figure 5.2. The different kinds of validity at the theoretical and empirical levels]
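One assumption check mentioned above, choosing a test that does not require equal group variances, can be sketched in Python. This is a minimal hand-rolled Welch statistic for illustration only; the function name and data are invented here, and in practice an established statistics library routine would be used.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic: compares two group means without assuming
    equal variances, one of the distributional assumptions that
    statistical conclusion validity asks us to check."""
    n_a, n_b = len(sample_a), len(sample_b)
    v_a, v_b = variance(sample_a), variance(sample_b)  # sample variances
    standard_error = math.sqrt(v_a / n_a + v_b / n_b)
    return (mean(sample_a) - mean(sample_b)) / standard_error

# Toy outcome scores for a treatment and a control group.
treatment = [7.1, 6.8, 7.4, 7.0, 6.9]
control = [6.2, 6.0, 6.5, 6.1, 6.4]
print(round(welch_t(treatment, control), 2))
```

The statistic alone does not settle significance; degrees of freedom and a p-value would still be needed, which is exactly the kind of procedural detail that statistical conclusion validity scrutinizes.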

Fortunately, there are several solutions to this problem. One is to include a placebo control condition , in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This is what is shown by a comparison of the two outer bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” .

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition , in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999). There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002). The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).


Research has shown that patients with osteoarthritis of the knee who receive a “sham surgery” experience reductions in pain and improvement in knee function similar to those of patients who receive a real surgery.


Within-Subjects Experiments

In a within-subjects experiment , each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect , where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect , where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This is called a context effect . For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing , which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of randomly assigning to conditions, they are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
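The counterbalancing scheme just described can be sketched in Python. This is an illustrative sketch (the function name `assign_orders` is invented here, not from the text): it enumerates every possible order of the conditions and randomly assigns each participant to one of them.

```python
import itertools
import random

def assign_orders(participants, conditions, seed=None):
    """Randomly assign each participant to one of the possible orders
    of the conditions (full counterbalancing)."""
    rng = random.Random(seed)
    # For three conditions this yields the six orders
    # ABC, ACB, BAC, BCA, CAB, and CBA.
    orders = list(itertools.permutations(conditions))
    return {p: rng.choice(orders) for p in participants}

schedule = assign_orders(["P1", "P2", "P3", "P4"], ["A", "B", "C"], seed=1)
for participant, order in schedule.items():
    print(participant, "->", "-".join(order))
```

Because the assignment is random rather than systematic, order is balanced only in expectation; with small samples a researcher might instead cycle through the orders to guarantee equal counts.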

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this, he asked one group of participants to rate how large the number 9 was on a 1-to-10 rating scale and another group to rate how large the number 221 was on the same 1-to-10 rating scale (Birnbaum, 1999). Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
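The mixed-sequence procedure above can be sketched as follows. This is a toy illustration (the function name and the stand-in `rate` callback are invented here): all stimuli from both conditions are shuffled into one sequence, and the participant's mean rating is then computed per condition.

```python
import random
from statistics import mean

def mixed_block_means(stimuli_by_condition, rate, seed=None):
    """Present every stimulus from every condition in one shuffled
    sequence, then return each condition's mean rating.
    `rate` stands in for the participant's response to a stimulus."""
    rng = random.Random(seed)
    sequence = [(cond, s)
                for cond, items in stimuli_by_condition.items()
                for s in items]
    rng.shuffle(sequence)  # a different random order for each participant
    ratings = {cond: [] for cond in stimuli_by_condition}
    for cond, stimulus in sequence:
        ratings[cond].append(rate(stimulus))
    return {cond: mean(vals) for cond, vals in ratings.items()}

# Ten "defendants" of each type, with a deterministic toy rating rule.
stimuli = {"attractive": list(range(10)), "unattractive": list(range(10))}
print(mixed_block_means(stimuli, rate=lambda s: s + 1, seed=7))
```

Seeding the generator differently for each participant yields the "different random order for each participant" the text describes, while the per-condition means remain directly comparable.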

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often do exactly this.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.

Discussion: For each of the following topics, list the pros and cons of a between-subjects and within-subjects design and decide which would be better.

  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g., dog ) are recalled better than abstract nouns (e.g., truth ).
Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.

Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4 , 243–249.

Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347 , 81–88.

Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59 , 565–590.

Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician . Baltimore, MD: Johns Hopkins University Press.

  • Research Methods in Psychology. Provided by : University of Minnesota Libraries Publishing. Located at : http://open.lib.umn.edu/psychologyresearchmethods . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike


Science-Education-Research

Prof. Keith S. Taber's site

Control conditions in experimental research


A topic in research methodology

Read about experiment as research methodology

" Experiments are set up to test specific hypotheses. In a 'true' experiment the researcher controls variables, so that only the factor which is hypothesised to have an effect differs between the experimental and control treatments. In reality, such control is rarely (if ever) possible in enquiries into teaching and learning – even if the range of potentially significant variables can be identified" ( Taber, 2013 , p.83).

Researchers working in educational contexts will not always have a free choice over the comparison conditions, but ideally the control is set up in view of the research questions to best meet the needs of the study.


Three levels of comparison condition

"Table 2 sets out a simple typology of three levels depending upon the nature of the educational input provided for the learners in a control or comparison group. The activity undertaken with a group of learners that could potentially be educative is here referred to as a 'treatment'. In experimental work, the experimental/intervention group is subject to a treatment that differs in some well-characterised way from the treatment of the control or comparison group. The three levels suggested set different tests for the experimental treatment. These are, in effect,

  • Does it have any educational value? (level 1);
  • Is it better than standard educational provision? (level 2);
  • How does it compare to what is already recognised as good practice? (level 3)" ( Taber, 2019 , p.88).

Treatment versus no treatment: Does the experimental treatment have any educational effect?

"The first level of experimental design suggested in Table 2 simply looks to see if outcomes on some educational measure are better after some treatment than in a matched group of learners who did not experience any treatment. This level of design is potentially useful when the research question concerns whether there is any value in introducing some new educational provision or resource that would be additional to current provision. That is, this type of study is not concerned with doing something differently, but rather whether there is sufficient value in committing additional resources to do something extra, that is not currently done, to consider recommending it should be added to existing educational provision" ( Taber, 2019 , p.89).

Innovation versus standard treatment: Does the intervention represent an improvement on current practice?

"The second type of experimental design represented in Table 2 concerns the testing of an innovation which is conjectured to offer an improved form of educational provision in relation to some specific educational aim(s). In this situation, the innovation is compared with what is considered the 'standard' or 'normal' form of provision" ( Taber, 2019 , p.91).

Innovation versus enhanced treatment: How does an innovation compare with currently recognised good practice?

"…the third type of experimental design in Table 2 that sets a higher standard for an innovation to be measured against.

  • Where at the first level researchers seek to find out if some educational treatment has some effect in comparison to no treatment at all; and
  • at the second level researchers look to see if an innovative approach has a more positive effect than standard provision;
  • at the third level a comparison is made with educational provision considered to reflect good practice.

In effect, researchers are asking if an innovation is as good as, or even better than, something that is already considered to be effective. In studies with level 3 control conditions, a failure to find a significant difference between outcomes in the two conditions may be seen as reflecting positively on the experimental treatment" ( Taber, 2019 ).

A fourth alternative: How does an innovation compare with currently recognised poor practice?

Sadly, some studies in the research literature seem to choose the research question "is the experimental treatment better than an educational input which is known to fall short of good practice?" This does not seem a very useful question from a research perspective (it seems to be a kind of 'rhetorical research', seeking to demonstrate the obvious) and may lead to unethical control conditions.

Read about unethical control conditions

Making a choice

When to, and not to, use level 1 controls.

"The choice between (a) level 1 control conditions where a teaching innovation is compared with a treatment without teaching (or where standard teaching that is supplemented by an additional teaching input is compared with only the standard provision) and (b) level 2 and 3 control conditions that offer an equivalent level of teaching input intended to meet the same educational objectives as the innovatory treatment, will derive from the motivation for the study. In many teaching contexts, there will be an existing provision which, even if not considered effective, will be assumed to bring about learning objectives to some extent.

In these situations, a level 1 control condition is of limited use as such a study will simply show that the tested teaching treatment produces some level of learning – something that is to be expected (as even mediocre teaching is likely to facilitate some level of learning), and, without a meaningful comparison with existing practice, offers little guidance for teachers" ( Taber, 2019 , p.93).

Choosing between levels 2 and 3

"The choice between levels 2 (the comparison treatment being standard provision) and 3 (the comparison treatment being recognised good practice) may depend upon what the innovation is hoped to provide. If existing provision is considered to draw upon too high a resource level, or is found to have some undesirable side effects, then seeking an alternative that is just as effective may be well motivated. So, a hypothetical school level biology course using animal dissection might lead to satisfactory levels of learning of anatomy, but lead to a minority of students declining to take part. In such a situation an experiment to test an alternative to dissection may only be seeking to find an approach that produces learning outcomes that are as good as in the comparison condition. In this situation, current standard practice provides an effective comparison condition and there is a sensible rationale for a 'level 2' control (see Table 2).

Many published studies argue that the innovation being tested has the potential to be more effective than current standard teaching practice, and seek to demonstrate this by comparing an innovative treatment with existing practice that is not seen as especially effective. This seems logical where the likely effectiveness of the innovation being tested is genuinely uncertain, and the 'standard' provision is the only available comparison. However, often these studies are carried out in contexts where the advantages of a range of innovative approaches have already been well demonstrated, in which case it would be more informative to test the innovation that is the focus of the study against some other approach already shown to be effective" ( Taber, 2019 , p.93).


Sources cited:

  • Taber, K. S. (2013). Classroom-based Research and Evidence-based Practice: An introduction (2nd ed.). London: Sage.
  • Taber, K. S. (2019). Experimental research into teaching innovations: responding to methodological and ethical challenges . Studies in Science Education . doi:10.1080/03057267.2019.1658058 [ Download the article ]





Research Approach – Types Methods and Examples


Research Approach

Definition:

Research approaches refer to the systematic and structured ways that researchers use to conduct research, and they differ in terms of their underlying logic and methods of inquiry.

Types of Research Approach

The three main research approaches are deductive, inductive, and abductive.

Deductive Approach

The deductive approach starts with a theory or a hypothesis, and the researcher tests the hypothesis through the collection and analysis of data. The researcher develops a research design and data collection methods based on the theory or hypothesis. The goal of this approach is to confirm or reject the hypothesis.

Inductive Approach

The inductive approach starts with the collection and analysis of data. The researcher develops a theory or an explanation based on the patterns and themes that emerge from the data. The goal of this approach is to generate a new theory or to refine an existing one.

Abductive Approach

The abductive approach is a combination of deductive and inductive approaches. It starts with a problem or a phenomenon that is not fully understood, and the researcher develops a theory or an explanation that can account for the data. The researcher then tests the theory through the collection and analysis of more data. The goal of this approach is to generate a plausible explanation or theory that can be further refined or tested.

Research Approach Methods

Research approach methods are the specific techniques or tools that are used to conduct research within a particular research approach. Below are some examples of methods that are commonly used in each research approach:

Deductive approach methods:

  • Surveys and questionnaires: to collect data from a large sample of participants
  • Experiments: to manipulate variables and test hypotheses under controlled conditions
  • Statistical analysis: to test the significance of relationships between variables
  • Content analysis: to analyze and interpret text-based data

Inductive approach methods:

  • Interviews: to collect in-depth data and explore individual experiences and perspectives
  • Focus groups: to collect data from a group of participants who share common characteristics or experiences
  • Observations: to gather data on naturalistic settings and behaviors
  • Grounded theory: to develop theories or concepts from data through iterative cycles of analysis and interpretation

Abductive approach methods:

  • Case studies: to examine a phenomenon in its real-life context and generate new insights or explanations
  • Triangulation: to combine multiple data sources or methods to enhance the validity and reliability of findings
  • Exploratory research: to gather preliminary data and generate new research questions
  • Concept mapping: to visually represent relationships and patterns in data and develop new theoretical frameworks.

Applications of Research Approach

Here are some common applications of research approach:

  • Academic Research : Researchers in various academic fields, such as sociology, psychology, economics, and education, use research approaches to study a wide range of topics.
  • Business Research : Organizations use research approaches to gather information on customer preferences, market trends, and competitor behavior to make informed business decisions.
  • Medical Research : Researchers use research approaches to study various diseases and medical conditions, develop new treatments and drugs, and improve public health.
  • Social Research: Researchers use research approaches to study social issues, such as poverty, crime, discrimination, and inequality, and to develop policies and programs to address these issues.
  • Environmental Research: Researchers use research approaches to study environmental problems, such as climate change, pollution, and biodiversity loss, and to develop strategies to mitigate these problems.
  • Marketing Research : Companies use research approaches to study consumer behavior, preferences, and needs in order to develop effective marketing strategies.
  • Educational Research: Researchers use research approaches to study teaching and learning processes, develop new teaching methods and materials, and improve educational outcomes.
  • Legal Research : Lawyers and legal scholars use research approaches to study legal precedents, statutes, and regulations in order to make legal arguments and develop new laws and policies.

Examples of Research Approach

Examples Deductive approach:

  • A researcher starts with a theory or hypothesis and then develops a research design to test it. For example, a researcher might hypothesize that students who receive positive feedback from their teachers are more likely to perform well academically. The researcher would then design a study to test this hypothesis, such as surveying students to assess their feedback from teachers and comparing their academic performance.
  • Another example of a deductive approach is a clinical trial to test the effectiveness of a new medication. The researchers start with a theory that the medication will be effective and then design the study to test this theory by comparing the outcomes of patients who receive the medication with those who receive a placebo.

Examples Inductive approach:

  • A researcher begins with data and then develops a theory or explanation to account for it. For example, a researcher might collect data on the experiences of immigrants in a particular city and then use that data to develop a theory about the factors that contribute to their success or challenges.
  • Another example of an inductive approach is ethnographic research, where the researcher immerses themselves in a cultural context to observe and document the practices, beliefs, and values of the community. The researcher might then develop a theory or explanation for these practices based on the observed patterns and themes.

Abductive approach:

  • A researcher starts with a puzzle or a phenomenon that is not easily explained by existing theories and uses a combination of deductive and inductive reasoning to generate a new explanation or theory. For example, a researcher might notice a pattern of behavior in a particular group of people that is not easily explained by existing theories and then use both deductive and inductive reasoning to develop a new theory to explain the behavior.
  • Another example of an abductive approach is diagnosis in medicine. A physician starts with a set of symptoms and uses deductive reasoning to generate a list of possible diagnoses. The physician then uses inductive reasoning to gather more information about the patient and the symptoms to narrow down the list of possible diagnoses and arrive at a final diagnosis.

Purpose of Research Approach

The purpose of a research approach is to provide a systematic and logical way of conducting research to achieve the research goals and objectives. It helps the researcher to plan, design, and conduct research effectively and efficiently, ensuring that the research is reliable, valid, and useful. Different research approaches have different purposes and are suited for different types of research questions and contexts.

Here are some specific purposes of different research approaches:

Deductive approach:

  • To test hypotheses or theories
  • To confirm or refute existing knowledge
  • To generalize findings to broader populations or contexts

Inductive approach:

  • To generate new theories or hypotheses
  • To identify patterns, themes, or relationships in data
  • To develop an understanding of social or natural phenomena

Abductive approach:

  • To develop new explanations or theories when existing ones are inadequate
  • To identify new patterns or phenomena that may be overlooked by existing theories
  • To propose new research questions or directions

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Scientific Methods and Human Subjects Research

Introduction to Experimental Methods

Karri Haen Whitmer

Understanding the methods used to conduct good scientific research is important not only for scientific progress but also for our daily lives. Understanding good scientific methodology allows us not only to conduct experiments but also to analyze research conducted by others. For example, it helps us determine whether research studies reported in the news are reliable. Research knowledge also helps us discriminate among different medical treatments when making personal health decisions.

Scientific research methods include several steps, which may differ depending upon the topic to be addressed by a study. Standard scientific methods typically include: definition of the research problem, conducting background research, formulation of hypotheses, designing and conducting experiments, analysis of results, formulation of conclusions, and communication of research results to the public.

Central to our acquisition of scientific knowledge is the concept of the experiment. Researchers do experiments to answer questions about the world around us. The following are examples of simple research questions in human physiology:

  • Does changing the respiratory rate affect heart rate?
  • Does caffeine consumption affect blood glucose levels?
  • Does body temperature affect blood oxygen levels?

In order to answer these questions, researchers begin by formulating testable hypotheses . A hypothesis is a tentative statement describing the relationship between the variables in an experiment. Research hypotheses are written as if/then statements that include dependent and independent variables.

A variable is any factor that can change, affecting the experimental results. The dependent variable is the variable in the experiment that is measured by the researcher. The independent variable is the variable that is manipulated by the researcher in order to exert an effect on the dependent variable. In the first example research question, the heart rate is the dependent variable, and the respiratory rate is the independent variable. The researcher will use an experimental method (for example deep breathing) to manipulate a subject’s respiratory rate to measure whether any changes occur in the heart rate.

Dependent variable: the variable that is measured as the output of an experiment (the result)

Independent variable: a variable that is manipulated by the researcher

Writing Hypotheses

A hypothesis is a “tentative statement that proposes a possible explanation to some phenomenon or event.” [1] Hypotheses written for the purpose of conducting experiments must be testable. Formalized hypotheses use an if/then format that helps to ensure that all important aspects of the hypothesis are intact, including the independent and dependent variables. Additionally, a good research hypothesis has three parts: an explanation of a phenomenon to be tested, a method, and a prediction. A research hypothesis must be written before an experiment is conducted.

Imagine students working on a physiology project involving muscle contraction and temperature. The students observe that cold hands do not perform tasks requiring manual dexterity as well as warm hands do. The students decide to test grip strength under different temperature conditions using a handgrip dynamometer, which measures the strength of contraction of hand and forearm muscles.

The following are examples of bad and good research hypotheses for this experiment:

  • My grip strength will be stronger with warm hands than with cold hands.

This example is not a research hypothesis because it only includes a prediction. A prediction by itself is never a formalized hypothesis.

  • If I test grip strength with a handgrip dynamometer, then my grip strength will be stronger with warm hands than with cold hands.

This example is not a research hypothesis because it only includes a method (a test) and a prediction. It does not include any explanation of the phenomenon to be tested.

  • If low temperatures suppress muscle contraction, and I test grip strength at different temperatures with a handgrip dynamometer, then my grip strength will be stronger with warm hands than with cold hands.

This is an example of a correct research hypothesis. Note the three parts: “if low temperatures suppress muscle contraction” (a possible explanation of the phenomenon to be tested), “and I test grip strength at different temperatures with a handgrip dynamometer” (the method used for the test), and “then my grip strength will be stronger with warm hands than with cold hands” (the prediction).

This writing sample is also an example of a formalized hypothesis due to the use of the if/then format. In this hypothesis, the independent variable is muscle temperature, and the dependent variable is muscle contraction strength.

Exercise: Practice writing a research hypothesis

Background: An ad for a creatine supplement claims that ingesting 10 g of creatine once a day for four weeks results in measurable increases in muscle mass. A student decided to test the claim in 10 subjects by measuring the circumference of the upper arm, around the belly of the biceps muscle, before and after treatment. The subjects were not allowed to take part in weight or resistance training during the testing period.

Write a hypothesis as an if/then statement for this experiment:

What is the dependent variable?

What is the independent variable?

Designing Experiments involving Humans

Well-designed experiments must minimize the effects of extraneous environmental and physiological factors, in order to make sure changes recorded in the dependent variable are actually the result of manipulating the independent variable. Experimental controls  establish a baseline for the experiment. When conducting human subject experiments in physiology, the control might consist of a separate group of people, the control group , who are not exposed to any manipulation of the independent variable, or it might be the same group of subjects tested before (and then after) altering the independent variable.

Experimental studies may be in vitro , conducted in highly controlled laboratory conditions (for example, in a test tube), or in vivo , conducted in a live organism. Controlled laboratory experiments (also called “bench research,” molecular, or cellular research) allow for a great amount of control over the variables that could affect experimental outcomes because all the components in the experimental system can typically be easily accounted for and measured. In human subject research , studies that use human participants to answer a research question, there is typically much less control over experimental variables due to the natural anatomical, physiological, and environmental variation innate to human populations. These are called external variables and can profoundly affect the outcome of an experiment. For example, two subjects may metabolize a compound differently due to differences in enzymes, or may react to cardiovascular stress differently due to their sex, age, or fitness level. To account for these external, or uncontrolled, variables in human subjects, experiments often use a within-subjects design (below), in which the dependent variable is measured in the same subjects before and after manipulating the independent variable.

In human subjects research, there are two main types of experimental designs: within-subjects design and between-subjects design.  In a within-subjects design , the subjects of the study participate under each study condition, including in the control group. In the simplest design, the subjects participate in baseline measurements for the control (no treatment) and then participate under experimental conditions. Because the subjects in this kind of study serve as their own control group, variation in the results due to many external variables can be reduced.

An example of a simple within-subjects design can be found in many pharmaceutical studies where a group of participants is given a placebo drug for a defined amount of time, and then the same group is given an experimental drug. Differences in physiological measurements after treatment with the experimental drug are inferred as effects of drug administration.
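The within-subjects comparison described above can be sketched as a paired analysis. The numbers below are hypothetical placebo and drug measurements invented for illustration, not data from any real study:

```python
# Within-subjects (paired) design: each subject is measured under the
# placebo condition and again under the experimental drug condition.
# Hypothetical systolic blood pressure readings (mmHg) for six subjects.
placebo = [142, 138, 150, 145, 139, 148]
drug = [135, 131, 144, 140, 133, 141]

# Because the same subjects appear in both conditions, we analyze the
# per-subject differences, which cancels out stable between-subject
# variation (age, fitness level, baseline blood pressure, etc.).
differences = [d - p for p, d in zip(placebo, drug)]
mean_change = sum(differences) / len(differences)

print(mean_change)  # mean change attributed to the drug (negative = lower BP)
```

Analyzing the paired differences, rather than comparing the two condition means directly, is what gives the within-subjects design its resistance to external variables.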

One disadvantage of this research design is the problem of carryover effects , where one test adversely influences the next. Two examples of this, with opposite effects, are fatigue and practice. In a complicated experiment with multiple treatment conditions, participants may become tired, or fed up with researchers prying, asking questions, and pressuring them to take tests. This could decrease their performance in the later conditions. [2]

Alternatively, the practice effect might mean that participants are more confident and accomplished after the first condition, simply because the experience has made them more comfortable taking tests. As a result, for many experiments a counterbalanced design, where the order of treatments is varied across participants, is preferred, but this is not always possible.

Another type of experimental design is the between-subjects design . In the between-subjects design, there are separate participants for the control and treatment groups, which avoids carryover effects. However, the between-subjects design may make it impossible to maintain homogeneity across the groups: age, gender, and social class are just some of the obvious factors that could result in differences between control and treatment groups, skewing the data.

Within-subjects design: the subjects in the study participate in the control and treatment conditions

Between-subjects design: different groups of subjects participate in the control and treatment conditions
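For a between-subjects design, random assignment of subjects to the control and treatment groups is the standard guard against selection bias. A minimal sketch using hypothetical subject IDs:

```python
import random

# Hypothetical pool of 20 subject IDs for a between-subjects study.
subjects = list(range(1, 21))

# Shuffle with a fixed seed (for reproducibility here), then split the
# pool into equal-sized groups. Random assignment makes it unlikely that
# external variables (age, sex, fitness) systematically differ between groups.
rng = random.Random(42)
rng.shuffle(subjects)

control = sorted(subjects[:10])
treatment = sorted(subjects[10:])

print(control)
print(treatment)
```

With larger pools, stratified randomization (shuffling within age or sex strata) can further balance known external variables, though simple randomization is the baseline approach.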

Experimental Error

No matter how careful we are in creating an experimental design, no experiment can be perfect. We must assume there is some margin of error in the collected data. Several general types of errors and biases can impact the outcome of an experiment:

  • Human error: simple mistakes made by an experimenter. For example, the experimenter did not attach a sensor properly or misread a patient’s blood pressure.
  • Sampling bias: the participants in the study are not representative of the population at large; thus, the results cannot be generalized outside of the study population. For example, data from a study conducted on only 80-year-old men may not be generalized to everyone else in the human population.
  • Selection bias: the assignment of subjects to control and treatment groups was not random, resulting in experimental results highly impacted by external variables. For example, a control group that included only females and a treatment group that contained only males.
  • Measurement bias: the experimenters rate subjects differently due to their own expectations of experimental outcomes.
  • Random error: by-chance variations in measurements that cannot be controlled. Random errors can be reduced by repeated measurements.
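The point that repeated measurements reduce random error can be illustrated numerically: the spread of an average of n independent readings shrinks by roughly a factor of the square root of n. A small simulation with invented numbers:

```python
import random
import statistics

rng = random.Random(0)
TRUE_VALUE = 98.6   # hypothetical true body temperature (degrees F)
NOISE_SD = 0.5      # hypothetical random measurement error (standard deviation)

def measure(n):
    """Return the average of n noisy readings of TRUE_VALUE."""
    readings = [rng.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n)]
    return sum(readings) / n

# Compare the spread of single readings with averages of 25 readings.
singles = [measure(1) for _ in range(1000)]
averaged = [measure(25) for _ in range(1000)]

print(statistics.stdev(singles))   # close to 0.5
print(statistics.stdev(averaged))  # close to 0.5 / sqrt(25) = 0.1
```

Averaging reduces random error but does nothing for systematic errors such as sampling, selection, or measurement bias, which shift every reading in the same direction.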

The box below lists some sources of error that are possible in all human subject experiments.

Common factors adversely affecting the outcome of human subject experiments:

  • Subjects in the study are not representative of the human population at large: e.g., the sample size is too small to fully account for variation in the population
  • Interference due to external variables
  • Problems with the reliability or accuracy of instruments: e.g., equipment does not have the precision to detect changes in the dependent variable
  • Human error: the researcher makes an erroneous measurement or other error

Please cite:

Haen Whitmer, K.M. (2021). A Mixed Course-Based Research Approach to Human Physiology . Ames, IA: Iowa State University Digital Press.  https://iastate.pressbooks.pub/curehumanphysiology/

  • http://www.accessexcellence.org/LC/TL/filson/writhypo.html ↵
  • Martyn Shuttleworth  (May 16, 2009). Within Subject Design . Retrieved Jul 30, 2019 from Explorable.com:  https://explorable.com/within-subject-design   Creative Commons-License Attribution 4.0 International (CC BY 4.0) . ↵

A Mixed Course-Based Research Approach to Human Physiology Copyright © 2021 by Karri Haen Whitmer is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


  • J Korean Med Sci
  • v.37(16); 2022 Apr 25


A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, they are framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) supported by evidence-based logical reasoning 10 ; and 6) predictive. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory to base the hypotheses on, inductive reasoning based on specific observations or findings forms more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state that there is no relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if it is rejected ( alternative hypothesis ), 15 5) explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 6) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 or 7) express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research in Table 3 .

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. A central question and associated subquestions are stated rather than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research questions ); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks. PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and show how to transform these ambiguous research questions and hypotheses into clear and good statements.

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .


Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 Research questions are also used more frequently in survey projects than hypotheses in experiments in quantitative research to compare variables and their relationships.

Hypotheses are constructed from the identified variables as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves a testable proposition deduced from theory, with independent and dependent variables that are separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12
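The if-then template maps directly onto a testable comparison: manipulate the independent variable, measure the dependent variable, and check the predicted direction. A minimal sketch in Python; the groups and scores below are hypothetical, invented purely for illustration:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference between two group means."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Hypothesis: "If the intervention is given, then outcome scores will be higher."
# Independent variable: group membership; dependent variable: outcome score.
intervention = [5, 6, 7, 8, 9]   # hypothetical outcome scores
control      = [3, 4, 5, 6, 7]

t = welch_t(intervention, control)
print(round(t, 3))  # prints: 2.0
```

A positive t statistic is consistent with the predicted direction; whether the difference is statistically significant would then be judged against the t distribution.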

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2: jkms-37-e121-g002.jpg]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes?” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH).” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness.” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response. The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses.” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations.” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout).” 26
  • “Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above. If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by statistical experiment or analysis (chi-square test, Student's t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men. We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Students t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
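The between-gender chi-squared comparison described above can be reproduced by hand for a contingency table. A minimal sketch in Python; the counts below (women/men × full-time/not full-time) are invented purely for illustration and do not come from the cited study:

```python
def chi_squared(table):
    """Pearson chi-squared statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = women/men, columns = full-time / not full-time.
observed = [[30, 70],
            [50, 50]]

stat = chi_squared(observed)
# For a 2x2 table, df = 1; the 5% critical value is 3.841.
print(round(stat, 3), stat > 3.841)  # prints: 8.333 True
```

In practice the p-value and the logistic regression step would be computed with a statistics package (e.g., scipy or statsmodels) rather than by hand.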

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses.” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group.” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education.” 30

Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative, and should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. Developing research questions and hypotheses is an iterative process that rests on extensive knowledge of the literature and an insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses, which serve as formal predictions about the research outcomes. Research questions and hypotheses should therefore be carefully thought out and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.
