What is a scientific hypothesis?

It's the initial building block in the scientific method.


A scientific hypothesis is a tentative, testable explanation for a phenomenon in the natural world. It's the initial building block in the scientific method. Many describe it as an "educated guess" based on prior knowledge and observation. While this is true, a hypothesis is more informed than a guess: whereas an "educated guess" suggests an off-the-cuff prediction drawn from a person's expertise, developing a hypothesis requires active observation and background research.

The basic idea of a hypothesis is that there is no predetermined outcome. For a proposed explanation to be termed a scientific hypothesis, it has to be an idea that can be supported or refuted through carefully crafted experimentation or observation. This concept, called falsifiability, was advanced in the mid-20th century by the Austrian-British philosopher Karl Popper in his famous book "The Logic of Scientific Discovery" (Routledge, 1959).

A key function of a hypothesis is to derive predictions about the results of future experiments and then perform those experiments to see whether they support the predictions.

A hypothesis is usually written in the form of an if-then statement, which gives a possibility (if) and explains what may happen because of the possibility (then). The statement could also include "may," according to California State University, Bakersfield.

Here are some examples of hypothesis statements:

  • If garlic repels fleas, then a dog that is given garlic every day will not get fleas.
  • If sugar causes cavities, then people who eat a lot of candy may be more prone to cavities.
  • If ultraviolet light can damage the eyes, then maybe this light can cause blindness.

A useful hypothesis should be testable and falsifiable. That means that it should be possible to prove it wrong. A claim that can't be proved wrong is nonscientific, according to Karl Popper's 1963 book "Conjectures and Refutations."

An example of an untestable statement is, "Dogs are better than cats." That's because the definition of "better" is vague and subjective. However, an untestable statement can be reworded to make it testable. For example, the previous statement could be changed to this: "Owning a dog is associated with higher levels of physical fitness than owning a cat." With this statement, the researcher can take measures of physical fitness from dog and cat owners and compare the two.

Types of scientific hypotheses

Elementary-age students study alternative energy using homemade windmills during public school science class.

In an experiment, researchers generally state their hypotheses in two ways. The null hypothesis predicts that there will be no relationship between the variables tested, or no difference between the experimental groups. The alternative hypothesis predicts the opposite: that there will be a difference between the experimental groups. This is usually the hypothesis scientists are most interested in, according to the University of Miami.

For example, a null hypothesis might state, "There will be no difference in the rate of muscle growth between people who take a protein supplement and people who don't." The alternative hypothesis would state, "There will be a difference in the rate of muscle growth between people who take a protein supplement and people who don't."

If the results of the experiment show a relationship between the variables, then the null hypothesis has been rejected in favor of the alternative hypothesis, according to the book "Research Methods in Psychology" (BCcampus, 2015).
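The decision to reject a null hypothesis is usually made with a statistical test. As a minimal sketch, a permutation test compares the observed difference between groups with the differences that arise when the group labels are shuffled at random. The measurements below are invented for illustration, not drawn from any real study:

```python
import random
from statistics import mean

# Hypothetical muscle-growth measurements (kg gained) for a supplement
# group and a control group -- illustrative numbers only.
supplement = [2.1, 2.5, 1.9, 2.8, 2.4, 2.2, 2.6, 2.0]
control = [1.8, 1.6, 2.0, 1.7, 1.9, 1.5, 1.8, 1.7]

observed = mean(supplement) - mean(control)

# Under the null hypothesis the group labels are interchangeable, so
# shuffle them many times and count how often a difference at least as
# large as the observed one appears by chance (two-tailed).
random.seed(0)
pooled = supplement + control
n = len(supplement)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean(pooled[:n]) - mean(pooled[n:])) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.4f}, p-value: {p_value:.4f}")
```

A small p-value (conventionally below 0.05) would lead a researcher to reject the null hypothesis in favor of the alternative; a large one means the data are consistent with no group difference.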

There are other ways to describe an alternative hypothesis. The alternative hypothesis above does not specify a direction of the effect, only that there will be a difference between the two groups. That type of prediction is called a two-tailed hypothesis. If a hypothesis specifies a certain direction — for example, that people who take a protein supplement will gain more muscle than people who don't — it is called a one-tailed hypothesis, according to William M. K. Trochim, a professor of policy analysis and management at Cornell University.

Sometimes, errors take place during an experiment. These errors can happen in one of two ways. A type I error occurs when the null hypothesis is rejected even though it is true. This is also known as a false positive. A type II error occurs when the null hypothesis is not rejected even though it is false. This is also known as a false negative, according to the University of California, Berkeley.
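A type I error rate can be made concrete with a simulation. In the sketch below, the coin-flip setup and the cutoff of 39/61 heads are invented for illustration; the null hypothesis ("the coin is fair") is true in every simulated experiment, so every rejection is a false positive:

```python
import random

random.seed(1)
trials = 20_000  # simulated experiments, each flipping a fair coin 100 times
false_positives = 0

for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(100))
    # Hypothetical decision rule: reject "the coin is fair" when the
    # head count lands far from 50. The coin IS fair here, so any
    # rejection is a type I error (false positive).
    if heads <= 39 or heads >= 61:
        false_positives += 1

rate = false_positives / trials
print(f"estimated type I error rate: {rate:.3f}")
```

With this cutoff the estimated rate comes out a little under 0.05, which is why 0.05 is a common significance threshold: it caps how often a true null hypothesis gets rejected by chance. A type II error would be the mirror image: failing to reject the null when the coin really is biased.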

A hypothesis can be rejected or modified, but it can never be proved correct 100% of the time. For example, a scientist can form a hypothesis stating that if a certain type of tomato has a gene for red pigment, that type of tomato will be red. During research, the scientist then finds that each tomato of this type is red. Though the findings confirm the hypothesis, there may be a tomato of that type somewhere in the world that isn't red. Thus, the hypothesis is supported by the evidence, but it is never proved with absolute certainty.

Scientific theory vs. scientific hypothesis

The best hypotheses are simple. They deal with a relatively narrow set of phenomena. But theories are broader; they generally combine multiple hypotheses into a general explanation for a wide range of phenomena, according to the University of California, Berkeley. For example, a hypothesis might state, "If animals adapt to suit their environments, then birds that live on islands with lots of seeds to eat will have differently shaped beaks than birds that live on islands with lots of insects to eat." After testing many hypotheses like these, Charles Darwin formulated an overarching theory: the theory of evolution by natural selection.

"Theories are the ways that we make sense of what we observe in the natural world," Tanner said. "Theories are structures of ideas that explain and interpret facts." 

  • Read more about writing a hypothesis, from the American Medical Writers Association.
  • Find out why a hypothesis isn't always necessary in science, from The American Biology Teacher.
  • Learn about null and alternative hypotheses, from Prof. Essa on YouTube.

Encyclopedia Britannica. Scientific Hypothesis. Jan. 13, 2022. https://www.britannica.com/science/scientific-hypothesis

Karl Popper, "The Logic of Scientific Discovery," Routledge, 1959.

California State University, Bakersfield, "Formatting a testable hypothesis." https://www.csub.edu/~ddodenhoff/Bio100/Bio100sp04/formattingahypothesis.htm  

Karl Popper, "Conjectures and Refutations," Routledge, 1963.

Price, P., Jhangiani, R., & Chiang, I., "Research Methods in Psychology — 2nd Canadian Edition," BCcampus, 2015.

University of Miami, "The Scientific Method" http://www.bio.miami.edu/dana/161/evolution/161app1_scimethod.pdf  

William M.K. Trochim, "Research Methods Knowledge Base," https://conjointly.com/kb/hypotheses-explained/  

University of California, Berkeley, "Multiple Hypothesis Testing and False Discovery Rate" https://www.stat.berkeley.edu/~hhuang/STAT141/Lecture-FDR.pdf  

University of California, Berkeley, "Science at multiple levels" https://undsci.berkeley.edu/article/0_0_0/howscienceworks_19

Alina Bradford


Chemistry LibreTexts

1.5: Hypothesis, Theories, and Laws


  Learning Objectives

  • Describe the difference between hypothesis and theory as scientific terms.
  • Describe the difference between a theory and scientific law.

Although many people have taken science classes throughout the course of their studies, they often have incorrect or misleading ideas about some of the most important and basic principles in science. Most students have heard of hypotheses, theories, and laws, but what do these terms really mean? Before reading this section, consider what you already know about these terms. What do they mean to you? What do you read here that contradicts or supports what you thought?

What is a Fact?

A fact is a basic statement established by experiment or observation. All facts are true under the specific conditions of the observation.

What is a Hypothesis?

One of the most common terms used in science classes is a "hypothesis". The word can have many different definitions, depending on the context in which it is being used:

  • An educated guess: a scientific hypothesis provides a suggested solution based on evidence.
  • Prediction: if you have ever carried out a science experiment, you probably made this type of hypothesis when you predicted the outcome of your experiment.
  • Tentative or proposed explanation: hypotheses can be suggestions about why something is observed. In order for it to be scientific, however, a scientist must be able to test the explanation to see if it works and if it is able to correctly predict what will happen in a situation. For example, "if my hypothesis is correct, we should see ___ result when we perform ___ test."
A hypothesis is very tentative; it can be easily changed.

What is a Theory?

The United States National Academy of Sciences describes what a theory is as follows:

"Some scientific explanations are so well established that no new evidence is likely to alter them. The explanation becomes a scientific theory. In everyday language a theory means a hunch or speculation. Not so in science. In science, the word theory refers to a comprehensive explanation of an important feature of nature supported by facts gathered over time. Theories also allow scientists to make predictions about as yet unobserved phenomena."

"A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experimentation. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory." It is as factual an explanation of the universe as the atomic theory of matter (stating that everything is made of atoms) or the germ theory of disease (which states that many diseases are caused by germs). Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact."

Note some key features of theories that are important to understand from this description:

  • Theories are explanations of natural phenomena. They aren't predictions (although we may use theories to make predictions). They are explanations as to why we observe something.
  • Theories aren't likely to change. They have a large amount of support and are able to satisfactorily explain numerous observations. Theories can, indeed, be facts. Theories can change, but it is a long and difficult process. In order for a theory to change, there must be many observations or pieces of evidence that the theory cannot explain.
  • Theories are not guesses. The phrase "just a theory" has no room in science. To be a scientific theory carries a lot of weight; it is not just one person's idea about something.

What is a Law?

Scientific laws are similar to scientific theories in that they are principles that can be used to predict the behavior of the natural world. Both scientific laws and scientific theories are typically well-supported by observations and/or experimental evidence. Usually scientific laws refer to rules for how nature will behave under certain conditions, frequently written as an equation. Scientific theories are more overarching explanations of how nature works and why it exhibits certain characteristics. As a comparison, theories explain why we observe what we do and laws describe what happens.

For example, around the year 1800, Jacques Charles and other scientists were working with gases to, among other reasons, improve the design of the hot air balloon. These scientists found, after many, many tests, that certain patterns existed in their observations of gas behavior. If the temperature of a gas is increased, its volume increases. This relationship is known as a natural law. A law is a relationship that exists between variables in a group of data. Laws describe the patterns we see in large amounts of data, but do not describe why the patterns exist.
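Because laws are frequently written as equations, Charles's law makes a compact example: at constant pressure, V1/T1 = V2/T2, with temperature measured in kelvins. The short sketch below applies the law to invented numbers (the function name is ours, not from any library):

```python
def charles_law_volume(v1: float, t1_kelvin: float, t2_kelvin: float) -> float:
    """Predict a gas's new volume after a temperature change at constant
    pressure, using Charles's law: V1/T1 = V2/T2 (temperatures in kelvins)."""
    return v1 * t2_kelvin / t1_kelvin

# 2.0 L of gas warmed from 300 K to 450 K at constant pressure:
v2 = charles_law_volume(2.0, 300.0, 450.0)
print(v2)  # 3.0 -- volume grows in proportion to absolute temperature
```

Note what the law does and doesn't do: it predicts the new volume exactly, but it says nothing about why gases behave this way. That explanation comes from a theory, the kinetic theory of gases.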

What is a Belief?

A belief is a statement that is not scientifically provable. Beliefs may or may not be incorrect; they just are outside the realm of science to explore.

Laws vs. Theories

A common misconception is that scientific theories are rudimentary ideas that will eventually graduate into scientific laws when enough data and evidence has accumulated. A theory does not change into a scientific law with the accumulation of new or better evidence. Remember, theories are explanations and laws are patterns we see in large amounts of data, frequently written as an equation. A theory will always remain a theory; a law will always remain a law.


  • A hypothesis is a tentative explanation that can be tested by further investigation.
  • A theory is a well-supported explanation of observations.
  • A scientific law is a statement that summarizes the relationship between variables.
  • An experiment is a controlled method of testing a hypothesis.

Contributions & Attributions

Marisa Alviar-Agnew  ( Sacramento City College )

Henry Agnew (UC Davis)

What Is a Hypothesis? (Science)

If...,Then...


A hypothesis (plural hypotheses) is a proposed explanation for an observation. The definition depends on the subject.

In science, a hypothesis is part of the scientific method. It is a prediction or explanation that is tested by an experiment. Observations and experiments may disprove a scientific hypothesis, but can never entirely prove one.

In the study of logic, a hypothesis is an if-then proposition, typically written in the form, "If X, then Y."

In common usage, a hypothesis is simply a proposed explanation or prediction, which may or may not be tested.

Writing a Hypothesis

Most scientific hypotheses are proposed in the if-then format because it's easy to design an experiment to see whether a cause-and-effect relationship exists between the independent variable and the dependent variable. The hypothesis is written as a prediction of the outcome of the experiment.

Null Hypothesis and Alternative Hypothesis

Statistically, it's easier to show there is no relationship between two variables than to support their connection. So, scientists often propose the null hypothesis. The null hypothesis assumes changing the independent variable will have no effect on the dependent variable.

In contrast, the alternative hypothesis suggests changing the independent variable will have an effect on the dependent variable. Designing an experiment to test this hypothesis can be trickier because there are many ways to state an alternative hypothesis.

For example, consider a possible relationship between getting a good night's sleep and getting good grades. The null hypothesis might be stated: "The number of hours of sleep students get is unrelated to their grades" or "There is no correlation between hours of sleep and grades."

An experiment to test this hypothesis might involve collecting data on each student's average hours of sleep and their grades. If students who get eight hours of sleep generally do better than students who get four or 10 hours of sleep, the null hypothesis might be rejected.

But the alternative hypothesis is harder to propose and test. The most general statement would be: "The amount of sleep students get affects their grades." The hypothesis might also be stated as "If you get more sleep, your grades will improve" or "Students who get nine hours of sleep have better grades than those who get more or less sleep."

In an experiment, you can collect the same data, but the statistical analysis is less likely to support the alternative hypothesis with a high level of confidence.

Usually, a scientist starts out with the null hypothesis. From there, it may be possible to propose and test an alternative hypothesis, to narrow down the relationship between the variables.
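For the sleep-and-grades example, the null hypothesis "there is no correlation between hours of sleep and grades" can be checked by computing a correlation coefficient from the collected data. The numbers below are invented for illustration, and a real analysis would also compute a p-value before rejecting the null:

```python
from math import sqrt

# Hypothetical records: each student's average nightly hours of sleep
# and their grade (percent) -- illustrative data only.
sleep = [4.0, 5.5, 6.0, 7.0, 8.0, 8.5, 9.0, 10.0]
grades = [68, 72, 74, 80, 85, 84, 88, 82]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(sleep, grades)
print(f"r = {r:.2f}")
```

An r near 0 is consistent with the null hypothesis; an r far from 0 is evidence against it. The general alternative ("the amount of sleep affects grades") corresponds to a two-tailed test, while a directional version ("more sleep improves grades") corresponds to a one-tailed test.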

Example of a Hypothesis

Examples of a hypothesis include:

  • If you drop a rock and a feather, (then) they will fall at the same rate.
  • Plants need sunlight in order to live. (if sunlight, then life)
  • Eating sugar gives you energy. (if sugar, then energy)


Biology LibreTexts

1.1: Scientific Investigation


Chances are you've heard of the scientific method. But what exactly is the scientific method?

Is it a precise and exact way that all science must be done? Or is it a series of steps that most scientists generally follow, but may be modified for the benefit of an individual investigation?

The Scientific Method

"We also discovered that science is cool and fun because you get to do stuff that no one has ever done before," from "Blackawton bees," an article published by 8- to 10-year-old students in Biology Letters (2010): http://rsbl.royalsocietypublishing.org/content/early/2010/12/18/rsbl.2010.1056.abstract

There are basic methods of gaining knowledge that are common to all of science. At the heart of science is the scientific investigation, which is done by following the scientific method. A scientific investigation is a plan for asking questions and testing possible answers. It generally follows the steps listed in the figure below. See http://www.youtube.com/watch?v=KZaCy5Z87FA for an overview of the scientific method.

Steps of a Scientific Investigation

A scientific investigation typically has these steps, though scientists often develop their own steps to follow. Shown here is a simplified version of how a scientific investigation is done.

Making Observations

A scientific investigation typically begins with observations. You make observations all the time. Let's say you take a walk in the woods and observe a moth, like the one in the figure below, resting on a tree trunk. You notice that the moth has spots on its wings that look like eyes. You think the eye spots make the moth look like the face of an owl.


Figure 2: Marbled emperor moth (Heniocha dyops) in Botswana. (CC BY-SA 4.0; Charlesjsharp).

Does this moth remind you of an owl?

Asking a Question

Observations often lead to questions. For example, you might ask yourself why the moth has eye spots that make it look like an owl’s face. What reason might there be for this observation?

Forming a Hypothesis

The next step in a scientific investigation is forming a hypothesis . A hypothesis is a possible answer to a scientific question, but it isn’t just any answer. A hypothesis must be based on scientific knowledge, and it must be logical. A hypothesis also must be falsifiable. In other words, it must be possible to make observations that would disprove the hypothesis if it really is false. Assume you know that some birds eat moths and that owls prey on other birds. From this knowledge, you reason that eye spots scare away birds that might eat the moth. This is your hypothesis.

Testing the Hypothesis

To test a hypothesis, you first need to make a prediction based on the hypothesis. A prediction is a statement that tells what will happen under certain conditions. It can be expressed in the form: If A occurs, then B will happen. Based on your hypothesis, you might make this prediction: If a moth has eye spots on its wings, then birds will avoid eating it.

Next, you must gather evidence to test your prediction. Evidence is any type of data that may either agree or disagree with a prediction, so it may either support or disprove a hypothesis. Evidence may be gathered by an experiment . Assume that you gather evidence by making more observations of moths with eye spots. Perhaps you observe that birds really do avoid eating moths with eye spots. This evidence agrees with your prediction.

Drawing Conclusions

Evidence that agrees with your prediction supports your hypothesis. Does such evidence prove that your hypothesis is true? No; a hypothesis cannot be proven conclusively to be true. This is because you can never examine all of the possible evidence, and someday evidence might be found that disproves the hypothesis. Nonetheless, the more evidence that supports a hypothesis, the more likely the hypothesis is to be true.

Communicating Results

The last step in a scientific investigation is communicating what you have learned with others. This is a very important step because it allows others to test your hypothesis. If other researchers get the same results as yours, they add support to the hypothesis. However, if they get different results, they may disprove the hypothesis.

When scientists share their results, they should describe their methods and point out any possible problems with the investigation. For example, while you were observing moths, perhaps your presence scared birds away. This introduces an error into your investigation. You got the results you predicted (the birds avoided the moths while you were observing them), but not for the reason you hypothesized. Other researchers might be able to think of ways to avoid this error in future studies.

The Scientific Method Made Easy explains the scientific method: http://www.youtube.com/watch?v=zcavPAFiG14 (9:55).

As you view The Scientific Method Made Easy, focus on these concepts: the relationship between evidence, conclusions and theories; the "ground rules" of scientific research; the steps in a scientific procedure; the meaning of "replication of results"; the meaning of "falsifiable"; and the outcome when the scientific method is not followed.

Discovering the Scientific Method

A summary video of the scientific method, using the identification of DNA's structure as an example, is shown in this video by MIT students: https://www.youtube.com/watch?v=5eDNgeEUtMg

Why I do Science

Dan Costa, Ph.D., is a professor of biology at the University of California, Santa Cruz, and has been studying marine life for well over 40 years. He is a leader in using satellite tags, time and depth recorders, and other sophisticated electronic tags to gather information about the amazing depths to which elephant seals dive, their migration routes, and how they use oceanographic features to hunt for prey as far away as the International Date Line and Alaska's Aleutian Islands. In the following KQED video, Dr. Costa discusses why he is a scientist: http://science.kqed.org/quest/video/why-i-do-science-dan-costa/

  • At the heart of science is the scientific investigation, which is done by following the scientific method. A scientific investigation is a plan for asking questions and testing possible answers.
  • A scientific investigation typically begins with observations. Observations often lead to questions.
  • A hypothesis is a possible logical answer to a scientific question, based on scientific knowledge.
  • A prediction is a statement that tells what will happen under certain conditions.
  • Evidence is any type of data that may either agree or disagree with a prediction, so it may either support or disprove a hypothesis. Conclusions may be formed from evidence.
  • The last step in a scientific investigation is the communication of results with others.

Explore More I

Use this resource to answer the questions that follow.

  • Steps of the Scientific Method at http://www.sciencebuddies.org/science-fair-projects/project_scientific_method.shtml#overviewofthescientificmethod .
  • Describe what it means to "Ask a Question."
  • Describe what it means to "Construct a Hypothesis."
  • How does a scientist conduct a fair test?
  • What does a scientist do if the hypothesis is not supported?

Explore More II

  • SciMeth Matching at http://www.studystack.com/matching-2497 .
  • Outline the steps of a scientific investigation.
  • What is a scientific hypothesis? What characteristics must a hypothesis have to be useful in science?
  • Give an example of a scientific question that could be investigated with an experiment. Then give an example of a question that could not be investigated.
  • Can a hypothesis be proven true? Why or why not?
  • Why do scientists communicate their results?

How to Write a Great Hypothesis

Hypothesis Definition, Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Amy Morin, LCSW, is a psychotherapist and international bestselling author. Her books, including "13 Things Mentally Strong People Don't Do," have been translated into more than 40 languages. Her TEDx talk,  "The Secret of Becoming Mentally Strong," is one of the most viewed talks of all time.



A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.

Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

At a Glance

A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method , whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question, which is then explored through background research. Only at that point do researchers begin to develop a testable hypothesis.

Unless you are creating an exploratory study, your hypothesis should always explain what you  expect  to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of a folk adage that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the journal articles you read. Many authors will suggest questions that still need to be explored.

How to Formulate a Good Hypothesis

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method, falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse falsifiability with the idea that something is false, which is not the case. Falsifiability means that if something were false, it would be possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.
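As a toy illustration (not a research procedure), a hypothesis can be modeled as a predicate over possible observations; it is falsifiable when at least one conceivable observation would make it false. The temperature example below is invented:

```python
# Toy model of falsifiability: a hypothesis is a predicate over possible
# observations. It is falsifiable if some possible observation would
# make it false. (Purely illustrative.)

def is_falsifiable(hypothesis, possible_observations):
    return any(not hypothesis(obs) for obs in possible_observations)

observations = range(0, 40)  # possible measured temperatures, in whole degrees C

def never_hot(temp):
    return temp < 30         # "it is never 30 C or hotter" -- refutable

def always_true(temp):
    return True              # compatible with every observation -- unfalsifiable

print(is_falsifiable(never_hot, observations))    # True
print(is_falsifiable(always_true, observations))  # False
```

The unfalsifiable predicate mirrors the pseudoscience hallmark above: no observation could ever count against it.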

The Importance of Operational Definitions

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.

For example, a researcher might operationally define the variable "test anxiety" as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs, as measured by time.

These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.

Replicability

One of the basic principles of any type of scientific research is that the results must be replicable.

Replication means repeating an experiment in the same way to see whether it produces the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.
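For computational studies, a small Python sketch shows one ingredient of replicability: fixing a random seed so a simulated "experiment" can be repeated exactly. The experiment here (five die rolls) is invented for illustration:

```python
import random

# Fixing the random seed makes a simulated "experiment" exactly
# repeatable by another researcher who runs the same code.

def run_experiment(seed):
    rng = random.Random(seed)          # private, deterministic generator
    return [rng.randint(1, 6) for _ in range(5)]  # five simulated die rolls

print(run_experiment(42) == run_experiment(42))  # True: same seed, same data
```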

Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis : This type of hypothesis suggests there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis : This type suggests a relationship between three or more variables, such as two independent variables and one dependent variable.
  • Null hypothesis : This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis : This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis : This hypothesis uses statistical analysis to evaluate a representative population sample and then generalizes the findings to the larger group.
  • Logical hypothesis : This hypothesis assumes a relationship between variables without collecting data or evidence.

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the dependent variable if you change the independent variable.

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."
  • "Children who receive a new reading intervention will have higher reading scores than students who do not receive the intervention."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "There is no difference in anxiety levels between people who take St. John's wort supplements and those who do not."
  • "There is no difference in scores on a memory recall task between children and adults."
  • "There is no difference in aggression levels between children who play first-person shooter games and those who do not."

Examples of an alternative hypothesis:

  • "People who take St. John's wort supplements will have less anxiety than those who do not."
  • "Adults will perform better on a memory task than children."
  • "Children who play first-person shooter games will show higher levels of aggression than children who do not." 
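To make the null/alternative distinction concrete, here is a hedged Python sketch that computes Welch's t statistic for the memory-task example above. The scores are invented, and a real analysis would also derive a p-value from the t distribution before deciding whether to reject the null hypothesis:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(variance(sample_a) / na + variance(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Invented memory-task scores, for illustration only:
adults   = [14, 16, 15, 17, 16, 18, 15, 16]
children = [11, 12, 13, 10, 12, 11, 13, 12]

t = welch_t(adults, children)
print(f"t = {t:.2f}")
# A large positive t favors the alternative hypothesis ("adults score
# higher"); a t near zero would be consistent with the null hypothesis.
```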

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research methods such as case studies, naturalistic observations, and surveys are often used when conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a correlational study can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.

Experimental Research Methods

Experimental methods are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.
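A short Python sketch illustrates the limits of correlational evidence: it computes a Pearson correlation from invented survey data, which describes a relationship but cannot establish cause:

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Invented survey data: hours studied vs. exam score.
hours  = [1, 2, 3, 4, 5, 6]
scores = [52, 55, 61, 64, 70, 74]

r = pearson_r(hours, scores)
print(f"r = {r:.3f}")
# A strong positive r describes a relationship; only an experiment that
# manipulates study time could show whether the relationship is causal.
```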

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.

Thompson WH, Skau S. On the scope of scientific hypotheses. R Soc Open Sci. 2023;10(8):230607. doi:10.1098/rsos.230607

Taran S, Adhikari NKJ, Fan E. Falsifiability in medicine: what clinicians can learn from Karl Popper. Intensive Care Med. 2021;47(9):1054-1056. doi:10.1007/s00134-021-06432-z

Eyler AA. Research Methods for Public Health. 1st ed. Springer Publishing Company; 2020. doi:10.1891/9780826182067.0004

Nosek BA, Errington TM. What is replication? PLoS Biol. 2020;18(3):e3000691. doi:10.1371/journal.pbio.3000691

Aggarwal R, Ranganathan P. Study designs: Part 2 - Descriptive studies. Perspect Clin Res. 2019;10(1):34-36. doi:10.4103/picr.PICR_154_18

Nevid J. Psychology: Concepts and Applications. Wadsworth; 2013.

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

1.2 The Scientific Methods

Section Learning Objectives

By the end of this section, you will be able to do the following:

  • Explain how the methods of science are used to make scientific discoveries
  • Define a scientific model and describe examples of physical and mathematical models used in physics
  • Compare and contrast hypothesis, theory, and law

Teacher Support

The learning objectives in this section will help your students master the following standards:

  • (A) know the definition of science and understand that it has limitations, as specified in subsection (b)(2) of this section;
  • (B) know that scientific hypotheses are tentative and testable statements that must be capable of being supported or not supported by observational evidence. Hypotheses of durable explanatory power which have been tested over a wide variety of conditions are incorporated into theories;
  • (C) know that scientific theories are based on natural and physical phenomena and are capable of being tested by multiple independent researchers. Unlike hypotheses, scientific theories are well-established and highly-reliable explanations, but may be subject to change as new areas of science and new technologies are developed;
  • (D) distinguish between scientific hypotheses and scientific theories.

Section Key Terms

[OL] Pre-assessment for this section could involve students sharing or writing down an anecdote about when they used the methods of science. Then, students could label their thought processes in their anecdote with the appropriate scientific methods. The class could also discuss their definitions of theory and law, both outside and within the context of science.

[OL] It should be noted and possibly mentioned that a scientist, as mentioned in this section, does not necessarily mean a trained scientist. It could be anyone using methods of science.

Scientific Methods

Scientists often plan and carry out investigations to answer questions about the universe around us. These investigations may lead to natural laws. Such laws are intrinsic to the universe, meaning that humans did not create them and cannot change them. We can only discover and understand them. Their discovery is a very human endeavor, with all the elements of mystery, imagination, struggle, triumph, and disappointment inherent in any creative effort. The cornerstone of discovering natural laws is observation. Science must describe the universe as it is, not as we imagine or wish it to be.

We all are curious to some extent. We look around, make generalizations, and try to understand what we see. For example, we look up and wonder whether one type of cloud signals an oncoming storm. As we become serious about exploring nature, we become more organized and formal in collecting and analyzing data. We attempt greater precision, perform controlled experiments (if we can), and write down ideas about how data may be organized. We then formulate models, theories, and laws based on the data we have collected, and communicate those results with others. This, in a nutshell, describes the scientific method that scientists employ to decide scientific issues on the basis of evidence from observation and experiment.

An investigation often begins with a scientist making an observation. The scientist observes a pattern or trend within the natural world. Observation may generate questions that the scientist wishes to answer. Next, the scientist may perform some research about the topic and devise a hypothesis. A hypothesis is a testable statement that describes how something in the natural world works. In essence, a hypothesis is an educated guess that explains something about an observation.

[OL] An educated guess is used throughout this section in describing a hypothesis to combat the tendency to think of a theory as an educated guess.

Scientists may test the hypothesis by performing an experiment. During an experiment, the scientist collects data that will help them learn about the phenomenon they are studying. Then the scientists analyze the results of the experiment (that is, the data), often using statistical, mathematical, and/or graphical methods. From the data analysis, they draw conclusions. They may conclude that their experiment either supports or rejects their hypothesis. If the hypothesis is supported, the scientist usually goes on to test another hypothesis related to the first. If their hypothesis is rejected, they will often then test a new and different hypothesis in their effort to learn more about whatever they are studying.

Scientific processes can be applied to many situations. Let’s say that you try to turn on your car, but it will not start. You have just made an observation! You ask yourself, "Why won’t my car start?" You can now use scientific processes to answer this question. First, you generate a hypothesis such as, "The car won’t start because it has no gasoline in the gas tank." To test this hypothesis, you put gasoline in the car and try to start it again. If the car starts, then your hypothesis is supported by the experiment. If the car does not start, then your hypothesis is rejected. You will then need to think up a new hypothesis to test such as, "My car won’t start because the fuel pump is broken." Hopefully, your investigations lead you to discover why the car won’t start and enable you to fix it.
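The troubleshooting steps above can be sketched as a simple hypothesis-test loop. The boolean flags are invented stand-ins for real inspections (looking in the tank, examining the pump):

```python
# Hypothesis-test loop for the car example. Each "experiment" is a check
# that either supports or rejects one hypothesis; rejection moves us on
# to the next candidate explanation.

tank_empty = False
fuel_pump_broken = True

hypotheses = [
    ("no gasoline in the tank", lambda: tank_empty),
    ("fuel pump is broken", lambda: fuel_pump_broken),
]

def diagnose(candidates):
    for name, experiment in candidates:
        if experiment():          # the "experiment" supports this hypothesis
            return name
    return "unknown cause"        # all rejected: form new hypotheses

print(diagnose(hypotheses))  # fuel pump is broken
```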

A model is a representation of something that is often too difficult (or impossible) to study directly. Models can take the form of physical models, equations, computer programs, or simulations—computer graphics/animations. Models are tools that are especially useful in modern physics because they let us visualize phenomena that we normally cannot observe with our senses, such as very small objects or objects that move at high speeds. For example, we can understand the structure of an atom using models, without seeing an atom with our own eyes. Although images of single atoms are now possible, these images are extremely difficult to achieve and are only possible due to the success of our models. The existence of these images is a consequence rather than a source of our understanding of atoms. Models are always approximate, so they are simpler to consider than the real situation; the more complete a model is, the more complicated it must be. Models put the intangible or the extremely complex into human terms that we can visualize, discuss, and hypothesize about.

Scientific models are constructed based on the results of previous experiments. Even so, models often describe a phenomenon only partially or in a few limited situations. Some phenomena are so complex that it may be impossible to model them in their entirety, even using computers. An example is the electron cloud model of the atom, in which electrons move around the atom's center in distinct clouds (Figure 1.12) that represent the likelihood of finding an electron in different places. This model helps us to visualize the structure of an atom. However, it does not show us exactly where an electron will be within its cloud at any one particular time.

As mentioned previously, physicists use a variety of models, including equations, physical models, and computer simulations. For example, three-dimensional models are commonly used in chemistry and physics to model molecules. Properties other than appearance or location are usually modeled using mathematics, where functions show how these properties relate to one another. Processes, such as the formation of a star or the planets, can also be modeled using computer simulations. Once a simulation is correctly programmed based on actual experimental data, the simulation can allow us to view processes that happened in the past or happen too quickly or slowly for us to observe directly. In addition, scientists can also run virtual experiments using computer-based models. In a model of planet formation, for example, the scientist could alter the amount or type of rocks present in space and see how it affects planet formation.

Scientists use models and experimental results to construct explanations of observations or design solutions to problems. For example, one way to make a car more fuel efficient is to reduce the friction or drag caused by air flowing around the moving car. This can be done by designing the body shape of the car to be more aerodynamic, such as by using rounded corners instead of sharp ones. Engineers can then construct physical models of the car body, place them in a wind tunnel, and examine the flow of air around the model. This can also be done mathematically in a computer simulation. The air flow pattern can be analyzed for regions of smooth air flow and for eddies that indicate drag. The model of the car body may have to be altered slightly to produce the smoothest pattern of air flow (i.e., the least drag). The pattern with the least drag may be the solution to increasing fuel efficiency of the car. This solution might then be incorporated into the car design.

Using Models and the Scientific Processes

Be sure to secure loose items before opening the window or door.

In this activity, you will learn about scientific models by making a model of how air flows through your classroom or a room in your house.

  • One room with at least one window or door that can be opened
  • Work with a group of four, as directed by your teacher. Close all of the windows and doors in the room you are working in. Your teacher may assign you a specific window or door to study.
  • Before opening any windows or doors, draw a to-scale diagram of your room. First, measure the length and width of your room using the tape measure. Then, transform the measurement using a scale that could fit on your paper, such as 5 centimeters = 1 meter.
  • Your teacher will assign you a specific window or door to study air flow. On your diagram, add arrows showing your hypothesis (before opening any windows or doors) of how air will flow through the room when your assigned window or door is opened. Use pencil so that you can easily make changes to your diagram.
  • On your diagram, mark four locations where you would like to test air flow in your room. To test for airflow, hold a strip of single ply tissue paper between the thumb and index finger. Note the direction that the paper moves when exposed to the airflow. Then, for each location, predict which way the paper will move if your air flow diagram is correct.
  • Now, each member of your group will stand in one of the four selected areas. Each member will test the airflow. Agree upon an approximate height at which everyone will hold their papers.
  • When your teacher tells you to, open your assigned window and/or door. Each person should note the direction that their paper points immediately after the window or door is opened. Record your results on your diagram.
  • Did the airflow test data support or refute the hypothetical model of air flow shown in your diagram? Why or why not? Correct your model based on your experimental evidence.
  • With your group, discuss how accurate your model is. What limitations did it have? Write down the limitations that your group agreed upon.
  • Yes, you could use your model to predict air flow through a new window. The earlier experiment of air flow would help you model the system more accurately.
  • Yes, you could use your model to predict air flow through a new window. The earlier experiment of air flow is not useful for modeling the new system.
  • No, you cannot model a system to predict the air flow through a new window. The earlier experiment of air flow would help you model the system more accurately.
  • No, you cannot model a system to predict the air flow through a new window. The earlier experiment of air flow is not useful for modeling the new system.

This Snap Lab! has students construct a model of how air flows in their classroom. Each group of four students will create a model of air flow in their classroom using a scale drawing of the room. Then, the groups will test the validity of their model by holding strips of tissue paper around the room and opening a window or door. By observing the strips of tissue paper, students will see how air actually flows through the room from a specific window or door. Students will then correct their model based on their experimental evidence. The following material list is given per group:

  • One room with at least one window or door that can be opened (An optimal configuration would be one window or door per group.)
  • Several pieces of construction paper (at least four per group)
  • Strips of single ply tissue paper
  • One tape measure (long enough to measure the dimensions of the room)
  • Group size can vary depending on the number of windows/doors available and the number of students in the class.
  • The room dimensions could be provided by the teacher. Also, students may need a brief introduction in how to make a drawing to scale.
  • This is another opportunity to discuss controlled experiments in terms of why the students should hold the strips of tissue paper at the same height and in the same way. One student could also serve as a control and stand far away from the window/door or in another area that will not receive air flow from the window/door.
  • You will probably need to coordinate this when multiple windows or doors are used. Only one window or door should be opened at a time for best results. Between openings, allow a short period (5 minutes) when all windows and doors are closed, if possible.

Answers to the Grasp Check will vary, but the air flow in the new window or door should be based on what the students observed in their experiment.

Scientific Laws and Theories

A scientific law is a description of a pattern in nature that is true in all circumstances that have been studied. That is, physical laws are meant to be universal, meaning that they apply throughout the known universe. Laws are often also concise, whereas theories are more complicated. A law can be expressed in the form of a single sentence or mathematical equation. For example, Newton's second law of motion, which relates the motion of an object to the force applied (F), the mass of the object (m), and the object's acceleration (a), is simply stated using the equation

F = ma
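As a minimal numeric illustration, Newton's second law F = ma can be evaluated directly (the values below are invented):

```python
# Newton's second law as a one-line model: the net force, in newtons,
# on a 2.0 kg object accelerating at 3.0 m/s^2. Values are invented.

def net_force(mass_kg, accel_m_s2):
    return mass_kg * accel_m_s2

print(net_force(2.0, 3.0))  # 6.0
```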

Scientific ideas and explanations that are true in many, but not all situations in the universe are usually called principles . An example is Pascal’s principle , which explains properties of liquids, but not solids or gases. However, the distinction between laws and principles is sometimes not carefully made in science.

A theory is an explanation for patterns in nature that is supported by much scientific evidence and verified multiple times by multiple researchers. While many people confuse theories with educated guesses or hypotheses, theories have withstood more rigorous testing and verification than hypotheses.

[OL] Explain to students that in informal, everyday English the word theory can be used to describe an idea that is possibly true but that has not been proven to be true. This use of the word theory often leads people to think that scientific theories are nothing more than educated guesses. This is not just a misconception among students, but among the general public as well.

As a closing idea about scientific processes, we want to point out that scientific laws and theories, even those that have been supported by experiments for centuries, can still be changed by new discoveries. This is especially true when new technologies emerge that allow us to observe things that were formerly unobservable. Imagine how viewing previously invisible objects with a microscope or viewing Earth for the first time from space may have instantly changed our scientific theories and laws! What discoveries still await us in the future? The constant retesting and perfecting of our scientific laws and theories allows our knowledge of nature to progress. For this reason, many scientists are reluctant to say that their studies prove anything. By saying support instead of prove , it keeps the door open for future discoveries, even if they won’t occur for centuries or even millennia.

[OL] With regard to scientists avoiding using the word prove , the general public knows that science has proven certain things such as that the heart pumps blood and the Earth is round. However, scientists should shy away from using prove because it is impossible to test every single instance and every set of conditions in a system to absolutely prove anything. Using support or similar terminology leaves the door open for further discovery.

Check Your Understanding

  • Models are simpler to analyze.
  • Models give more accurate results.
  • Models provide more reliable predictions.
  • Models do not require any computer calculations.
  • They are the same.
  • A hypothesis has been thoroughly tested and found to be true.
  • A hypothesis is a tentative assumption based on what is already known.
  • A hypothesis is a broad explanation firmly supported by evidence.
  • A scientific model is a representation of something that can be easily studied directly. It is useful for studying things that can be easily analyzed by humans.
  • A scientific model is a representation of something that is often too difficult to study directly. It is useful for studying a complex system or systems that humans cannot observe directly.
  • A scientific model is a representation of scientific equipment. It is useful for studying working principles of scientific equipment.
  • A scientific model is a representation of a laboratory where experiments are performed. It is useful for studying requirements needed inside the laboratory.
  • The hypothesis must be validated by scientific experiments.
  • The hypothesis must not include any physical quantity.
  • The hypothesis must be a short and concise statement.
  • The hypothesis must apply to all the situations in the universe.
  • A scientific theory is an explanation of natural phenomena that is supported by evidence.
  • A scientific theory is an explanation of natural phenomena without the support of evidence.
  • A scientific theory is an educated guess about the natural phenomena occurring in nature.
  • A scientific theory is an uneducated guess about natural phenomena occurring in nature.
  • A hypothesis is an explanation of the natural world with experimental support, while a scientific theory is an educated guess about a natural phenomenon.
  • A hypothesis is an educated guess about natural phenomenon, while a scientific theory is an explanation of natural world with experimental support.
  • A hypothesis is experimental evidence of a natural phenomenon, while a scientific theory is an explanation of the natural world with experimental support.
  • A hypothesis is an explanation of the natural world with experimental support, while a scientific theory is experimental evidence of a natural phenomenon.

Use the Check Your Understanding questions to assess students’ achievement of the section’s learning objectives. If students are struggling with a specific objective, the Check Your Understanding will help identify which objective and direct students to the relevant content.



Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute Texas Education Agency (TEA). The original material is available at: https://www.texasgateway.org/book/tea-physics . Changes were made to the original material, including updates to art, structure, and other content updates.

Access for free at https://openstax.org/books/physics/pages/1-introduction
  • Authors: Paul Peter Urone, Roger Hinrichs
  • Publisher/website: OpenStax
  • Book title: Physics
  • Publication date: Mar 26, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/physics/pages/1-introduction
  • Section URL: https://openstax.org/books/physics/pages/1-2-the-scientific-methods

© Jan 19, 2024 Texas Education Agency (TEA). The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Research Hypothesis In Psychology: Types, & Examples

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A research hypothesis (plural: hypotheses) is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method.

Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.

Some key points about hypotheses:

  • A hypothesis expresses an expected pattern or relationship. It connects the variables under investigation.
  • It is stated in clear, precise terms before any data collection or analysis occurs. This makes the hypothesis testable.
  • A hypothesis must be falsifiable. It should be possible, even if unlikely in practice, to collect data that disconfirms rather than supports the hypothesis.
  • Hypotheses guide research. Scientists design studies to explicitly evaluate hypotheses about how nature works.
  • For a hypothesis to be valid, it must be testable against empirical evidence. The evidence can then confirm or disprove the testable predictions.
  • Hypotheses are informed by background knowledge and observation, but go beyond what is already known to propose an explanation of how or why something occurs.
Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and integrating this to advance theory. They build on existing literature while providing new insight.

Types of Research Hypotheses

Alternative Hypothesis

The research hypothesis is often called the alternative or experimental hypothesis in experimental research.

It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.

The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).


An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.

It states that the results are not due to chance and are significant in supporting the theory being investigated.

The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.

Null Hypothesis

The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.

It states results are due to chance and are not significant in supporting the idea being investigated.

The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.

Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.

This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.
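This dual-hypothesis logic maps directly onto standard statistical tests. Below is a minimal sketch using SciPy's `ttest_ind`; the group names, sample sizes, and data are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented example data: outcome scores for a treatment and a control group.
treatment = rng.normal(loc=52, scale=10, size=40)
control = rng.normal(loc=48, scale=10, size=40)

# H0 (null): there is no difference between the group means.
# H1 (alternative): the group means differ.
t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # conventional significance threshold
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0 (data support H1)")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0")
```

Note that the test evaluates the null hypothesis: a small p-value means the data would be unlikely if H0 were true, which supports (but never proves) H1.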

Nondirectional Hypothesis

A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.

It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.

For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.

Directional Hypothesis

A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable, specifying the direction in which the change will take place (i.e., greater, smaller, less, more).

It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.

For example, “Exercise increases weight loss” is a directional hypothesis.
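The non-directional/directional distinction corresponds to two-tailed versus one-tailed statistical tests. A hedged sketch with invented weight-loss data (SciPy's `alternative` keyword, available in recent SciPy versions, is assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented data: weight loss (kg) for an exercise group and a control group.
exercise = rng.normal(loc=3.0, scale=1.5, size=30)
control = rng.normal(loc=2.0, scale=1.5, size=30)

# Non-directional (two-tailed): "there is a difference in weight loss".
_, p_two_tailed = stats.ttest_ind(exercise, control, alternative="two-sided")

# Directional (one-tailed): "exercise increases weight loss".
_, p_one_tailed = stats.ttest_ind(exercise, control, alternative="greater")

print(f"two-tailed p = {p_two_tailed:.3f}, one-tailed p = {p_one_tailed:.3f}")
```

When the observed effect lies in the predicted direction, the one-tailed p-value is half the two-tailed one, which is why a directional prediction should be justified by prior evidence rather than chosen after seeing the data.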


Falsifiability

The Falsification Principle, proposed by Karl Popper , is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.

Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.

It means that there should exist some potential evidence or experiment that could prove the proposition false.

No matter how many confirming instances exist for a theory, it takes only one counter-observation to falsify it. For example, the hypothesis that “all swans are white” can be falsified by observing a black swan.

For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.
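Popper's asymmetry between confirmation and refutation can be sketched in a toy program; the swan example follows the text, while the helper function is our illustrative invention:

```python
def is_falsified(hypothesis, observations):
    """A universal claim is falsified by any observation that violates it."""
    return any(not hypothesis(obs) for obs in observations)

# The universal claim "all swans are white".
def all_swans_are_white(swan):
    return swan == "white"

# A long run of confirmations does not prove the claim...
assert not is_falsified(all_swans_are_white, ["white"] * 10_000)

# ...but a single black swan falsifies it.
assert is_falsified(all_swans_are_white, ["white"] * 10_000 + ["black"])
print("one counter-observation falsified the hypothesis")
```

The asymmetry is visible in the code: no number of `"white"` entries can make the claim true for all possible swans, but one `"black"` entry settles the matter.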

Can a Hypothesis be Proven?

Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.

All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.

In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
  • Disproving a hypothesis is definitive. Solid disconfirmatory evidence will falsify a hypothesis and require altering or discarding it based on the evidence.
  • However, confirming evidence is always open to revision. Other explanations may account for the same results, and additional or contradictory evidence may emerge over time.

We can never prove the alternative hypothesis with 100% certainty. Instead, we see whether we can disprove, or reject, the null hypothesis.

If we reject the null hypothesis, this does not show that our alternative hypothesis is correct, but it does lend support to the alternative/experimental hypothesis.

Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.
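One way to see why rejecting the null supports but never proves the alternative is to simulate many experiments in which the null hypothesis is true by construction: roughly alpha of them will still reject it. A sketch with invented numbers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_experiments = 2000

# Both groups are drawn from the SAME distribution, so H0 is true by design.
false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, size=20)
    b = rng.normal(0.0, 1.0, size=20)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

rate = false_positives / n_experiments
print(f"false positive rate = {rate:.3f} (expected about {alpha})")
```

Because a rejection can always be one of these chance results, a single significant study is evidence, not proof.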

How to Write a Hypothesis

  • Identify variables . The independent variable is what the researcher manipulates; the dependent variable is the measured outcome.
  • Operationalize the variables being investigated . Operationalization means making the variables physically measurable or testable, e.g. if you are studying aggression, you might count the number of punches given by participants.
  • Decide on a direction for your prediction . If there is evidence in the literature to support a specific effect of the independent variable on the dependent variable, write a directional (one-tailed) hypothesis. If there are limited or ambiguous findings in the literature regarding the effect of the independent variable on the dependent variable, write a non-directional (two-tailed) hypothesis.
  • Make it Testable : Ensure your hypothesis can be tested through experimentation or observation. It should be possible to prove it false (principle of falsifiability).
  • Clear & concise language . A strong hypothesis is concise (typically one to two sentences long), and formulated using clear and straightforward language, ensuring it’s easily understood and testable.

Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV=Day, DV= Standard of work).

Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with the following:

  • The alternative hypothesis states that students will recall significantly more information on a Monday morning than on a Friday afternoon.
  • The null hypothesis states that there will be no significant difference in the amount recalled on a Monday morning compared to a Friday afternoon. Any difference will be due to chance or confounding factors.
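Because the same students are measured in both conditions, this design calls for a paired (repeated-measures) test. A sketch with invented recall scores (SciPy's `ttest_rel` and its `alternative` keyword are assumed to be available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Invented data: items recalled (out of 20) by the same 25 students
# after a Monday-morning lesson and a Friday-afternoon lesson.
monday = rng.normal(loc=14, scale=3, size=25)
friday = rng.normal(loc=12, scale=3, size=25)

# Alternative (directional): recall is greater on Monday than on Friday.
# Null: no difference; any observed difference is due to chance.
t_stat, p_value = stats.ttest_rel(monday, friday, alternative="greater")

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```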

More Examples

  • Memory : Participants exposed to classical music during study sessions will recall more items from a list than those who studied in silence.
  • Social Psychology : Individuals who frequently engage in social media use will report higher levels of perceived social isolation compared to those who use it infrequently.
  • Developmental Psychology : Children who engage in regular imaginative play have better problem-solving skills than those who don’t.
  • Clinical Psychology : Cognitive-behavioral therapy will be more effective in reducing symptoms of anxiety over a 6-month period compared to traditional talk therapy.
  • Cognitive Psychology : Individuals who multitask between various electronic devices will have shorter attention spans on focused tasks than those who single-task.
  • Health Psychology : Patients who practice mindfulness meditation will experience lower levels of chronic pain compared to those who don’t meditate.
  • Organizational Psychology : Employees in open-plan offices will report higher levels of stress than those in private offices.
  • Behavioral Psychology : Rats rewarded with food after pressing a lever will press it more frequently than rats who receive no reward.


1 The ideas in the previous two paragraphs are from the fascinating course "Philosophy of Science" by Jeffery L. Kasser, published by The Teaching Company


  • R Soc Open Sci
  • v.10(8); 2023 Aug
  • PMC10465209

On the scope of scientific hypotheses

William Hedley Thompson

1 Department of Applied Information Technology, University of Gothenburg, Gothenburg, Sweden

2 Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden

3 Department of Pedagogical, Curricular and Professional Studies, Faculty of Education, University of Gothenburg, Gothenburg, Sweden

4 Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden

Associated Data

This article has no additional data.

Hypotheses are frequently the starting point when undertaking the empirical portion of the scientific process. They state something that the scientific process will attempt to evaluate, corroborate, verify or falsify. Their purpose is to guide the types of data we collect, analyses we conduct, and inferences we would like to make. Over the last decade, metascience has advocated for hypotheses being in preregistrations or registered reports, but how to formulate these hypotheses has received less attention. Here, we argue that hypotheses can vary in specificity along at least three independent dimensions: the relationship, the variables, and the pipeline. Together, these dimensions form the scope of the hypothesis. We demonstrate how narrowing the scope of a hypothesis in any of these three ways reduces the hypothesis space and that this reduction is a type of novelty. Finally, we discuss how this formulation of hypotheses can guide researchers to formulate the appropriate scope for their hypotheses, aiming for neither too broad nor too narrow a scope. This framework can guide hypothesis-makers when formulating their hypotheses by helping clarify what is being tested, chaining results to previous known findings, and demarcating what is explicitly tested in the hypothesis.

1.  Introduction

Hypotheses are an important part of the scientific process. However, surprisingly little attention is given to hypothesis-making compared to other skills in the scientist's skillset within current discussions aimed at improving scientific practice. Perhaps this lack of emphasis is because the formulation of the hypothesis is often considered less relevant, as it is ultimately the scientific process that will eventually decide the veracity of the hypothesis. However, there are more hypotheses than scientific studies, as selection occurs at various stages, from funders' priorities to researchers' interests. So which hypotheses are worthwhile to pursue? Which hypotheses are the most effective or pragmatic for extending or enhancing our collective knowledge? We consider the answer to these questions by discussing how broad or narrow a hypothesis can or should be (i.e. its scope).

We begin by considering that the two statements below are both hypotheses and vary in scope:

  • H 1 : For every 1 mg decrease of x , y will increase by, on average, 2.5 points.
  • H 2 : Changes in x 1 or x 2 correlate with y levels in some way.

Clearly, the specificity of the two hypotheses is very different. H 1 states a precise relationship between two variables ( x and y ), while H 2 specifies a vaguer relationship and does not specify which variables will show the relationship. However, they are both still hypotheses about how x and y relate to each other. This claim of various degrees of the broadness of hypotheses is, in and of itself, not novel. In Epistemetrics, Rescher [ 1 ], while drawing upon the physicist Duhem's work, develops what he calls Duhem's Law. This law considers a trade-off between certainty or precision in statements about physics when evaluating them. Duhem's Law states that narrower hypotheses, such as H 1 above, are more precise but less likely to be evaluated as true than broader ones, such as H 2 above. Similarly, Popper, when discussing theories, describes the reverse relationship between content and probability of a theory being true, i.e. with increased content, there is a decrease in probability and vice versa [ 2 ]. Here we will argue that both H 1 and H 2 are valid scientific hypotheses and that their appropriateness depends on the scientific question being asked.
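The difference in scope between H1 and H2 can be made concrete: the narrow hypothesis constrains the slope to a specific value, while the broad one accepts any non-negligible association. A toy sketch with simulated data (the tolerance of 0.1 and the correlation threshold are arbitrary choices for illustration, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data in which y decreases by 2.5 points per unit of x, plus noise.
x = rng.uniform(0, 10, size=200)
y = -2.5 * x + rng.normal(0, 0.5, size=200)

slope, intercept = np.polyfit(x, y, deg=1)
r = np.corrcoef(x, y)[0, 1]

# H1 (narrow): for every 1 mg decrease in x, y increases by ~2.5 points,
# i.e. the slope of y on x is close to -2.5.
h1_supported = abs(slope - (-2.5)) < 0.1

# H2 (broad): x and y correlate in some way.
h2_supported = abs(r) > 0.1

# If the narrow H1 holds, the broad H2 necessarily holds as well.
print(f"slope = {slope:.2f}, r = {r:.2f}: H1 {h1_supported}, H2 {h2_supported}")
```

The asymmetry in the code mirrors Duhem's Law: the condition for H1 implies the condition for H2, but not the reverse.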

The question of hypothesis scope is relevant since there are multiple recent prescriptions to improve science, ranging from topics about preregistrations [ 3 ], registered reports [ 4 ], open science [ 5 ], standardization [ 6 ], generalizability [ 7 ], multiverse analyses [ 8 ], dataset reuse [ 9 ] and general questionable research practices [ 10 ]. Within each of these issues, there are arguments to demarcate between confirmatory and exploratory research or normative prescriptions about how science should be done (e.g. science is ‘bad’ or ‘worse’ if code/data are not open). Despite all these discussions and improvements, much can still be done to improve hypothesis-making. A recent evaluation of preregistered studies in psychology found that over half excluded the preregistered hypotheses [ 11 ]. Further, evaluations of hypotheses in ecology showed that most hypotheses are not explicitly stated [ 12 , 13 ]. Other research has shown that obfuscated hypotheses are more prevalent in retracted research [ 14 ]. There have been recommendations for simpler hypotheses in psychology to avoid misinterpretations and misspecifications [ 15 ]. Finally, several evaluations of preregistration practices have found that a significant proportion of articles do not abide by their stated hypothesis or add additional hypotheses [ 11 , 16 – 18 ]. In sum, while multiple efforts exist to improve scientific practice, our hypothesis-making could improve.

One of our intentions is to provide hypothesis-makers with tools to assist them when making hypotheses. We consider this useful and timely as, with preregistrations becoming more frequent, the hypothesis-making process is now open and explicit. However, preregistrations are difficult to write [ 19 ], and preregistered articles can change or omit hypotheses [ 11 ] or leave them vague, with certain degrees of freedom hard to control for [ 16 – 18 ]. One suggestion has been to do less confirmatory research [ 7 , 20 ]. While we agree that all research does not need to be confirmatory, we also believe that not all preregistrations of confirmatory work must test narrow hypotheses. We think there is a possible point of confusion that the specificity in preregistrations, where researcher degrees of freedom should be stated, necessitates the requirement that the hypothesis be narrow. Our belief that this confusion is occurring is supported by the study by Akker et al . [ 11 ], which found that 18% of published psychology studies changed their preregistered hypothesis (e.g. its direction), and 60% of studies selectively reported hypotheses in some way. It is along these lines that we feel the framework below can be useful to help formulate appropriate hypotheses to mitigate these identified issues.

We consider this article to be a discussion of the researcher's different choices when formulating hypotheses and to help link hypotheses over time. Here we aim to deconstruct what aspects there are in the hypothesis about their specificity. Throughout this article, we intend to be neutral to many different philosophies of science relating to the scientific method (i.e. how one determines the veracity of a hypothesis). Our idea of neutrality here is that whether a researcher adheres to falsification, verification, pragmatism, or some other philosophy of science, then this framework can be used when formulating hypotheses. 1

The framework this article advocates for is that there are (at least) three dimensions that hypotheses vary along regarding their narrowness and broadness: the selection of relationships, variables, and pipelines. We believe this discussion is fruitful for the current debate regarding normative practices as some positions make, sometimes implicit, commitments about which set of hypotheses the scientific community ought to consider good or permissible. We proceed by outlining a working definition of ‘scientific hypothesis' and then discuss how it relates to theory. Then, we justify how hypotheses can vary along the three dimensions. Using this framework, we then discuss the scopes in relation to appropriate hypothesis-making and an argument about what constitutes a scientifically novel hypothesis. We end the article with practical advice for researchers who wish to use this framework.

2.  The scientific hypothesis

In this section, we will describe a functional and descriptive role regarding how scientists use hypotheses. Jeong & Kwon [ 21 ] investigated and summarized the different uses the concept of ‘hypothesis’ had in philosophical and scientific texts. They identified five meanings: assumption, tentative explanation, tentative cause, tentative law, and prediction. Jeong & Kwon [ 21 ] further found that researchers in science and philosophy used all the different definitions of hypotheses, although there was some variance in frequency between fields. Here we see, descriptively , that the way researchers use the word ‘hypothesis’ is diverse and has a wide range in specificity and function. However, whichever meaning a hypothesis has, it aims to be true, adequate, accurate or useful in some way.

Not all hypotheses are ‘scientific hypotheses'. For example, consider the detective trying to solve a crime and hypothesizing about the perpetrator. Such a hypothesis still aims to be true and is a tentative explanation but differs from the scientific hypothesis. The difference is that the researcher, unlike the detective, evaluates the hypothesis with the scientific method and submits the work for evaluation by the scientific community. Thus a scientific hypothesis entails a commitment to evaluate the statement with the scientific process . 2 Additionally, other types of hypotheses can exist. As discussed in more detail below, scientific theories generate not only scientific hypotheses but also contain auxiliary hypotheses. The latter refers to additional assumptions considered to be true and not explicitly evaluated. 3

Next, the scientific hypothesis is generally made antecedent to the evaluation. This does not necessitate that the event (e.g. in archaeology) or the data collection (e.g. with open data reuse) must occur after the hypothesis is made, but that the evaluation of the hypothesis cannot happen before its formulation. This claim does not deny the utility of exploratory testing of post hoc hypotheses (see [ 25 ]). Previous results and exploration can generate new hypotheses (e.g. via abduction [ 22 , 26 – 28 ], which is the process of creating hypotheses from evidence), and this is an important part of science [ 29 – 32 ]; crucially, while these hypotheses can be the conclusion of exploratory work, they have yet to be evaluated (by whichever method of choice). Hence, they still conform to the antecedency requirement. A further justification of antecedency is that formulating a post hoc hypothesis and treating it as already evaluated is considered a questionable research practice (known as ‘hypothesizing after the results are known’ or HARKing [ 33 ]). 4

While there is a varying range of specificity, is the hypothesis a critical part of all scientific work, or is it reserved for some subset of investigations? There are different opinions regarding this. Glass and Hall, for example, argue that the term only refers to falsifiable research, and model-based research uses verification [ 36 ]. However, this opinion does not appear to be the consensus. Osimo and Rumiati argue that any model based on or using data is never wholly free from hypotheses, as hypotheses can, even implicitly, infiltrate the data collection [ 37 ]. For our definition, we will consider hypotheses that can be involved in different forms of scientific evaluation (i.e. not just falsification), but we do not exclude the possibility of hypothesis-free scientific work.

Finally, there is a debate about whether theories or hypotheses should be linguistic or formal [ 38 – 40 ]. Neither side in this debate argues that verbal or formal hypotheses are not possible, but instead, they discuss normative practices. Thus, for our definition, both linguistic and formal hypotheses are considered viable.

Considering the above discussion, let us summarize the scientific process and the scientific hypothesis: a hypothesis guides what type of data are sampled and what analysis will be done. With the new observations, evidence is analysed or quantified in some way (often using inferential statistics) to judge the hypothesis's truth value, utility, credibility, or likelihood. The following working definition captures the above:

  • Scientific hypothesis : an implicit or explicit statement that can be verbal or formal. The hypothesis makes a statement about some natural phenomena (via an assumption, explanation, cause, law or prediction). The scientific hypothesis is made antecedent to performing a scientific process where there is a commitment to evaluate it.

For simplicity, we will only use the term ‘hypothesis’ for ‘scientific hypothesis' to refer to the above definition for the rest of the article except when it is necessary to distinguish between other types of hypotheses. Finally, this definition could further be restrained in multiple ways (e.g. only explicit hypotheses are allowed, or assumptions are never hypotheses). However, if the definition is more (or less) restrictive, it has little implication for the argument below.

3.  The hypothesis, theory and auxiliary assumptions

While we have a definition of the scientific hypothesis, we have yet to link it with how it relates to scientific theory, where there is frequently some interconnection (i.e. a hypothesis tests a scientific theory). Generally, for this paper, we believe our argument applies regardless of how scientific theory is defined. Further, some research lacks theory, sometimes called convenience or atheoretical studies [ 41 ]. Here a hypothesis can be made without a wider theory—and our framework fits here too. However, since many consider hypotheses to be defined or deducible from scientific theory, there is an important connection between the two. Therefore, we will briefly clarify how hypotheses relate to common formulations of scientific theory.

A scientific theory is generally a set of axioms or statements about some objects, properties and their relations relating to some phenomena. Hypotheses can often be deduced from the theory. Additionally, a theory has boundary conditions. The boundary conditions specify the domain of the theory stating under what conditions it applies (e.g. all things with a central neural system, humans, women, university teachers) [ 42 ]. Boundary conditions of a theory will consequently limit all hypotheses deduced from the theory. For example, with a boundary condition ‘applies to all humans’, then the subsequent hypotheses deduced from the theory are limited to being about humans. While this limitation of the hypothesis by the theory's boundary condition exists, all the considerations about a hypothesis scope detailed below still apply within the boundary conditions. Finally, it is also possible (depending on the definition of scientific theory) for a hypothesis to test the same theory under different boundary conditions. 5

The final consideration relating scientific theory to scientific hypotheses is auxiliary hypotheses. These hypotheses are theories or assumptions that are considered true simultaneously with the theory. Most philosophies of science, from Popper's background knowledge [ 24 ] and Kuhn's paradigms during normal science [ 44 ] to Lakatos' protective belt [ 45 ], have their own version of this auxiliary or background information that is required for the hypothesis to test the theory. For example, Meehl [ 46 ] argues that auxiliary theories/assumptions are needed to go from theoretical terms to empirical terms (e.g. neural activity can be inferred from blood oxygenation in fMRI research, or reaction time can serve as an indicator of cognition), along with auxiliary theories about instruments (e.g. the experimental apparatus works as intended) and more (see also Other approaches to categorizing hypotheses below). As noted in the previous section, there is a difference between these auxiliary hypotheses, regardless of their definition, and the scientific hypothesis defined above. Recall that our definition of the scientific hypothesis included a commitment to evaluate it. There are no such commitments with auxiliary hypotheses; rather, they are assumed to be correct so that the theory can be tested adequately. This distinction proves to be important as auxiliary hypotheses are still part of testing a theory but are separate from the hypothesis to be evaluated (discussed in more detail below).

4.  The scope of hypotheses

In the scientific hypothesis section, we defined the hypothesis and discussed how it relates back to the theory. In this section, we want to defend two claims about hypotheses:

  • (A1) Hypotheses can have different scopes . Some hypotheses are narrower in their formulation, and some are broader.
  • (A2) The scope of hypotheses can vary along three dimensions relating to relationship selection , variable selection , and pipeline selection .

A1 may seem obvious, but it is important to establish what is meant by narrower and broader scope. When a hypothesis is very narrow, it is specific. For example, it might be specific about the type of relationship between some variables. In figure 1 , we make four different statements regarding the relationship between x and y . The narrowest hypothesis here states ‘there is a positive linear relationship with a magnitude of 0.5 between x and y ’ ( figure 1 a ), and the broadest hypothesis states ‘there is a relationship between x and y ’ ( figure 1 d ). Note that many other hypotheses are possible that are not included in this example (such as there being no relationship).

[Figure 1. Examples of narrow and broad hypotheses between x and y . Circles indicate a set of possible relationships with varying slopes that can pivot or bend.]

We see that the narrowest of these hypotheses claims a type of relationship (linear), a direction of the relationship (positive) and a magnitude of the relationship (0.5). As the hypothesis becomes broader, the specific magnitude disappears ( figure 1 b ), the relationship has additional options than just being linear ( figure 1 c ), and finally, the direction of the relationship disappears. Crucially, all the examples in figure 1 can meet the above definition of scientific hypotheses. They are all statements that can be evaluated with the same scientific method. There is a difference between these statements, though— they differ in the scope of the hypothesis . Here we have justified A1.

Within this framework, when we discuss whether a hypothesis is narrower or broader in scope, this is a relation between two hypotheses where one is a subset of the other. This means that if H 1 is narrower than H 2 , and if H 1 is true, then H 2 is also true. This can be seen in figure 1 a–d . Suppose figure 1 a , the narrowest of all the hypotheses, is true. In that case, all the other broader statements are also true (i.e. a linear correlation of 0.5 necessarily entails that there is also a positive linear correlation, a linear correlation, and some relationship). While this property may appear trivial, it entails that it is only possible to directly compare the hypothesis scope between two hypotheses (i.e. their broadness or narrowness) where one is the subset of the other. 6

4.1. Sets, disjunctions and conjunctions of elements

The above restraint defines the scope as relations between sets. This property helps formalize the framework of this article. Below, when we discuss the different dimensions that can impact the scope, these become represented as a set. Each set contains elements. Each element is a permissible situation that allows the hypothesis to be accepted. We denote elements as lower case with italics (e.g. e 1 , e 2 , e 3 ) and sets as bold upper case (e.g. S ). Each of the three different dimensions discussed below will be formalized as sets, while the total number of elements specifies their scope.

Let us reconsider the above restraint about comparing hypotheses as narrower or broader. This can be formally shown if:

  • e 1 , e 2 , e 3 are elements of S 1 ; and
  • e 1 and e 2 are elements of S 2 ,

then S 2 is narrower than S 1 .
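This subset relation is directly expressible with ordinary sets. A minimal sketch (the element and set names follow the text):

```python
# Scopes as sets of permissible situations (elements).
S1 = {"e1", "e2", "e3"}
S2 = {"e1", "e2"}

def is_narrower(a, b):
    """Scope `a` is narrower than scope `b` when `a` is a proper subset of `b`."""
    return a < b  # Python's `<` on sets tests for a proper subset

print(is_narrower(S2, S1))  # True: S2 is narrower than S1
print(is_narrower(S1, S2))  # False: the relation is not symmetric
```

Using the proper-subset operator also captures the earlier restraint that two scopes are only directly comparable when one is a subset of the other; for disjoint or overlapping sets, `is_narrower` is false in both directions.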

Each element represents a specific proposition that, if corroborated, would support the hypothesis. Returning to figure 1a,b, the following statement applies to both:

  • ‘There is a positive linear relationship between x and y with a slope of 0.5’.

Whereas the following two apply to figure 1b but not figure 1a:

  • ‘There is a positive linear relationship between x and y with a slope of 0.4’ (figure 1b).
  • ‘There is a positive linear relationship between x and y with a slope of 0.3’ (figure 1b).

Figure 1b allows for a considerably larger number of permissible situations (which is obvious, as it allows for any positive linear relationship). When formulating the hypothesis in figure 1b, we do not need to specify every single one of these permissible relationships. We can simply state ‘all positive slopes’, which entails the set of permissible elements the hypothesis includes.

That broader hypotheses have more elements in their sets entails some important properties. When we say S contains the elements e1, e2 and e3, the hypothesis is corroborated if e1 or e2 or e3 is the case. That is, only one of the elements needs to be corroborated for the hypothesis to be considered correct (i.e. the positive linear relationship needs to be 0.3 or 0.4 or 0.5). Contrastingly, we will later see cases where conjunctions of elements occur (i.e. both e1 and e2 are the case). When a conjunction occurs, in this formulation, the conjunction itself becomes an element in the set (i.e. ‘e1 and e2’ is a single element). Figure 2 illustrates how ‘e1 and e2’ is narrower than ‘e1’, and ‘e1’ is narrower than ‘e1 or e2’.7 This property of the conjunction being narrower than its individual elements is explained in more detail in the pipeline selection section below.
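The disjunctive reading of a scope, and the treatment of a conjunction as a single element, can be sketched as follows. This is a hypothetical illustration of our own: we model each element as a frozenset of atomic claims that must all hold together, a modelling choice not made by the authors:

```python
# Hypothetical sketch: each element of a scope is a frozenset of atomic claims
# that must all hold together (a conjunction); the scope itself is a disjunction
# of such elements, so corroborating any one element corroborates the hypothesis.
def corroborated(hypothesis: set, observed: set) -> bool:
    return any(element <= observed for element in hypothesis)

e1_and_e2 = {frozenset({"e1", "e2"})}                # one element: 'e1 and e2'
e1_only   = {frozenset({"e1"})}                      # one element: 'e1'
e1_or_e2  = {frozenset({"e1"}), frozenset({"e2"})}   # two elements: 'e1 or e2'

obs = {"e1"}  # suppose only e1 is observed
print(corroborated(e1_and_e2, obs))  # False: the conjunction needs e2 as well
print(corroborated(e1_only, obs))    # True
print(corroborated(e1_or_e2, obs))   # True
```

This reproduces the ordering in figure 2: ‘e1 and e2’ is hardest to corroborate (narrowest), while ‘e1 or e2’ is easiest (broadest).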

[Figure 2: rsos230607f02.jpg]

Figure 2. Scope as sets. Left: four different sets (grey, red, blue and purple) showing the elements they contain. Right: a list of each colour explaining which set is a subset of the other (thereby being ‘narrower’).

4.2. Relationship selection

We move to A2, which is to show the different dimensions along which a hypothesis scope can vary. We have already seen an example of the first dimension of a hypothesis in figure 1, the relationship selection. Let R denote the set of all possible configurations of relationships that are permissible for the hypothesis to be considered true. For example, in the narrowest formulation above, there was one allowed relationship for the hypothesis to be true. Consequently, the size of R (denoted |R|) is one. As discussed above, in the second narrowest formulation (figure 1b), R has more possible relationships under which it can still be considered true:

  • r1 = ‘a positive linear relationship of 0.1’
  • r2 = ‘a positive linear relationship of 0.2’
  • r3 = ‘a positive linear relationship of 0.3’.

Additionally, even broader hypotheses will be compatible with more types of relationships. In figure 1c,d, nonlinear and negative relationships are also included in R. For this broader statement to be affirmed, more elements can be true. Thus, if |R| is greater (i.e. R contains more possible configurations under which the hypothesis is true), the hypothesis is broader. The scope relating to the relationship selection is therefore specified by |R|. Finally, if |RH1| > |RH2|, then H1 is broader than H2 regarding the relationship selection.
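How |R| tracks broadness can be illustrated with a small sketch. This is a hypothetical example of our own in which slopes are discretized to one decimal place purely for illustration; the framework itself places no such restriction:

```python
# Hypothetical sketch: three relationship scopes of increasing broadness,
# with slopes discretized to one decimal place purely for illustration.
R_a = {0.5}                            # narrowest: slope is exactly 0.5
R_b = {k / 10 for k in range(1, 11)}   # broader: any positive slope in (0, 1]
R_c = R_b | {-s for s in R_b}          # broader still: any non-zero slope

# Each broader scope contains the narrower one as a proper subset,
# and |R| grows as the hypothesis broadens.
assert R_a < R_b < R_c
print(len(R_a), len(R_b), len(R_c))  # 1 10 20
```

Under this toy discretization, |R_a| < |R_b| < |R_c| reproduces the ordering of figure 1a–c.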

Figure 1 is an example of narrowing the relationship. That the relationship became linear is only an example; this dimension neither necessitates linear relationships nor refers only to correlations. An alternative example of a relationship scope is a broad hypothesis where there is no knowledge about the distribution of some data. In such situations, one may assume a uniform relationship or a Cauchy distribution centred at zero. Over time, the specific distribution can be hypothesized, and thereafter its various parameters. At each step, the hypothesis of the distribution is further specified into narrower formulations that include a smaller set of possible relationships (see [47,48] for a more in-depth discussion about how specific priors relate to narrower tests). Finally, while figure 1 was used to illustrate increasingly narrow relationship hypotheses, we should expect the narrowest relationship, within fields such as psychology, to retain considerable uncertainty and be formulated with confidence or credible intervals (i.e. we will rarely reach point estimates).

4.3. Variable selection

We have demonstrated that relationship selection can affect the scope of a hypothesis. At least two other dimensions can also affect it: variable selection and pipeline selection. The variable selection in figure 1 was a single bivariate relationship (e.g. x's relationship with y). However, we do not always know which variables will be involved. For example, in neuroimaging, we can be confident that one or more brain regions will be processing some information following a stimulus, but we might not be sure which brain region(s). Consequently, our hypothesis becomes broader because we have selected more variables. The relationship selection may be identical for each chosen variable, but the variable selection becomes broader. We can consider the following three hypotheses to be increasing in their scope:

  • H1: x relates to y with relationship R.
  • H2: x1 or x2 relates to y with relationship R.
  • H3: x1 or x2 or x3 relates to y with relationship R.

For H1–H3 above, we assume that R is the same. Further, we assume that there is no interaction between these variables.

In the above examples, we have multiple x (x1, x2, x3, …, xn). Again, we can symbolize the variable selection as a non-empty set XY, containing either a single variable or many variables. Our motivation for designating it XY is that the variable selection can include multiple possibilities for both the independent variable (x) and the dependent variable (y). As with relationship selection, we can quantify the broadness between two hypotheses with the size of the set XY. Consequently, |XY| denotes the total scope concerning variable selection. Thus, in the examples above, |XYH1| < |XYH2| < |XYH3|. As with relationship selection, hypotheses that vary in |XY| still meet the definition of a hypothesis.8

An obvious concern is that a broader XY is much easier to evaluate as correct. Generally, when |XY1| > |XY2|, there is a greater chance of spurious correlations when evaluating XY1. This concern relates to the evaluation of hypotheses (e.g. applying statistics to the evaluation), which requires additional assumptions about how the evaluation is done. Strategies to deal with this apply corrections or penalization for multiple statistical tests [49], or partial pooling and regularizing priors [50,51]. These strategies aim to evaluate a broader variable selection (x1 or x2) on equal or similar terms to a narrow variable selection (x1).

4.4. Pipeline selection

Scientific studies require decisions about how to perform the analysis. This scope considers the transformations applied to the raw data (XYraw) to achieve some derivative (XY). These decisions can involve selection procedures that drop observations deemed unreliable, standardization, correcting for confounding variables, or different analytical philosophies. We call the array of decisions and transformations used the pipeline. A hypothesis can vary in the number of pipelines it permits:

  • H1: XY has a relationship(s) R with pipeline p1.
  • H2: XY has a relationship(s) R with pipeline p1 or pipeline p2.
  • H3: XY has a relationship(s) R with pipeline p1 or pipeline p2 or pipeline p3.

Importantly, the pipeline here covers decisions regarding how the hypothesis shapes the data collection and transformation. We do not consider it to include decisions about the assumptions underlying the statistical inference, as those relate to operationalizing the evaluation of the hypothesis and are not part of the hypothesis being evaluated (such assumptions are akin to auxiliary hypotheses, which are assumed to be true but not explicitly evaluated).

As with variable selection (XY) and relationship selection (R), pipelines impact the scope of hypotheses. Again, we can symbolize the pipeline selection with a set P, and |P| will denote the size of the pipeline selection. In the case of pipeline selection, we are testing the same variables and looking for the same relationship, but processing the variables or relationships with different pipelines to evaluate the relationship. Consequently, |PH1| < |PH2| < |PH3|.

These issues regarding pipelines have received attention as the ‘garden of forking paths’ [52]. Here, there are calls for researchers to ensure that their entire pipeline has been specified. Additionally, recent work has highlighted the diversity of results produced by multiple analytical pipelines [53,54]. These results are often considered a concern, leading to calls that results should be pipeline resistant.

The wish for pipeline-resistant methods entails that hypotheses, in their narrowest form, should hold for all pipelines. In a narrower formulation, the choice of pipeline should thus not impact the hypothesis, and the conjunction of pipelines is narrower than any single pipeline. Consider the following reformulation of H3 from the list above:

  • H3: XY has a relationship(s) R with pipeline p1 and pipeline p2.

In this instance, since H1 is always true if H3 is true, H3 is a narrower formulation than H1. Consequently, |PH3| < |PH1| < |PH2|. Decreasing the scope along the pipeline dimension can thus also mean increasing the conjunction of pipelines (i.e. creating pipeline-resistant methods), rather than only reducing disjunctive statements.

4.5. Combining the dimensions

In summary, we have three different dimensions that independently affect the scope of a hypothesis. We have demonstrated the following general form of a hypothesis:

  • The variables XY have a relationship R with pipeline P.

The broadness or narrowness of a hypothesis depends on how large the three sets XY, R and P are. With this formulation, we can conclude that a hypothesis has a scope that can be expressed as the 3-tuple (|R|, |XY|, |P|).
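The 3-tuple scope can be sketched as a small data structure. This is a hypothetical illustration of our own, assuming finite sets along each dimension; the class, field names and example values are not from the original framework:

```python
# Hypothetical sketch: a hypothesis with finite scopes along each of the
# three dimensions, and its scope as the 3-tuple (|R|, |XY|, |P|).
from typing import NamedTuple

class Hypothesis(NamedTuple):
    R: frozenset   # permissible relationships
    XY: frozenset  # permissible variable selections
    P: frozenset   # permissible pipelines

    def scope(self) -> tuple:
        return (len(self.R), len(self.XY), len(self.P))

def narrower(a: "Hypothesis", b: "Hypothesis") -> bool:
    """a is narrower than b only if each of a's sets is contained in b's."""
    return a.R <= b.R and a.XY <= b.XY and a.P <= b.P and a != b

H1 = Hypothesis(frozenset({"positive", "negative"}), frozenset({"x1", "x2"}), frozenset({"p1"}))
H2 = Hypothesis(frozenset({"positive"}), frozenset({"x1"}), frozenset({"p1"}))

print(H1.scope(), H2.scope())  # (2, 2, 1) (1, 1, 1)
print(narrower(H2, H1))        # True
```

Note that `narrower` requires containment on every dimension, mirroring the restriction that scopes are only directly comparable when one hypothesis is a subset of the other.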

While hypotheses can be formulated along these three dimensions, and generally the aim is to reduce each, the dimensions do not behave identically. For example, along the relationship dimension, the aim is to reduce the number of elements as far as possible (e.g. to an interval). Contrastingly, for both variables and pipelines, the narrower hypothesis can reduce to a single variable/pipeline, or become narrower still as a conjunction where all variables/pipelines need to corroborate the hypothesis (i.e. regardless of which method one follows, the hypothesis is correct).

5.  Additional possible dimensions

We make no commitment that these three dimensions exhaustively specify the hypothesis scope; other dimensions may exist. For example, one might split the pipeline dimension in two: an experimental pipeline dimension covering all variables relating to the experimental setup used to collect data, and an analytical pipeline dimension covering the data analysis of any given data snapshot. Another possible dimension is the number of situations or contexts under which the hypothesis is valid. For example, any restraint such as ‘in a vacuum’, ‘under the speed of light’, or ‘in healthy human adults’ could be considered an additional dimension of the hypothesis. We have no objection to treating these as additional dimensions. However, as stated above, they usually follow from the boundary conditions of the theory.

6.  Specifying the scope versus assumptions

We envision that this framework can help hypothesis-makers formulate hypotheses (in research plans, registered reports, preregistrations, etc.). Further, using this framework while formulating hypotheses can help distinguish between auxiliary hypotheses and the parts of the scientific hypothesis being tested. When writing preregistrations, it frequently occurs that some step in the method has two alternatives (e.g. a preprocessing step) and there is not yet reason to choose one over the other, yet the researcher must decide. The following scenarios are possible:

  • 1. Narrow pipeline scope. The researcher evaluates the hypothesis with both pipeline variables (i.e. H holds for both p1 and p2, where p1 and p2 can be substituted for each other in the pipeline).
  • 2. Broad pipeline scope. The researcher evaluates the hypothesis with both pipeline variables, and only one needs to be correct (i.e. H holds for either p1 or p2, where p1 and p2 can be substituted for each other in the pipeline). The result of this experiment may help motivate choosing either p1 or p2 in future studies.
  • 3. Auxiliary hypothesis. Based on some reason (e.g. convention), the researcher assumes p1 and evaluates H assuming p1 is true.

Here we see that the same pipeline step can be part of either the auxiliary hypotheses or the pipeline scope. This distinction is important because, if (3) is chosen, the decision becomes an assumption that is not explicitly tested by the hypothesis. Consequently, a researcher confident in the hypothesis may, after a negative result, state that the auxiliary hypothesis p1 was incorrect and retest the hypothesis using different assumptions. Where this decision is part of the pipeline scope, the hypothesis is intertwined with the decision, removing the eventual wiggle-room to reject assumed auxiliary hypotheses. Furthermore, starting with broader pipeline hypotheses that gradually narrow down can lead to a more well-motivated protocol for approaching the problem. Thus, this framework can help researchers writing their hypotheses in, for example, preregistrations, because they can consider when they are committing to a decision, assuming it, or when they should perhaps test a broader hypothesis with multiple possible options (discussed in more detail in §11 below).

7.  The reduction of scope in hypothesis space

Having established that different scopes of a hypothesis are possible, we now consider how the hypotheses change over time. In this section, we consider how the scope of the hypothesis develops ideally within science.

Consider a new research question. A large number of hypotheses are possible. Let us call this set of all possible hypotheses the hypothesis space . Hypotheses formulated within this space can be narrower or broader based on the dimensions discussed previously ( figure 3 ).

[Figure 3: rsos230607f03.jpg]

Figure 3. Example of hypothesis space. The hypothesis scope is expressed as cuboids in three dimensions (relationship (R), variable (XY), pipeline (P)). The hypothesis space is the entire possible space within the three dimensions. Three hypotheses are shown in the hypothesis space (H1, H2, H3). H2 and H3 are subsets of H1.

After the hypothesis is evaluated through the scientific process, it will be accepted or rejected.9 The evaluation could proceed through falsification or verification, depending on one's philosophy-of-science commitments. Thereafter, narrower formulations of the hypothesis can be made by reducing the relationship, variable or pipeline scope. If a narrower hypothesis is accepted, more specific details about the subject matter are known, or a theory has been refined in greater detail. A narrower hypothesis entails a more specific relationship, variable or pipeline detailed in the hypothesis. Consequently, hypotheses linked in this way will become narrower over time along one or more dimensions. Importantly, considering that the conjunction of elements is narrower than single elements for pipelines and variables, this process of narrowing will also lead to more general hypotheses (i.e. they have to apply in all conditions and yield less flexibility when they do not).10

Considering that the scopes of hypotheses were defined as sets above, some properties can be deduced from this framework about how narrower hypotheses relate to broader hypotheses. Let us consider three hypotheses (H1, H2 and H3; figure 3). H2 and H3 are non-overlapping subsets of H1; thus H2 and H3 are both narrower in scope than H1. The following is then correct:

  • P1: If H1 is false, then H2 is false, and H2 does not need to be evaluated.
  • P2: If H2 is true, then the broader H1 is true, and H1 does not need to be evaluated.
  • P3: If H1 is true and H2 is false, some other hypothesis H3 of similar scope to H2 is possible.

For example, suppose H1 is ‘there is a relationship between x and y’, H2 is ‘there is a positive relationship between x and y’, and H3 is ‘there is a negative relationship between x and y’. In that case, it becomes apparent how each of these follows.11 Logically, many further deductions from set theory are possible but will not be explored here. Instead, we will discuss two additional consequences of hypothesis scopes: scientific novelty and applications for the researcher who formulates a hypothesis.
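P1–P3 can be checked mechanically on this worked example. The following is a hypothetical sketch of our own; the string labels for the possible true states of the world are ours, not the authors':

```python
# Hypothetical sketch: each hypothesis as the set of true states of the world
# under which it holds, using the worked example from the text.
H1 = {"positive", "negative", "nonlinear"}  # broad: 'there is a relationship'
H2 = {"positive"}                           # narrow subset of H1
H3 = {"negative"}                           # another narrow, non-overlapping subset

def true_given(hypothesis: set, actual: str) -> bool:
    return actual in hypothesis

# P1: if the broader H1 is false, the narrower H2 must be false too.
actual = "none"  # suppose no relationship exists
assert not true_given(H1, actual) and not true_given(H2, actual)

# P2: if the narrower H2 is true, the broader H1 is automatically true.
actual = "positive"
assert true_given(H2, actual) and true_given(H1, actual)

# P3: H1 true but H2 false leaves room for another narrow hypothesis like H3.
actual = "negative"
assert true_given(H1, actual) and not true_given(H2, actual) and true_given(H3, actual)
print("P1-P3 hold for this example")
```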

P1–P3 have been formulated in terms of hypotheses being true or false. In practice, hypotheses are likely evaluated probabilistically (e.g. ‘H1 is likely’ or ‘there is evidence in support of H1’). In these cases, P1–P3 can be rephrased by substituting true/false with statements about evidence. For example, P2 could read: ‘If there is evidence in support of H2, then there is evidence in support of H1, and H1 does not need to be evaluated’.

8.  Scientific novelty as the reduction of scope

Novelty is a key concept that repeatedly occurs in multiple aspects of the scientific enterprise, from funding to publishing [55]. Generally, scientific progress establishes novel results based on some new hypothesis. Consequently, the new hypothesis behind novel results must be narrower than previously established knowledge (i.e. the size of the scopes is reduced); otherwise, the result is trivial and already known (see P2 above). Thus, scientific work is novel if the scientific process produces a result based on hypotheses with a smaller |R|, |XY| or |P| compared with previous work.

This framework of scope dimensions helps demarcate when a hypothesis, and the subsequent result, is novel. If previous studies have established evidence for R1 (e.g. there is a positive relationship between x and y), a hypothesis will be novel if and only if it is narrower than R1. Thus, if R2 is narrower in scope than R1 (i.e. |R2| < |R1|), R2 is a novel hypothesis.

Consider the following example. Study 1 hypothesizes, ‘There is a positive relationship between x and y’, and identifies a linear relationship of 0.6. Next, Study 2 hypothesizes, ‘There is a specific linear relationship between x and y of 0.6’, and also identifies the relationship of 0.6. Since Study 2 tested a narrower hypothesis, it is novel despite obtaining the same result. Frequently, researchers claim that they are the first to demonstrate a relationship, but being first is not the final measure of novelty. Having a narrower hypothesis than previous researchers is a sign of novelty, as it further reduces the hypothesis space.

Finally, it should be noted that novelty is not the only objective of scientific work. Other attributes, such as improving the certainty of a current hypothesis (e.g. through replications), should not be overlooked. Additional scientific explanations and improved theories are other aspects. Additionally, this definition of novelty relating to hypothesis scope does not exclude other types of novelty (e.g. new theories or paradigms).

9.  How broad should a hypothesis be?

Given the previous section, it is tempting to conclude that a hypothesis should be as narrow as possible, as this entails maximal knowledge gain and scientific novelty. Indeed, many who advocate for daring or risky tests seem to hold this opinion. For example, Meehl [46] argues that we should evaluate theories based on point (or interval) predictions, which would be compatible with very narrow versions of relationships. We do not necessarily think that this is the most fruitful approach. In this section, we argue that hypotheses should aim to be narrower than current knowledge, but that too narrow may be problematic.

Let us consider the idea of confirmatory analyses. These studies will frequently keep the previous hypothesis scopes regarding P and XY but aim to become more specific regarding R (i.e. using the same method and the same variables to detect a more specific relationship). A very daring or narrow hypothesis minimizes R to include the fewest possible relationships. However, simply pursuing specificity or daringness is insufficient for selecting relevant hypotheses. Consider a hypothetical scenario where a researcher believes virtual reality use leads people to overestimate the amount of exercise they have done. If there are no previous studies on this question, an apt hypothesis is perhaps ‘increased virtual reality usage correlates with lower accuracy of reported exercise performed’ (i.e. R is broad). A more specific and more daring hypothesis would specify the relationship further. Thus, despite not knowing whether there is a relationship at all, a more daring hypothesis could be: ‘for every 1 h of virtual reality usage, there will be, on average, a 0.5% decrease in the accuracy of reported exercise performed’ (i.e. R is narrow). We believe it would be better to establish the broader hypothesis first. Otherwise, if we fail to confirm the more specific formulation, we could simply reformulate another, equally narrow hypothesis under the same broader hypothesis, and this process of tweaking a daring hypothesis could be pursued ad infinitum. Such a situation will neither quickly identify the true hypothesis nor make effective use of limited research resources.

By first testing, and possibly rejecting, the broader hypothesis that there is any relationship, we automatically discard all more specific formulations of that relationship in the hypothesis space. Returning to figure 3, it is better to establish H1 before attempting H2 or H3, to ensure the correct area of the hypothesis space is being investigated. To provide an analogy: when looking for a needle among hay, first identify which farm it is at, then which barn, then which haystack, then which part of the haystack, before picking up individual pieces of hay. Thus, it is preferable for both pragmatic and resource-cost reasons to formulate sufficiently broad hypotheses to navigate the hypothesis space effectively.

Conversely, formulating too broad a relationship scope in a hypothesis when we already have evidence for narrower scope would be superfluous research (unless the evidence has been called into question by, for example, not being replicated). If multiple studies have supported the hypothesis ‘there is a 20-fold decrease in mortality after taking some medication M’, it would be unnecessary to ask, ‘Does M have any effect?’.

Our conclusion is that the appropriate scope of a hypothesis, along its three dimensions, follows a Goldilocks-like principle: too broad is superfluous and not novel, while too narrow is unnecessary or wasteful. Considering the scope of one's hypothesis, and how it relates to the scopes of previous hypotheses, ensures one is asking appropriate questions.

Finally, there has been a recent trend in psychology that hypotheses should be formal [38,56–60]. Formal theories are precise: they are mathematical formulations whose interpretations are clear (non-ambiguous) compared with linguistic theories. However, this literature on formal theories often refers to ‘precise predictions’ and ‘risky testing’ while frequently referencing Meehl, who advocates for narrow hypotheses (e.g. [38,56,59]). While perhaps not intended by any of the proponents, one interpretation of these positions is that hypotheses derived from formal theories will be narrow (i.e. ‘precise’ can simultaneously mean narrow hypotheses, risky tests and non-ambiguous interpretations). However, the clarity (non-ambiguity) that formal theories and hypotheses bring also applies to broad formal hypotheses. They can include explicit but formalized versions of uncertain relationships, multiple possible pipelines, and large sets of variables. For example, a broad formal hypothesis can contain a hyperparameter that controls which distribution the data fit (broad relationship scope), or a variable could represent a set of formalized, explicit pipelines (broad pipeline scope) to be tested. In each of these instances, it is possible to formalize non-ambiguous broad hypotheses from broad formal theories that do not yet justify being overly narrow. In sum, our argument that hypotheses should not be too narrow is not an argument against formal theories, but rather that hypotheses (derived from formal theories) do not have to be narrow.

10.  Other approaches to categorizing hypotheses

The framework we present here categorizes hypotheses along (at least) three dimensions of hypothesis scope. We believe it is accessible to researchers, helps link scientific work over time, and remains neutral with regard to any specific philosophy of science. Our proposal does not aim to be antagonistic or necessarily contradict other categorization schemes, but we believe our framework provides distinct benefits.

One recent categorization scheme is the Theoretical (T), Auxiliary (A), Statistical (S) and Inferential (I) assumption model (together, the TASI model) [61,62]. Briefly, this model considers theory to generate theoretical hypotheses. To translate from theoretical unobservable terms (e.g. personality, anxiety, mass), auxiliary assumptions are needed to generate an empirical hypothesis. Statistical assumptions are often needed to test the empirical hypothesis (e.g. what is the distribution; is it skewed or not) [61,62]. Finally, additional inferential assumptions are needed to generalize to a larger population (e.g. was there random and independent sampling from defined populations). The TASI model is insightful and helpful in highlighting the distance between a theory and the observation that would corroborate or contradict it. Part of its utility is to bring auxiliary hypotheses into the foreground, improving comparisons between studies and theory-based interventions [63,64].

We agree with the importance of stating the auxiliary hypotheses, but there are some differences between the frameworks. First, the number of auxiliary assumptions in TASI can be several hundred [62], whereas our framework considers some of them part of the pipeline dimension. Consider the following four assumptions: ‘the inter-stimulus interval is between 2000 ms and 3000 ms’, ‘the data will be z-transformed’, ‘subjects will perform correctly’, and ‘the measurements were valid’. The TASI model classifies all of these alike, as auxiliary assumptions. Within our framework, by contrast, the first two can be part of the pipeline dimension, and thus integrated into the hypothesis being tested, while the latter two remain auxiliary assumptions. A second difference relates to non-theoretical studies (convenience, applied or atheoretical). Our framework allows the hypothesis spaces generated by theoretical and convenience studies to interact and inform each other within the same framework. In TASI, by contrast, the theory assumptions no longer apply, and a different hypothesis model is needed in which these assumptions are replaced by another group (‘substantive application assumptions’ replace the T and the A, yielding SSI) [61]. Finally, part of the rationale for our framework is to link and track hypotheses and their development over time, so our classification scheme has a different utility.

Another approach with utility similar to our framework is theory construction methodology (TCM) [57]. The similarity is that TCM aims to be a practical guide to improve theory-making in psychology. It is an iterative process relating theory, phenomena and data. Hypotheses are not an explicit part of the model. However, what is designated ‘proto-theory’ could be considered a hypothesis in our framework, as it is a product of abduction shaping the theory space. Alternatively, what is deduced to evaluate the theory can also be considered a hypothesis. We consider both possible, and our framework can integrate with these two steps, especially since TCM does not give detailed guidelines for how to perform each step.

11.  From theory to practice: implementing this framework

We believe that many practising researchers can relate to aspects of this framework. But how can a researcher translate the above theoretical framework to their work? The utility of this framework lies in bringing the three scopes of a hypothesis together and explaining how each can be reduced. We believe researchers can use it to describe their current practices more clearly. Here we discuss how it can help researchers when formulating, planning, preregistering, and discussing the evaluation of their scientific hypotheses. These practical implications are brief, and future work can expand on the full interaction between hypothesis space and hypothesis scope. Furthermore, both authors have the most experience in cognitive neuroscience, so some of the practical implications may revolve around this type of research and may not apply equally to other fields.

11.1. Helping to form hypotheses

Abduction, according to Peirce, is a hypothesis-making exercise [22,26–28]. Given some observations, a general testable explanation of the phenomenon is formed. When making the hypothesis, this statement will have a scope (either explicitly or implicitly). Using our framework, the scope can be made explicit. The hypothesis-maker can start with ‘The variables XY have a relationship R with pipeline P’ as a scaffold, and from there ‘fill in the blanks’, explicitly adding each of the scopes. Thus, when making a hypothesis via abduction with our framework, the hypothesis has an explicit scope from the start. There is then less chance that a formulated hypothesis is unclear, ambiguous, or in need of later amendment.

11.2. Assisting to clearly state hypotheses

A hypothesis is not just formulated but also communicated. Hypotheses are stated in funding applications, preregistrations, registered reports and academic articles. Further, preregistered hypotheses are often omitted or changed in the final article [11], and hypotheses are not always explicitly stated in articles [12]. How can this framework help to state better hypotheses? As above, filling in the details of ‘The variables XY have a relationship R with pipeline P’ is an explicit way to communicate the hypothesis. Thinking through each dimension should yield an appropriate, explicit scope and, hopefully, less variation between preregistered and reported hypotheses. The hypothesis need not be a single sentence, and details of XY and P will often be developed in the methods section. However, using this template as a starting point helps ensure the hypothesis is stated and the scope of all three dimensions is communicated.

11.3. Helping to promote explicit and broad hypotheses instead of vague hypotheses

There is an important distinction between vague hypotheses and broad hypotheses, and this framework can help demarcate between them. A vague statement would be: ‘We will quantify depression in patients after treatment’. Here there is uncertainty about how the researcher will go about the experiment (i.e. how will depression be quantified?). A broad statement can also contain uncertainty, but the uncertainty is part of the hypothesis: ‘Two different mood scales (S 1 or S 2 ) will be given to patients to test whether only one (or both) changed after treatment’. This latter statement transparently makes ‘S 1 or S 2 ’ part of a broad hypothesis—the uncertainty is whether the two different scales quantify the same construct. The uncertainty is kept within the broad hypothesis, which will get evaluated, whereas a vague hypothesis leaves the uncertainty to the interpretation of the hypothesis. This framework can be used when formulating hypotheses to help researchers be broad (where needed) but not vague.

11.4. Which hypothesis should be chosen?

When considering the appropriate scope above, we argued for a Goldilocks-like principle of determining the hypothesis that is not too broad or too narrow. However, when writing, for example, a preregistration, how does one identify this sweet spot? There is no easy or definite universal answer to this question. However, one possible way is first to identify the XY , R , and P of previous hypotheses. From here, identify what would be a non-trivial step toward improving our knowledge of the research area. So, for example, could you be more specific about the exact nature of the relationship between the variables? Does the pipeline correspond to today's scientific standards, or were some suboptimal decisions made? Is there another population that you think the previous result also applies to? Do you think that a more specific construct or subpopulation might explain the previous result? Could slightly different constructs (perhaps easier to quantify) be used to obtain a similar relationship? Are there additional constructs to which this relationship should apply simultaneously? Are you certain of the direction of the relationship? Answering any of these questions affirmatively will likely make a hypothesis narrower and connect it to previous research while being clear and explicit. Moreover, depending on the research question, answering any one of them may be sufficiently narrow to be a non-trivial innovation. There are, of course, other ways to make a hypothesis narrower than these guiding questions.

11.5. The confirmatory–exploratory continuum

Research is often dichotomized into confirmatory (testing a hypothesis) or exploratory (without a priori hypotheses). With this framework, researchers can instead consider how their research acts on some hypothesis space. Confirmatory and exploratory work have been defined in terms of how each interacts with the researcher's degrees of freedom (confirmatory work aims to reduce them, while exploratory work utilizes them [ 30 ]). Under this definition, both broad confirmatory and narrow exploratory research are possible, and both fit within this framework. How research interacts with the hypothesis space helps demarcate it: if a study reduces the scope of a hypothesis, it is more confirmatory, whereas trying to understand data given the current scope is more exploratory. This could further help demarcate when exploration is useful. Future theoretical work can detail in more depth how different types of research impact the hypothesis space.

11.6. Understanding when multiverse analyses are needed

Researchers writing a preregistration may face many degrees of freedom to choose from, and different researchers may justify different choices. If there appears to be little evidential support for certain degrees of freedom over others, the researcher can either make more auxiliary assumptions or investigate the pipeline scope directly by conducting a multiverse analysis that tests the impact of the different degrees of freedom on the result (see [ 8 ]). Thus, by applying this framework to state explicitly which pipeline variables are part of the hypothesis and which are auxiliary assumptions, the researcher can identify when difficulty formulating the hypothesis signals that a multiverse analysis is appropriate.
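As a minimal sketch of what such a multiverse analysis can look like (the data, exclusion rule, and transformation below are hypothetical placeholders, not a prescribed pipeline), one can enumerate every combination of the contested degrees of freedom and inspect how the estimate varies:

```python
import math
import statistics
from itertools import product

def run_pipeline(data, outlier_cut, log_transform):
    """One 'universe': a specific combination of analysis choices."""
    xs = [x for x in data if x <= outlier_cut]   # hypothetical exclusion rule
    if log_transform:
        xs = [math.log(x) for x in xs]           # hypothetical transformation
    return statistics.mean(xs) if xs else float("nan")

data = [0.8, 1.1, 0.9, 5.0, 1.0, 1.2]            # toy measurements

# Enumerate every combination of the contested degrees of freedom.
results = {
    (cut, log_t): run_pipeline(data, cut, log_t)
    for cut, log_t in product([2.0, 10.0], [False, True])
}

for spec, estimate in results.items():
    print(spec, round(estimate, 3))

# If the estimate varies substantially across universes, the pipeline choice
# matters and belongs in the hypothesis's explicit scope rather than in an
# unexamined auxiliary assumption.
```

Here the choice of outlier cut-off alone changes the estimate noticeably, which is exactly the situation where the pipeline dimension should be made an explicit part of the hypothesis.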

11.7. Describing novelty

Academic journals and research funders often ask for novelty, but the term ‘novelty’ can be vague and open to various interpretations [ 55 ]. This framework can be used to help justify the novelty of research. For example, consider a scenario where a previous study has established a psychological construct (e.g. well-being) that correlates with a certain outcome measure (e.g. long-term positive health outcomes). This framework can be used to explicitly justify novelty by (i) providing a more precise understanding of the relationship (e.g. linear or linear–plateau) or (ii) identifying more specific variables related to well-being or health outcomes. Stating how some research is novel is clearer than merely stating that the work is novel. This practice might even help journals and funders identify what type of novelty they would like to reward. In sum, this framework can help identify and articulate how research is novel.

11.8. Help to identify when standardization of pipelines is beneficial or problematic to a field

Many consider standardization in a field to be important for ensuring the comparability of results. Standardization of methods and tools entails that the pipeline P is identical (or at least very similar) across studies. However, in such cases, the standardized pipeline becomes an auxiliary assumption representing all possible pipelines. Therefore, while standardized pipelines have their benefits, without validation (e.g. via multiverse analysis) of which pipelines a standardized P represents, this assumption remains broad. In summary, because this framework helps distinguish between auxiliary assumptions and explicit parts of the hypothesis, and identifies when a multiverse analysis is needed, it can help determine when standardized pipelines are representative (narrower hypotheses) or assumptive (broader hypotheses).

12.  Conclusion

Here, we have argued that the scope of a hypothesis is made up of three dimensions: the relationship ( R ), variable ( XY ) and pipeline ( P ) selection. Along each of these dimensions, the scope can vary. Different types of scientific enterprise will often have hypotheses that vary in the size of these scopes. We have argued that this focus on the scope of the hypothesis along these dimensions helps the hypothesis-maker formulate hypotheses for preregistrations while also helping to demarcate auxiliary hypotheses (assumed to be true) from the hypothesis itself (that being evaluated during the scientific process).

Hypotheses are an essential part of the scientific process. Considering what type of hypothesis is sufficient or relevant is an essential job of the researcher, one that we think has been overlooked. We hope this work promotes an understanding of what a hypothesis is and of how its formulation, and the reduction of its scope, are integral parts of scientific progress. We hope it also helps clarify how broad hypotheses need not be vague or inappropriate.

Finally, we applied this idea of scopes to scientific progress and considered how to formulate an appropriate hypothesis. We have also listed several ways researchers can practically implement this framework today. However, there are other practicalities of this framework that future work should explore. For example, it could be used to differentiate and demarcate different scientific contributions (e.g. confirmatory studies, exploration studies, validation studies) according to how their hypotheses interact with the different dimensions of the hypothesis space. Further, linking hypotheses over time within this framework can be a foundation for open hypothesis-making, promoting explicit links to previous work and detailing the reduction of the hypothesis space. The framework thus helps quantify the contribution different studies make to the hypothesis space and clarifies which aspects of hypotheses are relevant at different times.

Acknowledgements

We thank Filip Gedin, Kristoffer Sundberg, Jens Fust, and James Steele for valuable feedback on earlier versions of this article. We also thank Mark Rubin and an unnamed reviewer for valuable comments that have improved the article.

1 While this is our intention, we cannot claim that every theory has been accommodated.

2 Similar requirements of science being able to evaluate the hypothesis can be found in pragmatism [ 22 ], logical positivism [ 23 ] and falsification [ 24 ].

3 When making inferences about a failed evaluation of a scientific hypothesis, it is possible, due to underdetermination, to reject an auxiliary hypothesis instead of the hypothesis itself. However, that rejection occurs at a later inference stage; the evaluation using the scientific method aims to test the scientific hypothesis, not the auxiliary assumptions.

4 Although some have argued that this practice is not as problematic or questionable (see [ 34 , 35 ]).

5 Alternatively, theories sometimes expand their boundary conditions. A theory that was previously about ‘humans’ can be used with a more inclusive boundary condition. Thus it is possible for the hypothesis-maker to use a theory about humans (decision making) and expand it to fruit flies or plants (see [ 43 ]).

6 A similarity exists here with Popper, who uses set theory in a similar way to compare theories (not hypotheses). Popper also discusses how theories with overlapping sets, where neither is a subset of the other, are also comparable (see [ 24 , §§32–34]). We do not exclude this possibility, but it can require additional assumptions.

7 When this could be unclear, we place the element within quotation marks.

8 Here, we have assumed that there is no interaction between these variables in variable selection. If an interaction between x 1 and x 2 is hypothesized, this should be viewed as a different variable compared to ‘ x 1 or x 2 ’. The motivation is that the hypothesis ‘ x 1 or x 2 ’ is not a superset of the interaction (i.e. ‘ x 1 or x 2 ’ is not necessarily true when the interaction is true). The interaction should, in this case, be considered a third variable (e.g. I( x 1 , x 2 )), and the hypothesis ‘ x 1 or x 2 or I( x 1 , x 2 )’ is broader than ‘ x 1 or x 2 ’.
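This set-theoretic point can be made concrete with a toy encoding (our own, purely illustrative): represent each hypothesis by the set of possibilities it admits, so that ‘broader’ corresponds to ‘superset’:

```python
# Toy encoding (ours, purely illustrative): represent a hypothesis by the
# set of possibilities it admits, so 'broader hypothesis' means 'superset'.
states = {"none", "positive", "negative"}   # possible relationships of x and y

h_any = states - {"none"}                    # 'x and y are related'
h_pos = {"positive"}                         # narrower: direction is fixed

assert h_pos < h_any   # narrowing the scope shrinks the admitted set

# Footnote 8's point: 'x1 or x2' is NOT a superset of the interaction
# hypothesis, so I(x1, x2) must be treated as a third variable.
h_x1_or_x2 = {"x1", "x2"}
h_interaction = {"I(x1,x2)"}
assert not (h_interaction <= h_x1_or_x2)

h_broader = h_x1_or_x2 | h_interaction       # 'x1 or x2 or I(x1, x2)'
assert h_x1_or_x2 < h_broader
print("subset relations verified")
```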

9 Or possibly ambiguous or inconclusive.

10 This formulation of scope is compatible with different frameworks from the philosophy of science. For example, narrowing the scope would in Popperian terminology mean prohibiting more basic statements (thus a narrower hypothesis has a higher degree of falsifiability). A reduction of scope in the relational dimension would in Popperian terminology mean an increase in precision (e.g. a circle is more precise than an ellipse, since circles are a subset of possible ellipses), whereas a reduction in the variable selection and pipeline dimensions would mean an increase in universality (e.g. ‘all heavenly bodies’ is more universal than just ‘planets’) [ 24 ]. For Meehl, the reduction of the relationship dimension would amount to decreasing the relative tolerance of a theory to the Spielraum [ 46 ].

11 If there is no relationship between x and y , we do not need to test if there is a positive relationship. If we know there is a positive relationship between x and y , we do not need to test if there is a relationship. If we know there is a relationship but there is not a positive relationship, then it is possible that they have a negative relationship.

Data accessibility

Declaration of AI use

We have not used AI-assisted technologies in creating this article.

Authors' contributions

W.H.T.: conceptualization, investigation, writing—original draft, writing—review and editing; S.S.: investigation, writing—original draft, writing—review and editing.

Both authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Conflict of interest declaration

We declare we have no competing interests.

We received no funding for this study.

Chapter 2: Psychological Research

The Scientific Method


Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives. In this section, you’ll see how psychologists use the scientific method to study and understand behavior.

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession (Figure 1). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.


Figure 1 . Some of our ancestors believed that trephination—the practice of making a hole in the skull—allowed evil spirits to leave the body, thus curing mental illness.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This module explores how scientific knowledge is generated, and how important that knowledge is in informing decisions in our personal lives and in the public domain.

The Process of Scientific Research

Flowchart of the scientific method. It begins with make an observation, then ask a question, form a hypothesis that answers the question, make a prediction based on the hypothesis, do an experiment to test the prediction, analyze the results, prove the hypothesis correct or incorrect, then report the results.

Figure 2 . The scientific method is a process for gathering data and processing information. It provides well-defined steps to standardize how scientific knowledge is gathered through a logical, rational problem-solving method.

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

The basic steps in the scientific method are:

  • Observe a natural phenomenon and define a question about it
  • Make a hypothesis, or potential solution to the question
  • Test the hypothesis
  • If the hypothesis is supported, seek further evidence or search for counter-evidence
  • If the hypothesis is not supported, revise it or create a new hypothesis
  • Draw conclusions and repeat: the scientific method is never-ending, and no result is ever considered final

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. By making observations, a researcher can define a useful question. After finding a question to answer, the researcher can then make a prediction (a hypothesis) about what he or she thinks the answer will be. This prediction is usually a statement about the relationship between two or more variables. After making a hypothesis, the researcher will then design an experiment to test his or her hypothesis and evaluate the data gathered. These data will either support or refute the hypothesis. Based on the conclusions drawn from the data, the researcher will then find more evidence to support the hypothesis, look for counter-evidence to further strengthen the hypothesis, revise the hypothesis and create a new experiment, or continue to incorporate the information gathered to answer the research question.

Video 1.  The Scientific Method explains the basic steps taken for most scientific inquiry.

The Basic Principles of the Scientific Method

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests (Figure 3).

A diagram has four boxes: the top is labeled “theory,” the right is labeled “hypothesis,” the bottom is labeled “research,” and the left is labeled “observation.” Arrows flow in the direction from top to right to bottom to left and back to the top, clockwise. The top right arrow is labeled “use the hypothesis to form a theory,” the bottom right arrow is labeled “design a study to test the hypothesis,” the bottom left arrow is labeled “perform the research,” and the top left arrow is labeled “create or modify the theory.”

Figure 3 . The scientific method of research includes proposing hypotheses, conducting research, and creating or modifying theories based on results.

Other key components in following the scientific method include verifiability, predictability, falsifiability, and fairness. Verifiability means that an experiment must be replicable by another researcher. To achieve verifiability, researchers must make sure to document their methods and clearly explain how their experiment is structured and why it produces certain results.

Predictability in a scientific theory implies that the theory should enable us to make predictions about future events. The precision of these predictions is a measure of the strength of the theory.

Falsifiability refers to whether a hypothesis can be disproved. For a hypothesis to be falsifiable, it must be logically possible to make an observation or do a physical experiment that would show that there is no support for the hypothesis. Note that surviving a test does not prove a hypothesis true: future testing may still disprove it. Falsifiability does not require that a hypothesis actually be shown to be false, only that it could be tested in a way that might show it to be false.

To determine whether a hypothesis is supported or not supported, psychological researchers must conduct hypothesis testing using statistics. Hypothesis testing is a set of statistical procedures that assess how likely the observed results would be if chance alone were at work. If hypothesis testing reveals that results were “statistically significant,” there was support for the hypothesis: the researchers can be reasonably confident that their result was not due to random chance. If the results are not statistically significant, the data do not provide support for the researchers’ hypothesis.
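One way to see what “statistically significant” means is a permutation test, sketched below with made-up scores (the data and the conventional 0.05 threshold are illustrative only): it counts how often a group difference at least as large as the observed one would arise if group labels were assigned by chance.

```python
import random
import statistics

random.seed(0)  # reproducible toy example

treatment = [12, 14, 11, 15, 13, 16]   # hypothetical test scores
control   = [10, 9, 11, 10, 12, 9]

observed = statistics.mean(treatment) - statistics.mean(control)

# Shuffle the group labels many times and count how often a difference at
# least as large as the observed one appears by chance alone.
pooled = treatment + control
n = len(treatment)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
# A small p (conventionally < 0.05) is what 'statistically significant'
# refers to: the observed difference would rarely arise by chance.
```

For these toy scores the shuffled labels almost never reproduce the observed gap, so the difference would be called statistically significant.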

Fairness implies that all data must be considered when evaluating a hypothesis. A researcher cannot pick and choose what data to keep and what to discard or focus specifically on data that support or do not support a particular hypothesis. All data must be accounted for, even if they invalidate the hypothesis.

Applying the Scientific Method

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later module, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race, and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

Remember that a good scientific hypothesis is falsifiable, or capable of being shown to be incorrect. Recall from the introductory module that Sigmund Freud had lots of interesting ideas to explain various human behaviors (Figure 4). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and they remain influential in many modern forms of therapy.

(a)A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”

Figure 4 . Many of the specifics of (a) Freud’s theories, such as (b) his division of the mind into id, ego, and superego, have fallen out of favor in recent decades because they are not falsifiable. In broader strokes, his views set the stage for much of psychological thinking today, such as the unconscious nature of the majority of psychological processes.

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Link to Learning

Want to participate in a study? Visit this Psychological Research on the Net website and click on a link that sounds interesting to you in order to participate in online research.

Why the Scientific Method Is Important for Psychology

The use of the scientific method is one of the main features that separates modern psychology from earlier philosophical inquiries about the mind. Compared to chemistry, physics, and other “natural sciences,” psychology has long been considered one of the “social sciences” because of the subjective nature of the things it seeks to study. Many of the concepts that psychologists are interested in—such as aspects of the human mind, behavior, and emotions—are subjective and cannot be directly measured. Psychologists often rely instead on behavioral observations and self-reported data, which are considered by some to be illegitimate or lacking in methodological rigor. Applying the scientific method to psychology, therefore, helps to standardize the approach to understanding its very different types of information.

The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers. Through replication of experiments, new generations of psychologists can reduce errors and broaden the applicability of theories. It also allows theories to be tested and validated instead of simply being conjectures that could never be verified or falsified. All of this allows psychologists to gain a stronger understanding of how the human mind works.

Scientific articles published in journals and psychology papers written in the style of the American Psychological Association (i.e., in “APA style”) are structured around the scientific method. These papers include an Introduction, which introduces the background information and outlines the hypotheses; a Methods section, which outlines the specifics of how the experiment was conducted to test the hypothesis; a Results section, which includes the statistics that tested the hypothesis and states whether it was supported or not supported; and a Discussion and Conclusion, which state the implications of finding support for, or no support for, the hypothesis. Writing articles and papers that adhere to the scientific method makes it easy for future researchers to repeat the study and attempt to replicate the results.

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read in the Tuskegee Syphilis Study, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Research Involving Human Participants

Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB) . The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members (Figure 1). The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles mentioned above in mind, and generally, approval from the IRB is required in order for the experiment to proceed.


Figure 5 . An institution’s IRB meets regularly to review experimental proposals that involve human participants. (credit: modification of work by Lowndes Area Knowledge Exchange (LAKE)/Flickr)

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing upon conclusion of the study—complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

Dig Deeper: Ethics and the Tuskegee Syphilis Study

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, poor, rural, black, male sharecroppers from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in black men (Figure 6). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, those individuals that tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and subsequently their children born from their wives) and eventually died because they never received treatment for the disease. This study was discontinued in 1972 when the experiment was discovered by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. Why is this study unethical? How were the men who participated and their families harmed as a function of this research?


Figure 6 . A participant in the Tuskegee Syphilis Study receives an injection.

Visit this CDC website to learn more about the Tuskegee Syphilis Study.

Research Involving Animal Subjects

Figure 7 . Rats, like the one shown here, often serve as the subjects of animal research.

Research involving animal subjects raises its own ethical questions, and animal researchers are not immune to ethical concerns. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by animals serving as research subjects.

Whereas IRBs review research proposals that involve human participants, animal experimental proposals are reviewed by an Institutional Animal Care and Use Committee (IACUC) . An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all experimental proposals require the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that the research protocols are being followed. No animal research project can proceed without the committee’s approval.

  • Modification and adaptation. Provided by : Lumen Learning. License : CC BY-SA: Attribution-ShareAlike
  • Psychology and the Scientific Method: From Theory to Conclusion, content on the scientific method principles. Provided by : Boundless. Located at : https://courses.lumenlearning.com/boundless-psychology/ . License : CC BY-SA: Attribution-ShareAlike
  • Introduction to Psychological Research, Why is Research Important?, Ethics. Authored by : OpenStax College. Located at : http://cnx.org/contents/[email protected]:Hp5zMFYB@9/Why-Is-Research-Important . License : CC BY: Attribution . License Terms : Download for free at http://cnx.org/contents/[email protected]
  • Research picture. Authored by : Mediterranean Center of Medical Sciences. Provided by : Flickr. Located at : https://www.flickr.com/photos/mcmscience/17664002728 . License : CC BY: Attribution


  • Perspective
  • Published: 26 June 2023

GREENER principles for environmentally sustainable computational science

  • Loïc Lannelongue (ORCID: 0000-0002-9135-1345),
  • Hans-Erik G. Aronson (ORCID: 0000-0002-1702-1671),
  • Alex Bateman,
  • Ewan Birney,
  • Talia Caplan (ORCID: 0000-0001-8990-1435),
  • Martin Juckes (ORCID: 0000-0003-1770-2132),
  • Johanna McEntyre,
  • Andrew D. Morris,
  • Gerry Reilly &
  • Michael Inouye

Nature Computational Science 3, 514–521 (2023)


Subjects:

  • Computational science
  • Environmental impact
  • Scientific community

The carbon footprint of scientific computing is substantial, but environmentally sustainable computational science (ESCS) is a nascent field with many opportunities to thrive. To realize the immense green opportunities and continued, yet sustainable, growth of computer science, we must take a coordinated approach to our current challenges, including greater awareness and transparency, improved estimation and wider reporting of environmental impacts. Here, we present a snapshot of where ESCS stands today and introduce the GREENER set of principles, as well as guidance for best practices moving forward.


Scientific research and development have transformed and immeasurably improved the human condition, whether by building instruments to unveil the mysteries of the universe, developing treatments to fight cancer or improving our understanding of the human genome. Yet, science can, and frequently does, impact the environment, and the magnitude of these impacts is not always well understood. Given the connection between climate change and human health, it is becoming increasingly apparent to biomedical researchers in particular, as well as their funders, that the environmental effects of research should be taken into account 1 , 2 , 3 , 4 , 5 .

Recent studies have begun to elucidate the environmental impacts of scientific research, with an initial focus on scientific conferences and experimental laboratories 6 . The 2019 Fall Meeting of the American Geophysical Union was estimated to emit 80,000 metric tonnes of CO₂ equivalent (tCO₂e), equivalent to the average weekly emissions of the city of Edinburgh, UK 7 (CO₂e, or CO₂-equivalent, summarizes the global warming impacts of a range of greenhouse gases (GHGs) and is the standard metric for carbon footprints, although its accuracy is sometimes debated 8 ). The annual meeting of the Society for Neuroscience was estimated to emit 22,000 tCO₂e, approximately the annual carbon footprint of 1,000 medium-sized laboratories 9 . The life-cycle impact (including construction and usage) of university buildings has been estimated at ~0.125 tCO₂e m⁻² yr⁻¹ (ref. 10 ), and the yearly carbon footprint of a typical life-science laboratory at ~20 tCO₂e (ref. 9 ). The Laboratory Efficiency Assessment Framework (LEAF) is a widely adopted standard for monitoring and reducing the carbon footprint of laboratory-based research 11 . Other recent frameworks can help to raise awareness: GES 1point5 12 provides an open-source tool to estimate the carbon footprint of research laboratories, covering buildings, procurement, commuting and travel, and the Environmental Responsibility 5-R Framework provides guidelines for ecologically conscious research 13 .

With the increasing scale of high-performance and cloud computing, the computational sciences are susceptible to having silent and unintended environmental impacts. The information and communication technologies (ICT) sector was responsible for between 1.8% and 2.8% of global GHG emissions in 2020 14 , more than aviation (1.9% 15 ), and, if unchecked, the ICT carbon footprint could grow exponentially in coming years 14 . Although the environmental impact of experimental ‘wet’ laboratories is more immediately obvious, with their large pieces of equipment and high plastic and reagent usage, the impact of algorithms is less clear and often underestimated. The risks of seeking performance at any cost and the importance of considering energy usage and sustainability when developing new hardware for high-performance computing (HPC) were raised as early as 2007 16 . Since then, continuous improvements have been made by developing new hardware, building lower-energy data centers and implementing more efficient HPC systems 17 , 18 . However, it is only in the past five years that these concerns have reached HPC users, in particular researchers. Notably, the field of artificial intelligence (AI) was among the first to take note of its environmental impacts, in particular those of very large language models 19 , 20 , 21 , 22 , 23 . It is unclear, however, to what extent this has led the field towards more sustainable research practices. A small number of studies have also been performed in other fields, including bioinformatics 24 , astronomy and astrophysics 25 , 26 , 27 , 28 , particle physics 29 , neuroscience 30 and computational social sciences 31 . Health data science is starting to address the subject, but a recent systematic review found only 25 publications in the field over the past 12 years 32 .
In addition to the environmental effects of electricity usage, manufacturing and disposal of hardware, there are also concerns around data centers’ water usage and land footprint 33 . Notably, computational science, in particular AI, has the potential to help fight climate change, for example, by improving the efficiency of wind farms, by facilitating low-carbon urban mobility and by better understanding and anticipating severe weather events 34 .

In this Perspective we highlight the nascent field of environmentally sustainable computational science (ESCS)—what we have learned from the research so far, and what scientists can do to mitigate their environmental impacts. In doing so, we present GREENER (Governance, Responsibility, Estimation, Energy and embodied impacts, New collaborations, Education and Research; Fig. 1 ), a set of principles for how the computational science community could lead the way in sustainable research practices, maximizing computational science’s benefit to both humanity and the environment.

Figure 1. The GREENER principles enable cultural change (blue arrows), which in turn facilitates their implementation (green arrows) and triggers a virtuous circle.

Environmental impacts of the computational sciences

The past three years have seen increased concern regarding the carbon footprint of computations, and only recently have tools 21 , 35 , 36 , 37 and guidelines 38 become widely available to computational scientists, allowing them to estimate their carbon footprint and be more environmentally sustainable.

Most calculators that estimate the carbon footprint of computations are targeted at machine learning tasks and so are primarily suited to Python pipelines, graphics processing units (GPUs) and/or cloud computing 36 , 37 , 39 , 40 . These tools have the benefit of either integrating directly into machine learning pipelines (as Python libraries) or being readily available as online calculators for cloud GPUs 21 , 41 . Recently, a flexible online tool, the Green Algorithms calculator 35 , enabled the estimation of the carbon footprint of nearly any computational task, empowering sustainability metrics across fields, hardware, computing platforms and locations.

Some publications, such as ref. 38 , have listed simple actions that computational scientists can take regarding their environmental impact, including estimating the carbon footprint of running algorithms, both a posteriori to acknowledge the impact of a project and beforehand as part of a cost–benefit analysis. A 2020 report from The Royal Society formalizes this with the notion of ‘energy proportionality’, meaning that the environmental impacts of an innovation must be outweighed by its environmental or societal benefits 34 . It is also important to minimize electronic waste by keeping devices for longer and using second-hand hardware when possible. A 2021 report by the World Health Organization 42 warns of the dramatic effect of e-waste on population health, particularly for children. The unregulated informal recycling industry, which handles more than 80% of the 53 million tonnes of e-waste, causes a high level of water, soil and air pollution, often in low- and middle-income countries 43 . The up to 56 million informal waste workers are also exposed to hazardous chemicals such as heavy metals and persistent organic pollutants 42 . Scientists can also choose energy-efficient hardware and computing facilities, while favoring those powered by green energy. Writing efficient code can substantially reduce the carbon footprint as well, and this can be done alongside making hardware requirements and carbon footprints clear when releasing new software. The Green Software Foundation ( https://greensoftware.foundation ) promotes carbon-aware coding to reduce the operational carbon footprint of the software used in all aspects of society. There is, however, a rebound effect to making algorithms and hardware more efficient: instead of reducing computing usage, increased efficiency tends to encourage more analyses, which often results in larger overall carbon footprints.
The rebound effect is a key example of why research practice should adapt to technological advances so that they lead to carbon footprint reductions.

GREENER computational science

ESCS is an emerging field, but one that is of rapidly increasing importance given the climate crisis. In the following, our proposed set of principles (Fig. 1 ) outlines the main axes where progress is needed, where opportunities lie and where we believe efforts should be concentrated.

Governance and responsibility

Everyone involved in computational science has a role to play in making the field more sustainable, and many do already, from grassroots movements to large institutions. Individual and institutional responsibility is a necessary step to ensure transparency and reduction of GHG emission. Here we highlight key stakeholders alongside existing initiatives and future opportunities for involvement.

Grassroots initiatives led by graduate students, early career researchers and laboratory technicians have shown great success in tackling the carbon footprint of laboratory work, including Green Labs Netherlands 44 , the Nottingham Technical Sustainability Working Group or the Digital Humanities Climate Coalition 45 . International coalitions such as the Sustainable Research (SuRe) Symposium, initially set up for wet laboratories, have started to address the impact of computing as well. IT teams in HPC centers are naturally key, both in terms of training and ensuring that the appropriate information is logged so that scientists can follow the carbon footprints of their work. Principal investigators can encourage their teams to think about this issue and provide access to suitable training when needed.

Simultaneously, top–down approaches are needed, with funding bodies and journals occupying key positions in both incentivizing carbon-footprint reduction and in promoting transparency. Funding bodies can directly influence the researchers they fund and those applying for funding via their funding policies. They can require estimates of carbon footprints to be included in funding applications as part of ‘environmental impacts statements’. Many funding bodies include sustainability in their guidelines already; see, for example, the UK’s NIHR carbon reduction guidelines 1 , the brief mention of the environment in UKRI’s terms and conditions 46 , and the Wellcome Trust’s carbon-offsetting travel policy 47 .

Although these are important first steps, bolder action is needed to meet the urgency of climate change. For example, UKRI’s digital research infrastructure scoping project 48 , which seeks to provide a roadmap to net zero for its digital infrastructure, sends a clear message that sustainable research includes minimizing the GHG emissions from computation. The project not only raises awareness but will hopefully result in reductions in GHG emissions.

Large research institutes are key to managing and expanding centralized data infrastructures and trusted research environments (TREs). For example, EMBL’s European Bioinformatics Institute manages more than 40 data resources 49 , including AlphaFold DB 50 , which contains over 200,000,000 predicted protein structures that can be searched, browsed and retrieved according to the FAIR principles (findable, accessible, interoperable, reusable) 51 . As a consequence, researchers do not need to run the carbon-intensive AlphaFold algorithm for themselves and instead can just query the database. AlphaFold DB was queried programmatically over 700 million times and the web page was accessed 2.4 million times between August 2021 and October 2022. Institutions also have a role in making procurement decisions carefully, taking into account both the manufacturing and operational footprint of hardware purchases. This is critical, as the lifetime footprint of a computational facility is largely determined by the date it is purchased. Facilities could also better balance investment decisions, with a focus on attracting staff based on sustainable and efficient working environments, rather than high-powered hardware 52 .

However, increases in the efficiencies of digital technology alone are unlikely to prove sufficient in ensuring sustainable resource use 53 . Alongside these investments, funding bodies should support a shift towards more positive, inclusive and green research cultures, recognizing that more data or bigger models do not always translate into greater insights and that a ‘fit for purpose’ approach can ultimately be more efficient. Organizations such as Health Data Research UK and the UK Health Data Research Alliance have a key convening role in ensuring that awareness is raised around the climate impact of both infrastructure investment and computational methods.

Journals may incentivize authors to acknowledge and indeed estimate the carbon footprint of the work presented. Some authors already do this voluntarily (for example, refs. 54 , 55 , 56 , 57 , 58 , 59 ), mostly in bioinformatics and machine learning so far, but there is potential to expand it to other areas of computational science. In some instances, showing that a new tool is greener can be an argument in support of a new method 60 .

International societies in charge of organizing annual conferences may help scientists reduce the carbon footprint of presenting their work by offering hybrid options. The COVID-19 pandemic boosted virtual and hybrid meetings, which have a lower carbon footprint while increasing access and diversity 7 , 61 . Burtscher and colleagues found that running the annual meeting of the European Astronomical Society online emitted >3,000-fold less CO₂e than the in-person meeting (0.582 tCO₂e compared to 1,855 tCO₂e) 25 . Institutions are starting to tackle this; for example, the University of Cambridge has released new travel guidelines encouraging virtual meetings whenever feasible and restricting flights to essential travel, while also acknowledging that different career stages have different needs 62 .

Industry partners will also need to be part of the discussion. Acknowledging and reducing computing environmental impact comes with added challenges in industry, such as shareholder interests and/or public relations. While the EU has backed some initiatives helping ICT-reliant companies to address their carbon footprint, such as ICTfootprint.eu, other major stakeholders have expressed skepticism regarding the environmental issues of machine learning models 63 , 64 . Although challenging, tech industry engagement and inclusion is nevertheless essential for tackling GHG emissions.

Estimate and report the energy consumption of algorithms

Estimating and monitoring the carbon footprint of computations is an essential step towards sustainable research as it identifies inefficiencies and opportunities for improvement. User-level metrics are crucial to understanding environmental impacts and promoting personal responsibility. In some HPC situations, particularly in academia, the financial cost of running computations is negligible and scientists may have the impression of unlimited and inconsequential computing capacity. Quantifying the carbon footprint of individual projects helps raise awareness of the true costs of research.

Although progress has been made in estimating energy usage and carbon footprints over the past few years, there are still barriers that prevent the routine estimation of environmental impacts. From task-agnostic, general-purpose calculators 35 and task-specific packages 36 , 37 , 65 to server-side software 66 , 67 , each estimation tool is a trade-off between ease of use and accuracy. A recent primer 68 discusses these different options in more detail and provides recommendations as to which approach fits a particular need.

Regardless of the calculator used, for these tools to work effectively and for scientists to have an accurate representation of their energy consumption, it is important to understand the power management for different components. For example, the power usage of processing cores such as central processing units (CPUs) and GPUs is not a readily available metric; instead, thermal design power (meaning, how much heat the chip can be expected to dissipate in a normal setting) is used. Although an acceptable approximation, it has also been shown to substantially underestimate power usage in some situations 69 . The efficiency of data centers is measured by the power usage effectiveness (PUE), which quantifies how much energy is needed for non-computing tasks, mainly cooling (efficient data centers have PUEs close to 1). This metric is widely used, with large cloud providers reporting low PUEs (for example, 1.11 for Google 70 compared to a global average of 1.57 71 ), but discrepancies in how it is calculated can limit PUE interpretation and thus its impact 72 , 73 , 74 . A standard from the International Organization for Standardization is trying to address this 75 . Unfortunately, the PUE of a particular data center, whether cloud or institutional, is rarely publicly documented. Thus, an important step is the data science and infrastructure community making both hardware and data centers’ energy consumption metrics available to their users and the public. Ultimately, tackling unnecessary carbon footprints will require transparency 34 .
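
The factors above combine in a simple way. The sketch below shows the kind of multiplicative model that footprint calculators build on: energy = runtime × power × PUE, then carbon = energy × carbon intensity. All numeric parameters (per-core TDP, memory power, PUE, grid intensity) are illustrative assumptions, not measured figures.

```python
# Minimal, assumption-laden sketch of a job-level carbon footprint estimate.
def carbon_footprint_gco2e(runtime_h, n_cores, power_per_core_w,
                           memory_gb, power_per_gb_w,
                           pue, carbon_intensity_gco2e_kwh):
    """Estimated carbon footprint of a computing job, in gCO2e."""
    power_w = n_cores * power_per_core_w + memory_gb * power_per_gb_w
    energy_kwh = runtime_h * (power_w / 1000) * pue  # PUE scales up for cooling etc.
    return energy_kwh * carbon_intensity_gco2e_kwh

# A 24-hour job on 8 cores (assumed 12 W TDP per core) with 64 GB of memory
# (assumed 0.3725 W/GB), in a data center at the global-average PUE of 1.57,
# on a grid emitting an assumed 450 gCO2e/kWh:
footprint = carbon_footprint_gco2e(24, 8, 12.0, 64, 0.3725, 1.57, 450.0)
print(f"{footprint / 1000:.1f} kgCO2e")  # ~2.0 kgCO2e
```

For real estimates, dedicated tools such as the Green Algorithms calculator account for hardware-specific power draw, usage factors and location-specific carbon intensities; the point here is only that each input (notably PUE and carbon intensity) scales the result directly, which is why transparent reporting of those metrics matters.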

Tackling energy and embodied impacts through new collaborations

Minimizing carbon intensity (meaning the carbon footprint of producing electricity) is one of the most immediately impactful ways to reduce GHG emissions. Carbon intensities depend largely on geographical location, with up to three orders of magnitude between the top and bottom performing high-income countries in terms of low-carbon energy (from 0.10 gCO₂e kWh⁻¹ in Iceland to 770 gCO₂e kWh⁻¹ in Australia 76 ). Changing the carbon intensity of a local state or national government is nearly always impractical, as it would necessitate protracted campaigns to change energy policies. An alternative is to relocate computations to low-carbon settings and countries, but, depending on the type of facility or the sensitivity of the data, this may not always be possible. New inter-institutional cooperation may open up opportunities to enable access to low-carbon data centers in real time.

It is, however, essential to recognize and account for inequalities between countries in terms of access to green energy sources. International cooperation is key to providing scientists from low- and middle-income countries (LMICs), who frequently only have high-carbon-intensity options available to them, access to low-carbon computing infrastructures for their work. In the longer term, international partnerships between organizations and nations can help build low-carbon computing capacity in LMICs.

Furthermore, the footprint of user devices should not be forgotten. In one estimate, the energy footprint of streaming a video to a laptop is mainly on the laptop (72%), with 23% used in transmission and a mere 5% at the data center 77 . Zero clients (user devices with no compute or storage capacity) can be used in some research use cases and drastically reduce the client-side footprint 78 .

It can be tempting to reduce the environmental impacts of computing to electricity needs, as these are the easiest ones to estimate. However, water usage, ecological impacts and embodied carbon footprints from manufacturing should also be addressed. For example, for personal hardware, such as laptops, 70–80% of the life-cycle impact of these devices comes from manufacturing only 79 , as it involves mining raw materials and assembling the different components, which require water and energy. Moreover, manufacturing often takes place in countries that have a higher carbon intensity for power generation and a slower transition to zero-carbon power 80 . Currently, hardware renewal policies, either for work computers or servers in data centers, are often closely dependent on warranties and financial costs, with environmental costs rarely considered. For hardware used in data centers, regular updates may be both financially and environmentally friendly, as efficiency gains may offset manufacturing impacts. Estimating these environmental impacts will allow HPC teams to know for sure. Reconditioned and remanufactured laptops and servers are available, but growth of this sector is currently limited by negative consumer perception 81 . Major suppliers of hardware are making substantial commitments, such as 100% renewable energy supply by 2030 82 or net zero by 2050 83 .

Another key consideration is data storage. Scientific datasets are now measured in petabytes (PB). In genomics, the popular UK Biobank cohort 84 is expected to reach 15 PB by 2025 85 , and the first image of a black hole required the collection of 5 PB of data 86 . The carbon footprint of storing data depends on numerous factors, but based on some manufacturers’ estimations, the order of magnitude of the life-cycle footprint of storing 1 TB of data for a year is ~10 kgCO₂e (refs. 87 , 88 ). This issue is exacerbated by the duplication of such datasets in order for each institution, and sometimes each research group, to have a copy. Centralized and collaborative computing resources (such as TREs) holding both data and computing hardware may help alleviate redundant resources. TRE efforts in the UK span both health (for example, NHS Digital 89 ) and administrative data (for example, the SAIL databank on the UK Secure Research Platform 90 and the Office for National Statistics Secure Research Service 91 ). Large (hyperscale) data centers are expected to be more energy-efficient 92 , but they may also encourage unnecessary increases in the scale of computing (rebound effect).
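
Using the order-of-magnitude figure above (~10 kgCO₂e per terabyte per year), the cost of storing, and duplicating, petabyte-scale datasets can be sketched as follows. The number of duplicate copies is an assumption for illustration only.

```python
# Back-of-the-envelope storage footprint, using the ~10 kgCO2e/TB/year
# order-of-magnitude life-cycle figure cited in the text.
KG_CO2E_PER_TB_YEAR = 10.0

def storage_footprint_tco2e(size_tb, copies=1, years=1.0):
    """Life-cycle footprint of storing `copies` replicas of a dataset, in tCO2e."""
    return size_tb * copies * years * KG_CO2E_PER_TB_YEAR / 1000

# A 15 PB dataset (the projected UK Biobank size) stored for one year:
print(storage_footprint_tco2e(15_000))             # 150.0 tCO2e
# The same dataset mirrored at 10 institutions (assumed number of copies):
print(storage_footprint_tco2e(15_000, copies=10))  # 1500.0 tCO2e
```

The tenfold jump from duplication is the quantitative case for centralized resources such as TREs: one well-managed copy replaces many institutional mirrors.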

The importance of dedicated education and research efforts for ESCS

Education is essential to raise awareness among different stakeholders. Until sustainability is incorporated into formal undergraduate programs, integrating it into computational training courses is a tangible first step toward reducing carbon footprints. An example is the ‘Green Computing’ Workshop on Education at the 2022 conference on Intelligent Systems for Molecular Biology.

Investing in research that will catalyze innovation in the field of ESCS is a crucial role for funders and institutions to play. Although global data centers’ workloads have increased more than sixfold between 2010 and 2018, their total electricity usage has been approximately stable due to the use of power-efficient hardware 93 , but environmentally sustainable investments will be needed to perpetuate this trend. Initiatives like Wellcome’s Research Sustainability project 94 , which look to highlight key gaps where investment could deliver the next generation of ESCS tools and technology, are key to ensuring that growth in energy demand beyond current efficiency trends can be managed in a sustainable way. Similarly, the UKRI Data and Analytics Research Environments UK program (DARE UK) needs to ensure that sustainability is a key evaluation criterion for funding and infrastructure investments for the next generation of TREs.

Recent studies found that the most widely used programming languages in research, such as R and Python 95 , tend to be among the least energy-efficient 96 , 97 . Although forcing the community to switch to more efficient languages would be unlikely to benefit the environment in the short term (owing to inefficient coding, for example), this highlights the importance of having trained research software engineers within research groups to ensure that the algorithms used are efficiently implemented. There is also scope to use current tools more efficiently by better understanding and monitoring how coding choices impact carbon footprints. Algorithms also come with high memory requirements, sometimes using more energy for memory than for processors 98 . Unfortunately, memory power usage remains poorly optimized, as speed of access is almost always favored over energy efficiency 99 . Providing users and software engineers with the flexibility to opt for energy efficiency would present an opportunity for a reduction in GHG emissions 100 , 101 .

Cultural change

In parallel to the technological reductions in energy usage and carbon footprints, research practices will also need to change to avoid rebound effects 38 . Similar to the aviation industry, there is a tendency to count on technology to solve sustainability concerns without having to change usage 102 (that is, waiting on computing to become zero-carbon rather than acting on how we use it). Cultural change in the computing community to reconsider how we think about computing costs will be necessary. Research strategies at all levels will need to consider environmental impacts and corresponding approaches to carbon footprint minimization. The upcoming extension of the LEAF standard for computational laboratories will provide researchers with tangible tools to do so. Day to day, there is a need to solve trade-offs between the speed of computation, accuracy and GHG emissions, keeping in mind the goal of GHG reduction. These changes in scientific practices are challenging, but, importantly, there are synergies between open computational science and green computing 103 . For example, making code, data and models FAIR so that other scientists avoid unnecessary computations can increase the reach and impact of a project. FAIR practices can result in highly efficient code implementations, reduce the need to retrain models, and reduce unnecessary data generation/storage, thus reducing the overall carbon footprint. As a result, green computing and FAIR practices may both stimulate innovation and reduce financial costs.

Moreover, computational science has downstream effects on carbon footprints in other areas. In the biomedical sciences, developments in machine learning and computer vision impact the speed and scale of medical imaging processing. Discoveries in health data science make their way to clinicians and patients through, for example, connected devices. In each of these cases and many others, environmental impacts propagate through the whole digital health sector 32 . Yet, here too synergies exist. In many cases, such as telemedicine, there may be a net benefit in terms of both carbon and patient care, provided that all impacts have been carefully accounted for. These questions are beginning to be tackled in medicine, such as assessments of the environmental impact of telehealth 104 or studies into ways to sustainably handle large volumes of medical imaging data 105 . For the latter, NHS Digital (the UK’s national provider of information, data and IT systems for health and social care) has released guidelines to this effect 106 . Outside the biomedical field, there are immense but, so far, unrealized opportunities for similar efforts.

The computational sciences have an opportunity to lead the way in sustainability, which may be achieved through the GREENER principles for ESCS (Fig. 1 ): Governance, Responsibility, Estimation, Energy and embodied impacts, New collaborations, Education and Research. This will require more transparency on environmental impacts. Although some tools already exist to estimate carbon footprints, more specialized ones will be needed alongside a clearer understanding of the carbon footprint of hardware and facilities, as well as more systematic monitoring and acknowledgment of carbon footprints. Measurement is a first step, followed by a reduction in GHG emissions. This can be achieved with better training and sensible policies for renewing hardware and storing data. Cooperation, open science and equitable access to low-carbon computing facilities will also be crucial 107 . Computing practices will need to adapt to include carbon footprints in cost–benefit analyses, as well as consider the environmental impacts of downstream applications. The development of sustainable solutions will need particularly careful consideration, as they frequently have the least benefit for populations, often in LMICs, who suffer the most from climate change 22 , 108 . All stakeholders have a role to play, from funding bodies, journals and institutions to HPC teams and early career researchers. There is now a window of time and an immense opportunity to transform computational science into an exemplar of broad societal impact and sustainability.

NIHR Carbon Reduction Guidelines (National Institute for Health and Care Research, 2019); https://www.nihr.ac.uk/documents/nihr-carbon-reduction-guidelines/21685

NHS Becomes the World’s First National Health System to Commit to Become ‘Carbon Net Zero’, Backed by Clear Deliverables and Milestones (NHS England, 2020); https://www.england.nhs.uk/2020/10/nhs-becomes-the-worlds-national-health-system-to-commit-to-become-carbon-net-zero-backed-by-clear-deliverables-and-milestones/

Climate and COVID-19: converging crises. Lancet 397 , 71 (2021).

Marazziti, D. et al. Climate change, environment pollution, COVID-19 pandemic and mental health. Sci. Total Environ. 773 , 145182 (2021).


Wellcome Commissions Report on Science’s Environmental Impact (Wellcome, 2022); https://wellcome.org/news/wellcome-commissions-report-sciences-environmental-impact

Towards Climate Sustainability of the Academic System in Europe and Beyond (ALLEA, 2022); https://doi.org/10.26356/climate-sust-acad

Klöwer, M., Hopkins, D., Allen, M. & Higham, J. An analysis of ways to decarbonize conference travel after COVID-19. Nature 583 , 356–359 (2020).

Allen, M. R. et al. A solution to the misrepresentations of CO 2 -equivalent emissions of short-lived climate pollutants under ambitious mitigation. npj Clim. Atmos. Sci. 1 , 16 (2018).

Nathans, J. & Sterling, P. How scientists can reduce their carbon footprint. eLife 5 , e15928 (2016).

Helmers, E., Chang, C. C. & Dauwels, J. Carbon footprinting of universities worldwide part II: first quantification of complete embodied impacts of two campuses in Germany and Singapore. Sustainability 14 , 3865 (2022).

Marshall-Cook, J. & Farley, M. Sustainable Science and the Laboratory Efficiency Assessment Framework ( LEAF ) (UCL, 2023).

Mariette, J. et al. An open-source tool to assess the carbon footprint of research. Environ. Res. Infrastruct. Sustain. 2 , 035008 (2022).

Murray, D. S. et al. The environmental responsibility framework: a toolbox for recognizing and promoting ecologically conscious research. Earth’s Future 11 , e2022EF002964 (2023).

Freitag, C. et al. The real climate and transformative impact of ICT: a critique of estimates, trends and regulations. Patterns 2 , 100340 (2021).

Ritchie, H. Climate change and flying: what share of global CO 2 emissions come from aviation? Our World in Data (22 October 2022); https://ourworldindata.org/co2-emissions-from-aviation

Feng, W. & Cameron, K. The Green500 list: encouraging sustainable supercomputing. Computer 40 , 50–55 (2007).

Garg, S. K., Yeo, C. S., Anandasivam, A. & Buyya, R. Environment-conscious scheduling of HPC applications on distributed cloud-oriented data centers. J. Parallel Distrib. Comput. 71 , 732–749 (2011).


Katal, A., Dahiya, S. & Choudhury, T. Energy efficiency in cloud computing data centers: a survey on software technologies. Clust. Comput. https://doi.org/10.1007/s10586-022-03713-0 (2022).

Strubell, E., Ganesh, A. & McCallum, A. Energy and policy considerations for deep learning in NLP. In Proc. 57th Annual Meeting of the Association for Computational Linguistics 3645–3650 (Association for Computational Linguistics, 2019); https://doi.org/10.18653/v1/P19-1355

Schwartz, R., Dodge, J., Smith, N. A. & Etzioni, O. Green AI. Preprint at https://arxiv.org/abs/1907.10597 (2019).

Lacoste, A., Luccioni, A., Schmidt, V. & Dandres, T. Quantifying the carbon emissions of machine learning. Preprint at https://arxiv.org/abs/1910.09700 (2019).

Bender, E. M., Gebru, T., McMillan-Major, A. & Shmitchell, S. On the dangers of stochastic parrots: can language models be too big? In Proc. 2021 ACM Conference on Fairness , Accountability and Transparency 610–623 (Association for Computing Machinery, 2021); https://doi.org/10.1145/3442188.3445922

Memmel, E., Menzen, C., Schuurmans, J., Wesel, F. & Batselier, K. Towards Green AI with tensor networks—sustainability and innovation enabled by efficient algorithms. Preprint at https://doi.org/10.48550/arXiv.2205.12961 (2022).

Grealey, J. et al. The carbon footprint of bioinformatics. Mol. Biol. Evol. 39 , msac034 (2022).

Burtscher, L. et al. The carbon footprint of large astronomy meetings. Nat. Astron. 4 , 823–825 (2020).

Jahnke, K. et al. An astronomical institute’s perspective on meeting the challenges of the climate crisis. Nat. Astron. 4 , 812–815 (2020).

Stevens, A. R. H., Bellstedt, S., Elahi, P. J. & Murphy, M. T. The imperative to reduce carbon emissions in astronomy. Nat. Astron. 4 , 843–851 (2020).

Portegies Zwart, S. The ecological impact of high-performance computing in astrophysics. Nat. Astron. 4 , 819–822 (2020).

Bloom, K. et al. Climate impacts of particle physics. Preprint at https://arxiv.org/abs/2203.12389 (2022).

Aron, A. R. et al. How can neuroscientists respond to the climate emergency? Neuron 106 , 17–20 (2020).

Leslie, D. Don’t ‘ Research Fast and Break Things ’: on the Ethics of Computational Social Science (Zenodo, 2022); https://doi.org/10.5281/zenodo.6635569

Samuel, G. & Lucassen, A. M. The environmental sustainability of data-driven health research: a scoping review. Digit. Health 8 , 205520762211112 (2022).

Al Kez, D., Foley, A. M., Laverty, D., Del Rio, D. F. & Sovacool, B. Exploring the sustainability challenges facing digitalization and internet data centers. J. Clean. Prod. 371 , 133633 (2022).

Digital Technology and the Planet—Harnessing Computing to Achieve Net Zero (The Royal Society, 2020); https://royalsociety.org/topics-policy/projects/digital-technology-and-the-planet/

Lannelongue, L., Grealey, J. & Inouye, M. Green algorithms: quantifying the carbon footprint of computation. Adv. Sci. 8 , 2100707 (2021).

Henderson, P. et al. Towards the systematic reporting of the energy and carbon footprints of machine learning. J. Mach. Learn. Res. 21 , 10039–10081 (2020).


Anthony, L. F. W., Kanding, B. & Selvan, R. Carbontracker: tracking and predicting the carbon footprint of training deep learning models. Preprint at https://arxiv.org/abs/2007.03051 (2020).

Lannelongue, L., Grealey, J., Bateman, A. & Inouye, M. Ten simple rules to make your computing more environmentally sustainable. PLoS Comput. Biol. 17 , e1009324 (2021).

Valeye, F. Tracarbon. GitHub https://github.com/fvaleye/tracarbon (2022).

Trébaol, T. CUMULATOR—a Tool to Quantify and Report the Carbon Footprint of Machine Learning Computations and Communication in Academia and Healthcare (École Polytechnique Fédérale de Lausanne, 2020).

Cloud Carbon Footprint —An open source tool to measure and analyze cloud carbon emissions. https://www.cloudcarbonfootprint.org/ (2023).

Children and Digital Dumpsites: E-Waste Exposure and Child Health (World Health Organization, 2021); https://apps.who.int/iris/handle/10665/341718

Sepúlveda, A. et al. A review of the environmental fate and effects of hazardous substances released from electrical and electronic equipments during recycling: examples from China and India. Environ. Impact Assess. Rev. 30 , 28–41 (2010).

Franssen, T. & Johnson, H. The Implementation of LEAF at Public Research Organisations in the Biomedical Sciences: a Report on Organisational Dynamics (Zenodo, 2021); https://doi.org/10.5281/ZENODO.5771609

DHCC Information, Measurement and Practice Action Group. A Researcher Guide to Writing a Climate Justice Oriented Data Management Plan (Zenodo, 2022); https://doi.org/10.5281/ZENODO.6451499

UKRI. UKRI Grant Terms and Conditions (UKRI, 2022); https://www.ukri.org/wp-content/uploads/2022/04/UKRI-050422-FullEconomicCostingGrantTermsConditionsGuidance-Apr2022.pdf

Carbon Offset Policy for Travel—Grant Funding (Wellcome, 2021); https://wellcome.org/grant-funding/guidance/carbon-offset-policy-travel

Juckes, M., Pascoe, C., Woodward, L., Vanderbauwhede, W. & Weiland, M. Interim Report: Complexity, Challenges and Opportunities for Carbon Neutral Digital Research (Zenodo, 2022); https://zenodo.org/record/7016952

Thakur, M. et al. EMBL’s European Bioinformatics Institute (EMBL-EBI) in 2022. Nucleic Acids Res. 51, D9–D17 (2022).

Varadi, M. et al. AlphaFold Protein Structure Database: massively expanding the structural coverage of protein-sequence space with high-accuracy models. Nucleic Acids Res. 50 , D439–D444 (2022).

Wilkinson, M. D. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci. Data 3 , 160018 (2016).

Bichsel, J. Research Computing : The Enabling Role of Information Technology (Educause, 2012); https://library.educause.edu/resources/2012/11/research-computing-the-enabling-role-of-information-technology

Creutzig, F. et al. Digitalization and the Anthropocene. Annu. Rev. Environ. Resour. 47 , 479–509 (2022).

Yang, L. & Chen, J. A comprehensive evaluation of microbial differential abundance analysis methods: current status and potential solutions. Microbiome 10 , 130 (2022).

Qin, Y. et al. Combined effects of host genetics and diet on human gut microbiota and incident disease in a single population cohort. Nat. Genet. 54 , 134–142 (2022).

Lannelongue, L. & Inouye, M. Inference Mechanisms and Prediction of Protein-Protein Interactions . Preprint at http://biorxiv.org/lookup/doi/10.1101/2022.02.07.479382 (2022).

Dubois, F. The Vehicle Routing Problem for Flash Floods Relief Operations (Univ. Paul Sabatier, 2022).

Thiele, L., Cranmer, M., Coulton, W., Ho, S. & Spergel, D. N. Predicting the thermal Sunyaev-Zel'dovich field using modular and equivariant set-based neural networks. Preprint at https://arxiv.org/abs/2203.00026 (2022).

Armstrong, G. et al. Efficient computation of Faith’s phylogenetic diversity with applications in characterizing microbiomes. Genome Res. 31 , 2131–2137 (2021).

Mbatchou, J. et al. Computationally efficient whole-genome regression for quantitative and binary traits. Nat. Genet. 53 , 1097–1103 (2021).

Estien, C. O., Myron, E. B., Oldfield, C. A., Alwin, A. & Ecological Society of America Student Section. Virtual scientific conferences: benefits and how to support underrepresented students. Bull. Ecol. Soc. Am. 102, e01859 (2021).

University of Cambridge. Guidelines for Sustainable Business Travel (Univ. Cambridge, 2022); https://www.environment.admin.cam.ac.uk/files/guidelines_for_sustainable_business_travel_approved.pdf

Patterson, D. et al. Carbon emissions and large neural network training. Preprint at https://arxiv.org/abs/2104.10350 (2021).

Patterson, D. et al. The carbon footprint of machine learning training will plateau, then shrink. Computer 55 , 18–28 (2022).

Neuroimaging Pipeline Carbon Tracker Toolboxes (OHBM SEA-SIG, 2023); https://ohbm-environment.org/carbon-tracker-toolboxes/

Lannelongue, L. Green Algorithms for High Performance Computing (GitHub, 2022); https://github.com/Llannelongue/GreenAlgorithms4HPC

Carbon Footprint Reporting—Customer Carbon Footprint Tool (Amazon Web Services, 2023); https://aws.amazon.com/aws-cost-management/aws-customer-carbon-footprint-tool/

Lannelongue, L. & Inouye, M. Carbon footprint estimation for computational research. Nat. Rev. Methods Prim. 3 , 9 (2023).

Cutress, I. Why Intel Processors Draw More Power Than Expected : TDP and Turbo Explained (Anandtech, 2018); https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo

Efficiency. Google Data Centers https://www.google.com/about/datacenters/efficiency/

Uptime Institute Releases 2021 Global Data Center Survey (Facility Executive, 2021); https://facilityexecutive.com/2021/09/uptime-institute-releases-2021-global-data-center-survey/

Zoie, R. C., Mihaela, R. D. & Alexandru, S. An analysis of the power usage effectiveness metric in data centers. In Proc. 2017 5th International Symposium on Electrical and Electronics Engineering ( ISEEE ) 1–6 (IEEE, 2017); https://doi.org/10.1109/ISEEE.2017.8170650

Yuventi, J. & Mehdizadeh, R. A critical analysis of power usage effectiveness and its use in communicating data center energy consumption. Energy Build. 64 , 90–94 (2013).

Avelar, V., Azevedo, D. & French, A. (eds) PUE: A Comprehensive Examination of the Metric White Paper No. 49 (Green Grid, 2012).

Power Usage Effectiveness (PUE) (ISO/IEC); https://www.iso.org/obp/ui/#iso:std:iso-iec:30134:-2:ed-1:v1:en

2022 Country Specific Electricity Grid Greenhouse Gas Emission Factors (Carbon Footprint, 2023); https://www.carbonfootprint.com/docs/2023_02_emissions_factors_sources_for_2022_electricity_v10.pdf

Kamiya, G. The Carbon Footprint of Streaming Video: Fact-Checking the Headlines—Analysis (IEA, 2020); https://www.iea.org/commentaries/the-carbon-footprint-of-streaming-video-fact-checking-the-headlines

Rot, A., Chrobak, P. & Sobinska, M. Optimisation of the use of IT infrastructure resources in an institution of higher education: a case study. In Proc. 2019 9th International Conference on Advanced Computer Information Technologies ( ACIT ) 171–174 (IEEE, 2019); https://doi.org/10.1109/ACITT.2019.8780018

Clément, L.-P. P.-V. P., Jacquemotte, Q. E. S. & Hilty, L. M. Sources of variation in life cycle assessments of smartphones and tablet computers. Environ. Impact Assess. Rev. 84 , 106416 (2020).

Kamal, K. Y. The silicon age: trends in semiconductor devices industry. JESTR 15 , 110–115 (2022).

Gåvertsson, I., Milios, L. & Dalhammar, C. Quality labelling for re-used ICT equipment to support consumer choice in the circular economy. J. Consum. Policy 43 , 353–377 (2020).

Intel Corporate Responsibility Report 2021–2022 (Intel, 2022); https://csrreportbuilder.intel.com/pdfbuilder/pdfs/CSR-2021-22-Full-Report.pdf

TSMC Task Force on Climate-related Financial Disclosures (TSMC, 2020); https://esg.tsmc.com/download/file/TSMC_TCFD_Report_E.pdf

Bycroft, C. et al. The UK Biobank resource with deep phenotyping and genomic data. Nature 562 , 203–209 (2018).

UK Biobank Creates Cloud-Based Health Data Analysis Platform to Unleash the Imaginations of the World’s Best Scientific Minds (UK Biobank, 2020); https://www.ukbiobank.ac.uk/learn-more-about-uk-biobank/news/uk-biobank-creates-cloud-based-health-data-analysis-platform-to-unleash-the-imaginations-of-the-world-s-best-scientific-minds

Jackson, K. A picture is worth a petabyte of data. Science Node (5 June 2019).

Nguyen, B. H. et al. Architecting datacenters for sustainability: greener data storage using synthetic DNA. In Proc. Electronics Goes Green 2020 (ed. Schneider-Ramelow, F.) 105 (Fraunhofer, 2020).

Seagate Product Sustainability (Seagate, 2023); https://www.seagate.com/gb/en/global-citizenship/product-sustainability/

Madden, S. & Pollard, C. Principles and Best Practices for Trusted Research Environments (NHS England, 2021); https://transform.england.nhs.uk/blogs/principles-and-practice-for-trusted-research-environments/

Jones, K. H., Ford, D. V., Thompson, S. & Lyons, R. A profile of the SAIL Databank on the UK secure research platform. Int. J. Popul. Data Sci. 4 , 1134 (2020).


About the Secure Research Service (Office for National Statistics); https://www.ons.gov.uk/aboutus/whatwedo/statistics/requestingstatistics/secureresearchservice/aboutthesecureresearchservice

Shehabi, A. et al. United States Data Center Energy Usage Report Report no. LBNL-1005775, 1372902 (Office of Scientific and Technical Information, 2016); http://www.osti.gov/servlets/purl/1372902/

Masanet, E., Shehabi, A., Lei, N., Smith, S. & Koomey, J. Recalibrating global data center energy-use estimates. Science 367 , 984–986 (2020).

Caplan, T. Help Us Advance Environmentally Sustainable Research (Wellcome, 2022); https://medium.com/wellcome-data/help-us-advance-environmentally-sustainable-research-3c11fe2a8298

Choueiry, G. Programming Languages Popularity in 12,086 Research Papers (Quantifying Health, 2023); https://quantifyinghealth.com/programming-languages-popularity-in-research/

Pereira, R. et al. Ranking programming languages by energy efficiency. Sci. Comput. Program. 205 , 102609 (2021).

Lin, Y. & Danielsson, J. Choosing a Numerical Programming Language for Economic Research: Julia, MATLAB, Python or R (Centre for Economic Policy Research, 2022); https://cepr.org/voxeu/columns/choosing-numerical-programming-language-economic-research-julia-matlab-python-or-r

Appuswamy, R., Olma, M. & Ailamaki, A. Scaling the memory power wall with DRAM-aware data management. In Proc. 11th International Workshop on Data Management on New Hardware 1–9 (ACM, 2015); https://doi.org/10.1145/2771937.2771947

Guo, B., Yu, J., Yang, D., Leng, H. & Liao, B. Energy-efficient database systems: a systematic survey. ACM Comput. Surv. 55 , 111 (2022).

Karyakin, A. & Salem, K. An analysis of memory power consumption in database systems. In Proc. 13th International Workshop on Data Management on New Hardware—DAMON ’ 17 1–9 (ACM Press, 2017); https://doi.org/10.1145/3076113.3076117

Karyakin, A. & Salem, K. DimmStore: memory power optimization for database systems. Proc. VLDB Endow. 12 , 1499–1512 (2019).

Caset, F., Boussauw, K. & Storme, T. Meet & fly: sustainable transport academics and the elephant in the room. J. Transp. Geogr. 70 , 64–67 (2018).

Govaart, G. H., Hofmann, S. M. & Medawar, E. The sustainability argument for open science. Collabra Psychol. 8 , 35903 (2022).

Cockrell, H. C. et al. Environmental impact of telehealth use for pediatric surgery. J. Pediatr. Surg. 57 , 865–869 (2022).

Alshqaqeeq, F., McGuire, C., Overcash, M., Ali, K. & Twomey, J. Choosing radiology imaging modalities to meet patient needs with lower environmental impact. Resour. Conserv. Recycl. 155 , 104657 (2020).

Sustainability Annual Report 2020–2021 (NHS, 2021); https://digital.nhs.uk/about-nhs-digital/corporate-information-and-documents/sustainability/sustainability-reports/sustainability-annual-report-2020-21

UNESCO Recommendation on Open Science (UNESCO, 2021); https://en.unesco.org/science-sustainable-future/open-science/recommendation

Samuel, G. & Richie, C. Reimagining research ethics to include environmental sustainability: a principled approach, including a case study of data-driven health research. J. Med. Ethics https://doi.org/10.1136/jme-2022-108489 (2022).


Acknowledgements

L.L. was supported by the University of Cambridge MRC DTP (MR/S502443/1) and the BHF program grant (RG/18/13/33946). M.I. was supported by the Munz Chair of Cardiovascular Prediction and Prevention and the NIHR Cambridge Biomedical Research Centre (BRC-1215-20014; NIHR203312). M.I. was also supported by the UK Economic and Social Research 878 Council (ES/T013192/1). This work was supported by core funding from the British Heart Foundation (RG/13/13/30194; RG/18/13/33946) and the NIHR Cambridge Biomedical Research Centre (BRC-1215-20014; NIHR203312). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. This work was also supported by Health Data Research UK, which is funded by the UK Medical Research Council, Engineering and Physical Sciences Research Council, Economic and Social Research Council, Department of Health and Social Care (England), Chief Scientist Office of the Scottish Government Health and Social Care Directorates, Health and Social Care Research and Development Division (Welsh Government), Public Health Agency (Northern Ireland) and the British Heart Foundation and Wellcome.

Author information

Authors and Affiliations

Cambridge Baker Systems Genomics Initiative, Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK

Loïc Lannelongue & Michael Inouye

British Heart Foundation Cardiovascular Epidemiology Unit, Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK

Victor Phillip Dahdaleh Heart and Lung Research Institute, University of Cambridge, Cambridge, UK

Health Data Research UK Cambridge, Wellcome Genome Campus and University of Cambridge, Cambridge, UK

Health Data Research (HDR) UK, London, UK

Hans-Erik G. Aronson, Andrew D. Morris & Gerry Reilly

European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Wellcome Genome Campus, Hinxton, UK

Alex Bateman, Ewan Birney & Johanna McEntyre

Wellcome Trust, London, UK

Talia Caplan

RAL Space, Science and Technology Facilities Council, Harwell Campus, Didcot, UK

Martin Juckes

Cambridge Baker Systems Genomics Initiative, Baker Heart and Diabetes Institute, Melbourne, Victoria, Australia

Michael Inouye

British Heart Foundation Centre of Research Excellence, University of Cambridge, Cambridge, UK

The Alan Turing Institute, London, UK


Contributions

L.L. conceived and coordinated the manuscript. M.I. organized and edited the manuscript. All authors contributed to the writing and revision of the manuscript.

Corresponding author

Correspondence to Loïc Lannelongue .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Computational Science thanks Bernabe Dorronsoro and Kirk Cameron for their contribution to the peer review of this work. Primary Handling Editors: Kaitlin McCardle and Ananya Rastogi, in collaboration with the Nature Computational Science team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Lannelongue, L., Aronson, H.-E. G., Bateman, A. et al. GREENER principles for environmentally sustainable computational science. Nat. Comput. Sci. 3, 514–521 (2023). https://doi.org/10.1038/s43588-023-00461-y


Received : 06 November 2022

Accepted : 09 May 2023

Published : 26 June 2023

Issue Date : June 2023

DOI : https://doi.org/10.1038/s43588-023-00461-y






Shots - Health News


The Science of Siblings

Gay people often have older brothers. Why, and does it matter?

Selena Simmons-Duffin


Credit: Lily Padula for NPR

The Science of Siblings is a new series exploring the ways our siblings can influence us, from our money and our mental health all the way down to our very molecules. We'll be sharing these stories over the next several weeks.

This is something I learned years ago through gay bar chatter: Gay people are often the youngest kids in their families. I liked the idea right away — as a gay youngest sibling, it made me feel like there was a statistical order to things and I fit neatly into that order.

When I started to report on the science behind it, I learned it's true: There is a well-documented correlation between having older siblings (older brothers, specifically) and a person's chance of being gay. But parts of the story also struck me as strange and dark. I thought of We the Animals, Justin Torres' haunting semi-autobiographical novel about three brothers — the youngest of whom is queer — growing up in New York state. So I called Torres to get his take on the idea.


Torres' first reaction was to find it considerably less appealing than I did. This makes sense — his latest novel, Blackouts, won a National Book Award last year, and it grapples with the sinister history of how scientists have studied sexuality. "My novel is interested in the pre-Kinsey sexology studies, specifically this one called Sex Variants," he told me. "It's really informed by eugenics. They were looking for the cause of homosexuality in the body in order to treat it or cure it or get rid of it."

That's why, when he saw my inquiry about a statistical finding that connects sexuality and birth order, he was wary. "To be frank, I find these kinds of studies that're looking for something rooted in the body to explain sexuality to be kind of bunk. I think they rely on a really binary understanding of sexuality itself," he said.

"That's fair," I conceded. But this connection between queerness and older brothers has been found so many times in so many places that one researcher told me it's "a kind of truth" in the science of sexuality.

Rooted in a dark past

The first research on this topic did indeed begin in the 1940s and '50s, during that era of investigations into what causes homosexuality, to be able to cure it. At the time, the queer people whom scientists were studying were living in a world where this facet of their identity was dangerous. Plus, the studies themselves didn't find much, says Jan Kabátek, a senior research fellow at the University of Melbourne.

"Most of it fell flat," he told me. "But there is an exception to this, and that is the finding that men, specifically, who exhibit attraction to the same sex are likely to have more older brothers than other types of siblings."


In the 1990s, this was dubbed the "fraternal birth order effect." In the years since, it has been found again and again, all over the world.

"This pattern has been documented around Canada and the United States, but it goes well beyond that," says Scott Semenyna, a psychology professor at Stetson University. "There's been now many confirmations that this pattern exists in countries like Samoa. It exists in southern Mexico. It exists in places like Turkey and Brazil."

Huge study, consistent findings

An impressive recent study established that this pattern held up in an analysis of a huge sample — over 9 million people from the Netherlands. It confirmed all those earlier studies and added a twist.

"Interestingly enough — and this is quite different from what has been done before — we also showed that the same association manifests for women," explains Kabátek, one of the study's authors. Women who were in same-sex marriages were also more likely to have older brothers than other types of siblings.

At baseline, the chance that someone will be gay is pretty small. "Somewhere around 2 to 3% — we can call it 2% just for the sake of simplicity," Semenyna says. "The fraternal birth order effect shows that you're going to run into about a 33% increase in the probability of, like, male same-sex attraction for every older brother that you have."

The effect is cumulative: The more older brothers someone has, the bigger it is. If you have one older brother, your probability of being gay nudges up to about 2.6%. "And then that probability would increase another 33% if there was a second older brother, to about 3.5%," Semenyna says.

If you have five older brothers, your chance of being gay is about 8% — so, four times the baseline probability.
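The compounding arithmetic above can be sketched in a few lines of Python, using the article's figures of a roughly 2% baseline and a roughly 33% relative increase per older brother. The function name and parameters are illustrative, not taken from any study's code; small differences from the article's quoted percentages (for example, 2.7% versus "about 2.6%" for one older brother) come down to rounding.

```python
def p_same_sex_attraction(older_brothers: int,
                          baseline: float = 0.02,
                          increase: float = 0.33) -> float:
    """Compound a ~33% relative increase per older brother onto a ~2% baseline."""
    return baseline * (1 + increase) ** older_brothers

# Probability (in %) for 0, 1, 2 and 5 older brothers.
for n in (0, 1, 2, 5):
    print(f"{n} older brothers: {100 * p_same_sex_attraction(n):.1f}%")
```

With five older brothers the compounded probability is about 8.3%, roughly four times the baseline, matching the article's "about 8%" figure.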


The author, Selena Simmons-Duffin, at age 3, with her brother, David Simmons-Duffin, at age 5. (Photo: the Simmons-Duffin family)

Still, even 8% is pretty small. "The vast majority of people who have a lot of older brothers are still going to come out opposite-sex attracted," Semenyna says. Also, plenty of gay people have no brothers at all, or they're the oldest in their families. Having older brothers is definitely not the only influence on a person's sexuality.

"But just the fact that we are observing effects that are so strong, relatively speaking, implies that there's a good chance that there is, at least partially, some biological mechanism that is driving these associations," Kabátek says.

A hypothesis, but no definitive mechanism

For decades, the leading candidate for that biological mechanism has been the "maternal immune hypothesis," Semenyna explains. "The basic version of this hypothesis is that when a male fetus is developing, the Y chromosome of the male produces proteins that are going to be recognized as foreign by the mother's immune system and it forms somewhat of an immune response to those proteins."

That immune response has some effect on the development of subsequent male fetuses, Semenyna says. The plausibility of this hypothesis was bolstered by a 2017 study that found "that mothers of gay sons have more of these antibodies that target these male-specific proteins than mothers of sons who are not gay or mothers who have no sons whatsoever," he says.

But now that Kabátek's study of the Dutch population has found that this pattern was present among women in same-sex marriages as well, there are new questions about whether this hypothesis is correct.

"One option is that the immune hypothesis works for both men and women," Kabátek says. "Of course, there can be also other explanations. It's for prospective research to make this clearer."

Fun to think about, but concerning too

In a way, I tell Justin Torres, this effect seems simple and fun to me. It's a concrete statistical finding, documented all over the world, and there's an intriguing hypothesis about why it may happen biologically. But darker undercurrents in all of it worry me, like raising a dangerous idea that becoming gay in the womb is the only version of gayness that is real — or a repackaged version of the old idea that mothers are to "blame."


"It is the undercurrents that worry me immensely," he responds. "I remember when I was a kid — I have this memory of watching daytime television. I must have been staying home from school sick in the late '80s or early '90s. The host polled the audience and said, 'If there was a test [during pregnancy] and you could know if your child was gay, would you abort?' I remember being so horrified and disturbed watching all those hands go up in the audience — just feeling so hated. At that young age, I knew this thing about myself, even if I wasn't ready to admit it."

Even if tolerance for queer people in American society has grown a lot since then, he says, "I think that tolerance waxes and wanes, and I worry about that line of thinking."

At the same time, he agrees that the idea of a connection with gay people being the youngest kids in their families is kind of hilarious. "One thing that pops into my mind is, like, maybe if you're just surrounded by a lot of men, you either choose or don't choose men, right?" he laughs.

Essentially, in his view, it's fun to think about, but probably not deeper than that.

"As a humanist, I just don't know why we need to look for explanations for something as complex and joyous and weird as sexuality," Torres says.

Then again, scientists are unlikely to be able to resist that mysterious, weird complexity. Even if the joy and self-expression and community and so many other parts of queerness and sexuality will always be more than statistics can explain.


How Cloud Seeding Works and Why It’s Wrongly Blamed for Floods From Dubai to California

In a place as dry as the desert city of Dubai, whenever they can get rain, they’ll take it.

United Arab Emirates authorities will often even try to make it rain—as they did earlier this week, when the National Center of Meteorology dispatched planes to inject chemicals into clouds in hopes of coaxing out some showers.

But this time they got much more than they wanted. Dubai faced torrential downpours on Tuesday, with flooding shutting down much of the city, including schools and its major airport. At least one man was killed when his car was swept away, as were at least 18 others in neighboring Oman, including a bus full of schoolchildren.

The UAE government media office said it was the heaviest rainfall recorded in 75 years and called it “an exceptional event.” More than a typical year’s worth of water was dumped on the country in a single day.

Now, many people are pointing a finger at the “cloud seeding” operations preceding the precipitation.

“Do you think the Dubai floods might have something to do with this?” popular social media account Wide Awake Media asked on X, alongside a clip of a news report on the UAE’s weather modification program.

But experts say that while cloud seeding may have enhanced the rainfall, pinning such a devastating downpour on it is misguided.

“It is very unlikely that cloud seeding would cause a flood,” Roslyn Prinsley, the head of disaster solutions at the Australian National University Institute for Climate, Energy and Disaster Solutions, tells TIME, describing such claims as “conspiracy theories.”

It’s not the first time cloud seeding has been blamed for floods—in Dubai and around the world. In February, social media users charged officials working on a cloud seeding pilot program in California with causing storms that hit the state, despite the technology not even being used before the storms in question. And in Australia in 2022, as the nation down under experienced record rainfall, social media users recirculated an old news clip that questioned if there was a link between cloud seeding and flooding—to which fact-checkers answered: there isn’t.

Here’s what to know about cloud seeding, how and whether it even works, and what scientists say people should actually be worried about.

How does cloud seeding work?

Cloud seeding basically works by artificially recreating the process by which rain and snow naturally form: in ordinary clouds, microscopic water droplets condense around atmospheric aerosols such as dust, pollen, or salt from the sea. When enough water converges around these nuclei, ice crystals form and grow heavy enough to fall.

Clouds are seeded by implanting particles, most commonly silver iodide, in and around selected clouds to act as nuclei and trigger the precipitation process; the particles are typically delivered by specially equipped aircraft, though ground-based generators are also used.

Does cloud seeding even work?

Since the futuristic-sounding weather modification technique was introduced in the 1940s, it has been used regularly across the world, from the UAE to China to the United States, for a wide range of purposes. Mostly employed by governments grappling with drought, cloud seeding has played a part in some of the biggest events in history, from clearing urban pollution and ensuring blue skies at the 2008 Beijing Olympics, to staving off Moscow-bound radioactive clouds in the wake of the nuclear disaster at Chernobyl, to hampering the movement of U.S. enemies during the war in Vietnam. (Weather modification in warfare has since been banned by the U.N.)

For decades, a rain-scarce UAE has invested heavily in cloud seeding, including granting permanent residency to experts, and funding research programs to better identify the seedability of clouds.

But the science on just how effective cloud seeding is remains inconclusive. In 2003, the U.S. National Research Council concluded that “there still is no convincing scientific proof” of its efficacy at the time. A landmark 2020 study, however, found that cloud seeding does work—but researchers are clear about its limitations.

UAE meteorological officials say that their cloud seeding operations can increase rainfall by 10-30%, while Californian authorities’ estimates for their own program sit at 5-10%. The Desert Research Institute (DRI), the state of Nevada’s research group, says cloud seeding can increase seasonal precipitation by about 10%, while the World Meteorological Organization assessed in 2019 that the impact of cloud seeding ranges from next to nothing to 20%. And success in producing rain depends significantly on atmospheric conditions such as wind and cloud temperatures.

That’s why experts agree that cloud seeding tends to get a bad rap from the public. Its impact is often overstated, and while it can enhance rainfall, other natural and unnatural factors play a much greater role in causing floods.

Are there any concerns about cloud seeding?

A number of myths are associated with cloud seeding, such as that it causes what’s known as “chemtrails,” cloud-like streaks of white in the sky. DRI says those are actually “jet contrails, and they are the aviation equivalent of visible plumes of steamy breath on a cold morning.” They have “no connection with cloud-seeding activities.”

But there are other reasons for skepticism about cloud seeding.

Critics argue that seeding clouds in one region may simply deprive another of rain, as the clouds will unleash precipitation earlier than they otherwise would. (Iran has for years accused its neighbors of “stealing their rain.”)

Others have expressed health concerns about the chemicals used to seed clouds. Silver iodide, the most common seeding agent, may be toxic to animals, though others insist it is safe.

In a publication for the Bulletin of Atomic Scientists, Laura Kuhl, a public policy professor at Northeastern University, argues that cloud seeding may do “more harm than good” due to these uncertainties and because, given its limited effectiveness, it promotes a sense of “techno-optimism” that “can obscure deeper structural drivers of vulnerability like unsustainable water use and unequal distribution of access to water.”

What’s to blame for floods?

The severity of the recent flooding in Dubai may be due in large part to the fact that the perennially dry country hasn’t developed effective drainage infrastructure to handle intense rainfall. But experts note that a major driver of such extreme weather events is climate change: warmer air can hold more water, which leads to heavier rainfall and floods in some areas.

Prinsley says that when it comes to dealing with global warming and increasingly destructive weather phenomena, people should be more concerned about human activities that “seed” the atmosphere with greenhouse gases than with cloud seeding.

“Climate change on top of natural weather and climate processes is the cause of much of the extreme weather that we are seeing across the world. Cloud seeding is used to make recalcitrant clouds produce some rain,” she says. “The thunderstorms themselves are much more likely to have caused the extreme flooding in Dubai due to climate change-fuelled intense rainfall—as is happening across the world.”


FAA now requires reentry license to prevent spacecraft getting stuck up there

If what comes up must come down, you’ll need a license for that.

By Harri Weber | Published Apr 22, 2024 3:25 PM EDT


What happens if you design a spacecraft to survive reentry, but launch without a green light from regulators to bring it back down? As we saw with Varda Space Industries, which fired a capsule into orbit last spring to make stuff in zero gravity, you might have to park in orbit until your Federal Aviation Administration paperwork is complete.

In a new April 17 notice effective immediately, the FAA seems to be indicating that it’s looking to avoid repeats of the Varda saga, which successfully landed its capsule in Utah back in February after a roughly seven-month delay. The company aimed to grow Ritonavir crystals in space, taking advantage of the environment to potentially improve the efficacy of the HIV antiviral drug.


Varda Space Industries’ spacecraft, W-1, successfully landed at the Utah Test and Training Range on February 21, 2024. This marks the first time a commercial company has landed a spacecraft on United States soil. Credit: Varda Space Industries.

Without citing the incident directly, the agency said that it won’t allow “reentry vehicles” to launch without a license to return. In other words, if a company plans to bring its vehicle back, it can’t send one into space in the first place unless the FAA has preemptively deemed its reentry plans safe. The agency said it analyzes the impact vehicles may have on public health, property, and national security before issuing reentry licenses.

Without pre-approval, the FAA argues, critical systems could fail or the vehicle might run out of propellant or power before regulators and reentry operators get all their ducks in a row. The agency says it reviews numerous details self-disclosed by reentry operators, including the payload’s weight, the amount of hazardous materials present, the “explosive potential of payload materials,” and the planned reentry site.

Varda emphasized earlier this month that it received launch approval last year and complied with all regulatory requirements to do so. In a statement to SpaceNews , FAA associate administrator Kelvin Coleman said the agency learned “some lessons” when it approved the company to launch without a reentry license.  

As spaceflight evolves, returnable vehicles require special attention to mitigate collisions with people and property on the ground, the FAA said in its notice. “Unlike typical payloads designed to operate in outer space, a reentry vehicle has primary components that are designed to withstand reentry substantially intact and therefore have a near-guaranteed ground impact,” the FAA wrote. 

[ Related: Yes, a chunk of the space station crashed into a house in Florida ]



The Fermi Paradox and the Berserker Hypothesis: Exploring Cosmic Silence Through Science Fiction

In the realm of cosmic conundrums, the Fermi Paradox stands out: why, in a universe replete with billions of stars and planets, have we yet to find any signs of extraterrestrial intelligent life? The “berserker hypothesis,” a spine-chilling explanation rooted in science and popularized by science fiction, suggests a grim answer to this enduring mystery.

The concept’s moniker traces back to Fred Saberhagen’s “Berserker” series of novels. It paints a picture of a cosmos in which intelligent life forms are systematically eradicated by self-replicating probes, known as “berserkers.” These probes, initially intended to explore and report back, turn rogue and annihilate any civilizations they encounter. The hypothesis is a dark twist on the concept of von Neumann probes: machines capable of self-replication using local resources, which could in theory colonize the galaxy rapidly.

Diving into the technicalities, the berserker hypothesis operates as a potential solution to the Hart-Tipler conjecture, which posits the lack of detectable probes as evidence that no intelligent life exists outside our solar system. Instead, this hypothesis flips the script: the absence of such probes doesn’t point to a lack of life but rather to the possibility that these probes have become cosmic predators, leaving a trail of silence in their wake.

Astronomer David Brin’s chilling summation underscores the potential severity of the hypothesis: “It need only happen once for the results of this scenario to become the equilibrium conditions in the Galaxy…because all were killed shortly after discovering radio.” If these berserker probes exist and are as efficient as theorized, then humanity’s attempts at communication with extraterrestrial beings could be akin to lighting a beacon for our own destruction.

Despite its foundation in speculative thought, the theory isn’t without its scientific evaluations. Anders Sandberg and Stuart Armstrong from the Future of Humanity Institute speculated that, given the vastness of the universe and even a slow replication rate, these berserker probes—if they existed—would likely have already found and destroyed us. It’s both a chilling and somewhat reassuring analysis that treads the line between fiction and potential reality.
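The Sandberg and Armstrong point is, at heart, an exponential-growth calculation, and it can be sketched in a few lines of Python. All of the numbers below (the doubling time, probe cruise speed, and star count) are illustrative assumptions chosen to be deliberately pessimistic, not figures taken from their analysis:

```python
# Back-of-the-envelope sketch of the self-replicating-probe argument:
# even with a slow replication rate, exponential growth means a single
# probe's descendants can outnumber the galaxy's stars quickly, so the
# total time is dominated by physical transit, not by replication.

import math

STARS_IN_GALAXY = 1e11        # rough star count for the Milky Way
DOUBLING_TIME_YEARS = 10_000  # assumed: one probe builds one copy per 10,000 years
GALAXY_DIAMETER_LY = 100_000  # approximate diameter in light-years
PROBE_SPEED_C = 0.01          # assumed cruise speed: 1% of light speed

# Doublings needed for one probe's lineage to match the number of stars.
doublings = math.log2(STARS_IN_GALAXY)
replication_years = doublings * DOUBLING_TIME_YEARS

# Time to physically cross the galaxy at the assumed speed.
transit_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C

total_years = replication_years + transit_years
print(f"{doublings:.1f} doublings, ~{total_years / 1e6:.1f} million years")
# → 36.5 doublings, ~10.4 million years
```

Even under these slow assumptions, the whole process takes on the order of ten million years, a tiny fraction of the galaxy's roughly 13-billion-year history, which is why the absence of such probes is the puzzle the hypothesis tries to explain.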

Within the eclectic array of solutions to the Fermi Paradox, the berserker hypothesis stands out for its seamless blend of science fiction inspiration and scientific discourse. It connects with other notions such as the Great Filter, which suggests that life elsewhere in the universe is being systematically snuffed out before it can reach a space-faring stage, and the Dark Forest hypothesis, which posits that civilizations remain silent to avoid detection by such cosmic hunters.

