SEP home page

  • Table of Contents
  • Random Entry
  • Chronological
  • Editorial Information
  • About the SEP
  • Editorial Board
  • How to Cite the SEP
  • Special Characters
  • Advanced Tools
  • Support the SEP
  • PDFs for SEP Friends
  • Make a Donation
  • SEPIA for Libraries
  • Entry Contents

Bibliography

Academic tools.

  • Friends PDF Preview
  • Author and Citation Info
  • Back to Top

Scientific Method

Science is an enormously successful human enterprise. The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories. How these are carried out in detail can vary greatly, but characteristics like these have been looked to as a way of demarcating scientific activity from non-science, where only enterprises which employ some canonical form of scientific method or methods should be considered science (see also the entry on science and pseudo-science ). Others have questioned whether there is anything like a fixed toolkit of methods which is common across science and only science. Some reject privileging one view of method as part of rejecting broader views about the nature of science, such as naturalism (Dupré 2004); some reject any restriction in principle (pluralism).

Scientific method should be distinguished from the aims and products of science, such as knowledge, predictions, or control. Methods are the means by which those goals are achieved. Scientific method should also be distinguished from meta-methodology, which includes the values and justifications behind a particular characterization of scientific method (i.e., a methodology) — values such as objectivity, reproducibility, simplicity, or past successes. Methodological rules are proposed to govern method and it is a meta-methodological question whether methods obeying those rules satisfy given values. Finally, method is distinct, to some degree, from the detailed and contextual practices through which methods are implemented. The latter might range over: specific laboratory techniques; mathematical formalisms or other specialized languages used in descriptions and reasoning; technological or other material means; ways of communicating and sharing results, whether with other scientists or with the public at large; or the conventions, habits, enforced customs, and institutional controls over how and what science is carried out.

While it is important to recognize these distinctions, their boundaries are fuzzy. Hence, accounts of method cannot be entirely divorced from their methodological and meta-methodological motivations or justifications, Moreover, each aspect plays a crucial role in identifying methods. Disputes about method have therefore played out at the detail, rule, and meta-rule levels. Changes in beliefs about the certainty or fallibility of scientific knowledge, for instance (which is a meta-methodological consideration of what we can hope for methods to deliver), have meant different emphases on deductive and inductive reasoning, or on the relative importance attached to reasoning over observation (i.e., differences over particular methods.) Beliefs about the role of science in society will affect the place one gives to values in scientific method.

The issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist do we need to be about method? Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism, which considers the effectiveness of any methodological prescription to be so context sensitive as to render it not explanatory on its own. Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate. But the details of scientific practice vary with time and place, from institution to institution, across scientists and their subjects of investigation. How significant are the variations for understanding science and its success? How much can method be abstracted from practice? This entry describes some of the attempts to characterize scientific method or methods, as well as arguments for a more context-sensitive approach to methods embedded in actual scientific practices.

1. Overview and organizing themes

2. historical review: aristotle to mill, 3.1 logical constructionism and operationalism, 3.2. h-d as a logic of confirmation, 3.3. popper and falsificationism, 3.4 meta-methodology and the end of method, 4. statistical methods for hypothesis testing, 5.1 creative and exploratory practices.

  • 5.2 Computer methods and the ‘new ways’ of doing science

6.1 “The scientific method” in science education and as seen by scientists

6.2 privileged methods and ‘gold standards’, 6.3 scientific method in the court room, 6.4 deviating practices, 7. conclusion, other internet resources, related entries.

This entry could have been given the title Scientific Methods and gone on to fill volumes, or it could have been extremely short, consisting of a brief summary rejection of the idea that there is any such thing as a unique Scientific Method at all. Both unhappy prospects are due to the fact that scientific activity varies so much across disciplines, times, places, and scientists that any account which manages to unify it all will either consist of overwhelming descriptive detail, or trivial generalizations.

The choice of scope for the present entry is more optimistic, taking a cue from the recent movement in philosophy of science toward a greater attention to practice: to what scientists actually do. This “turn to practice” can be seen as the latest form of studies of methods in science, insofar as it represents an attempt at understanding scientific activity, but through accounts that are neither meant to be universal and unified, nor singular and narrowly descriptive. To some extent, different scientists at different times and places can be said to be using the same method even though, in practice, the details are different.

Whether the context in which methods are carried out is relevant, or to what extent, will depend largely on what one takes the aims of science to be and what one’s own aims are. For most of the history of scientific methodology the assumption has been that the most important output of science is knowledge and so the aim of methodology should be to discover those methods by which scientific knowledge is generated.

Science was seen to embody the most successful form of reasoning (but which form?) to the most certain knowledge claims (but how certain?) on the basis of systematically collected evidence (but what counts as evidence, and should the evidence of the senses take precedence, or rational insight?) Section 2 surveys some of the history, pointing to two major themes. One theme is seeking the right balance between observation and reasoning (and the attendant forms of reasoning which employ them); the other is how certain scientific knowledge is or can be.

Section 3 turns to 20 th century debates on scientific method. In the second half of the 20 th century the epistemic privilege of science faced several challenges and many philosophers of science abandoned the reconstruction of the logic of scientific method. Views changed significantly regarding which functions of science ought to be captured and why. For some, the success of science was better identified with social or cultural features. Historical and sociological turns in the philosophy of science were made, with a demand that greater attention be paid to the non-epistemic aspects of science, such as sociological, institutional, material, and political factors. Even outside of those movements there was an increased specialization in the philosophy of science, with more and more focus on specific fields within science. The combined upshot was very few philosophers arguing any longer for a grand unified methodology of science. Sections 3 and 4 surveys the main positions on scientific method in 20 th century philosophy of science, focusing on where they differ in their preference for confirmation or falsification or for waiving the idea of a special scientific method altogether.

In recent decades, attention has primarily been paid to scientific activities traditionally falling under the rubric of method, such as experimental design and general laboratory practice, the use of statistics, the construction and use of models and diagrams, interdisciplinary collaboration, and science communication. Sections 4–6 attempt to construct a map of the current domains of the study of methods in science.

As these sections illustrate, the question of method is still central to the discourse about science. Scientific method remains a topic for education, for science policy, and for scientists. It arises in the public domain where the demarcation or status of science is at issue. Some philosophers have recently returned, therefore, to the question of what it is that makes science a unique cultural product. This entry will close with some of these recent attempts at discerning and encapsulating the activities by which scientific knowledge is achieved.

Attempting a history of scientific method compounds the vast scope of the topic. This section briefly surveys the background to modern methodological debates. What can be called the classical view goes back to antiquity, and represents a point of departure for later divergences. [ 1 ]

We begin with a point made by Laudan (1968) in his historical survey of scientific method:

Perhaps the most serious inhibition to the emergence of the history of theories of scientific method as a respectable area of study has been the tendency to conflate it with the general history of epistemology, thereby assuming that the narrative categories and classificatory pigeon-holes applied to the latter are also basic to the former. (1968: 5)

To see knowledge about the natural world as falling under knowledge more generally is an understandable conflation. Histories of theories of method would naturally employ the same narrative categories and classificatory pigeon holes. An important theme of the history of epistemology, for example, is the unification of knowledge, a theme reflected in the question of the unification of method in science. Those who have identified differences in kinds of knowledge have often likewise identified different methods for achieving that kind of knowledge (see the entry on the unity of science ).

Different views on what is known, how it is known, and what can be known are connected. Plato distinguished the realms of things into the visible and the intelligible ( The Republic , 510a, in Cooper 1997). Only the latter, the Forms, could be objects of knowledge. The intelligible truths could be known with the certainty of geometry and deductive reasoning. What could be observed of the material world, however, was by definition imperfect and deceptive, not ideal. The Platonic way of knowledge therefore emphasized reasoning as a method, downplaying the importance of observation. Aristotle disagreed, locating the Forms in the natural world as the fundamental principles to be discovered through the inquiry into nature ( Metaphysics Z , in Barnes 1984).

Aristotle is recognized as giving the earliest systematic treatise on the nature of scientific inquiry in the western tradition, one which embraced observation and reasoning about the natural world. In the Prior and Posterior Analytics , Aristotle reflects first on the aims and then the methods of inquiry into nature. A number of features can be found which are still considered by most to be essential to science. For Aristotle, empiricism, careful observation (but passive observation, not controlled experiment), is the starting point. The aim is not merely recording of facts, though. For Aristotle, science ( epistêmê ) is a body of properly arranged knowledge or learning—the empirical facts, but also their ordering and display are of crucial importance. The aims of discovery, ordering, and display of facts partly determine the methods required of successful scientific inquiry. Also determinant is the nature of the knowledge being sought, and the explanatory causes proper to that kind of knowledge (see the discussion of the four causes in the entry on Aristotle on causality ).

In addition to careful observation, then, scientific method requires a logic as a system of reasoning for properly arranging, but also inferring beyond, what is known by observation. Methods of reasoning may include induction, prediction, or analogy, among others. Aristotle’s system (along with his catalogue of fallacious reasoning) was collected under the title the Organon . This title would be echoed in later works on scientific reasoning, such as Novum Organon by Francis Bacon, and Novum Organon Restorum by William Whewell (see below). In Aristotle’s Organon reasoning is divided primarily into two forms, a rough division which persists into modern times. The division, known most commonly today as deductive versus inductive method, appears in other eras and methodologies as analysis/​synthesis, non-ampliative/​ampliative, or even confirmation/​verification. The basic idea is there are two “directions” to proceed in our methods of inquiry: one away from what is observed, to the more fundamental, general, and encompassing principles; the other, from the fundamental and general to instances or implications of principles.

The basic aim and method of inquiry identified here can be seen as a theme running throughout the next two millennia of reflection on the correct way to seek after knowledge: carefully observe nature and then seek rules or principles which explain or predict its operation. The Aristotelian corpus provided the framework for a commentary tradition on scientific method independent of science itself (cosmos versus physics.) During the medieval period, figures such as Albertus Magnus (1206–1280), Thomas Aquinas (1225–1274), Robert Grosseteste (1175–1253), Roger Bacon (1214/1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1546), Giacomo Zabarella (1533–1589) all worked to clarify the kind of knowledge obtainable by observation and induction, the source of justification of induction, and best rules for its application. [ 2 ] Many of their contributions we now think of as essential to science (see also Laudan 1968). As Aristotle and Plato had employed a framework of reasoning either “to the forms” or “away from the forms”, medieval thinkers employed directions away from the phenomena or back to the phenomena. In analysis, a phenomena was examined to discover its basic explanatory principles; in synthesis, explanations of a phenomena were constructed from first principles.

During the Scientific Revolution these various strands of argument, experiment, and reason were forged into a dominant epistemic authority. The 16 th –18 th centuries were a period of not only dramatic advance in knowledge about the operation of the natural world—advances in mechanical, medical, biological, political, economic explanations—but also of self-awareness of the revolutionary changes taking place, and intense reflection on the source and legitimation of the method by which the advances were made. The struggle to establish the new authority included methodological moves. The Book of Nature, according to the metaphor of Galileo Galilei (1564–1642) or Francis Bacon (1561–1626), was written in the language of mathematics, of geometry and number. This motivated an emphasis on mathematical description and mechanical explanation as important aspects of scientific method. Through figures such as Henry More and Ralph Cudworth, a neo-Platonic emphasis on the importance of metaphysical reflection on nature behind appearances, particularly regarding the spiritual as a complement to the purely mechanical, remained an important methodological thread of the Scientific Revolution (see the entries on Cambridge platonists ; Boyle ; Henry More ; Galileo ).

In Novum Organum (1620), Bacon was critical of the Aristotelian method for leaping from particulars to universals too quickly. The syllogistic form of reasoning readily mixed those two types of propositions. Bacon aimed at the invention of new arts, principles, and directions. His method would be grounded in methodical collection of observations, coupled with correction of our senses (and particularly, directions for the avoidance of the Idols, as he called them, kinds of systematic errors to which naïve observers are prone.) The community of scientists could then climb, by a careful, gradual and unbroken ascent, to reliable general claims.

Bacon’s method has been criticized as impractical and too inflexible for the practicing scientist. Whewell would later criticize Bacon in his System of Logic for paying too little attention to the practices of scientists. It is hard to find convincing examples of Bacon’s method being put in to practice in the history of science, but there are a few who have been held up as real examples of 16 th century scientific, inductive method, even if not in the rigid Baconian mold: figures such as Robert Boyle (1627–1691) and William Harvey (1578–1657) (see the entry on Bacon ).

It is to Isaac Newton (1642–1727), however, that historians of science and methodologists have paid greatest attention. Given the enormous success of his Principia Mathematica and Opticks , this is understandable. The study of Newton’s method has had two main thrusts: the implicit method of the experiments and reasoning presented in the Opticks, and the explicit methodological rules given as the Rules for Philosophising (the Regulae) in Book III of the Principia . [ 3 ] Newton’s law of gravitation, the linchpin of his new cosmology, broke with explanatory conventions of natural philosophy, first for apparently proposing action at a distance, but more generally for not providing “true”, physical causes. The argument for his System of the World ( Principia , Book III) was based on phenomena, not reasoned first principles. This was viewed (mainly on the continent) as insufficient for proper natural philosophy. The Regulae counter this objection, re-defining the aims of natural philosophy by re-defining the method natural philosophers should follow. (See the entry on Newton’s philosophy .)

To his list of methodological prescriptions should be added Newton’s famous phrase “ hypotheses non fingo ” (commonly translated as “I frame no hypotheses”.) The scientist was not to invent systems but infer explanations from observations, as Bacon had advocated. This would come to be known as inductivism. In the century after Newton, significant clarifications of the Newtonian method were made. Colin Maclaurin (1698–1746), for instance, reconstructed the essential structure of the method as having complementary analysis and synthesis phases, one proceeding away from the phenomena in generalization, the other from the general propositions to derive explanations of new phenomena. Denis Diderot (1713–1784) and editors of the Encyclopédie did much to consolidate and popularize Newtonianism, as did Francesco Algarotti (1721–1764). The emphasis was often the same, as much on the character of the scientist as on their process, a character which is still commonly assumed. The scientist is humble in the face of nature, not beholden to dogma, obeys only his eyes, and follows the truth wherever it leads. It was certainly Voltaire (1694–1778) and du Chatelet (1706–1749) who were most influential in propagating the latter vision of the scientist and their craft, with Newton as hero. Scientific method became a revolutionary force of the Enlightenment. (See also the entries on Newton , Leibniz , Descartes , Boyle , Hume , enlightenment , as well as Shank 2008 for a historical overview.)

Not all 18 th century reflections on scientific method were so celebratory. Famous also are George Berkeley’s (1685–1753) attack on the mathematics of the new science, as well as the over-emphasis of Newtonians on observation; and David Hume’s (1711–1776) undermining of the warrant offered for scientific claims by inductive justification (see the entries on: George Berkeley ; David Hume ; Hume’s Newtonianism and Anti-Newtonianism ). Hume’s problem of induction motivated Immanuel Kant (1724–1804) to seek new foundations for empirical method, though as an epistemic reconstruction, not as any set of practical guidelines for scientists. Both Hume and Kant influenced the methodological reflections of the next century, such as the debate between Mill and Whewell over the certainty of inductive inferences in science.

The debate between John Stuart Mill (1806–1873) and William Whewell (1794–1866) has become the canonical methodological debate of the 19 th century. Although often characterized as a debate between inductivism and hypothetico-deductivism, the role of the two methods on each side is actually more complex. On the hypothetico-deductive account, scientists work to come up with hypotheses from which true observational consequences can be deduced—hence, hypothetico-deductive. Because Whewell emphasizes both hypotheses and deduction in his account of method, he can be seen as a convenient foil to the inductivism of Mill. However, equally if not more important to Whewell’s portrayal of scientific method is what he calls the “fundamental antithesis”. Knowledge is a product of the objective (what we see in the world around us) and subjective (the contributions of our mind to how we perceive and understand what we experience, which he called the Fundamental Ideas). Both elements are essential according to Whewell, and he was therefore critical of Kant for too much focus on the subjective, and John Locke (1632–1704) and Mill for too much focus on the senses. Whewell’s fundamental ideas can be discipline relative. An idea can be fundamental even if it is necessary for knowledge only within a given scientific discipline (e.g., chemical affinity for chemistry). This distinguishes fundamental ideas from the forms and categories of intuition of Kant. (See the entry on Whewell .)

Clarifying fundamental ideas would therefore be an essential part of scientific method and scientific progress. Whewell called this process “Discoverer’s Induction”. It was induction, following Bacon or Newton, but Whewell sought to revive Bacon’s account by emphasising the role of ideas in the clear and careful formulation of inductive hypotheses. Whewell’s induction is not merely the collecting of objective facts. The subjective plays a role through what Whewell calls the Colligation of Facts, a creative act of the scientist, the invention of a theory. A theory is then confirmed by testing, where more facts are brought under the theory, called the Consilience of Inductions. Whewell felt that this was the method by which the true laws of nature could be discovered: clarification of fundamental concepts, clever invention of explanations, and careful testing. Mill, in his critique of Whewell, and others who have cast Whewell as a fore-runner of the hypothetico-deductivist view, seem to have under-estimated the importance of this discovery phase in Whewell’s understanding of method (Snyder 1997a,b, 1999). Down-playing the discovery phase would come to characterize methodology of the early 20 th century (see section 3 ).

Mill, in his System of Logic , put forward a narrower view of induction as the essence of scientific method. For Mill, induction is the search first for regularities among events. Among those regularities, some will continue to hold for further observations, eventually gaining the status of laws. One can also look for regularities among the laws discovered in a domain, i.e., for a law of laws. Which “law law” will hold is time and discipline dependent and open to revision. One example is the Law of Universal Causation, and Mill put forward specific methods for identifying causes—now commonly known as Mill’s methods. These five methods look for circumstances which are common among the phenomena of interest, those which are absent when the phenomena are, or those for which both vary together. Mill’s methods are still seen as capturing basic intuitions about experimental methods for finding the relevant explanatory factors ( System of Logic (1843), see Mill entry). The methods advocated by Whewell and Mill, in the end, look similar. Both involve inductive generalization to covering laws. They differ dramatically, however, with respect to the necessity of the knowledge arrived at; that is, at the meta-methodological level (see the entries on Whewell and Mill entries).

3. Logic of method and critical responses

The quantum and relativistic revolutions in physics in the early 20 th century had a profound effect on methodology. Conceptual foundations of both theories were taken to show the defeasibility of even the most seemingly secure intuitions about space, time and bodies. Certainty of knowledge about the natural world was therefore recognized as unattainable. Instead a renewed empiricism was sought which rendered science fallible but still rationally justifiable.

Analyses of the reasoning of scientists emerged, according to which the aspects of scientific method which were of primary importance were the means of testing and confirming of theories. A distinction in methodology was made between the contexts of discovery and justification. The distinction could be used as a wedge between the particularities of where and how theories or hypotheses are arrived at, on the one hand, and the underlying reasoning scientists use (whether or not they are aware of it) when assessing theories and judging their adequacy on the basis of the available evidence. By and large, for most of the 20 th century, philosophy of science focused on the second context, although philosophers differed on whether to focus on confirmation or refutation as well as on the many details of how confirmation or refutation could or could not be brought about. By the mid-20 th century these attempts at defining the method of justification and the context distinction itself came under pressure. During the same period, philosophy of science developed rapidly, and from section 4 this entry will therefore shift from a primarily historical treatment of the scientific method towards a primarily thematic one.

Advances in logic and probability held out promise of the possibility of elaborate reconstructions of scientific theories and empirical method, the best example being Rudolf Carnap’s The Logical Structure of the World (1928). Carnap attempted to show that a scientific theory could be reconstructed as a formal axiomatic system—that is, a logic. That system could refer to the world because some of its basic sentences could be interpreted as observations or operations which one could perform to test them. The rest of the theoretical system, including sentences using theoretical or unobservable terms (like electron or force) would then either be meaningful because they could be reduced to observations, or they had purely logical meanings (called analytic, like mathematical identities). This has been referred to as the verifiability criterion of meaning. According to the criterion, any statement not either analytic or verifiable was strictly meaningless. Although the view was endorsed by Carnap in 1928, he would later come to see it as too restrictive (Carnap 1956). Another familiar version of this idea is operationalism of Percy William Bridgman. In The Logic of Modern Physics (1927) Bridgman asserted that every physical concept could be defined in terms of the operations one would perform to verify the application of that concept. Making good on the operationalisation of a concept even as simple as length, however, can easily become enormously complex (for measuring very small lengths, for instance) or impractical (measuring large distances like light years.)

Carl Hempel’s (1950, 1951) criticisms of the verifiability criterion of meaning had enormous influence. He pointed out that universal generalizations, such as most scientific laws, were not strictly meaningful on the criterion. Verifiability and operationalism both seemed too restrictive to capture standard scientific aims and practice. The tenuous connection between these reconstructions and actual scientific practice was criticized in another way. In both approaches, scientific methods are instead recast in methodological roles. Measurements, for example, were looked to as ways of giving meanings to terms. The aim of the philosopher of science was not to understand the methods per se , but to use them to reconstruct theories, their meanings, and their relation to the world. When scientists perform these operations, however, they will not report that they are doing them to give meaning to terms in a formal axiomatic system. This disconnect between methodology and the details of actual scientific practice would seem to violate the empiricism the Logical Positivists and Bridgman were committed to. The view that methodology should correspond to practice (to some extent) has been called historicism, or intuitionism. We turn to these criticisms and responses in section 3.4 . [ 4 ]

Positivism also had to contend with the recognition that a purely inductivist approach, along the lines of Bacon-Newton-Mill, was untenable. There was no pure observation, for starters. All observation was theory laden. Theory is required to make any observation, therefore not all theory can be derived from observation alone. (See the entry on theory and observation in science .) Even granting an observational basis, Hume had already pointed out that one could not deductively justify inductive conclusions without begging the question by presuming the success of the inductive method. Likewise, positivist attempts at analyzing how a generalization can be confirmed by observations of its instances were subject to a number of criticisms. Goodman (1965) and Hempel (1965) both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below.

The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences. As noted in section 2 , this method had been advanced by Whewell in the 19 th century, as well as Nicod (1924) and others in the 20 th century. Often, Hempel’s (1966) description of the H-D method, illustrated by the case of Semmelweiss’ inferential procedures in establishing the cause of childbed fever, has been presented as a key account of H-D as well as a foil for criticism of the H-D account of confirmation (see, for example, Lipton’s (2004) discussion of inference to the best explanation; also the entry on confirmation ). Hempel described Semmelsweiss’ procedure as examining various hypotheses explaining the cause of childbed fever. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred. If the experiment showed the test implication to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true. The confirmation of a test implication does not verify a hypothesis, though Hempel did allow that “it provides at least some support, some corroboration or confirmation for it” (Hempel 1966: 8). The degree of this support then depends on the quantity, variety and precision of the supporting evidence.

Another approach that took off from the difficulties with inductive inference was Karl Popper’s critical rationalism or falsificationism (Popper 1959, 1963). Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis. The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. (This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality. )

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing—but without implying that this is also a measure for the probability that it is true.

Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction than demarcating science from metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo and proper science. Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.

A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications. This Popper saw done in pseudo-science where ad hoc theories appeared capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted. Hence, scientific hypotheses must be falsifiable. Not only must there exist some possible observation statement which could falsify the hypothesis or theory, were it observed, (Popper called these the hypothesis’ potential falsifiers) it is crucial to the Popperian scientific method that such falsifications be sincerely attempted on a regular basis.

The more potential falsifiers of a hypothesis, the more falsifiable it would be, and the more the hypothesis claimed. Conversely, hypotheses without falsifiers claimed very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiabililty of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus which originally did not agree with Newtonian predictions. The ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile the view by blurring the distinction between falsifiable and not falsifiable, and speaking instead of degrees of testability (Popper 1985: 41f.).

From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry.

Thomas Kuhn’s The Structure of Scientific Revolutions (1962) begins with a well-known shot across the bow for philosophers of science:

History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. (1962: 1)

The image Kuhn thought needed transforming was the a-historical, rational reconstruction sought by many of the Logical Positivists, though Carnap and other positivists were actually quite sympathetic to Kuhn’s views. (See the entry on the Vienna Circle .) Kuhn shares with other of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.

The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place. Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. Method in this normal phase operates within a disciplinary matrix (Kuhn’s later concept of a paradigm) which includes standards for problem solving, and defines the range of problems to which the method should be applied. An important part of a disciplinary matrix is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility.

An important by-product of normal science is the accumulation of puzzles which cannot be solved with resources of the current paradigm. Once accumulation of these anomalies has reached some critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time or place

Feyerabend also identified the aims of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend 1988). His arguments are grounded in re-examining accepted “myths” about the history of science. Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration. Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks then they are given credit for. As a consequence, the only rule that could provide what he took to be sufficient freedom was the vacuous “anything goes”. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend 1978).

An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards who rejected the methodology of providing philosophical accounts for the rational development of science and sociological accounts of the irrational mistakes. Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical in explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors (see, e.g., Barnes and Bloor 1982, Bloor 1991). Movements in the Sociology of Science, like the Strong Programme, or in the social dimensions and causes of knowledge more generally led to extended and close examination of detailed case studies in contemporary science and its history. (See the entries on the social dimensions of scientific knowledge and social epistemology .) Well-known examinations by Latour and Woolgar (1979/1986), Knorr-Cetina (1981), Pickering (1984), Shapin and Schaffer (1985) seem to bear out that it was social ideologies (on a macro-scale) or individual interactions and circumstances (on a micro-scale) which were the primary causal factors in determining which beliefs gained the status of scientific knowledge. As they saw it therefore, explanatory appeals to scientific method were not empirically grounded.

A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2000s, a number of scientists attempting to replicate the results of published experiments could not do so. There may be close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method. (See the entry on reproducibility of scientific results .)

By the close of the 20 th century the search for the scientific method was flagging. Nola and Sankey (2000b) could introduce their volume on method by remarking that “For some, the whole idea of a theory of scientific method is yester-year’s debate …”.

Despite the many difficulties that philosophers encountered in trying to providing a clear methodology of conformation (or refutation), still important progress has been made on understanding how observation can provide evidence for a given theory. Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references.

Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19 th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20 th century and in to the present. Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations such as the Method of Least Squares developed by Legendre and Gauss in the early 19 th century, criteria for the rejection of outliers proposed by Peirce by the mid-19 th century, and the significance tests developed by Gosset (a.k.a. “Student”), Fisher, Neyman & Pearson and others in the 1920s and 1930s (see, e.g., Swijtink 1987 for a brief historical overview; and also the entry on C.S. Peirce ).

These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or if it should be seen as a decision between different courses of actions that also involved a value component. This led to a major controversy among Fisher on the one side and Neyman and Pearson on the other (see especially Fisher 1955, Neyman 1956 and Pearson 1955, and for analyses of the controversy, e.g., Howie 2002, Marks 2000, Lenhard 2006). On Fisher’s view, hypothesis testing was a methodology for when to accept or reject a statistical hypothesis, namely that a hypothesis should be rejected by evidence if this evidence would be unlikely relative to other possible outcomes, given the hypothesis were true. In contrast, on Neyman and Pearson’s view, the consequence of error also had to play a role when deciding between hypotheses. Introducing the distinction between the error of rejecting a true hypothesis (type I error) and accepting a false hypothesis (type II error), they argued that it depends on the consequences of the error to decide whether it is more important to avoid rejecting a true hypothesis or accepting a false one. Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman-Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action. Here, the important point was not whether a hypothesis was true, but whether one should act as if it was.

Similar discussions are found in the philosophical literature. On the one side, Churchman (1948) and Rudner (1953) argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments in which the scientists must decide whether the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis, which again will depend on the importance of making a mistake in accepting or rejecting the hypothesis. Others, such as Jeffrey (1956) and Levi (1960) disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas (2009) and Howard (2003). For a broad set of case studies examining the role of values in science, see e.g. Elliott & Richards 2017.

In recent decades, philosophical discussions of the evaluation of probabilistic hypotheses by statistical inference have largely focused on Bayesianism that understands probability as a measure of a person’s degree of belief in an event, given the available information, and frequentism that instead understands probability as a long-run frequency of a repeatable event. Hence, for Bayesians probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events (see, e.g., Sober 2008, chapter 1 for a detailed introduction to Bayesianism and frequentism as well as to likelihoodism). Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs (i.e., background knowledge) and incoming evidence. Bayesianism employs a rule based on Bayes’ theorem, a theorem of the probability calculus which relates conditional probabilities. The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability and a degree of belief that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism proscribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed (see, e.g., Sprenger & Hartmann 2019 for a comprehensive treatment of Bayesian philosophy of science). Originating in the work of Neyman and Person, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo (1996) that focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present. Both Bayesianism and frequentism have developed over time, they are interpreted in different ways by its various proponents, and their relations to previous criticism to attempts at defining scientific method are seen differently by proponents and critics. The literature, surveys, reviews and criticism in this area are vast and the reader is referred to the entries on Bayesian epistemology and confirmation .

5. Method in Practice

Attention to scientific practice, as we have seen, is not itself new. However, the turn to practice in the philosophy of science of late can be seen as a correction to the pessimism with respect to method in philosophy of science in later parts of the 20 th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge. Much of this work sees method as detailed and context specific problem-solving procedures, and methodological analyses to be at the same time descriptive, critical and advisory (see Nickles 1987 for an exposition of this view). The following section contains a survey of some of the practice focuses. In this section we turn fully to topics rather than chronology.

A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20 th century (see section 2 ) is that no such distinction can be clearly seen in scientific activity (see Arabatzis 2006). Thus, in recent decades, it has been recognized that study of conceptual innovation and change should not be confined to psychology and sociology of science, but are also important aspects of scientific practice which philosophy of science should address (see also the entry on scientific discovery ). Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation.

Examining the reasoning practices of historical and contemporary scientists, Nersessian (2008) has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that of analogy, visual representation and thought-experimentation are among the important reasoning practices employed. These ubiquitous forms of reasoning are reliable—but also fallible—methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaption of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions, and a new cycle of simulation and evaluation. However, Nersessian also emphasizes that

creative model-based reasoning cannot be applied as a simple recipe, is not always productive of solutions, and even its most exemplary usages can lead to incorrect solutions. (Nersessian 2008: 11)

Thus, while on the one hand she agrees with many previous philosophers that there is no logic of discovery, discoveries can derive from reasoned processes, such that a large and integral part of scientific practice is

the creation of concepts through which to comprehend, structure, and communicate about physical phenomena …. (Nersessian 1987: 11)

Similarly, work on heuristics for discovery and theory construction by scholars such as Darden (1991) and Bechtel & Richardson (1993) present science as problem solving and investigate scientific problem solving as a special case of problem-solving in general. Drawing largely on cases from the biological sciences, much of their focus has been on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems.

Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play. The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle 1997, 2002; Burian 1997; Waters 2007)). However the difference between theory driven experimentation and exploratory experimentation should not be seen as a sharp distinction. Theory driven experiments are not always directed at testing hypothesis, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa , exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.

The development of high throughput instrumentation in molecular biology and neighbouring fields has given rise to a special type of exploratory experimentation that collects and analyses very large amounts of data, and these new ‘omics’ disciplines are often said to represent a break with the ideal of hypothesis-driven science (Burian 2007; Elliott 2007; Waters 2007; O’Malley 2007) and instead described as data-driven research (Leonelli 2012; Strasser 2012) or as a special kind of “convenience experimentation” in which many experiments are done simply because they are extraordinarily convenient to perform (Krohs 2012).

5.2 Computer methods and ‘new ways’ of doing science

The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves. Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?

Because computers can be used to automate measurements, quantifications, calculations, and statistical analyses where, for practical reasons, these operations cannot be otherwise carried out, many of the steps involved in reaching a conclusion on the basis of an experiment are now made inside a “black box”, without the direct involvement or awareness of a human. This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation.

The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available. The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated. Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model.

A number of issues related to computer simulations have been raised. The identification of validity and verification as the testing methods has been criticized. Oreskes et al. (1994) raise concerns that “validiation”, because it suggests deductive inference, might lead to over-confidence in the results of simulations. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissart 1997; Parker 2008a; Winsberg 2010). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs. The status of simulations as experiments has therefore been examined (Kaufmann and Smarr 1993; Humphreys 1995; Hughes 1999; Norton and Suppe 2001). This literature considers the epistemology of these experiments: what we can learn by simulation, and also the kinds of justifications which can be given in applying that knowledge to the “real” world. (Mayo 1996; Parker 2008b). As pointed out, part of the advantage of computer simulation derives from the fact that huge numbers of calculations can be carried out without requiring direct observation by the experimenter/​simulator. At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation.

For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or theoretical domain. Rather, they seem to crucially involve aspects of both. This has led some authors, such as Fox Keller (2003: 200) to argue that we ought to consider computer simulation a “qualitatively different way of doing science”. The literature in general tends to follow Kaufmann and Smarr (1993) in referring to computer simulation as a “third way” for scientific methodology (theoretical reasoning and experimental practice are the first two ways.). It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or have problems of their own (see the entry on computer simulations in science ).

In recent years, the rapid development of machine learning techniques has prompted some scholars to suggest that the scientific method has become “obsolete” (Anderson 2008, Carrol and Goodstein 2009). This has resulted in an intense debate on the relative merit of data-driven and hypothesis-driven research (for samples, see e.g. Mazzocchi 2015 or Succi and Coveney 2018). For a detailed treatment of this topic, we refer to the entry scientific research and big data .

6. Discourse on scientific method

Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Often, reference to scientific method is used in ways that convey either the legend of a single, universal method characteristic of all science, or grants to a particular method or set of methods privilege as a special ‘gold standard’, often with reference to particular philosophers to vindicate the claims. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or for justifying the special status conveyed to science. In these areas, the philosophical attempts at identifying a set of methods characteristic for scientific endeavors are closely related to the philosophy of science’s classical problem of demarcation (see the entry on science and pseudo-science ) and to the philosophical analysis of the social dimension of scientific knowledge and the role of science in democratic society.

One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Bauer 1992; McComas 1996; Wivagg & Allchin 2002). [ 5 ] Often, ‘the scientific method’ is presented in textbooks and educational web pages as a fixed four or five step procedure starting from observations and description of a phenomenon and progressing over formulation of a hypothesis which explains the phenomenon, designing and conducting experiments to test the hypothesis, analyzing the results, and ending with drawing a conclusion. Such references to a universal scientific method can be found in educational material at all levels of science education (Blachowicz 2009), and numerous studies have shown that the idea of a general and universal scientific method often form part of both students’ and teachers’ conception of science (see, e.g., Aikenhead 1987; Osborne et al. 2003). In response, it has been argued that science education need to focus more on teaching about the nature of science, although views have differed on whether this is best done through student-led investigations, contemporary cases, or historical cases (Allchin, Andersen & Nielsen 2014)

Although occasionally phrased with reference to the H-D method, important historical roots of the legend in science education of a single, universal scientific method are the American philosopher and psychologist Dewey’s account of inquiry in How We Think (1910) and the British mathematician Karl Pearson’s account of science in Grammar of Science (1892). On Dewey’s account, inquiry is divided into the five steps of

(i) a felt difficulty, (ii) its location and definition, (iii) suggestion of a possible solution, (iv) development by reasoning of the bearing of the suggestions, (v) further observation and experiment leading to its acceptance or rejection. (Dewey 1910: 72)

Similarly, on Pearson’s account, scientific investigations start with measurement of data and observation of their correction and sequence from which scientific laws can be discovered with the aid of creative imagination. These laws have to be subject to criticism, and their final acceptance will have equal validity for “all normally constituted minds”. Both Dewey’s and Pearson’s accounts should be seen as generalized abstractions of inquiry and not restricted to the realm of science—although both Dewey and Pearson referred to their respective accounts as ‘the scientific method’.

Occasionally, scientists make sweeping statements about a simple and distinct scientific method, as exemplified by Feynman’s simplified version of a conjectures and refutations method presented, for example, in the last of his 1964 Cornell Messenger lectures. [ 6 ] However, just as often scientists have come to the same conclusion as recent philosophy of science that there is not any unique, easily described scientific method. For example, the physicist and Nobel Laureate Weinberg described in the paper “The Methods of Science … And Those By Which We Live” (1995) how

The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. (1995: 8)

Interview studies with scientists on their conception of method shows that scientists often find it hard to figure out whether available evidence confirms their hypothesis, and that there are no direct translations between general ideas about method and specific strategies to guide how research is conducted (Schickore & Hangel 2019, Hangel & Schickore 2017)

Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt that they had a need to defend their domain of practice. For example, references to conjectures and refutation as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.

Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations because in deductive inferences the conclusion follows necessarily from the premises. (See, e.g., Parascandola 1998 for an analysis of how this argument has been made to downgrade epidemiology compared to the laboratory sciences.) Similarly, based on an examination of the practices of major funding institutions such as the National Institutes of Health (NIH), the National Science Foundation (NSF) and the Biomedical Sciences Research Practices (BBSRC) in the UK, O’Malley et al. (2009) have argued that funding agencies seem to have a tendency to adhere to the view that the primary activity of science is to test hypotheses, while descriptive and exploratory research is seen as merely preparatory activities that are valuable only insofar as they fuel hypothesis-driven research.

In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry from stating a question, devising the methods by which to answer it, collecting the data, to drawing a conclusion from the analysis of data. For example, the codified format of publications in most biomedical journals known as the IMRAD format (Introduction, Method, Results, Analysis, Discussion) is explicitly described by the journal editors as “not an arbitrary publication format but rather a direct reflection of the process of scientific discovery” (see the so-called “Vancouver Recommendations”, ICMJE 2013: 11). However, scientific publications do not in general reflect the process by which the reported scientific results were produced. For example, under the provocative title “Is the scientific paper a fraud?”, Medawar argued that scientific papers generally misrepresent how the results have been produced (Medawar 1963/1996). Similar views have been advanced by philosophers, historians and sociologists of science (Gilbert 1976; Holmes 1987; Knorr-Cetina 1981; Schickore 2008; Suppe 1998) who have argued that scientists’ experimental practices are messy and often do not follow any recognizable pattern. Publications of research results, they argue, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore 2008 for a review of this work).

Philosophical positions on the scientific method have also made it into the court room, especially in the US where judges have drawn on philosophy of science in deciding when to confer special status to scientific expert testimony. A key case is Daubert vs Merrell Dow Pharmaceuticals (92–102, 509 U.S. 579, 1993). In this case, the Supreme Court argued in its 1993 ruling that trial judges must ensure that expert testimony is reliable, and that in doing this the court must look at the expert’s methodology to determine whether the proffered evidence is actually scientific knowledge. Further, referring to works of Popper and Hempel the court stated that

ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge … is whether it can be (and has been) tested. (Justice Blackmun, Daubert v. Merrell Dow Pharmaceuticals; see Other Internet Resources for a link to the opinion)

But as argued by Haack (2005a,b, 2010) and by Foster & Hubner (1999), by equating the question of whether a piece of testimony is reliable with the question whether it is scientific as indicated by a special methodology, the court was producing an inconsistent mixture of Popper’s and Hempel’s philosophies, and this has later led to considerable confusion in subsequent case rulings that drew on the Daubert case (see Haack 2010 for a detailed exposition).

The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science. One of the first and most influential attempts at defining misconduct in science was the US definition from 1989 that defined misconduct as

fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community . (Code of Federal Regulations, part 50, subpart A., August 8, 1989, italics added)

However, the “other practices that seriously deviate” clause was heavily criticized because it could be used to suppress creative or novel science. For example, the National Academy of Science stated in their report Responsible Science (1992) that it

wishes to discourage the possibility that a misconduct complaint could be lodged against scientists based solely on their use of novel or unorthodox research methods. (NAS: 27)

This clause was therefore later removed from the definition. For an entry into the key philosophical literature on conduct in science, see Shamoo & Resnick (2009).

The question of the source of the success of science has been at the core of philosophy since the beginning of modern science. If viewed as a matter of epistemology more generally, scientific method is a part of the entire history of philosophy. Over that time, science and whatever methods its practitioners may employ have changed dramatically. Today, many philosophers have taken up the banners of pluralism or of practice to focus on what are, in effect, fine-grained and contextually limited examinations of scientific method. Others hope to shift perspectives in order to provide a renewed general account of what characterizes the activity we call science.

One such perspective has been offered recently by Hoyningen-Huene (2008, 2013), who argues from the history of philosophy of science that after three lengthy phases of characterizing science by its method, we are now in a phase where the belief in the existence of a positive scientific method has eroded and what has been left to characterize science is only its fallibility. First was a phase from Plato and Aristotle up until the 17 th century where the specificity of scientific knowledge was seen in its absolute certainty established by proof from evident axioms; next was a phase up to the mid-19 th century in which the means to establish the certainty of scientific knowledge had been generalized to include inductive procedures as well. In the third phase, which lasted until the last decades of the 20 th century, it was recognized that empirical knowledge was fallible, but it was still granted a special status due to its distinctive mode of production. But now in the fourth phase, according to Hoyningen-Huene, historical and philosophical studies have shown how “scientific methods with the characteristics as posited in the second and third phase do not exist” (2008: 168) and there is no longer any consensus among philosophers and historians of science about the nature of science. For Hoyningen-Huene, this is too negative a stance, and he therefore urges the question about the nature of science anew. His own answer to this question is that “scientific knowledge differs from other kinds of knowledge, especially everyday knowledge, primarily by being more systematic” (Hoyningen-Huene 2013: 14). Systematicity can have several different dimensions: among them are more systematic descriptions, explanations, predictions, defense of knowledge claims, epistemic connectedness, ideal of completeness, knowledge generation, representation of knowledge and critical discourse. Hence, what characterizes science is the greater care in excluding possible alternative explanations, the more detailed elaboration with respect to data on which predictions are based, the greater care in detecting and eliminating sources of error, the more articulate connections to other pieces of knowledge, etc. On this position, what characterizes science is not that the methods employed are unique to science, but that the methods are more carefully employed.

Another, similar approach has been offered by Haack (2003). She sets off, similar to Hoyningen-Huene, from a dissatisfaction with the recent clash between what she calls Old Deferentialism and New Cynicism. The Old Deferentialist position is that science progressed inductively by accumulating true theories confirmed by empirical evidence or deductively by testing conjectures against basic statements; while the New Cynics position is that science has no epistemic authority and no uniquely rational method and is merely just politics. Haack insists that contrary to the views of the New Cynics, there are objective epistemic standards, and there is something epistemologically special about science, even though the Old Deferentialists pictured this in a wrong way. Instead, she offers a new Critical Commonsensist account on which standards of good, strong, supportive evidence and well-conducted, honest, thorough and imaginative inquiry are not exclusive to the sciences, but the standards by which we judge all inquirers. In this sense, science does not differ in kind from other kinds of inquiry, but it may differ in the degree to which it requires broad and detailed background knowledge and a familiarity with a technical vocabulary that only specialists may possess.

  • Aikenhead, G.S., 1987, “High-school graduates’ beliefs about science-technology-society. III. Characteristics and limitations of scientific knowledge”, Science Education , 71(4): 459–487.
  • Allchin, D., H.M. Andersen and K. Nielsen, 2014, “Complementary Approaches to Teaching Nature of Science: Integrating Student Inquiry, Historical Cases, and Contemporary Cases in Classroom Practice”, Science Education , 98: 461–486.
  • Anderson, C., 2008, “The end of theory: The data deluge makes the scientific method obsolete”, Wired magazine , 16(7): 16–07
  • Arabatzis, T., 2006, “On the inextricability of the context of discovery and the context of justification”, in Revisiting Discovery and Justification , J. Schickore and F. Steinle (eds.), Dordrecht: Springer, pp. 215–230.
  • Barnes, J. (ed.), 1984, The Complete Works of Aristotle, Vols I and II , Princeton: Princeton University Press.
  • Barnes, B. and D. Bloor, 1982, “Relativism, Rationalism, and the Sociology of Knowledge”, in Rationality and Relativism , M. Hollis and S. Lukes (eds.), Cambridge: MIT Press, pp. 1–20.
  • Bauer, H.H., 1992, Scientific Literacy and the Myth of the Scientific Method , Urbana: University of Illinois Press.
  • Bechtel, W. and R.C. Richardson, 1993, Discovering complexity , Princeton, NJ: Princeton University Press.
  • Berkeley, G., 1734, The Analyst in De Motu and The Analyst: A Modern Edition with Introductions and Commentary , D. Jesseph (trans. and ed.), Dordrecht: Kluwer Academic Publishers, 1992.
  • Blachowicz, J., 2009, “How science textbooks treat scientific method: A philosopher’s perspective”, The British Journal for the Philosophy of Science , 60(2): 303–344.
  • Bloor, D., 1991, Knowledge and Social Imagery , Chicago: University of Chicago Press, 2 nd edition.
  • Boyle, R., 1682, New experiments physico-mechanical, touching the air , Printed by Miles Flesher for Richard Davis, bookseller in Oxford.
  • Bridgman, P.W., 1927, The Logic of Modern Physics , New York: Macmillan.
  • –––, 1956, “The Methodological Character of Theoretical Concepts”, in The Foundations of Science and the Concepts of Science and Psychology , Herbert Feigl and Michael Scriven (eds.), Minnesota: University of Minneapolis Press, pp. 38–76.
  • Burian, R., 1997, “Exploratory Experimentation and the Role of Histochemical Techniques in the Work of Jean Brachet, 1938–1952”, History and Philosophy of the Life Sciences , 19(1): 27–45.
  • –––, 2007, “On microRNA and the need for exploratory experimentation in post-genomic molecular biology”, History and Philosophy of the Life Sciences , 29(3): 285–311.
  • Carnap, R., 1928, Der logische Aufbau der Welt , Berlin: Bernary, transl. by R.A. George, The Logical Structure of the World , Berkeley: University of California Press, 1967.
  • –––, 1956, “The methodological character of theoretical concepts”, Minnesota studies in the philosophy of science , 1: 38–76.
  • Carrol, S., and D. Goodstein, 2009, “Defining the scientific method”, Nature Methods , 6: 237.
  • Churchman, C.W., 1948, “Science, Pragmatics, Induction”, Philosophy of Science , 15(3): 249–268.
  • Cooper, J. (ed.), 1997, Plato: Complete Works , Indianapolis: Hackett.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press
  • Dewey, J., 1910, How we think , New York: Dover Publications (reprinted 1997).
  • Douglas, H., 2009, Science, Policy, and the Value-Free Ideal , Pittsburgh: University of Pittsburgh Press.
  • Dupré, J., 2004, “Miracle of Monism ”, in Naturalism in Question , Mario De Caro and David Macarthur (eds.), Cambridge, MA: Harvard University Press, pp. 36–58.
  • Elliott, K.C., 2007, “Varieties of exploratory experimentation in nanotoxicology”, History and Philosophy of the Life Sciences , 29(3): 311–334.
  • Elliott, K. C., and T. Richards (eds.), 2017, Exploring inductive risk: Case studies of values in science , Oxford: Oxford University Press.
  • Falcon, Andrea, 2005, Aristotle and the science of nature: Unity without uniformity , Cambridge: Cambridge University Press.
  • Feyerabend, P., 1978, Science in a Free Society , London: New Left Books
  • –––, 1988, Against Method , London: Verso, 2 nd edition.
  • Fisher, R.A., 1955, “Statistical Methods and Scientific Induction”, Journal of The Royal Statistical Society. Series B (Methodological) , 17(1): 69–78.
  • Foster, K. and P.W. Huber, 1999, Judging Science. Scientific Knowledge and the Federal Courts , Cambridge: MIT Press.
  • Fox Keller, E., 2003, “Models, Simulation, and ‘computer experiments’”, in The Philosophy of Scientific Experimentation , H. Radder (ed.), Pittsburgh: Pittsburgh University Press, 198–215.
  • Gilbert, G., 1976, “The transformation of research findings into scientific knowledge”, Social Studies of Science , 6: 281–306.
  • Gimbel, S., 2011, Exploring the Scientific Method , Chicago: University of Chicago Press.
  • Goodman, N., 1965, Fact , Fiction, and Forecast , Indianapolis: Bobbs-Merrill.
  • Haack, S., 1995, “Science is neither sacred nor a confidence trick”, Foundations of Science , 1(3): 323–335.
  • –––, 2003, Defending science—within reason , Amherst: Prometheus.
  • –––, 2005a, “Disentangling Daubert: an epistemological study in theory and practice”, Journal of Philosophy, Science and Law , 5, Haack 2005a available online . doi:10.5840/jpsl2005513
  • –––, 2005b, “Trial and error: The Supreme Court’s philosophy of science”, American Journal of Public Health , 95: S66-S73.
  • –––, 2010, “Federal Philosophy of Science: A Deconstruction-and a Reconstruction”, NYUJL & Liberty , 5: 394.
  • Hangel, N. and J. Schickore, 2017, “Scientists’ conceptions of good research practice”, Perspectives on Science , 25(6): 766–791
  • Harper, W.L., 2011, Isaac Newton’s Scientific Method: Turning Data into Evidence about Gravity and Cosmology , Oxford: Oxford University Press.
  • Hempel, C., 1950, “Problems and Changes in the Empiricist Criterion of Meaning”, Revue Internationale de Philosophie , 41(11): 41–63.
  • –––, 1951, “The Concept of Cognitive Significance: A Reconsideration”, Proceedings of the American Academy of Arts and Sciences , 80(1): 61–77.
  • –––, 1965, Aspects of scientific explanation and other essays in the philosophy of science , New York–London: Free Press.
  • –––, 1966, Philosophy of Natural Science , Englewood Cliffs: Prentice-Hall.
  • Holmes, F.L., 1987, “Scientific writing and scientific discovery”, Isis , 78(2): 220–235.
  • Howard, D., 2003, “Two left turns make a right: On the curious political career of North American philosophy of science at midcentury”, in Logical Empiricism in North America , G.L. Hardcastle & A.W. Richardson (eds.), Minneapolis: University of Minnesota Press, pp. 25–93.
  • Hoyningen-Huene, P., 2008, “Systematicity: The nature of science”, Philosophia , 36(2): 167–180.
  • –––, 2013, Systematicity. The Nature of Science , Oxford: Oxford University Press.
  • Howie, D., 2002, Interpreting probability: Controversies and developments in the early twentieth century , Cambridge: Cambridge University Press.
  • Hughes, R., 1999, “The Ising Model, Computer Simulation, and Universal Physics”, in Models as Mediators , M. Morgan and M. Morrison (eds.), Cambridge: Cambridge University Press, pp. 97–145
  • Hume, D., 1739, A Treatise of Human Nature , D. Fate Norton and M.J. Norton (eds.), Oxford: Oxford University Press, 2000.
  • Humphreys, P., 1995, “Computational science and scientific method”, Minds and Machines , 5(1): 499–512.
  • ICMJE, 2013, “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals”, International Committee of Medical Journal Editors, available online , accessed August 13 2014
  • Jeffrey, R.C., 1956, “Valuation and Acceptance of Scientific Hypotheses”, Philosophy of Science , 23(3): 237–246.
  • Kaufmann, W.J., and L.L. Smarr, 1993, Supercomputing and the Transformation of Science , New York: Scientific American Library.
  • Knorr-Cetina, K., 1981, The Manufacture of Knowledge , Oxford: Pergamon Press.
  • Krohs, U., 2012, “Convenience experimentation”, Studies in History and Philosophy of Biological and BiomedicalSciences , 43: 52–57.
  • Kuhn, T.S., 1962, The Structure of Scientific Revolutions , Chicago: University of Chicago Press
  • Latour, B. and S. Woolgar, 1986, Laboratory Life: The Construction of Scientific Facts , Princeton: Princeton University Press, 2 nd edition.
  • Laudan, L., 1968, “Theories of scientific method from Plato to Mach”, History of Science , 7(1): 1–63.
  • Lenhard, J., 2006, “Models and statistical inference: The controversy between Fisher and Neyman-Pearson”, The British Journal for the Philosophy of Science , 57(1): 69–91.
  • Leonelli, S., 2012, “Making Sense of Data-Driven Research in the Biological and the Biomedical Sciences”, Studies in the History and Philosophy of the Biological and Biomedical Sciences , 43(1): 1–3.
  • Levi, I., 1960, “Must the scientist make value judgments?”, Philosophy of Science , 57(11): 345–357
  • Lindley, D., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press.
  • Lipton, P., 2004, Inference to the Best Explanation , London: Routledge, 2 nd edition.
  • Marks, H.M., 2000, The progress of experiment: science and therapeutic reform in the United States, 1900–1990 , Cambridge: Cambridge University Press.
  • Mazzochi, F., 2015, “Could Big Data be the end of theory in science?”, EMBO reports , 16: 1250–1255.
  • Mayo, D.G., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
  • McComas, W.F., 1996, “Ten myths of science: Reexamining what we think we know about the nature of science”, School Science and Mathematics , 96(1): 10–16.
  • Medawar, P.B., 1963/1996, “Is the scientific paper a fraud”, in The Strange Case of the Spotted Mouse and Other Classic Essays on Science , Oxford: Oxford University Press, 33–39.
  • Mill, J.S., 1963, Collected Works of John Stuart Mill , J. M. Robson (ed.), Toronto: University of Toronto Press
  • NAS, 1992, Responsible Science: Ensuring the integrity of the research process , Washington DC: National Academy Press.
  • Nersessian, N.J., 1987, “A cognitive-historical approach to meaning in scientific theories”, in The process of science , N. Nersessian (ed.), Berlin: Springer, pp. 161–177.
  • –––, 2008, Creating Scientific Concepts , Cambridge: MIT Press.
  • Newton, I., 1726, Philosophiae naturalis Principia Mathematica (3 rd edition), in The Principia: Mathematical Principles of Natural Philosophy: A New Translation , I.B. Cohen and A. Whitman (trans.), Berkeley: University of California Press, 1999.
  • –––, 1704, Opticks or A Treatise of the Reflections, Refractions, Inflections & Colors of Light , New York: Dover Publications, 1952.
  • Neyman, J., 1956, “Note on an Article by Sir Ronald Fisher”, Journal of the Royal Statistical Society. Series B (Methodological) , 18: 288–294.
  • Nickles, T., 1987, “Methodology, heuristics, and rationality”, in Rational changes in science: Essays on Scientific Reasoning , J.C. Pitt (ed.), Berlin: Springer, pp. 103–132.
  • Nicod, J., 1924, Le problème logique de l’induction , Paris: Alcan. (Engl. transl. “The Logical Problem of Induction”, in Foundations of Geometry and Induction , London: Routledge, 2000.)
  • Nola, R. and H. Sankey, 2000a, “A selective survey of theories of scientific method”, in Nola and Sankey 2000b: 1–65.
  • –––, 2000b, After Popper, Kuhn and Feyerabend. Recent Issues in Theories of Scientific Method , London: Springer.
  • –––, 2007, Theories of Scientific Method , Stocksfield: Acumen.
  • Norton, S., and F. Suppe, 2001, “Why atmospheric modeling is good science”, in Changing the Atmosphere: Expert Knowledge and Environmental Governance , C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, 88–133.
  • O’Malley, M., 2007, “Exploratory experimentation and scientific practice: Metagenomics and the proteorhodopsin case”, History and Philosophy of the Life Sciences , 29(3): 337–360.
  • O’Malley, M., C. Haufe, K. Elliot, and R. Burian, 2009, “Philosophies of Funding”, Cell , 138: 611–615.
  • Oreskes, N., K. Shrader-Frechette, and K. Belitz, 1994, “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences”, Science , 263(5147): 641–646.
  • Osborne, J., S. Simon, and S. Collins, 2003, “Attitudes towards science: a review of the literature and its implications”, International Journal of Science Education , 25(9): 1049–1079.
  • Parascandola, M., 1998, “Epidemiology—2 nd -Rate Science”, Public Health Reports , 113(4): 312–320.
  • Parker, W., 2008a, “Franklin, Holmes and the Epistemology of Computer Simulation”, International Studies in the Philosophy of Science , 22(2): 165–83.
  • –––, 2008b, “Computer Simulation through an Error-Statistical Lens”, Synthese , 163(3): 371–84.
  • Pearson, K. 1892, The Grammar of Science , London: J.M. Dents and Sons, 1951
  • Pearson, E.S., 1955, “Statistical Concepts in Their Relation to Reality”, Journal of the Royal Statistical Society , B, 17: 204–207.
  • Pickering, A., 1984, Constructing Quarks: A Sociological History of Particle Physics , Edinburgh: Edinburgh University Press.
  • Popper, K.R., 1959, The Logic of Scientific Discovery , London: Routledge, 2002
  • –––, 1963, Conjectures and Refutations , London: Routledge, 2002.
  • –––, 1985, Unended Quest: An Intellectual Autobiography , La Salle: Open Court Publishing Co..
  • Rudner, R., 1953, “The Scientist Qua Scientist Making Value Judgments”, Philosophy of Science , 20(1): 1–6.
  • Rudolph, J.L., 2005, “Epistemology for the masses: The origin of ‘The Scientific Method’ in American Schools”, History of Education Quarterly , 45(3): 341–376
  • Schickore, J., 2008, “Doing science, writing science”, Philosophy of Science , 75: 323–343.
  • Schickore, J. and N. Hangel, 2019, “‘It might be this, it should be that…’ uncertainty and doubt in day-to-day science practice”, European Journal for Philosophy of Science , 9(2): 31. doi:10.1007/s13194-019-0253-9
  • Shamoo, A.E. and D.B. Resnik, 2009, Responsible Conduct of Research , Oxford: Oxford University Press.
  • Shank, J.B., 2008, The Newton Wars and the Beginning of the French Enlightenment , Chicago: The University of Chicago Press.
  • Shapin, S. and S. Schaffer, 1985, Leviathan and the air-pump , Princeton: Princeton University Press.
  • Smith, G.E., 2002, “The Methodology of the Principia”, in The Cambridge Companion to Newton , I.B. Cohen and G.E. Smith (eds.), Cambridge: Cambridge University Press, 138–173.
  • Snyder, L.J., 1997a, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • –––, 1997b, “The Mill-Whewell Debate: Much Ado About Induction”, Perspectives on Science , 5: 159–198.
  • –––, 1999, “Renovating the Novum Organum: Bacon, Whewell and Induction”, Studies in History and Philosophy of Science , 30: 531–557.
  • Sober, E., 2008, Evidence and Evolution. The logic behind the science , Cambridge: Cambridge University Press
  • Sprenger, J. and S. Hartmann, 2019, Bayesian philosophy of science , Oxford: Oxford University Press.
  • Steinle, F., 1997, “Entering New Fields: Exploratory Uses of Experimentation”, Philosophy of Science (Proceedings), 64: S65–S74.
  • –––, 2002, “Experiments in History and Philosophy of Science”, Perspectives on Science , 10(4): 408–432.
  • Strasser, B.J., 2012, “Data-driven sciences: From wonder cabinets to electronic databases”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 85–87.
  • Succi, S. and P.V. Coveney, 2018, “Big data: the end of the scientific method?”, Philosophical Transactions of the Royal Society A , 377: 20180145. doi:10.1098/rsta.2018.0145
  • Suppe, F., 1998, “The Structure of a Scientific Paper”, Philosophy of Science , 65(3): 381–405.
  • Swijtink, Z.G., 1987, “The objectification of observation: Measurement and statistical methods in the nineteenth century”, in The probabilistic revolution. Ideas in History, Vol. 1 , L. Kruger (ed.), Cambridge MA: MIT Press, pp. 261–285.
  • Waters, C.K., 2007, “The nature and context of exploratory experimentation: An introduction to three case studies of exploratory research”, History and Philosophy of the Life Sciences , 29(3): 275–284.
  • Weinberg, S., 1995, “The methods of science… and those by which we live”, Academic Questions , 8(2): 7–13.
  • Weissert, T., 1997, The Genesis of Simulation in Dynamics: Pursuing the Fermi-Pasta-Ulam Problem , New York: Springer Verlag.
  • William H., 1628, Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus , in On the Motion of the Heart and Blood in Animals , R. Willis (trans.), Buffalo: Prometheus Books, 1993.
  • Winsberg, E., 2010, Science in the Age of Computer Simulation , Chicago: University of Chicago Press.
  • Wivagg, D. & D. Allchin, 2002, “The Dogma of the Scientific Method”, The American Biology Teacher , 64(9): 645–646
How to cite this entry . Preview the PDF version of this entry at the Friends of the SEP Society . Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers , with links to its database.
  • Blackmun opinion , in Daubert v. Merrell Dow Pharmaceuticals (92–102), 509 U.S. 579 (1993).
  • Scientific Method at philpapers. Darrell Rowbottom (ed.).
  • Recent Articles | Scientific Method | The Scientist Magazine

al-Kindi | Albert the Great [= Albertus magnus] | Aquinas, Thomas | Arabic and Islamic Philosophy, disciplines in: natural philosophy and natural science | Arabic and Islamic Philosophy, historical and methodological topics in: Greek sources | Arabic and Islamic Philosophy, historical and methodological topics in: influence of Arabic and Islamic Philosophy on the Latin West | Aristotle | Bacon, Francis | Bacon, Roger | Berkeley, George | biology: experiment in | Boyle, Robert | Cambridge Platonists | confirmation | Descartes, René | Enlightenment | epistemology | epistemology: Bayesian | epistemology: social | Feyerabend, Paul | Galileo Galilei | Grosseteste, Robert | Hempel, Carl | Hume, David | Hume, David: Newtonianism and Anti-Newtonianism | induction: problem of | Kant, Immanuel | Kuhn, Thomas | Leibniz, Gottfried Wilhelm | Locke, John | Mill, John Stuart | More, Henry | Neurath, Otto | Newton, Isaac | Newton, Isaac: philosophy | Ockham [Occam], William | operationalism | Peirce, Charles Sanders | Plato | Popper, Karl | rationality: historicist theories of | Reichenbach, Hans | reproducibility, scientific | Schlick, Moritz | science: and pseudo-science | science: theory and observation in | science: unity of | scientific discovery | scientific knowledge: social dimensions of | simulations in science | skepticism: medieval | space and time: absolute and relational space and motion, post-Newtonian theories | Vienna Circle | Whewell, William | Zabarella, Giacomo

Copyright © 2021 by Brian Hepburn < brian . hepburn @ wichita . edu > Hanne Andersen < hanne . andersen @ ind . ku . dk >

  • Accessibility

Support SEP

Mirror sites.

View this site from another server:

  • Info about mirror sites

The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Library homepage

  • school Campus Bookshelves
  • menu_book Bookshelves
  • perm_media Learning Objects
  • login Login
  • how_to_reg Request Instructor Account
  • hub Instructor Commons
  • Download Page (PDF)
  • Download Full Book (PDF)
  • Periodic Table
  • Physics Constants
  • Scientific Calculator
  • Reference & Cite
  • Tools expand_more
  • Readability

selected template will load here

This action is not available.

Biology LibreTexts

2.1: The Scientific Method

  • Last updated
  • Save as PDF
  • Page ID 94370

Hypothesis Testing and The scientific Method

The scientific method is a process of research with defined steps that include data collection and careful observation. The scientific method was used even in ancient times, but it was first documented by England’s Sir Francis Bacon (1561–1626) (Figure \(\PageIndex{5}\)), who set up inductive methods for scientific inquiry.

Painting depicts Sir Francis Bacon in a long cloak.

Observation

Scientific advances begin with observations . This involves noticing a pattern, either directly or indirectly from the literature. An example of a direct observation is noticing that there have been a lot of toads in your yard ever since you turned on the sprinklers, where as an indirect observation would be reading a scientific study reporting high densities of toads in urban areas with watered lawns.

During the Vietnam War (figure \(\PageIndex{6}\)), press reports from North Vietnam documented an increasing rate of birth defects. While this credibility of this information was initially questioned by the U.S., it evoked questions about what could be causing these birth defects. Furthermore, increased incidence of certain cancers and other diseases later emerged in Vietnam veterans who had returned to the U.S. This leads us to the next step of the scientific method, the question.

An old map shows North Vietnam separated from South Vietnam

Figure \(\PageIndex{6}\): A map of Vietnam 1954-1975. Image from Bureau of Public Affairs U.S. Government Printing Office (public domain).

The question step of the scientific method is simply asking, what explains the observed pattern? Multiple questions can stem from a single observation. Scientists and the public began to ask, what is causing the birth defects in Vietnam and diseases in Vietnam veterans? Could it be associated with the widespread military use of the herbicide Agent Orange to clear the forests (figure \(\PageIndex{7-8}\)), which helped identify enemies more easily?

Stacks of green drums, each with an orange stripe in the middle

Figure \(\PageIndex{7}\): Agent Orange drums in Vietnam. Image by U.S. Government (public domain).

Aerial view of a healthy forest surrounding a river (top) and a barren, brown landscape following herbicide application.

Figure \(\PageIndex{8}\): A healthy mangrove forest (top), and another forest after application of Agent Orange. Image by unknown author (public domain).

Hypothesis and Prediction

The hypothesis is the expected answer to the question. The best hypotheses state the proposed direction of the effect (increases, decreases, etc.) and explain why the hypothesis could be true.

  • OK hypothesis: Agent Orange influences rates of birth defects and disease.
  • Better hypothesis: Agent Orange increases the incidence of birth defects and disease.
  • Best hypothesis: Agent Orange increases the incidence of birth defects and disease because these health problems have been frequently reported by individuals exposed to this herbicide.

If two or more hypotheses meet this standard, the simpler one is preferred.

Predictions stem from the hypothesis. The prediction explains what results would support hypothesis. The prediction is more specific than the hypothesis because it references the details of the experiment. For example, "If Agent Orange causes health problems, then mice experimentally exposed to TCDD, a contaminant of Agent Orange, during development will have more frequent birth defects than control mice" (figure \(\PageIndex{9}\)).

The structural formula of TCDD, showing three fused rings

Figure \(\PageIndex{9}\): The chemical structure of TCDD (2,3,7,8-tetrachlorodibenzo-p-dioxin), which is produced when synthesizing the chemicals in Agent Orange. It contaminates Agent Orange at low but harmful concentrations. Image by Emeldir (public domain).

Hypotheses and predictions must be testable to ensure that it is valid. For example, a hypothesis that depends on what a bear thinks is not testable, because it can never be known what a bear thinks. It should also be falsifiable , meaning that they have the capacity to be tested and demonstrated to be untrue. An example of an unfalsifiable hypothesis is “Botticelli’s Birth of Venus is beautiful.” There is no experiment that might show this statement to be false. To test a hypothesis, a researcher will conduct one or more experiments designed to eliminate one or more of the hypotheses. This is important. A hypothesis can be disproven, or eliminated, but it can never be proven. Science does not deal in proofs like mathematics. If an experiment fails to disprove a hypothesis, then we find support for that explanation, but this is not to say that down the road a better explanation will not be found, or a more carefully designed experiment will be found to falsify the hypothesis.

Hypotheses are tentative explanations and are different from scientific theories. A scientific theory is a widely-accepted, thoroughly tested and confirmed explanation for a set of observations or phenomena. Scientific theory is the foundation of scientific knowledge. In addition, in many scientific disciplines (less so in biology) there are scientific laws , often expressed in mathematical formulas, which describe how elements of nature will behave under certain specific conditions, but they do not offer explanations for why they occur.

Design an Experiment

Next, a scientific study (experiment) is planned to test the hypothesis and determine whether the results match the predictions. Each experiment will have one or more variables. The explanatory variable is what scientists hypothesize might be causing something else. In a manipulative experiment (see below), the explanatory variable is manipulated by the scientist. The response variable is the response, the variable ultimately measured in the study. Controlled variables (confounding factors) might affect the response variable, but they are not the focus of the study. Scientist attempt to standardize the controlled variables so that they do not influence the results. In our previous example, exposure to Agent Orange is the explanatory variable. It is hypothesized to cause a change in health (likelihood of having children with birth defects or developing a disease), the response variable. Many other things could affect health, including diet, exercise, and family history. These are the controlled variables.

There are two main types of scientific studies: experimental studies (manipulative experiments) and observational studies.

In a manipulative experiment , the explanatory variable is altered by the scientists, who then observe the response. In other words, the scientists apply a treatment . An example would be exposing developing mice to TCDD and comparing the rate of birth defects to a control group. The control group is group of test subjects that are as similar as possible to all other test subjects, with the exception that they don’t receive the experimental treatment (those that do receive it are known as the experimental, treatment, or test group ). The purpose of the control group is to establish what the dependent variable would be under normal conditions, in the absence of the experimental treatment. It serves as a baseline to which the test group can be compared. In this example, the control group would contain mice that were not exposed to TCDD but were otherwise handled the same way as the other mice (figure \(\PageIndex{10}\))

Five white mice in a cage with red eyes

Figure \(\PageIndex{10}\): Laboratory mice. In a proper scientific study, the treatment would be applied to multiple mice. Another group of mice would not receive the treatment (the control group). Image by Aaron Logan ( CC-BY ).

In an observational study , scientists examine multiple samples with and without the presumed cause. An example would be monitoring the health of veterans who had varying levels of exposure to Agent Orange.

Scientific studies contain many replicates. Multiple samples ensure that any observed pattern is due to the treatment rather than naturally occurring differences between individuals. A scientific study should also be repeatable , meaning that if it is conducted again, following the same procedure, it should reproduce the same general results. Additionally, multiple studies will ultimately test the same hypothesis.

Finally, the data are collected and the results are analyzed. As described in the Math Blast chapter, statistics can be used to describe the data and summarize data. They also provide a criterion for deciding whether the pattern in the data is strong enough to support the hypothesis.

The manipulative experiment in our example found that mice exposed to high levels of 2,4,5-T (a component of Agent Orange) or TCDD (a contaminant found in Agent Orange) during development had a cleft palate birth defect more frequently than control mice (figure \(\PageIndex{11}\)). Mice embryos were also more likely to die when exposed to TCDD compared to controls.

A baby with a gap in the upper lip

Figure \(\PageIndex{11}\): Cleft lip and palate, a birth defect in which these structures are split. Image by James Heilman, MD ( CC-BY-SA ).

An observational study found that self-reported exposure to Agent Orange was positively correlated with incidence of multiple diseases in Korean veterans of the Vietnam War, including various cancers, diseases of the cardiovascular and nervous systems, skin diseases, and psychological disorders. Note that a positive correlation simply means that the independent and dependent variables both increase or decrease together, but further data, such as the evidence provided by manipulative experiments is needed to document a cause-and-effect relationship . (A negative correlation occurs when one variable increases as the other decreases.)

Lastly, scientists make a conclusion regarding whether the data support the hypothesis. In the case of Agent Orange, the data, that mice exposed to TCDD and 2,4,5-T had higher frequencies of cleft palate, matches the prediction. Additionally, veterans exposed to Agent Orange had higher rates of certain diseases, further supporting the hypothesis. We can thus accept the hypothesis that Agent Orange increases the incidence of birth defects and disease.

Scientific Method in Practice

In practice, the scientific method is not as rigid and structured as it might first appear. Sometimes an experiment leads to conclusions that favor a change in approach; often, an experiment brings entirely new scientific questions to the puzzle. Many times, science does not operate in a linear fashion; instead, scientists continually draw inferences and make generalizations, finding patterns as their research proceeds (figure \(\PageIndex{12}\)). Even if the hypothesis was supported, scientists may still continue to test it in different ways. For example, scientists explore the impacts of Agent Orange, examining long-term health impacts as Vietnam veterans age.

A flow chart shows the steps in the scientific method. In step 1, an observation is made. In step 2, a question is asked about the observation. In step 3, an answer to the question, called a hypothesis, is proposed. In step 4, a prediction is made based on the hypothesis. In step 5, an experiment is done to test the prediction. In step 6, the results are analyzed to determine whether or not the hypothesis is supported. If the hypothesis is not supported, another hypothesis is made. In either case, the results are reported.

Scientific findings can influence decision making. In response to evidence regarding the effect of Agent Orange on human health, compensation is now available for Vietnam veterans who were exposed to Agent Orange and develop certain diseases. The use of Agent Orange is also banned in the U.S. Finally, the U.S. has began cleaning sites in Vietnam that are still contaminated with TCDD.

As another simple example, an experiment might be conducted to test the hypothesis that phosphate limits the growth of algae in freshwater ponds. A series of artificial ponds are filled with water and half of them are treated by adding phosphate each week, while the other half are treated by adding a salt that is known not to be used by algae. The variable here is the phosphate (or lack of phosphate), the experimental or treatment cases are the ponds with added phosphate and the control ponds are those with something inert added, such as the salt. Just adding something is also a control against the possibility that adding extra matter to the pond has an effect. If the treated ponds show lesser growth of algae, then we have found support for our hypothesis. If they do not, then we reject our hypothesis. Be aware that rejecting one hypothesis does not determine whether or not the other hypotheses can be accepted; it simply eliminates one hypothesis that is not valid (Figure \(\PageIndex{12}\)). Using the scientific method, the hypotheses that are inconsistent with experimental data are rejected.

Institute of Medicine (US) Committee to Review the Health Effects in Vietnam Veterans of Exposure to Herbicides. Veterans and Agent Orange: Health Effects of Herbicides Used in Vietnam . Washington (DC): National Academies Press (US); 1994. 2, History of the Controversy Over the Use of Herbicides.

Neubert, D., Dillmann, I. Embryotoxic effects in mice treated with 2,4,5-trichlorophenoxyacetic acid and 2,3,7,8-tetrachlorodibenzo-p-dioxin . Naunyn-Schmiedeberg's Arch. Pharmacol. 272, 243–264 (1972).

Stellman, J. M., & Stellman, S. D. (2018). Agent Orange During the Vietnam War: The Lingering Issue of Its Civilian and Military Health Impact . American journal of public health , 108 (6), 726–728.

Yi, S. W., Ohrr, H., Hong, J. S., & Yi, J. J. (2013). Agent Orange exposure and prevalence of self-reported diseases in Korean Vietnam veterans . Journal of preventive medicine and public health = Yebang Uihakhoe chi , 46 (5), 213–225.

American Association for the Advancement of Science (AAAS). 1990. Science for All Americans. AAAS, Washington, DC.

Barnes, B. 1985. About Science. Blackwell Ltd ,London, UK.

Giere, R.N. 2005. Understanding Scientific Reasoning. 5th ed. Wadsworth Publishing, New York, NY.

Kuhn, T.S. 1996. The Structure of Scientific Revolutions. 3rd ed. University of Chicago Press, Chicago, IL.

McCain, G. and E.M. Siegal. 1982. The Game of Science. Holbrook Press Inc., Boston, MA.

Moore, J.A. 1999. Science as a Way of Knowing. Harvard University Press, Boston, MA.

Popper, K. 1979. Objective Knowledge: An Evolutionary Approach. Clarendon Press, Oxford, UK.

Raven, P.H., G.B. Johnson, K.A. Mason, and J. Losos. 2013. Biology. 10th ed. McGraw-Hill, Columbus, OH.

Silver, B.L. 2000. The Ascent of Science. Oxford University Press, Oxford, UK.

Contributors and Attributions

  • Modified by Kyle Whittinghill (University of Pittsburgh)

Samantha Fowler (Clayton State University), Rebecca Roush (Sandhills Community College), James Wise (Hampton University). Original content by OpenStax (CC BY 4.0; Access for free at https://cnx.org/contents/b3c1e1d2-83...4-e119a8aafbdd ).

  • Modified by Melissa Ha
  • 1.2: The Process of Science by OpenStax , is licensed CC BY
  • What is Science? from An Introduction to Geology by Chris Johnson et al. (licensed under CC-BY-NC-SA )
  • The Process of Science from Environmental Biology by Matthew R. Fisher (licensed under CC-BY )
  • Scientific Methods from Biology by John W. Kimball (licensed under CC-BY )
  • Scientific Papers from Biology by John W. Kimball ( CC-BY )
  • Environmental Science: A Canadian perspective by Bill Freedman Chapter 2: Science as a Way of Understanding the Natural World

Chapter 2: Chemistry of Life

Chapter 3: macromolecules, chapter 4: cell structure and function, chapter 5: membranes and cellular transport, chapter 6: cell signaling, chapter 7: metabolism, chapter 8: cellular respiration, chapter 9: photosynthesis, chapter 10: cell cycle and division, chapter 11: meiosis, chapter 12: classical and modern genetics, chapter 13: dna structure and function, chapter 14: gene expression, chapter 15: biotechnology, chapter 16: viruses, chapter 17: nutrition and digestion, chapter 18: nervous system, chapter 19: sensory systems, chapter 20: musculoskeletal system, chapter 21: endocrine system, chapter 22: circulatory and pulmonary systems, chapter 23: osmoregulation and excretion, chapter 24: immune system, chapter 25: reproduction and development, chapter 26: behavior, chapter 27: ecosystems, chapter 28: population and community ecology, chapter 29: biodiversity and conservation, chapter 30: speciation and diversity, chapter 31: natural selection, chapter 32: population genetics, chapter 33: evolutionary history, chapter 34: plant structure, growth, and nutrition, chapter 35: plant reproduction, chapter 36: plant responses to the environment.

The JoVE video player is compatible with HTML5 and Adobe Flash. Older browsers that do not support HTML5 and the H.264 video codec will still use a Flash-based video player. We recommend downloading the newest version of Flash here, but we support all versions 10 and above.

hypothesis of the scientific method

The scientific method is a detailed, stepwise process for answering questions. For example, a scientist makes an observation that the slugs destroy some cabbages but not those near garlic.

Such observations lead to asking questions, "Could garlic be used to deter slugs from ruining a cabbage patch?" After formulating questions, the scientist can then develop hypotheses —potential explanations for the observations that lead to specific, testable predictions.

In this case, a hypothesis could be that garlic repels slugs, which predicts that cabbages surrounded by garlic powder will suffer less damage than the ones without it. 

The hypothesis is then tested through a series of experiments designed to eliminate hypotheses.

The experimental setup involves defining variables. An independent variable is an item that is being tested, in this case, garlic addition. The dependent variable describes the measurement used to determine the outcome, such as the number of slugs on the cabbages.

In addition, the slugs must be divided into groups, experimental and control. These groups are identical, except that the experimental group is exposed to garlic powder.

After data are collected and analyzed, conclusions are made, and results are communicated to other scientists.

1.3: The Scientific Method

The scientific method is a detailed, empirical problem-solving process used by biologists and other scientists. This iterative approach involves formulating a question based on observation, developing a testable potential explanation for the observation (called a hypothesis), making and testing predictions based on the hypothesis, and using the findings to create new hypotheses and predictions.

Generally, predictions are tested using carefully-designed experiments. Based on the outcome of these experiments, the original hypothesis may need to be refined, and new hypotheses and questions can be generated. Importantly, this illustrates that the scientific method is not a stepwise recipe. Instead, it is a continuous refinement and testing of ideas based on new observations, which is the crux of scientific inquiry.

Science is mutable and continuously changes as scientists learn more about the world, physical phenomena and how organisms interact with their environment. For this reason, scientists avoid claiming to ‘prove' a specific idea. Instead, they gather evidence that either supports or refutes a given hypothesis.

Making Observations and Formulating Hypotheses

A hypothesis is preceded by an initial observation, during which information is gathered by the senses (e.g., vision, hearing) or using scientific tools and instruments. This observation leads to a question that prompts the formation of an initial hypothesis, a (testable) possible answer to the question. For example, the observation that slugs eat some cabbage plants but not cabbage plants located near garlic may prompt the question: why do slugs selectively not eat cabbage plants near garlic? One possible hypothesis, or answer to this question, is that slugs have an aversion to garlic. Based on this hypothesis, one might predict that slugs will not eat cabbage plants surrounded by a ring of garlic powder.

A hypothesis should be falsifiable, meaning that there are ways to disprove it if it is untrue. In other words, a hypothesis should be testable. Scientists often articulate and explicitly test for the opposite of the hypothesis, which is called the null hypothesis. In this case, the null hypothesis is that slugs do not have an aversion to garlic. The null hypothesis would be supported if, contrary to the prediction, slugs eat cabbage plants that are surrounded by garlic powder.

Testing a Hypothesis

When possible, scientists test hypotheses using controlled experiments that include independent and dependent variables, as well as control and experimental groups.

An independent variable is an item expected to have an effect (e.g., the garlic powder used in the slug and cabbage experiment or treatment given in a clinical trial). Dependent variables are the measurements used to determine the outcome of an experiment. In the experiment with slugs, cabbages, and garlic, the number of slugs eating cabbages is the dependent variable. This number is expected to depend on the presence or absence of garlic powder rings around the cabbage plants.

Experiments require experimental and control groups. An experimental group is treated with or exposed to the independent variable (i.e., the manipulation or treatment). For example, in the garlic aversion experiment with slugs, the experimental group is a group of cabbage plants surrounded by a garlic powder ring. A control group is subject to the same conditions as the experimental group, with the exception of the independent variable. Control groups in this experiment might include a group of cabbage plants in the same area that is surrounded by a non-garlic powder ring (to control for powder aversion) and a group that is not surrounded by any particular substance (to control for cabbage aversion). It is essential to include a control group because, without one, it is unclear whether the outcome is the result of the treatment or manipulation.

Refining a Hypothesis

If the results of an experiment support the hypothesis, further experiments may be designed and carried out to provide support for the hypothesis. The hypothesis may also be refined and made more specific. For example, additional experiments could determine whether slugs also have an aversion to other plants of the Allium genus, like onions.

If the results do not support the hypothesis, then the original hypothesis may be modified based on the new observations. It is important to rule out potential problems with the experimental design before modifying the hypothesis. For example, if slugs demonstrate an aversion to both garlic and non-garlic powder, the experiment can be carried out again using fresh garlic instead of powdered garlic. If the slugs still exhibit no aversion to garlic, then the original hypothesis can be modified.

Communication

The results of the experiments should be communicated to other scientists and the public, regardless of whether the data support the original hypothesis. This information can guide the development of new hypotheses and experimental questions.

Get cutting-edge science videos from J o VE sent straight to your inbox every month.

mktb-description

We use cookies to enhance your experience on our website.

By continuing to use our website or clicking “Continue”, you are agreeing to accept our cookies.

WeChat QR Code - JoVE

Back Home

  • Science Notes Posts
  • Contact Science Notes
  • Todd Helmenstine Biography
  • Anne Helmenstine Biography
  • Free Printable Periodic Tables (PDF and PNG)
  • Periodic Table Wallpapers
  • Interactive Periodic Table
  • Periodic Table Posters
  • How to Grow Crystals
  • Chemistry Projects
  • Fire and Flames Projects
  • Holiday Science
  • Chemistry Problems With Answers
  • Physics Problems
  • Unit Conversion Example Problems
  • Chemistry Worksheets
  • Biology Worksheets
  • Periodic Table Worksheets
  • Physical Science Worksheets
  • Science Lab Worksheets
  • My Amazon Books

Steps of the Scientific Method 2

Scientific Method Steps

The scientific method is a system scientists and other people use to ask and answer questions about the natural world. In a nutshell, the scientific method works by making observations, asking a question or identifying a problem, and then designing and analyzing an experiment to test a prediction of what you expect will happen. It’s a powerful analytical tool because once you draw conclusions, you may be able to answer a question and make predictions about future events.

These are the steps of the scientific method:

  • Make observations.

Sometimes this step is omitted in the list, but you always make observations before asking a question, whether you recognize it or not. You always have some background information about a topic. However, it’s a good idea to be systematic about your observations and to record them in a lab book or another way. Often, these initial observations can help you identify a question. Later on, this information may help you decide on another area of investigation of a topic.

  • Ask a question, identify a problem, or state an objective.

There are various forms of this step. Sometimes you may want to state an objective and a problem and then phrase it in the form of a question. The reason it’s good to state a question is because it’s easiest to design an experiment to answer a question. A question helps you form a hypothesis, which focuses your study.

  • Research the topic.

You should conduct background research on your topic to learn as much as you can about it. This can occur both before and after you state an objective and form a hypothesis. In fact, you may find yourself researching the topic throughout the entire process.

  • Formulate a hypothesis.

A hypothesis is a formal prediction. There are two forms of a hypothesis that are particularly easy to test. One is to state the hypothesis as an “if, then” statement. An example of an if-then hypothesis is: “If plants are grown under red light, then they will be taller than plants grown under white light.” Another good type of hypothesis is what is called a “ null hypothesis ” or “no difference” hypothesis. An example of a null hypothesis is: “There is no difference in the rate of growth of plants grown under red light compared with plants grown under white light.”

  • Design and perform an experiment to test the hypothesis.

Once you have a hypothesis, you need to find a way to test it. This involves an experiment . There are many ways to set up an experiment. A basic experiment contains variables, which are factors you can measure. The two main variables are the independent variable (the one you control or change) and the dependent variable (the one you measure to see if it is affected when you change the independent variable).

  • Record and analyze the data you obtain from the experiment.

It’s a good idea to record notes alongside your data, stating anything unusual or unexpected. Once you have the data, draw a chart, table, or graph to present your results. Next, analyze the results to understand what it all means.

  • Determine whether you accept or reject the hypothesis.

Do the results support the hypothesis or not? Keep in mind, it’s okay if the hypothesis is not supported, especially if you are testing a null hypothesis. Sometimes excluding an explanation answers your question! There is no “right” or “wrong” here. However, if you obtain an unexpected result, you might want to perform another experiment.

  • Draw a conclusion and report the results of the experiment.

What good is knowing something if you keep it to yourself? You should report the outcome of the experiment, even if it’s just in a notebook. What did you learn from the experiment?

How Many Steps Are There?

You may be asked to list the 5 steps of the scientific method or the 6 steps of the method or some other number. There are different ways of grouping together the steps outlined here, so it’s a good idea to learn the way an instructor wants you to list the steps. No matter how many steps there are, the order is always the same.

1.2 The Scientific Methods

Section learning objectives.

By the end of this section, you will be able to do the following:

  • Explain how the methods of science are used to make scientific discoveries
  • Define a scientific model and describe examples of physical and mathematical models used in physics
  • Compare and contrast hypothesis, theory, and law

Teacher Support

The learning objectives in this section will help your students master the following standards:

  • (A) know the definition of science and understand that it has limitations, as specified in subsection (b)(2) of this section;
  • (B) know that scientific hypotheses are tentative and testable statements that must be capable of being supported or not supported by observational evidence. Hypotheses of durable explanatory power which have been tested over a wide variety of conditions are incorporated into theories;
  • (C) know that scientific theories are based on natural and physical phenomena and are capable of being tested by multiple independent researchers. Unlike hypotheses, scientific theories are well-established and highly-reliable explanations, but may be subject to change as new areas of science and new technologies are developed;
  • (D) distinguish between scientific hypotheses and scientific theories.

Section Key Terms

[OL] Pre-assessment for this section could involve students sharing or writing down an anecdote about when they used the methods of science. Then, students could label their thought processes in their anecdote with the appropriate scientific methods. The class could also discuss their definitions of theory and law, both outside and within the context of science.

[OL] It should be noted and possibly mentioned that a scientist , as mentioned in this section, does not necessarily mean a trained scientist. It could be anyone using methods of science.

Scientific Methods

Scientists often plan and carry out investigations to answer questions about the universe around us. These investigations may lead to natural laws. Such laws are intrinsic to the universe, meaning that humans did not create them and cannot change them. We can only discover and understand them. Their discovery is a very human endeavor, with all the elements of mystery, imagination, struggle, triumph, and disappointment inherent in any creative effort. The cornerstone of discovering natural laws is observation. Science must describe the universe as it is, not as we imagine or wish it to be.

We all are curious to some extent. We look around, make generalizations, and try to understand what we see. For example, we look up and wonder whether one type of cloud signals an oncoming storm. As we become serious about exploring nature, we become more organized and formal in collecting and analyzing data. We attempt greater precision, perform controlled experiments (if we can), and write down ideas about how data may be organized. We then formulate models, theories, and laws based on the data we have collected, and communicate those results with others. This, in a nutshell, describes the scientific method that scientists employ to decide scientific issues on the basis of evidence from observation and experiment.

An investigation often begins with a scientist making an observation . The scientist observes a pattern or trend within the natural world. Observation may generate questions that the scientist wishes to answer. Next, the scientist may perform some research about the topic and devise a hypothesis . A hypothesis is a testable statement that describes how something in the natural world works. In essence, a hypothesis is an educated guess that explains something about an observation.

[OL] An educated guess is used throughout this section in describing a hypothesis to combat the tendency to think of a theory as an educated guess.

Scientists may test the hypothesis by performing an experiment . During an experiment, the scientist collects data that will help them learn about the phenomenon they are studying. Then the scientists analyze the results of the experiment (that is, the data), often using statistical, mathematical, and/or graphical methods. From the data analysis, they draw conclusions. They may conclude that their experiment either supports or rejects their hypothesis. If the hypothesis is supported, the scientist usually goes on to test another hypothesis related to the first. If their hypothesis is rejected, they will often then test a new and different hypothesis in their effort to learn more about whatever they are studying.

Scientific processes can be applied to many situations. Let’s say that you try to turn on your car, but it will not start. You have just made an observation! You ask yourself, "Why won’t my car start?" You can now use scientific processes to answer this question. First, you generate a hypothesis such as, "The car won’t start because it has no gasoline in the gas tank." To test this hypothesis, you put gasoline in the car and try to start it again. If the car starts, then your hypothesis is supported by the experiment. If the car does not start, then your hypothesis is rejected. You will then need to think up a new hypothesis to test such as, "My car won’t start because the fuel pump is broken." Hopefully, your investigations lead you to discover why the car won’t start and enable you to fix it.

A model is a representation of something that is often too difficult (or impossible) to study directly. Models can take the form of physical models, equations, computer programs, or simulations—computer graphics/animations. Models are tools that are especially useful in modern physics because they let us visualize phenomena that we normally cannot observe with our senses, such as very small objects or objects that move at high speeds. For example, we can understand the structure of an atom using models, without seeing an atom with our own eyes. Although images of single atoms are now possible, these images are extremely difficult to achieve and are only possible due to the success of our models. The existence of these images is a consequence rather than a source of our understanding of atoms. Models are always approximate, so they are simpler to consider than the real situation; the more complete a model is, the more complicated it must be. Models put the intangible or the extremely complex into human terms that we can visualize, discuss, and hypothesize about.

Scientific models are constructed based on the results of previous experiments. Even still, models often only describe a phenomenon partially or in a few limited situations. Some phenomena are so complex that they may be impossible to model them in their entirety, even using computers. An example is the electron cloud model of the atom in which electrons are moving around the atom’s center in distinct clouds ( Figure 1.12 ), that represent the likelihood of finding an electron in different places. This model helps us to visualize the structure of an atom. However, it does not show us exactly where an electron will be within its cloud at any one particular time.

As mentioned previously, physicists use a variety of models including equations, physical models, computer simulations, etc. For example, three-dimensional models are often commonly used in chemistry and physics to model molecules. Properties other than appearance or location are usually modelled using mathematics, where functions are used to show how these properties relate to one another. Processes such as the formation of a star or the planets, can also be modelled using computer simulations. Once a simulation is correctly programmed based on actual experimental data, the simulation can allow us to view processes that happened in the past or happen too quickly or slowly for us to observe directly. In addition, scientists can also run virtual experiments using computer-based models. In a model of planet formation, for example, the scientist could alter the amount or type of rocks present in space and see how it affects planet formation.

Scientists use models and experimental results to construct explanations of observations or design solutions to problems. For example, one way to make a car more fuel efficient is to reduce the friction or drag caused by air flowing around the moving car. This can be done by designing the body shape of the car to be more aerodynamic, such as by using rounded corners instead of sharp ones. Engineers can then construct physical models of the car body, place them in a wind tunnel, and examine the flow of air around the model. This can also be done mathematically in a computer simulation. The air flow pattern can be analyzed for regions smooth air flow and for eddies that indicate drag. The model of the car body may have to be altered slightly to produce the smoothest pattern of air flow (i.e., the least drag). The pattern with the least drag may be the solution to increasing fuel efficiency of the car. This solution might then be incorporated into the car design.

Using Models and the Scientific Processes

Be sure to secure loose items before opening the window or door.

In this activity, you will learn about scientific models by making a model of how air flows through your classroom or a room in your house.

  • One room with at least one window or door that can be opened
  • Work with a group of four, as directed by your teacher. Close all of the windows and doors in the room you are working in. Your teacher may assign you a specific window or door to study.
  • Before opening any windows or doors, draw a to-scale diagram of your room. First, measure the length and width of your room using the tape measure. Then, transform the measurement using a scale that could fit on your paper, such as 5 centimeters = 1 meter.
  • Your teacher will assign you a specific window or door to study air flow. On your diagram, add arrows showing your hypothesis (before opening any windows or doors) of how air will flow through the room when your assigned window or door is opened. Use pencil so that you can easily make changes to your diagram.
  • On your diagram, mark four locations where you would like to test air flow in your room. To test for airflow, hold a strip of single ply tissue paper between the thumb and index finger. Note the direction that the paper moves when exposed to the airflow. Then, for each location, predict which way the paper will move if your air flow diagram is correct.
  • Now, each member of your group will stand in one of the four selected areas. Each member will test the airflow Agree upon an approximate height at which everyone will hold their papers.
  • When you teacher tells you to, open your assigned window and/or door. Each person should note the direction that their paper points immediately after the window or door was opened. Record your results on your diagram.
  • Did the airflow test data support or refute the hypothetical model of air flow shown in your diagram? Why or why not? Correct your model based on your experimental evidence.
  • With your group, discuss how accurate your model is. What limitations did it have? Write down the limitations that your group agreed upon.
  • Yes, you could use your model to predict air flow through a new window. The earlier experiment of air flow would help you model the system more accurately.
  • Yes, you could use your model to predict air flow through a new window. The earlier experiment of air flow is not useful for modeling the new system.
  • No, you cannot model a system to predict the air flow through a new window. The earlier experiment of air flow would help you model the system more accurately.
  • No, you cannot model a system to predict the air flow through a new window. The earlier experiment of air flow is not useful for modeling the new system.

This Snap Lab! has students construct a model of how air flows in their classroom. Each group of four students will create a model of air flow in their classroom using a scale drawing of the room. Then, the groups will test the validity of their model by placing weathervanes that they have constructed around the room and opening a window or door. By observing the weather vanes, students will see how air actually flows through the room from a specific window or door. Students will then correct their model based on their experimental evidence. The following material list is given per group:

  • One room with at least one window or door that can be opened (An optimal configuration would be one window or door per group.)
  • Several pieces of construction paper (at least four per group)
  • Strips of single ply tissue paper
  • One tape measure (long enough to measure the dimensions of the room)
  • Group size can vary depending on the number of windows/doors available and the number of students in the class.
  • The room dimensions could be provided by the teacher. Also, students may need a brief introduction in how to make a drawing to scale.
  • This is another opportunity to discuss controlled experiments in terms of why the students should hold the strips of tissue paper at the same height and in the same way. One student could also serve as a control and stand far away from the window/door or in another area that will not receive air flow from the window/door.
  • You will probably need to coordinate this when multiple windows or doors are used. Only one window or door should be opened at a time for best results. Between openings, allow a short period (5 minutes) when all windows and doors are closed, if possible.

Answers to the Grasp Check will vary, but the air flow in the new window or door should be based on what the students observed in their experiment.

Scientific Laws and Theories

A scientific law is a description of a pattern in nature that is true in all circumstances that have been studied. That is, physical laws are meant to be universal , meaning that they apply throughout the known universe. Laws are often also concise, whereas theories are more complicated. A law can be expressed in the form of a single sentence or mathematical equation. For example, Newton’s second law of motion , which relates the motion of an object to the force applied ( F ), the mass of the object ( m ), and the object’s acceleration ( a ), is simply stated using the equation

Scientific ideas and explanations that are true in many, but not all situations in the universe are usually called principles . An example is Pascal’s principle , which explains properties of liquids, but not solids or gases. However, the distinction between laws and principles is sometimes not carefully made in science.

A theory is an explanation for patterns in nature that is supported by much scientific evidence and verified multiple times by multiple researchers. While many people confuse theories with educated guesses or hypotheses, theories have withstood more rigorous testing and verification than hypotheses.

[OL] Explain to students that in informal, everyday English the word theory can be used to describe an idea that is possibly true but that has not been proven to be true. This use of the word theory often leads people to think that scientific theories are nothing more than educated guesses. This is not just a misconception among students, but among the general public as well.

As a closing idea about scientific processes, we want to point out that scientific laws and theories, even those that have been supported by experiments for centuries, can still be changed by new discoveries. This is especially true when new technologies emerge that allow us to observe things that were formerly unobservable. Imagine how viewing previously invisible objects with a microscope or viewing Earth for the first time from space may have instantly changed our scientific theories and laws! What discoveries still await us in the future? The constant retesting and perfecting of our scientific laws and theories allows our knowledge of nature to progress. For this reason, many scientists are reluctant to say that their studies prove anything. By saying support instead of prove , it keeps the door open for future discoveries, even if they won’t occur for centuries or even millennia.

[OL] With regard to scientists avoiding using the word prove , the general public knows that science has proven certain things such as that the heart pumps blood and the Earth is round. However, scientists should shy away from using prove because it is impossible to test every single instance and every set of conditions in a system to absolutely prove anything. Using support or similar terminology leaves the door open for further discovery.

Check Your Understanding

  • Models are simpler to analyze.
  • Models give more accurate results.
  • Models provide more reliable predictions.
  • Models do not require any computer calculations.
  • They are the same.
  • A hypothesis has been thoroughly tested and found to be true.
  • A hypothesis is a tentative assumption based on what is already known.
  • A hypothesis is a broad explanation firmly supported by evidence.
  • A scientific model is a representation of something that can be easily studied directly. It is useful for studying things that can be easily analyzed by humans.
  • A scientific model is a representation of something that is often too difficult to study directly. It is useful for studying a complex system or systems that humans cannot observe directly.
  • A scientific model is a representation of scientific equipment. It is useful for studying working principles of scientific equipment.
  • A scientific model is a representation of a laboratory where experiments are performed. It is useful for studying requirements needed inside the laboratory.
  • The hypothesis must be validated by scientific experiments.
  • The hypothesis must not include any physical quantity.
  • The hypothesis must be a short and concise statement.
  • The hypothesis must apply to all the situations in the universe.
  • A scientific theory is an explanation of natural phenomena that is supported by evidence.
  • A scientific theory is an explanation of natural phenomena without the support of evidence.
  • A scientific theory is an educated guess about the natural phenomena occurring in nature.
  • A scientific theory is an uneducated guess about natural phenomena occurring in nature.
  • A hypothesis is an explanation of the natural world with experimental support, while a scientific theory is an educated guess about a natural phenomenon.
  • A hypothesis is an educated guess about natural phenomenon, while a scientific theory is an explanation of natural world with experimental support.
  • A hypothesis is experimental evidence of a natural phenomenon, while a scientific theory is an explanation of the natural world with experimental support.
  • A hypothesis is an explanation of the natural world with experimental support, while a scientific theory is experimental evidence of a natural phenomenon.

Use the Check Your Understanding questions to assess students’ achievement of the section’s learning objectives. If students are struggling with a specific objective, the Check Your Understanding will help identify which objective and direct students to the relevant content.

As an Amazon Associate we earn from qualifying purchases.

This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute Texas Education Agency (TEA). The original material is available at: https://www.texasgateway.org/book/tea-physics . Changes were made to the original material, including updates to art, structure, and other content updates.

Access for free at https://openstax.org/books/physics/pages/1-introduction
  • Authors: Paul Peter Urone, Roger Hinrichs
  • Publisher/website: OpenStax
  • Book title: Physics
  • Publication date: Mar 26, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/physics/pages/1-introduction
  • Section URL: https://openstax.org/books/physics/pages/1-2-the-scientific-methods

© Jan 19, 2024 Texas Education Agency (TEA). The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Science and the scientific method: Definitions and examples

Here's a look at the foundation of doing science — the scientific method.

Kids follow the scientific method to carry out an experiment.

The scientific method

Hypothesis, theory and law, a brief history of science, additional resources, bibliography.

Science is a systematic and logical approach to discovering how things in the universe work. It is also the body of knowledge accumulated through the discoveries about all the things in the universe. 

The word "science" is derived from the Latin word "scientia," which means knowledge based on demonstrable and reproducible data, according to the Merriam-Webster dictionary . True to this definition, science aims for measurable results through testing and analysis, a process known as the scientific method. Science is based on fact, not opinion or preferences. The process of science is designed to challenge ideas through research. One important aspect of the scientific process is that it focuses only on the natural world, according to the University of California, Berkeley . Anything that is considered supernatural, or beyond physical reality, does not fit into the definition of science.

When conducting research, scientists use the scientific method to collect measurable, empirical evidence in an experiment related to a hypothesis (often in the form of an if/then statement) that is designed to support or contradict a scientific theory .

"As a field biologist, my favorite part of the scientific method is being in the field collecting the data," Jaime Tanner, a professor of biology at Marlboro College, told Live Science. "But what really makes that fun is knowing that you are trying to answer an interesting question. So the first step in identifying questions and generating possible answers (hypotheses) is also very important and is a creative process. Then once you collect the data you analyze it to see if your hypothesis is supported or not."

Here's an illustration showing the steps in the scientific method.

The steps of the scientific method go something like this, according to Highline College :

  • Make an observation or observations.
  • Form a hypothesis — a tentative description of what's been observed, and make predictions based on that hypothesis.
  • Test the hypothesis and predictions in an experiment that can be reproduced.
  • Analyze the data and draw conclusions; accept or reject the hypothesis or modify the hypothesis if necessary.
  • Reproduce the experiment until there are no discrepancies between observations and theory. "Replication of methods and results is my favorite step in the scientific method," Moshe Pritsker, a former post-doctoral researcher at Harvard Medical School and CEO of JoVE, told Live Science. "The reproducibility of published experiments is the foundation of science. No reproducibility — no science."

Some key underpinnings to the scientific method:

  • The hypothesis must be testable and falsifiable, according to North Carolina State University . Falsifiable means that there must be a possible negative answer to the hypothesis.
  • Research must involve deductive reasoning and inductive reasoning . Deductive reasoning is the process of using true premises to reach a logical true conclusion while inductive reasoning uses observations to infer an explanation for those observations.
  • An experiment should include a dependent variable (which does not change) and an independent variable (which does change), according to the University of California, Santa Barbara .
  • An experiment should include an experimental group and a control group. The control group is what the experimental group is compared against, according to Britannica .

The process of generating and testing a hypothesis forms the backbone of the scientific method. When an idea has been confirmed over many experiments, it can be called a scientific theory. While a theory provides an explanation for a phenomenon, a scientific law provides a description of a phenomenon, according to The University of Waikato . One example would be the law of conservation of energy, which is the first law of thermodynamics that says that energy can neither be created nor destroyed. 

A law describes an observed phenomenon, but it doesn't explain why the phenomenon exists or what causes it. "In science, laws are a starting place," said Peter Coppinger, an associate professor of biology and biomedical engineering at the Rose-Hulman Institute of Technology. "From there, scientists can then ask the questions, 'Why and how?'"

Laws are generally considered to be without exception, though some laws have been modified over time after further testing found discrepancies. For instance, Newton's laws of motion describe everything we've observed in the macroscopic world, but they break down at the subatomic level.

This does not mean theories are not meaningful. For a hypothesis to become a theory, scientists must conduct rigorous testing, typically across multiple disciplines by separate groups of scientists. Saying something is "just a theory" confuses the scientific definition of "theory" with the layperson's definition. To most people a theory is a hunch. In science, a theory is the framework for observations and facts, Tanner told Live Science.

This Copernican heliocentric solar system, from 1708, shows the orbit of the moon around the Earth, and the orbits of the Earth and planets round the sun, including Jupiter and its moons, all surrounded by the 12 signs of the zodiac.

The earliest evidence of science can be found as far back as records exist. Early tablets contain numerals and information about the solar system , which were derived by using careful observation, prediction and testing of those predictions. Science became decidedly more "scientific" over time, however.

1200s: Robert Grosseteste developed the framework for the proper methods of modern scientific experimentation, according to the Stanford Encyclopedia of Philosophy. His works included the principle that an inquiry must be based on measurable evidence that is confirmed through testing.

1400s: Leonardo da Vinci began his notebooks in pursuit of evidence that the human body is microcosmic. The artist, scientist and mathematician also gathered information about optics and hydrodynamics.

1500s: Nicolaus Copernicus advanced the understanding of the solar system with his discovery of heliocentrism. This is a model in which Earth and the other planets revolve around the sun, which is the center of the solar system.

1600s: Johannes Kepler built upon those observations with his laws of planetary motion. Galileo Galilei improved on a new invention, the telescope, and used it to study the sun and planets. The 1600s also saw advancements in the study of physics as Isaac Newton developed his laws of motion.

1700s: Benjamin Franklin discovered that lightning is electrical. He also contributed to the study of oceanography and meteorology. The understanding of chemistry also evolved during this century as Antoine Lavoisier, dubbed the father of modern chemistry , developed the law of conservation of mass.

1800s: Milestones included Alessandro Volta's discoveries regarding electrochemical series, which led to the invention of the battery. John Dalton also introduced atomic theory, which stated that all matter is composed of atoms that combine to form molecules. The basis of modern study of genetics advanced as Gregor Mendel unveiled his laws of inheritance. Later in the century, Wilhelm Conrad Röntgen discovered X-rays , while George Ohm's law provided the basis for understanding how to harness electrical charges.

1900s: The discoveries of Albert Einstein , who is best known for his theory of relativity, dominated the beginning of the 20th century. Einstein's theory of relativity is actually two separate theories. His special theory of relativity, which he outlined in a 1905 paper, " The Electrodynamics of Moving Bodies ," concluded that time must change according to the speed of a moving object relative to the frame of reference of an observer. His second theory of general relativity, which he published as " The Foundation of the General Theory of Relativity ," advanced the idea that matter causes space to curve.

In 1952, Jonas Salk developed the polio vaccine , which reduced the incidence of polio in the United States by nearly 90%, according to Britannica . The following year, James D. Watson and Francis Crick discovered the structure of DNA , which is a double helix formed by base pairs attached to a sugar-phosphate backbone, according to the National Human Genome Research Institute .

2000s: The 21st century saw the first draft of the human genome completed, leading to a greater understanding of DNA. This advanced the study of genetics, its role in human biology and its use as a predictor of diseases and other disorders, according to the National Human Genome Research Institute .

  • This video from City University of New York delves into the basics of what defines science.
  • Learn about what makes science science in this book excerpt from Washington State University .
  • This resource from the University of Michigan — Flint explains how to design your own scientific study.

Merriam-Webster Dictionary, Scientia. 2022. https://www.merriam-webster.com/dictionary/scientia

University of California, Berkeley, "Understanding Science: An Overview." 2022. ​​ https://undsci.berkeley.edu/article/0_0_0/intro_01  

Highline College, "Scientific method." July 12, 2015. https://people.highline.edu/iglozman/classes/astronotes/scimeth.htm  

North Carolina State University, "Science Scripts." https://projects.ncsu.edu/project/bio183de/Black/science/science_scripts.html  

University of California, Santa Barbara. "What is an Independent variable?" October 31,2017. http://scienceline.ucsb.edu/getkey.php?key=6045  

Encyclopedia Britannica, "Control group." May 14, 2020. https://www.britannica.com/science/control-group  

The University of Waikato, "Scientific Hypothesis, Theories and Laws." https://sci.waikato.ac.nz/evolution/Theories.shtml  

Stanford Encyclopedia of Philosophy, Robert Grosseteste. May 3, 2019. https://plato.stanford.edu/entries/grosseteste/  

Encyclopedia Britannica, "Jonas Salk." October 21, 2021. https://www.britannica.com/ biography /Jonas-Salk

National Human Genome Research Institute, "​Phosphate Backbone." https://www.genome.gov/genetics-glossary/Phosphate-Backbone  

National Human Genome Research Institute, "What is the Human Genome Project?" https://www.genome.gov/human-genome-project/What  

‌ Live Science contributor Ashley Hamer updated this article on Jan. 16, 2022.

Sign up for the Live Science daily newsletter now

Get the world’s most fascinating discoveries delivered straight to your inbox.

Alina Bradford

Largest gold nugget ever found in England unearthed with faulty metal detector

Polar vortex is 'spinning backwards' above Arctic after major reversal event

Rare meningitis and bloodstream infections on the rise in the US, CDC warns

Most Popular

By Anna Gora December 27, 2023

By Anna Gora December 26, 2023

By Anna Gora December 25, 2023

By Emily Cooke December 23, 2023

By Victoria Atkinson December 22, 2023

By Anna Gora December 16, 2023

By Anna Gora December 15, 2023

By Anna Gora November 09, 2023

By Donavyn Coffey November 06, 2023

By Anna Gora October 31, 2023

By Anna Gora October 26, 2023

  • 2 James Webb telescope confirms there is something seriously wrong with our understanding of the universe
  • 3 Sperm whales drop giant poop bombs to save themselves from orca attack
  • 4 Eleonora's falcon: The raptor that imprisons birds live by stripping their feathers and stuffing them in rocks
  • 5 Odysseus lunar lander, 1st US craft on the moon in 50 years, has died and will 'not complete another call home'
  • 2 Polar vortex is 'spinning backwards' above Arctic after major reversal event
  • 3 James Webb telescope confirms there is something seriously wrong with our understanding of the universe
  • 4 Single enormous object left 2 billion craters on Mars, scientists discover

What Is a Hypothesis? (Science)

If...,Then...

Angela Lumsden/Getty Images

  • Scientific Method
  • Chemical Laws
  • Periodic Table
  • Projects & Experiments
  • Biochemistry
  • Physical Chemistry
  • Medical Chemistry
  • Chemistry In Everyday Life
  • Famous Chemists
  • Activities for Kids
  • Abbreviations & Acronyms
  • Weather & Climate
  • Ph.D., Biomedical Sciences, University of Tennessee at Knoxville
  • B.A., Physics and Mathematics, Hastings College

A hypothesis (plural hypotheses) is a proposed explanation for an observation. The definition depends on the subject.

In science, a hypothesis is part of the scientific method. It is a prediction or explanation that is tested by an experiment. Observations and experiments may disprove a scientific hypothesis, but can never entirely prove one.

In the study of logic, a hypothesis is an if-then proposition, typically written in the form, "If X , then Y ."

In common usage, a hypothesis is simply a proposed explanation or prediction, which may or may not be tested.

Writing a Hypothesis

Most scientific hypotheses are proposed in the if-then format because it's easy to design an experiment to see whether or not a cause and effect relationship exists between the independent variable and the dependent variable . The hypothesis is written as a prediction of the outcome of the experiment.

  • Null Hypothesis and Alternative Hypothesis

Statistically, it's easier to show there is no relationship between two variables than to support their connection. So, scientists often propose the null hypothesis . The null hypothesis assumes changing the independent variable will have no effect on the dependent variable.

In contrast, the alternative hypothesis suggests changing the independent variable will have an effect on the dependent variable. Designing an experiment to test this hypothesis can be trickier because there are many ways to state an alternative hypothesis.

For example, consider a possible relationship between getting a good night's sleep and getting good grades. The null hypothesis might be stated: "The number of hours of sleep students get is unrelated to their grades" or "There is no correlation between hours of sleep and grades."

An experiment to test this hypothesis might involve collecting data, recording average hours of sleep for each student and grades. If a student who gets eight hours of sleep generally does better than students who get four hours of sleep or 10 hours of sleep, the hypothesis might be rejected.

But the alternative hypothesis is harder to propose and test. The most general statement would be: "The amount of sleep students get affects their grades." The hypothesis might also be stated as "If you get more sleep, your grades will improve" or "Students who get nine hours of sleep have better grades than those who get more or less sleep."

In an experiment, you can collect the same data, but the statistical analysis is less likely to give you a high confidence limit.

Usually, a scientist starts out with the null hypothesis. From there, it may be possible to propose and test an alternative hypothesis, to narrow down the relationship between the variables.

Example of a Hypothesis

Examples of a hypothesis include:

  • If you drop a rock and a feather, (then) they will fall at the same rate.
  • Plants need sunlight in order to live. (if sunlight, then life)
  • Eating sugar gives you energy. (if sugar, then energy)
  • White, Jay D.  Research in Public Administration . Conn., 1998.
  • Schick, Theodore, and Lewis Vaughn.  How to Think about Weird Things: Critical Thinking for a New Age . McGraw-Hill Higher Education, 2002.
  • Null Hypothesis Definition and Examples
  • Definition of a Hypothesis
  • What Are the Elements of a Good Hypothesis?
  • Six Steps of the Scientific Method
  • What Are Examples of a Hypothesis?
  • Understanding Simple vs Controlled Experiments
  • Scientific Method Flow Chart
  • Scientific Method Vocabulary Terms
  • What Is a Testable Hypothesis?
  • Null Hypothesis Examples
  • What 'Fail to Reject' Means in a Hypothesis Test
  • How To Design a Science Fair Experiment
  • What Is an Experiment? Definition and Design
  • Hypothesis Test for the Difference of Two Population Proportions
  • How to Conduct a Hypothesis Test
  • Bipolar Disorder
  • Therapy Center
  • When To See a Therapist
  • Types of Therapy
  • Best Online Therapy
  • Best Couples Therapy
  • Best Family Therapy
  • Managing Stress
  • Sleep and Dreaming
  • Understanding Emotions
  • Self-Improvement
  • Healthy Relationships
  • Student Resources
  • Personality Types
  • Verywell Mind Insights
  • 2023 Verywell Mind 25
  • Mental Health in the Classroom
  • Editorial Process
  • Meet Our Review Board
  • Crisis Support

Scientific Method Steps in Psychology Research

Steps, Uses, and Key Terms

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

hypothesis of the scientific method

Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.

hypothesis of the scientific method

Verywell / Theresa Chiechi

How do researchers investigate psychological phenomena? They utilize a process known as the scientific method to study different aspects of how people think and behave.

When conducting research, the scientific method steps to follow are:

  • Observe what you want to investigate
  • Ask a research question and make predictions
  • Test the hypothesis and collect data
  • Examine the results and draw conclusions
  • Report and share the results 

This process not only allows scientists to investigate and understand different psychological phenomena but also provides researchers and others a way to share and discuss the results of their studies.

Generally, there are five main steps in the scientific method, although some may break down this process into six or seven steps. An additional step in the process can also include developing new research questions based on your findings.

What Is the Scientific Method?

What is the scientific method and how is it used in psychology?

The scientific method consists of five steps. It is essentially a step-by-step process that researchers can follow to determine if there is some type of relationship between two or more variables.

By knowing the steps of the scientific method, you can better understand the process researchers go through to arrive at conclusions about human behavior.

Scientific Method Steps

While research studies can vary, these are the basic steps that psychologists and scientists use when investigating human behavior.

The following are the scientific method steps:

Step 1. Make an Observation

Before a researcher can begin, they must choose a topic to study. Once an area of interest has been chosen, the researchers must then conduct a thorough review of the existing literature on the subject. This review will provide valuable information about what has already been learned about the topic and what questions remain to be answered.

A literature review might involve looking at a considerable amount of written material from both books and academic journals dating back decades.

The relevant information collected by the researcher will be presented in the introduction section of the final published study results. This background material will also help the researcher with the first major step in conducting a psychology study: formulating a hypothesis.

Step 2. Ask a Question

Once a researcher has observed something and gained some background information on the topic, the next step is to ask a question. The researcher will form a hypothesis, which is an educated guess about the relationship between two or more variables

For example, a researcher might ask a question about the relationship between sleep and academic performance: Do students who get more sleep perform better on tests at school?

In order to formulate a good hypothesis, it is important to think about different questions you might have about a particular topic.

You should also consider how you could investigate the causes. Falsifiability is an important part of any valid hypothesis. In other words, if a hypothesis was false, there needs to be a way for scientists to demonstrate that it is false.

Step 3. Test Your Hypothesis and Collect Data

Once you have a solid hypothesis, the next step of the scientific method is to put this hunch to the test by collecting data. The exact methods used to investigate a hypothesis depend on exactly what is being studied. There are two basic forms of research that a psychologist might utilize: descriptive research or experimental research.

Descriptive research is typically used when it would be difficult or even impossible to manipulate the variables in question. Examples of descriptive research include case studies, naturalistic observation , and correlation studies. Phone surveys that are often used by marketers are one example of descriptive research.

Correlational studies are quite common in psychology research. While they do not allow researchers to determine cause-and-effect, they do make it possible to spot relationships between different variables and to measure the strength of those relationships. 

Experimental research is used to explore cause-and-effect relationships between two or more variables. This type of research involves systematically manipulating an independent variable and then measuring the effect that it has on a defined dependent variable .

One of the major advantages of this method is that it allows researchers to actually determine if changes in one variable actually cause changes in another.

While psychology experiments are often quite complex, a simple experiment is fairly basic but does allow researchers to determine cause-and-effect relationships between variables. Most simple experiments use a control group (those who do not receive the treatment) and an experimental group (those who do receive the treatment).

Step 4. Examine the Results and Draw Conclusions

Once a researcher has designed the study and collected the data, it is time to examine this information and draw conclusions about what has been found.  Using statistics , researchers can summarize the data, analyze the results, and draw conclusions based on this evidence.

So how does a researcher decide what the results of a study mean? Not only can statistical analysis support (or refute) the researcher’s hypothesis; it can also be used to determine if the findings are statistically significant.

When results are said to be statistically significant, it means that it is unlikely that these results are due to chance.

Based on these observations, researchers must then determine what the results mean. In some cases, an experiment will support a hypothesis, but in other cases, it will fail to support the hypothesis.

So what happens if the results of a psychology experiment do not support the researcher's hypothesis? Does this mean that the study was worthless?

Just because the findings fail to support the hypothesis does not mean that the research is not useful or informative. In fact, such research plays an important role in helping scientists develop new questions and hypotheses to explore in the future.

After conclusions have been drawn, the next step is to share the results with the rest of the scientific community. This is an important part of the process because it contributes to the overall knowledge base and can help other scientists find new research avenues to explore.

Step 5. Report the Results

The final step in a psychology study is to report the findings. This is often done by writing up a description of the study and publishing the article in an academic or professional journal. The results of psychological studies can be seen in peer-reviewed journals such as  Psychological Bulletin , the  Journal of Social Psychology ,  Developmental Psychology , and many others.

The structure of a journal article follows a specified format that has been outlined by the  American Psychological Association (APA) . In these articles, researchers:

  • Provide a brief history and background on previous research
  • Present their hypothesis
  • Identify who participated in the study and how they were selected
  • Provide operational definitions for each variable
  • Describe the measures and procedures that were used to collect data
  • Explain how the information collected was analyzed
  • Discuss what the results mean

Why is such a detailed record of a psychological study so important? By clearly explaining the steps and procedures used throughout the study, other researchers can then replicate the results. The editorial process employed by academic and professional journals ensures that each article that is submitted undergoes a thorough peer review, which helps ensure that the study is scientifically sound.

Once published, the study becomes another piece of the existing puzzle of our knowledge base on that topic.

Before you begin exploring the scientific method steps, here's a review of some key terms and definitions that you should be familiar with:

  • Falsifiable : The variables can be measured so that if a hypothesis is false, it can be proven false
  • Hypothesis : An educated guess about the possible relationship between two or more variables
  • Variable : A factor or element that can change in observable and measurable ways
  • Operational definition : A full description of exactly how variables are defined, how they will be manipulated, and how they will be measured

Uses for the Scientific Method

The  goals of psychological studies  are to describe, explain, predict and perhaps influence mental processes or behaviors. In order to do this, psychologists utilize the scientific method to conduct psychological research. The scientific method is a set of principles and procedures that are used by researchers to develop questions, collect data, and reach conclusions.

Goals of Scientific Research in Psychology

Researchers seek not only to describe behaviors and explain why these behaviors occur; they also strive to create research that can be used to predict and even change human behavior.

Psychologists and other social scientists regularly propose explanations for human behavior. On a more informal level, people make judgments about the intentions, motivations , and actions of others on a daily basis.

While the everyday judgments we make about human behavior are subjective and anecdotal, researchers use the scientific method to study psychology in an objective and systematic way. The results of these studies are often reported in popular media, which leads many to wonder just how or why researchers arrived at the conclusions they did.

Examples of the Scientific Method

Now that you're familiar with the scientific method steps, it's useful to see how each step could work with a real-life example.

Say, for instance, that researchers set out to discover what the relationship is between psychotherapy and anxiety .

  • Step 1. Make an observation : The researchers choose to focus their study on adults ages 25 to 40 with generalized anxiety disorder.
  • Step 2. Ask a question : The question they want to answer in their study is: Do weekly psychotherapy sessions reduce symptoms in adults ages 25 to 40 with generalized anxiety disorder?
  • Step 3. Test your hypothesis : Researchers collect data on participants' anxiety symptoms . They work with therapists to create a consistent program that all participants undergo. Group 1 may attend therapy once per week, whereas group 2 does not attend therapy.
  • Step 4. Examine the results : Participants record their symptoms and any changes over a period of three months. After this period, people in group 1 report significant improvements in their anxiety symptoms, whereas those in group 2 report no significant changes.
  • Step 5. Report the results : Researchers write a report that includes their hypothesis, information on participants, variables, procedure, and conclusions drawn from the study. In this case, they say that "Weekly therapy sessions are shown to reduce anxiety symptoms in adults ages 25 to 40."

Of course, there are many details that go into planning and executing a study such as this. But this general outline gives you an idea of how an idea is formulated and tested, and how researchers arrive at results using the scientific method.

Erol A. How to conduct scientific research ? Noro Psikiyatr Ars . 2017;54(2):97-98. doi:10.5152/npa.2017.0120102

University of Minnesota. Psychologists use the scientific method to guide their research .

Shaughnessy, JJ, Zechmeister, EB, & Zechmeister, JS. Research Methods In Psychology . New York: McGraw Hill Education; 2015.

By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

extension logo for printing

The Scientific Method

Introduction.

There are many scientific disciplines that address topics from medicine and astrophysics to agriculture and zoology. In each discipline, modern scientists use a process called the "Scientific Method" to advance their knowledge and understanding. This publication describes the method scientists use to conduct research and describe and explain nature, ultimately trying prove or disprove theories.

Scientists all over the world conduct research using the Scientific Method. The University of Nevada Cooperative Extension exists to provide unbiased, research-based information on topics important and relevant to society. The scientific research efforts, analyses, and subsequent information disseminated by Cooperative Extension is driven by careful review and synthesis of relevant scientific research. Cooperative Extension presents useful information based on the best science available, and today that science is based on knowledge obtained by application of the Scientific Method.

The Scientific Method – What it’s Not

The Scientific Method is a process for explaining the world we see. It is:

  • Not a formula

The Scientific Method – What is it?

The Scientific Method is a process used to validate observations while minimizing observer bias. Its goal is for research to be conducted in a fair, unbiased and repeatable manner.

Long ago, people viewed the workings of nature and believed that the events and phenomena they observed were associated with the intrinsic nature of the beings or things being observed (Ackoff 1962, Wilson 1937). Today we view events and phenomena as having been caused , and science has evolved as a process to ask how and why things and events happen. Scientists seek to understand the relationships and intricacies between cause and effect in order to predict outcomes of future or similar events. To answer these questions and to help predict future happenings, scientists use the Scientific Method - a series of steps that lead to answers that accurately describe the things we observe, or at least improve our understanding of them.

The Scientific Method is not the only way, but is the best-known way to discover how and why the world works, without our knowledge being tainted by religious, political, or philosophical values. This method provides a means to formulate questions about general observations and devise theories of explanation. The approach lends itself to answering questions in fair and unbiased statements, as long as questions are posed correctly, in a hypothetical form that can be tested.

Definitions

It is important to understand three important terms before describing the Scientific Method.

This is a statement made by a researcher that is a working assumption to be tested and proven. It is something "considered true for the purpose of investigation" (Webster’s Dictionary 1995). An example might be “The earth is round.”

general principles drawn from facts that explain observations and can be used to predict new events. An example would be Newton’s theory of gravitation or Einstein’s theory of relativity. Each is based on falsifiable hypotheses of phenomenon we observe.

Falsifiable/ Null Hypothesis

to prove to be false (Webster’s Dictionary 1995). The hypothesis that is generated must be able to be tested, and either accepted or rejected. Scientists make hypotheses that they want to disprove in order that they may prove the working assumption describing the observed phenomena. This is done by declaring the statement or hypothesis as falsifiable . So, we would state the above hypothesis as “the earth is not round,” or “the earth is square” making it a working statement to be disproved.

The Scientific Method is not a formula, but rather a process with a number of sequential steps designed to create an explainable outcome that increases our knowledge base. This process is as follows:

STEP 1. Make an OBSERVATION

gather and assimilate information about an event, phenomenon, process, or an exception to a previous observation, etc.

STEP 2. Define the PROBLEM

ask questions about the observation that are relevant and testable. Define the null hypothesis to provide unbiased results.

STEP 3: Form the HYPOTHESIS

create an explanation, or educated guess, for the observation that is testable and falsifiable.

STEP 4: Conduct the EXPERIMENT

devise and perform an experiment to test the hypothesis.

STEP 5: Derive a THEORY

create a statement based in the outcome of the experiment that explains the observation(s) and predicts the likelihood of future observations.

Replication

Using the Scientific Method to answer questions about events or phenomena we observe can be repeated to fine-tune our theories. For example, if we conduct research using the Scientific Method and think we have answered a question, but different results occur the next time we make an observation, we may have to ask new questions and formulate new hypotheses that are tested by another experiment. Sometimes scientists must perform many experiments over many years or even decades using the Scientific Method to prove or disprove theories that are generated from one initial question. Numerous studies are often necessary to fully test the broad range of results that occur in order that scientists can formulate theories that truly account for the variation we see in our natural environment.

The Scientific Method – Is it worth all the effort?

Scientific knowledge can only advance when all scientists systematically use the same process to discover and disseminate new information. The advantage of all scientific research using the Scientific Method is that the experiments are repeatable by anyone, anywhere. When similar results occur in each experiment, these facts make the case for the theory stronger. If the same experiment is performed many times in many different locations, under a broad range of conditions, then the theory derived from these experiments is considered strong and widely applicable. If the questions are posed as testable hypotheses that rely on inductive reasoning and empiricism – that is, observations and data collection – then experiments can be devised to generate logical theories that explain the things we see. If we understand why the observed results occur, then we can accurately apply concepts derived from the experiment to other situations.

What do we need to consider when using the Scientific Method?

The Scientific Method requires that we ask questions and perform experiments to prove or disprove questions in ways that will lead to unbiased answers. Experiments must be well designed to provide accurate and repeatable (precise) results. If we test hypotheses correctly, then we can prove the cause of a phenomenon and determine the likelihood (probability) of the events to happen again. This provides predictive power. The Scientific Method enables us to test a hypothesis and distinguish between the correlation of two or more things happening in association with each other and the actual cause of the phenomenon we observe.

Correlation of two variables cannot explain the cause and effect of their relationship. Scientists design experiments using a number of methods to ensure the results reveal the likelihood of the observation happening (probability). Controlled experiments are used to analyze these relationships and develop cause and effect relationships. Statistical analysis is used to determine whether differences between treatments can be attributed to the treatment applied, if they are artifacts of the experimental design, or of natural variation.

In summary, the Scientific Method produces answers to questions posed in the form of a working hypothesis that enables us to derive theories about what we observe in the world around us. Its power lies in its ability to be repeated, providing unbiased answers to questions to derive theories. This information is powerful and offers opportunity to predict future events and phenomena.

Bibliography

  • Ackoff, R. 1962. Scientific Method, Optimizing Applied Research Decisions. Wiley and Sons, New York, NY.
  • Wilson, F. 1937. The Logic and Methodology of Science in Early Modern Thought. University of Toronto Press. Buffalo, NY.
  • Committee on Science, Engineering, and Public Policy. Experimental Error. 1995. From: On Being a Scientist: Responsible Conduct in Research. Second Edition.
  • The Gale Group. The Scientific Method. 2001. Gale Encyclopedia of Psychology. Second Edition.

Learn more about the author(s)

Angela O'Callaghan

Also of Interest:

An EEO/AA Institution. Copyright © 2024 , University of Nevada Cooperative Extension. A partnership of Nevada counties; University of Nevada, Reno; and the U.S. Department of Agriculture

NASA Logo

Scientific Consensus

hypothesis of the scientific method

It’s important to remember that scientists always focus on the evidence, not on opinions. Scientific evidence continues to show that human activities ( primarily the human burning of fossil fuels ) have warmed Earth’s surface and its ocean basins, which in turn have continued to impact Earth’s climate . This is based on over a century of scientific evidence forming the structural backbone of today's civilization.

NASA Global Climate Change presents the state of scientific knowledge about climate change while highlighting the role NASA plays in better understanding our home planet. This effort includes citing multiple peer-reviewed studies from research groups across the world, 1 illustrating the accuracy and consensus of research results (in this case, the scientific consensus on climate change) consistent with NASA’s scientific research portfolio.

With that said, multiple studies published in peer-reviewed scientific journals 1 show that climate-warming trends over the past century are extremely likely due to human activities. In addition, most of the leading scientific organizations worldwide have issued public statements endorsing this position. The following is a partial list of these organizations, along with links to their published statements and a selection of related resources.

American Scientific Societies

Statement on climate change from 18 scientific associations.

"Observations throughout the world make it clear that climate change is occurring, and rigorous scientific research demonstrates that the greenhouse gases emitted by human activities are the primary driver." (2009) 2

American Association for the Advancement of Science

"Based on well-established evidence, about 97% of climate scientists have concluded that human-caused climate change is happening." (2014) 3

AAAS emblem

American Chemical Society

"The Earth’s climate is changing in response to increasing concentrations of greenhouse gases (GHGs) and particulate matter in the atmosphere, largely as the result of human activities." (2016-2019) 4

ACS emblem

American Geophysical Union

"Based on extensive scientific evidence, it is extremely likely that human activities, especially emissions of greenhouse gases, are the dominant cause of the observed warming since the mid-20th century. There is no alterative explanation supported by convincing evidence." (2019) 5

AGU emblem

American Medical Association

"Our AMA ... supports the findings of the Intergovernmental Panel on Climate Change’s fourth assessment report and concurs with the scientific consensus that the Earth is undergoing adverse global climate change and that anthropogenic contributions are significant." (2019) 6

AMA emblem

American Meteorological Society

"Research has found a human influence on the climate of the past several decades ... The IPCC (2013), USGCRP (2017), and USGCRP (2018) indicate that it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-twentieth century." (2019) 7

AMS emblem

American Physical Society

"Earth's changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe. While natural sources of climate variability are significant, multiple lines of evidence indicate that human influences have had an increasingly dominant effect on global climate warming observed since the mid-twentieth century." (2015) 8

APS emblem

The Geological Society of America

"The Geological Society of America (GSA) concurs with assessments by the National Academies of Science (2005), the National Research Council (2011), the Intergovernmental Panel on Climate Change (IPCC, 2013) and the U.S. Global Change Research Program (Melillo et al., 2014) that global climate has warmed in response to increasing concentrations of carbon dioxide (CO2) and other greenhouse gases ... Human activities (mainly greenhouse-gas emissions) are the dominant cause of the rapid warming since the middle 1900s (IPCC, 2013)." (2015) 9

GSA emblem

Science Academies

International academies: joint statement.

"Climate change is real. There will always be uncertainty in understanding a system as complex as the world’s climate. However there is now strong evidence that significant global warming is occurring. The evidence comes from direct measurements of rising surface air temperatures and subsurface ocean temperatures and from phenomena such as increases in average global sea levels, retreating glaciers, and changes to many physical and biological systems. It is likely that most of the warming in recent decades can be attributed to human activities (IPCC 2001)." (2005, 11 international science academies) 1 0

U.S. National Academy of Sciences

"Scientists have known for some time, from multiple lines of evidence, that humans are changing Earth’s climate, primarily through greenhouse gas emissions." 1 1

UNSAS emblem

U.S. Government Agencies

U.s. global change research program.

"Earth’s climate is now changing faster than at any point in the history of modern civilization, primarily as a result of human activities." (2018, 13 U.S. government departments and agencies) 12

USGCRP emblem

Intergovernmental Bodies

Intergovernmental panel on climate change.

“It is unequivocal that the increase of CO 2 , methane, and nitrous oxide in the atmosphere over the industrial era is the result of human activities and that human influence is the principal driver of many changes observed across the atmosphere, ocean, cryosphere, and biosphere. “Since systematic scientific assessments began in the 1970s, the influence of human activity on the warming of the climate system has evolved from theory to established fact.” 1 3-17

IPCC emblem

Other Resources

List of worldwide scientific organizations.

The following page lists the nearly 200 worldwide scientific organizations that hold the position that climate change has been caused by human action. http://www.opr.ca.gov/facts/list-of-scientific-organizations.html

U.S. Agencies

The following page contains information on what federal agencies are doing to adapt to climate change. https://www.c2es.org/site/assets/uploads/2012/02/climate-change-adaptation-what-federal-agencies-are-doing.pdf

Technically, a “consensus” is a general agreement of opinion, but the scientific method steers us away from this to an objective framework. In science, facts or observations are explained by a hypothesis (a statement of a possible explanation for some natural phenomenon), which can then be tested and retested until it is refuted (or disproved).

As scientists gather more observations, they will build off one explanation and add details to complete the picture. Eventually, a group of hypotheses might be integrated and generalized into a scientific theory, a scientifically acceptable general principle or body of principles offered to explain phenomena.

1. K. Myers, et al, "Consensus revisited: quantifying scientific agreement on climate change and climate expertise among Earth scientists 10 years later", Environmental Research Letters Vol.16 No. 10, 104030 (20 October 2021); DOI:10.1088/1748-9326/ac2774 M. Lynas, et al, "Greater than 99% consensus on human caused climate change in the peer-reviewed scientific literature", Environmental Research Letters Vol.16 No. 11, 114005 (19 October 2021); DOI:10.1088/1748-9326/ac2966 J. Cook et al., "Consensus on consensus: a synthesis of consensus estimates on human-caused global warming", Environmental Research Letters Vol. 11 No. 4, (13 April 2016); DOI:10.1088/1748-9326/11/4/048002 J. Cook et al., "Quantifying the consensus on anthropogenic global warming in the scientific literature", Environmental Research Letters Vol. 8 No. 2, (15 May 2013); DOI:10.1088/1748-9326/8/2/024024 W. R. L. Anderegg, “Expert Credibility in Climate Change”, Proceedings of the National Academy of Sciences Vol. 107 No. 27, 12107-12109 (21 June 2010); DOI: 10.1073/pnas.1003187107 P. T. Doran & M. K. Zimmerman, "Examining the Scientific Consensus on Climate Change", Eos Transactions American Geophysical Union Vol. 90 Issue 3 (2009), 22; DOI: 10.1029/2009EO030002 N. Oreskes, “Beyond the Ivory Tower: The Scientific Consensus on Climate Change”, Science Vol. 306 no. 5702, p. 1686 (3 December 2004); DOI: 10.1126/science.1103618

2. Statement on climate change from 18 scientific associations (2009)

3. AAAS Board Statement on Climate Change (2014)

4. ACS Public Policy Statement: Climate Change (2016-2019)

5. Society Must Address the Growing Climate Crisis Now (2019)

6. Global Climate Change and Human Health (2019)

7. Climate Change: An Information Statement of the American Meteorological Society (2019)

8. American Physical Society (2021)

9. GSA Position Statement on Climate Change (2015)

10. Joint science academies' statement: Global response to climate change (2005)

11. Climate at the National Academies

12. Fourth National Climate Assessment: Volume II (2018)

13. IPCC Fifth Assessment Report, Summary for Policymakers, SPM 1.1 (2014)

14. IPCC Fifth Assessment Report, Summary for Policymakers, SPM 1 (2014)

15. IPCC Sixth Assessment Report, Working Group 1 (2021)

16. IPCC Sixth Assessment Report, Working Group 2 (2022)

17. IPCC Sixth Assessment Report, Working Group 3 (2022)

Discover More Topics From NASA

Explore Earth Science

hypothesis of the scientific method

Earth Science in Action

Earth Action

Earth Science Data

The sum of Earth's plants, on land and in the ocean, changes slightly from year to year as weather patterns shift.

Facts About Earth

hypothesis of the scientific method

  • Open supplemental data
  • Reference Manager
  • Simple TEXT file

People also looked at

Original research article, learning scientific observation with worked examples in a digital learning environment.

hypothesis of the scientific method

  • 1 Department Educational Sciences, Chair for Formal and Informal Learning, Technical University Munich School of Social Sciences and Technology, Munich, Germany
  • 2 Aquatic Systems Biology Unit, TUM School of Life Sciences, Technical University of Munich, Freising, Germany

Science education often aims to increase learners’ acquisition of fundamental principles, such as learning the basic steps of scientific methods. Worked examples (WE) have proven particularly useful for supporting the development of such cognitive schemas and successive actions in order to avoid using up more cognitive resources than are necessary. Therefore, we investigated the extent to which heuristic WE are beneficial for supporting the acquisition of a basic scientific methodological skill—conducting scientific observation. The current study has a one-factorial, quasi-experimental, comparative research design and was conducted as a field experiment. Sixty two students of a German University learned about scientific observation steps during a course on applying a fluvial audit, in which several sections of a river were classified based on specific morphological characteristics. In the two experimental groups scientific observation was supported either via faded WE or via non-faded WE both presented as short videos. The control group did not receive support via WE. We assessed factual and applied knowledge acquisition regarding scientific observation, motivational aspects and cognitive load. The results suggest that WE promoted knowledge application: Learners from both experimental groups were able to perform the individual steps of scientific observation more accurately. Fading of WE did not show any additional advantage compared to the non-faded version in this regard. Furthermore, the descriptive results reveal higher motivation and reduced extraneous cognitive load within the experimental groups, but none of these differences were statistically significant. Our findings add to existing evidence that WE may be useful to establish scientific competences.

1 Introduction

Learning in science education frequently involves the acquisition of basic principles or generalities, whether of domain-specific topics (e.g., applying a mathematical multiplication rule) or of rather universal scientific methodologies (e.g., performing the steps of scientific observation) ( Lunetta et al., 2007 ). Previous research has shown that worked examples (WE) can be considered particularly useful for developing such cognitive schemata during learning to avoid using more cognitive resources than necessary for learning successive actions ( Renkl et al., 2004 ; Renkl, 2017 ). WE consist of the presentation of a problem, consecutive solution steps and the solution itself. This is especially advantageous in initial cognitive skill acquisition, i.e., for novice learners with low prior knowledge ( Kalyuga et al., 2001 ). With growing knowledge, fading WE can lead from example-based learning to independent problem-solving ( Renkl et al., 2002 ). Preliminary work has shown the advantage of WE in specific STEM domains like mathematics ( Booth et al., 2015 ; Barbieri et al., 2021 ), but less studies have investigated their impact on the acquisition of basic scientific competencies that involve heuristic problem-solving processes (scientific argumentation, Schworm and Renkl, 2007 ; Hefter et al., 2014 ; Koenen et al., 2017 ). In the realm of natural sciences, various basic scientific methodologies are employed to acquire knowledge, such as experimentation or scientific observation ( Wellnitz and Mayer, 2013 ). During the pursuit of knowledge through scientific inquiry activities, learners may encounter several challenges and difficulties. Similar to the hurdles faced in experimentation, where understanding the criteria for appropriate experimental design, including the development, measurement, and evaluation of results, is crucial ( Sirum and Humburg, 2011 ; Brownell et al., 2014 ; Dasgupta et al., 2014 ; Deane et al., 2014 ), scientific observation additionally presents its own set of issues. In scientific observation, e.g., the acquisition of new insights may be somewhat incidental due to spontaneous and uncoordinated observations ( Jensen, 2014 ). To address these challenges, it is crucial to provide instructional support, including the use of WE, particularly when observations are carried out in a more self-directed manner.

For this reason, the aim of the present study was to determine the usefulness of digitally presented WE to support the acquisition of a basic scientific methodological skill—conducting scientific observations—using a digital learning environment. In this regard, this study examined the effects of different forms of digitally presented WE (non-faded vs. faded) on students’ cognitive and motivational outcomes and compared them to a control group without WE. Furthermore, the combined perspective of factual and applied knowledge, as well as motivational and cognitive aspects, represent further value added to the study.

2 Theoretical background

2.1 worked examples.

WE have been commonly used in the fields of STEM education (science, technology, engineering, and mathematics) ( Booth et al., 2015 ). They consist of a problem statement, the steps to solve the problem, and the solution itself ( Atkinson et al., 2000 ; Renkl et al., 2002 ; Renkl, 2014 ). The success of WE can be explained by their impact on cognitive load (CL) during learning, based on assumptions from Cognitive Load Theory ( Sweller, 2006 ).

Learning with WE is considered time-efficient, effective, and superior to problem-based learning (presentation of the problem without demonstration of solution steps) when it comes to knowledge acquisition and transfer (WE-effect, Atkinson et al., 2000 ; Van Gog et al., 2011 ). Especially WE can help by reducing the extraneous load (presentation and design of the learning material) and, in turn, can lead to an increase in germane load (effort of the learner to understand the learning material) ( Paas et al., 2003 ; Renkl, 2014 ). With regard to intrinsic load (difficulty and complexity of the learning material), it is still controversially discussed if it can be altered by instructional design, e.g., WE ( Gerjets et al., 2004 ). WE have a positive effect on learning and knowledge transfer, especially for novices, as the step-by-step presentation of the solution requires less extraneous mental effort compared to problem-based learning ( Sweller et al., 1998 ; Atkinson et al., 2000 ; Bokosmaty et al., 2015 ). With growing knowledge, WE can lose their advantages (due to the expertise-reversal effect), and scaffolding learning via faded WE might be more successful for knowledge gain and transfer ( Renkl, 2014 ). Faded WE are similar to complete WE, but fade out solution steps as knowledge and competencies grow. Faded WE enhance near-knowledge transfer and reduce errors compared to non-faded WE ( Renkl et al., 2000 ).

In addition, the reduction of intrinsic and extraneous CL by WE also has an impact on learner motivation, such as interest ( Van Gog and Paas, 2006 ). Um et al. (2012) showed that there is a strong positive correlation between germane CL and the motivational aspects of learning, like satisfaction and emotion. Gupta (2019) mentions a positive correlation between CL and interest. Van Harsel et al. (2019) found that WE positively affect learning motivation, while no such effect was found for problem-solving. Furthermore, learning with WE increases the learners’ belief in their competence in completing a task. In addition, fading WE can lead to higher motivation for more experienced learners, while non-faded WE can be particularly motivating for learners without prior knowledge ( Paas et al., 2005 ). In general, fundamental motivational aspects during the learning process, such as situational interest ( Lewalter and Knogler, 2014 ) or motivation-relevant experiences, like basic needs, are influenced by learning environments. At the same time, their use also depends on motivational characteristics of the learning process, such as self-determined motivation ( Deci and Ryan, 2012 ). Therefore, we assume that learning with WE as a relevant component of a learning environment might also influence situational interest and basic needs.

2.1.1 Presentation of worked examples

WE are frequently used in digital learning scenarios ( Renkl, 2014 ). When designing WE, the application via digital learning media can be helpful, as their content can be presented in different ways (video, audio, text, and images), tailored to the needs of the learners, so that individual use is possible according to their own prior knowledge or learning pace ( Mayer, 2001 ). Also, digital media can present relevant information in a timely, motivating, appealing and individualized way and support learning in an effective and needs-oriented way ( Mayer, 2001 ). The advantages of using digital media in designing WE have already been shown in previous studies. Dart et al. (2020) presented WE as short videos (WEV). They report that the use of WEV leads to increased student satisfaction and more positive attitudes. Approximately 90% of the students indicated an active learning approach when learning with the WEV. Furthermore, the results show that students improved their content knowledge through WEV and that they found WEV useful for other courses as well.

Another study ( Kay and Edwards, 2012 ) presented WE as video podcasts. Here, the advantages of WE regarding self-determined learning in terms of learning location, learning time, and learning speed were shown. Learning performance improved significantly after use. The step-by-step, easy-to-understand explanations, the diagrams, and the ability to determine the learning pace by oneself were seen as beneficial.

Multimedia WE can also be enhanced with self-explanation prompts ( Berthold et al., 2009 ). Learning from WE with self-explanation prompts was shown to be superior to other learning methods, such as hypertext learning and observational learning.

In addition to presenting WE in different medial ways, WE can also comprise different content domains.

2.1.2 Content and context of worked examples

Regarding the content of WE, algorithmic and heuristic WE, as well as single-content and double-content WE, can be distinguished ( Reiss et al., 2008 ; Koenen et al., 2017 ; Renkl, 2017 ). Algorithmic WE are traditionally used in the very structured mathematical–physical field. Here, an algorithm with very specific solution steps is to learn, for example, in probability calculation ( Koenen et al., 2017 ). In this study, however, we focus on heuristic double-content WE. Heuristic WE in science education comprise fundamental scientific working methods, e.g., conducting experiments ( Koenen et al., 2017 ). Furthermore, double-content WE contain two learning domains that are relevant for the learning process: (1) the learning domain describes the primarily to be learned abstract process or concept, e.g., scientific methodologies like observation (see section 2.2), while (2) the exemplifying domain consists of the content that is necessary to teach this process or concept, e.g., mapping of river structure ( Renkl et al., 2009 ).

Depending on the WE content to be learned, it may be necessary for learning to take place in different settings. This can be in a formal or informal learning setting or a non-formal field setting. In this study, the focus is on learning scientific observation (learning domain) through river structure mapping (exemplary domain), which takes place with the support of digital media in a formal (university) setting, but in an informal context (nature).

2.2 Scientific observation

Scientific observation is fundamental to all scientific activities and disciplines ( Kohlhauf et al., 2011 ). Scientific observation must be clearly distinguished from everyday observation, where observation is purely a matter of noticing and describing specific characteristics ( Chinn and Malhotra, 2001 ). In contrast to this everyday observation, scientific observation as a method of knowledge acquisition can be described as a rather complex activity, defined as the theory-based, systematic and selective perception of concrete systems and processes without any fundamental manipulation ( Wellnitz and Mayer, 2013 ). Wellnitz and Mayer (2013) described the scientific observation process via six steps: (1) formulation of the research question (s), (2) deduction of the null hypothesis and the alternative hypothesis, (3) planning of the research design, (4) conducting the observation, (5) analyzing the data, and (6) answering the research question(s) on this basis. Only through reliable and qualified observation, valid data can be obtained that provide solid scientific evidence ( Wellnitz and Mayer, 2013 ).

Since observation activities are not trivial and learners often observe without generating new knowledge or connecting their observations to scientific explanations and thoughts, it is important to provide support at the related cognitive level, so that observation activities can be conducted in a structured way according to pre-defined criteria ( Ford, 2005 ; Eberbach and Crowley, 2009 ). Especially during field-learning experiences, scientific observation is often spontaneous and uncoordinated, whereby random discoveries result in knowledge gain ( Jensen, 2014 ).

To promote successful observing in rather unstructured settings like field trips, instructional support for the observation process seems useful. To guide observation activities, digitally presented WE seem to be an appropriate way to introduce learners to the individual steps of scientific observation using concrete examples.

2.3 Research questions and hypothesis

The present study investigates the effect of digitally presented double-content WE that supports the mapping of a small Bavarian river by demonstrating the steps of scientific observation. In this analysis, we focus on the learning domain of the WE and do not investigate the exemplifying domain in detail. Distinct ways of integrating WE in the digital learning environment (faded WE vs. non-faded WE) are compared with each other and with a control group (no WE). The aim is to examine to what extent differences between those conditions exist with regard to (RQ1) learners’ competence acquisition [acquisition of factual knowledge about the scientific observation method (quantitative data) and practical application of the scientific observation method (quantified qualitative data)], (RQ2) learners’ motivation (situational interest and basic needs), and (RQ3) CL. It is assumed that (Hypothesis 1), the integration of WE (faded and non-faded) leads to significantly higher competence acquisition (factual and applied knowledge), significantly higher motivation and significantly lower extraneous CL as well as higher germane CL during the learning process compared to a learning environment without WE. No differences between the conditions are expected regarding intrinsic CL. Furthermore, it is assumed (Hypothesis 2) that the integration of faded WE leads to significantly higher competence acquisition, significantly higher motivation, and lower extraneous CL as well as higher germane CL during the learning processes compared to non-faded WE. No differences between the conditions are expected with regard to intrinsic CL.

The study took place during the field trips of a university course on the application of a fluvial audit (FA) using the German working aid for mapping the morphology of rivers and their floodplains ( Bayerisches Landesamt für Umwelt, 2019 ). FA is the leading fluvial geomorphological tool for application to data collection contiguously along all watercourses of interest ( Walker et al., 2007 ). It is widely used because it is a key example of environmental conservation and monitoring that needs to be taught to students of selected study programs; thus, knowing about the most effective ways of learning is of high practical relevance.

3.1 Sample and design

3.1.1 sample.

The study was conducted with 62 science students and doctoral students of a German University (age M  = 24.03 years; SD  = 4.20; 36 females; 26 males). A total of 37 participants had already conducted a scientific observation and would rate their knowledge in this regard at a medium level ( M  = 3.32 out of 5; SD  = 0.88). Seven participants had already conducted an FA and would rate their knowledge in this regard at a medium level ( M  = 3.14 out of 5; SD  = 0.90). A total of 25 participants had no experience at all. Two participants had to be excluded from the sample afterward because no posttest results were available.

3.1.2 Design

The study has a 1-factorial quasi-experimental comparative research design and is conducted as a field experiment using a pre/posttest design. Participants were randomly assigned to one of three conditions: no WE ( n  = 20), faded WE ( n  = 20), and non-faded WE ( n  = 20).

3.2 Implementation and material

3.2.1 implementation.

The study started with an online kick-off meeting where two lecturers informed all students within an hour about the basics regarding the assessment of the structural integrity of the study river and the course of the field trip days to conduct an FA. Afterward, within 2 weeks, students self-studied via Moodle the FA following the German standard method according to the scoresheets of Bayerisches Landesamt für Umwelt (2019) . This independent preparation using the online presented documents was a necessary prerequisite for participation in the field days and was checked in the pre-testing. The preparatory online documents included six short videos and four PDF files on the content, guidance on the German protocol of the FA, general information on river landscapes, information about anthropogenic changes in stream morphology and the scoresheets for applying the FA. In these sheets, the river and its floodplain are subdivided into sections of 100 m in length. Each of these sections is evaluated by assessing 21 habitat factors related to flow characteristics and structural variability. The findings are then transferred into a scoring system for the description of structural integrity from 1 (natural) to 7 (highly modified). Habitat factors have a decisive influence on the living conditions of animals and plants in and around rivers. They included, e.g., variability in water depth, stream width, substratum diversity, or diversity of flow velocities.

3.2.2 Materials

On the field trip days, participants were handed a tablet and a paper-based FA worksheet (last accessed 21st September 2022). 1 This four-page assessment sheet was accompanied by a digital learning environment presented on Moodle that instructed the participants on mapping the water body structure and guided the scientific observation method. All three Moodle courses were identical in structure and design; the only difference was the implementation of the WE. Below, the course without WE are described first. The other two courses have an identical structure, but contain additional WE in the form of learning videos.

3.2.3 No worked example

After a short welcome and introduction to the course navigation, the FA started with the description of a short hypothetical scenario: Participants should take the role of an employee of an urban planning office that assesses the ecomorphological status of a small river near a Bavarian city. The river was divided into five sections that had to be mapped separately. The course was structured accordingly. At the beginning of each section, participants had to formulate and write down a research question, and according to hypotheses regarding the ecomorphological status of the river’s section, they had to collect data in this regard via the mapping sheet and then evaluate their data and draw a conclusion. Since this course serves as a control group, no WE videos supporting the scientific observation method were integrated. The layout of the course is structured like a book, where it is not possible to scroll back. This is important insofar as the participants do not have the possibility to revisit information in order to keep the conditions comparable as well as distinguishable.

3.2.4 Non-faded worked example

In the course with no-faded WE, three instructional videos are shown for each of the five sections. In each of the three videos, two steps of the scientific observation method are presented so that, finally, all six steps of scientific observation are demonstrated. The mapping of the first section starts after the general introduction (as described above) with the instruction to work on the first two steps of scientific observation: the formulation of a research question and hypotheses. To support this, a video of about 4 min explains the features of scientific sound research questions and hypotheses. To this aim, a practical example, including explanations and tips, is given regarding the formulation of research questions and hypotheses for this section (e.g., “To what extent does the building development and the closeness of the path to the water body have an influence on the structure of the water body?” Alternative hypothesis: It is assumed that the housing development and the closeness of the path to the water body have a negative influence on the water body structure. Null hypothesis: It is assumed that the housing development and the closeness of the path to the watercourse have no negative influence on the watercourse structure.). Participants should now formulate their own research questions and hypotheses, write them down in a text field at the end of the page, and then skip to the next page. The next two steps of scientific observation, planning and conducting, are explained in a short 4-min video. To this aim, a practical example including explanations and tips is given regarding planning and conducting scientific for this section (e.g., “It’s best to go through each evaluation category carefully one by one that way you are sure not to forget anything!”). Now, participants were asked to collect data for the first section using their paper-based FA worksheet. Participants individually surveyed the river and reported their results in the mapping sheet by ticking the respective boxes in it. After collecting this data, they returned to the digital learning environment to learn how to use these data by studying the last two steps of scientific observation, evaluation, and conclusion. The third 4-min video explained how to evaluate and interpret collected data. For this purpose, a practical example with explanations and tips is given regarding evaluating and interpreting data for this section (e.g., “What were the individual points that led to the assessment? Have there been points that were weighted more than others? Remember the introduction video!”). At the end of the page, participants could answer their before-stated research questions and hypotheses by evaluating their collected data and drawing a conclusion. This brings participants to the end of the first mapping section. Afterward, the cycle begins again with the second section of the river that has to be mapped. Again, participants had to conduct the steps of scientific observation, guided by WE videos, explaining the steps in slightly different wording or with different examples. A total of five sections are mapped, in which the structure of the learning environment and the videos follow the same procedure.

3.2.5 Faded worked example

The digital learning environment with the faded WE follow the same structure as the version with the non-faded WE. However, in this version, the information in the WE videos is successively reduced. In the first section, all three videos are identical to the version with the non-faded WE. In the second section, faded content was presented as follows: the tip at the end was omitted in all three videos. In the third section, the tip and the practical example were omitted. In the fourth and fifth sections, no more videos were presented, only the work instructions.

3.3 Procedure

The data collection took place on four continuous days on the university campus, with a maximum group size of 15 participants on each day. The students were randomly assigned to one of the three conditions (no WE vs. faded WE vs. non-faded WE). After a short introduction to the procedure, the participants were handed the paper-based FA worksheet and one tablet per person. Students scanned the QR code on the first page of the worksheet that opened the pretest questionnaire, which took about 20 min to complete. After completing the questionnaire, the group walked for about 15 min to the nearby small river that was to be mapped. Upon arrival, there was first a short introduction to the digital learning environment and a check that the login (via university account on Moodle) worked. During the next 4 h, the participants individually mapped five segments of the river using the cartography worksheet. They were guided through the steps of scientific observation using the digital learning environment on the tablet. The results of their scientific observation were logged within the digital learning environment. At the end of the digital learning environment, participants were directed to the posttest via a link. After completing the test, the tablets and mapping sheets were returned. Overall, the study took about 5 h per group each day.

3.4 Instruments

In the pretest, sociodemographic data (age and gender), the study domain and the number of study semesters were collected. Additionally, the previous scientific observation experience and the estimation of one’s own ability in this regard were assessed. For example, it was asked whether scientific observation had already been conducted and, if so, how the abilities were rated on a 5-point scale from very low to very high. Preparation for the FA on the basis of the learning material was assessed: Participants were asked whether they had studied all six videos and all four PDF documents, with the response options not at all, partially, and completely. Furthermore, a factual knowledge test about scientific observation and questions about self-determination theory was administered. The posttest used the same knowledge test, and additional questions on basic needs, situational interest, measures of CL and questions about the usefulness of the WE. All scales were presented online, and participants reached the questionnaire via QR code.

3.4.1 Scientific observation competence acquisition

For the factual knowledge (quantitative assessment of the scientific observation competence), a single-choice knowledge test with 12 questions was developed and used as pre- and posttest with a maximum score of 12 points. It assesses the learners’ knowledge of the scientific observation method regarding the steps of scientific observation, e.g., formulating research questions and hypotheses or developing a research design. The questions are based on Wahser (2008 , adapted by Koenen, 2014 ) and adapted to scientific observation: “Although you are sure that you have conducted the scientific observation correctly, an unexpected result turns up. What conclusion can you draw?” Each question has four answer options (one of which is correct) and, in addition, one “I do not know” option.

For the applied knowledge (quantified qualitative assessment of the scientific observation competence), students’ scientific observations written in the digital learning environment were analyzed. A coding scheme was used with the following codes: 0 = insufficient (text field is empty or includes only insufficient key points), 1 = sufficient (a research question and no hypotheses or research question and inappropriate hypotheses are stated), 2 = comprehensive (research question and appropriate hypothesis or research question and hypotheses are stated, but, e.g., incorrect null hypothesis), 3 = very comprehensive (correct research question, hypothesis and null hypothesis are stated). One example of a very comprehensive answer regarding the research question and hypothesis is: To what extent does the lack of riparian vegetation have an impact on water body structure? Hypothesis: The lack of shore vegetation has a negative influence on the water body structure. Null hypothesis: The lack of shore vegetation has no influence on the water body structure. Afterward, a sum score was calculated for each participant. Five times, a research question and hypotheses (steps 1 and 2 in the observation process) had to be formulated (5 × max. 3 points = 15 points), and five times, the research questions and hypotheses had to be answered (steps 5 and 6 in the observation process: evaluation and conclusion) (5 × max. 3 points = 15 points). Overall, participants could reach up to 30 points. Since the observation and evaluation criteria in data collection and analysis were strongly predetermined by the scoresheet, steps 3 and 4 of the observation process (planning and conducting) were not included in the analysis.

All 600 cases (60 participants, each 10 responses to code) were coded by the first author. For verification, 240 cases (24 randomly selected participants, eight from each course) were cross-coded by an external coder. In 206 of the coded cases, the raters agreed. The cases in which the raters did not agree were discussed together, and a solution was found. This results in Cohen’s κ = 0.858, indicating a high to very high level of agreement. This indicates that the category system is clearly formulated and that the individual units of analysis could be correctly assigned.

3.4.2 Self-determination index

For the calculation of the self-determination index (SDI-index), Thomas and Müller (2011) scale for self-determination was used in the pretest. The scale consists of four subscales: intrinsic motivation (five items; e.g., I engage with the workshop content because I enjoy it; reliability of alpha = 0.87), identified motivation (four items; e.g., I engage with the workshop content because it gives me more options when choosing a career; alpha = 0.84), introjected motivation (five items; e.g., I engage with the workshop content because otherwise I would have a guilty feeling; alpha = 0.79), and external motivation (three items, e.g., I engage with the workshop content because I simply have to learn it; alpha = 0.74). Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree. To calculate the SDI-index, the sum of the self-determined regulation styles (intrinsic and identified) is subtracted from the sum of the external regulation styles (introjected and external), where intrinsic and external regulation are scored two times ( Thomas and Müller, 2011 ).

3.4.3 Motivation

Basic needs were measured in the posttest with the scale by Willems and Lewalter (2011) . The scale consists of three subscales: perceived competence (four items; e.g., during the workshop, I felt that I could meet the requirements; alpha = 0.90), perceived autonomy (five items; e.g., during the workshop, I felt that I had a lot of freedom; alpha = 0.75), and perceived autonomy regarding personal wishes and goals (APWG) (four items; e.g., during the workshop, I felt that the workshop was how I wish it would be; alpha = 0.93). We added all three subscales to one overall basic needs scale (alpha = 0.90). Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree.

Situational interest was measured in the posttest with the 12-item scale by Lewalter and Knogler (2014 ; Knogler et al., 2015 ; Lewalter, 2020 ; alpha = 0.84). The scale consists of two subscales: catch (six items; e.g., I found the workshop exciting; alpha = 0.81) and hold (six items; e.g., I would like to learn more about parts of the workshop; alpha = 0.80). Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree.

3.4.4 Cognitive load

In the posttest, CL was used to examine the mental load during the learning process. The intrinsic CL (three items; e.g., this task was very complex; alpha = 0.70) and extraneous CL (three items; e.g., in this task, it is difficult to identify the most important information; alpha = 0.61) are measured with the scales from Klepsch et al. (2017) . The germane CL (two items; e.g., the learning session contained elements that supported me to better understand the learning material; alpha = 0.72) is measured with the scale from Leppink et al. (2013) . Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree.

3.4.5 Attitudes toward worked examples

To measure how effective participants rated the WE, we used two scales related to the WE videos as instructional support. The first scale from Renkl (2001) relates to the usefulness of WE. The scale consists of four items (e.g., the explanations were helpful; alpha = 0.71). Two items were recoded because they were formulated negatively. The second scale is from Wachsmuth (2020) and relates to the participant’s evaluation of the WE. The scale consists of nine items (e.g., I always did what was explained in the learning videos; alpha = 0.76). Four items were recoded because they were formulated negatively. Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree.

3.5 Data analysis

An ANOVA was used to calculate if the variable’s prior knowledge and SDI index differed between the three groups. However, as no significant differences between the conditions were found [prior factual knowledge: F (2, 59) = 0.15, p  = 0.865, η 2  = 0.00 self-determination index: F (2, 59) = 0.19, p  = 0.829, η 2  = 0.00], they were not included as covariates in subsequent analyses.

Furthermore, a repeated measure, one-way analysis of variance (ANOVA), was conducted to compare the three treatment groups (no WE vs. faded WE vs. non-faded WE) regarding the increase in factual knowledge about the scientific observation method from pretest to posttest.

A MANOVA (multivariate analysis) was calculated with the three groups (no WE vs. non-faded WE vs. faded WE) as a fixed factor and the dependent variables being the practical application of the scientific observation method (first research question), situational interest, basic needs (second research question), and CL (third research question).

Additionally, to determine differences in applied knowledge even among the three groups, Bonferroni-adjusted post-hoc analyses were conducted.

The descriptive statistics between the three groups in terms of prior factual knowledge about the scientific observation method and the self-determination index are shown in Table 1 . The descriptive statistics revealed only small, non-significant differences between the three groups in terms of factual knowledge.

www.frontiersin.org

Table 1 . Means (standard deviations) of factual knowledge tests (pre- and posttest) and self-determination index for the three different groups.

The results of the ANOVA revealed that the overall increase in factual knowledge from pre- to posttest just misses significance [ F (1, 57) = 3.68, p  = 0.060, η 2  = 0 0.06]. Furthermore, no significant differences between the groups were found regarding the acquisition of factual knowledge from pre- to posttest [ F (2, 57) = 2.93, p  = 0.062, η 2  = 0.09].

An analysis of the descriptive statistics showed that the largest differences between the groups were found in applied knowledge (qualitative evaluation) and extraneous load (see Table 2 ).

www.frontiersin.org

Table 2 . Means (standard deviations) of dependent variables with the three different groups.

Results of the MANOVA revealed significant overall differences between the three groups [ F (12, 106) = 2.59, p  = 0.005, η 2  = 0.23]. Significant effects were found for the application of knowledge [ F (2, 57) = 13.26, p  = <0.001, η 2  = 0.32]. Extraneous CL just missed significance [ F (2, 57) = 2.68, p  = 0.065, η 2  = 0.09]. There were no significant effects for situational interest [ F (2, 57) = 0.44, p  = 0.644, η 2  = 0.02], basic needs [ F (2, 57) = 1.22, p  = 0.302, η 2  = 0.04], germane CL [ F (2, 57) = 2.68, p  = 0.077, η 2  = 0.09], and intrinsic CL [ F (2, 57) = 0.28, p  = 0.757, η 2  = 0.01].

Bonferroni-adjusted post hoc analysis revealed that the group without WE had significantly lower scores in the evaluation of the applied knowledge than the group with non-faded WE ( p  = <0.001, M diff  = −8.90, 95% CI [−13.47, −4.33]) and then the group with faded WE ( p  = <0.001, M diff  = −7.40, 95% CI [−11.97, −2.83]). No difference was found between the groups with faded and non-faded WE ( p  = 1.00, M diff  = −1.50, 95% CI [−6.07, 3.07]).

The descriptive statistics regarding the perceived usefulness of WE and participants’ evaluation of the WE revealed that the group with the faded WE rated usefulness slightly higher than the participants with non-faded WE and also reported a more positive evaluation. However, the results of a MANOVA revealed no significant overall differences [ F (2, 37) = 0.32, p  = 0.732, η 2  = 0 0.02] (see Table 3 ).

www.frontiersin.org

Table 3 . Means (standard deviations) of dependent variables with the three different groups.

5 Discussion

This study investigated the use of WE to support students’ acquisition of science observation. Below, the research questions are answered, and the implications and limitations of the study are discussed.

5.1 Results on factual and applied knowledge

In terms of knowledge gain (RQ1), our findings revealed no significant differences in participants’ results of the factual knowledge test both across all three groups and specifically between the two experimental groups. These results are in contradiction with related literature where WE had a positive impact on knowledge acquisition ( Renkl, 2014 ) and faded WE are considered to be more effective in knowledge acquisition and transfer, in contrast to non-faded WE ( Renkl et al., 2000 ; Renkl, 2014 ). A limitation of the study is the fact that the participants already scored very high on the pretest, so participation in the intervention would likely not yield significant knowledge gains due to ceiling effects ( Staus et al., 2021 ). Yet, nearly half of the students reported being novices in the field prior to the study, suggesting that the difficulty of some test items might have been too low. Here, it would be important to revise the factual knowledge test, e.g., the difficulty of the distractors in further study.

Nevertheless, with regard to application knowledge, the results revealed large significant differences: Participants of the two experimental groups performed better in conducting scientific observation steps than participants of the control group. In the experimental groups, the non-faded WE group performed better than the faded WE group. However, the absence of significant differences between the two experimental groups suggests that faded and non-faded WE used as double-content WE are suitable to teach applied knowledge about scientific observation in the learning domain ( Koenen, 2014 ). Furthermore, our results differ from the findings of Renkl et al. (2000) , in which the faded version led to the highest knowledge transfer. Despite the fact that the non-faded WE performed best in our study, the faded version of the WE was also appropriate to improve learning, confirming the findings of Renkl (2014) and Hesser and Gregory (2015) .

5.2 Results on learners’ motivation

Regarding participants’ motivation (RQ2; situational interest and basic needs), no significant differences were found across all three groups or between the two experimental groups. However, descriptive results reveal slightly higher motivation in the two experimental groups than in the control group. In this regard, our results confirm existing literature on a descriptive level showing that WE lead to higher learning-relevant motivation ( Paas et al., 2005 ; Van Harsel et al., 2019 ). Additionally, both experimental groups rated the usefulness of the WE as high and reported a positive evaluation of the WE. Therefore, we assume that even non-faded WE do not lead to over-instruction. Regarding the descriptive tendency, a larger sample might yield significant results and detect even small effects in future investigations. However, because this study also focused on comprehensive qualitative data analysis, it was not possible to evaluate a larger sample in this study.

5.3 Results on cognitive load

Finally, CL did not vary significantly across all three groups (RQ3). However, differences in extraneous CL just slightly missed significance. In descriptive values, the control group reported the highest extrinsic and lowest germane CL. The faded WE group showed the lowest extrinsic CL and a similar germane CL as the non-faded WE group. These results are consistent with Paas et al. (2003) and Renkl (2014) , reporting that WE can help to reduce the extraneous CL and, in return, lead to an increase in germane CL. Again, these differences were just above the significance level, and it would be advantageous to retest with a larger sample to detect even small effects.

Taken together, our results only partially confirm H1: the integration of WE (both faded and non-faded WE) led to a higher acquisition of application knowledge than the control group without WE, but higher factual knowledge was not found. Furthermore, higher motivation or different CL was found on a descriptive level only. The control group provided the basis for comparison with the treatment in order to investigate if there is an effect at all and, if so, how large the effect is. This is an important point to assess whether the effort of implementing WE is justified. Additionally, regarding H2, our results reveal no significant differences between the two WE conditions. We assume that the high complexity of the FA could play a role in this regard, which might be hard to handle, especially for beginners, so learners could benefit from support throughout (i.e., non-faded WE).

In addition to the limitations already mentioned, it must be noted that only one exemplary topic was investigated, and the sample only consisted of students. Since only the learning domain of the double-content WE was investigated, the exemplifying domain could also be analyzed, or further variables like motivation could be included in further studies. Furthermore, the influence of learners’ prior knowledge on learning with WE could be investigated, as studies have found that WE are particularly beneficial in the initial acquisition of cognitive skills ( Kalyuga et al., 2001 ).

6 Conclusion

Overall, the results of the current study suggest a beneficial role for WE in supporting the application of scientific observation steps. A major implication of these findings is that both faded and non-faded WE should be considered, as no general advantage of faded WE over non-faded WE was found. This information can be used to develop targeted interventions aimed at the support of scientific observation skills.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical approval was not required for the study involving human participants in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required from the participants in accordance with the national legislation and the institutional requirements.

Author contributions

ML: Writing – original draft. SM: Writing – review & editing. JP: Writing – review & editing. JG: Writing – review & editing. DL: Writing – review & editing.

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2024.1293516/full#supplementary-material

1. ^ https://www.lfu.bayern.de/wasser/gewaesserstrukturkartierung/index.htm

Atkinson, R. K., Derry, S. J., Renkl, A., and Wortham, D. (2000). Learning from examples: instructional principles from the worked examples research. Rev. Educ. Res. 70, 181–214. doi: 10.3102/00346543070002181

Crossref Full Text | Google Scholar

Barbieri, C. A., Booth, J. L., Begolli, K. N., and McCann, N. (2021). The effect of worked examples on student learning and error anticipation in algebra. Instr. Sci. 49, 419–439. doi: 10.1007/s11251-021-09545-6

Bayerisches Landesamt für Umwelt. (2019). Gewässerstrukturkartierung von Fließgewässern in Bayern – Erläuterungen zur Erfassung und Bewertung. (Water structure mapping of flowing waters in Bavaria - Explanations for recording and assessment) . Available at: https://www.bestellen.bayern.de/application/eshop_app000005?SID=1020555825&ACTIONxSESSxSHOWPIC(BILDxKEY:%27lfu_was_00152%27,BILDxCLASS:%27Artikel%27,BILDxTYPE:%27PDF%27)

Google Scholar

Berthold, K., Eysink, T. H., and Renkl, A. (2009). Assisting self-explanation prompts are more effective than open prompts when learning with multiple representations. Instr. Sci. 37, 345–363. doi: 10.1007/s11251-008-9051-z

Bokosmaty, S., Sweller, J., and Kalyuga, S. (2015). Learning geometry problem solving by studying worked examples: effects of learner guidance and expertise. Am. Educ. Res. J. 52, 307–333. doi: 10.3102/0002831214549450

Booth, J. L., McGinn, K., Young, L. K., and Barbieri, C. A. (2015). Simple practice doesn’t always make perfect. Policy Insights Behav. Brain Sci. 2, 24–32. doi: 10.1177/2372732215601691

Brownell, S. E., Wenderoth, M. P., Theobald, R., Okoroafor, N., Koval, M., Freeman, S., et al. (2014). How students think about experimental design: novel conceptions revealed by in-class activities. Bioscience 64, 125–137. doi: 10.1093/biosci/bit016

Chinn, C. A., and Malhotra, B. A. (2001). “Epistemologically authentic scientific reasoning” in Designing for science: implications from everyday, classroom, and professional settings . eds. K. Crowley, C. D. Schunn, and T. Okada (Mahwah, NJ: Lawrence Erlbaum), 351–392.

Dart, S., Pickering, E., and Dawes, L. (2020). Worked example videos for blended learning in undergraduate engineering. AEE J. 8, 1–22. doi: 10.18260/3-1-1153-36021

Dasgupta, A., Anderson, T. R., and Pelaez, N. J. (2014). Development and validation of a rubric for diagnosing students’ experimental design knowledge and difficulties. CBE Life Sci. Educ. 13, 265–284. doi: 10.1187/cbe.13-09-0192

PubMed Abstract | Crossref Full Text | Google Scholar

Deane, T., Nomme, K. M., Jeffery, E., Pollock, C. A., and Birol, G. (2014). Development of the biological experimental design concept inventory (BEDCI). CBE Life Sci. Educ. 13, 540–551. doi: 10.1187/cbe.13-11-0218

Deci, E. L., and Ryan, R. M. (2012). Self-determination theory. In P. A. M. LangeVan, A. W. Kruglanski, and E. T. Higgins (Eds.), Handbook of theories of social psychology , 416–436.

Eberbach, C., and Crowley, K. (2009). From everyday to scientific observation: how children learn to observe the Biologist’s world. Rev. Educ. Res. 79, 39–68. doi: 10.3102/0034654308325899

Ford, D. (2005). The challenges of observing geologically: third graders’ descriptions of rock and mineral properties. Sci. Educ. 89, 276–295. doi: 10.1002/sce.20049

Gerjets, P., Scheiter, K., and Catrambone, R. (2004). Designing instructional examples to reduce intrinsic cognitive load: molar versus modular presentation of solution procedures. Instr. Sci. 32, 33–58. doi: 10.1023/B:TRUC.0000021809.10236.71

Gupta, U. (2019). Interplay of germane load and motivation during math problem solving using worked examples. Educ. Res. Theory Pract. 30, 67–71.

Hefter, M. H., Berthold, K., Renkl, A., Riess, W., Schmid, S., and Fries, S. (2014). Effects of a training intervention to foster argumentation skills while processing conflicting scientific positions. Instr. Sci. 42, 929–947. doi: 10.1007/s11251-014-9320-y

Hesser, T. L., and Gregory, J. L. (2015). Exploring the Use of Faded Worked Examples as a Problem Solving Approach for Underprepared Students. High. Educ. Stud. 5, 36–46.

Jensen, E. (2014). Evaluating children’s conservation biology learning at the zoo. Conserv. Biol. 28, 1004–1011. doi: 10.1111/cobi.12263

Kalyuga, S., Chandler, P., Tuovinen, J., and Sweller, J. (2001). When problem solving is superior to studying worked examples. J. Educ. Psychol. 93, 579–588. doi: 10.1037/0022-0663.93.3.579

Kay, R. H., and Edwards, J. (2012). Examining the use of worked example video podcasts in middle school mathematics classrooms: a formative analysis. Can. J. Learn. Technol. 38, 1–20. doi: 10.21432/T2PK5Z

Klepsch, M., Schmitz, F., and Seufert, T. (2017). Development and validation of two instruments measuring intrinsic, extraneous, and germane cognitive load. Front. Psychol. 8:1997. doi: 10.3389/fpsyg.2017.01997

Knogler, M., Harackiewicz, J. M., Gegenfurtner, A., and Lewalter, D. (2015). How situational is situational interest? Investigating the longitudinal structure of situational interest. Contemp. Educ. Psychol. 43, 39–50. doi: 10.1016/j.cedpsych.2015.08.004

Koenen, J. (2014). Entwicklung und Evaluation von experimentunterstützten Lösungsbeispielen zur Förderung naturwissenschaftlich experimenteller Arbeitsweisen . Dissertation.

Koenen, J., Emden, M., and Sumfleth, E. (2017). Naturwissenschaftlich-experimentelles Arbeiten. Potenziale des Lernens mit Lösungsbeispielen und Experimentierboxen. (scientific-experimental work. Potentials of learning with solution examples and experimentation boxes). Zeitschrift für Didaktik der Naturwissenschaften 23, 81–98. doi: 10.1007/s40573-017-0056-5

Kohlhauf, L., Rutke, U., and Neuhaus, B. J. (2011). Influence of previous knowledge, language skills and domain-specific interest on observation competency. J. Sci. Educ. Technol. 20, 667–678. doi: 10.1007/s10956-011-9322-3

Leppink, J., Paas, F., Van der Vleuten, C. P., Van Gog, T., and Van Merriënboer, J. J. (2013). Development of an instrument for measuring different types of cognitive load. Behav. Res. Methods 45, 1058–1072. doi: 10.3758/s13428-013-0334-1

Lewalter, D. (2020). “Schülerlaborbesuche aus motivationaler Sicht unter besonderer Berücksichtigung des Interesses. (Student laboratory visits from a motivational perspective with special attention to interest)” in Handbuch Forschen im Schülerlabor – theoretische Grundlagen, empirische Forschungsmethoden und aktuelle Anwendungsgebiete . eds. K. Sommer, J. Wirth, and M. Vanderbeke (Münster: Waxmann-Verlag), 62–70.

Lewalter, D., and Knogler, M. (2014). “A questionnaire to assess situational interest – theoretical considerations and findings” in Poster Presented at the 50th Annual Meeting of the American Educational Research Association (AERA) (Philadelphia, PA)

Lunetta, V., Hofstein, A., and Clough, M. P. (2007). Learning and teaching in the school science laboratory: an analysis of research, theory, and practice. In N. Lederman and S. Abel (Eds.). Handbook of research on science education , Mahwah, NJ: Lawrence Erlbaum, 393–441.

Mayer, R. E. (2001). Multimedia learning. Cambridge University Press.

Paas, F., Renkl, A., and Sweller, J. (2003). Cognitive load theory and instructional design: recent developments. Educ. Psychol. 38, 1–4. doi: 10.1207/S15326985EP3801_1

Paas, F., Tuovinen, J., van Merriënboer, J. J. G., and Darabi, A. (2005). A motivational perspective on the relation between mental effort and performance: optimizing learner involvement in instruction. Educ. Technol. Res. Dev. 53, 25–34. doi: 10.1007/BF02504795

Reiss, K., Heinze, A., Renkl, A., and Groß, C. (2008). Reasoning and proof in geometry: effects of a learning environment based on heuristic worked-out examples. ZDM Int. J. Math. Educ. 40, 455–467. doi: 10.1007/s11858-008-0105-0

Renkl, A. (2001). Explorative Analysen zur effektiven Nutzung von instruktionalen Erklärungen beim Lernen aus Lösungsbeispielen. (Exploratory analyses of the effective use of instructional explanations in learning from worked examples). Unterrichtswissenschaft 29, 41–63. doi: 10.25656/01:7677

Renkl, A. (2014). “The worked examples principle in multimedia learning” in Cambridge handbook of multimedia learning . ed. R. E. Mayer (Cambridge University Press), 391–412.

Renkl, A. (2017). Learning from worked-examples in mathematics: students relate procedures to principles. ZDM 49, 571–584. doi: 10.1007/s11858-017-0859-3

Renkl, A., Atkinson, R. K., and Große, C. S. (2004). How fading worked solution steps works. A cognitive load perspective. Instr. Sci. 32, 59–82. doi: 10.1023/B:TRUC.0000021815.74806.f6

Renkl, A., Atkinson, R. K., and Maier, U. H. (2000). “From studying examples to solving problems: fading worked-out solution steps helps learning” in Proceeding of the 22nd Annual Conference of the Cognitive Science Society . eds. L. Gleitman and A. K. Joshi (Mahwah, NJ: Erlbaum), 393–398.

Renkl, A., Atkinson, R. K., Maier, U. H., and Staley, R. (2002). From example study to problem solving: smooth transitions help learning. J. Exp. Educ. 70, 293–315. doi: 10.1080/00220970209599510

Renkl, A., Hilbert, T., and Schworm, S. (2009). Example-based learning in heuristic domains: a cognitive load theory account. Educ. Psychol. Rev. 21, 67–78. doi: 10.1007/s10648-008-9093-4

Schworm, S., and Renkl, A. (2007). Learning argumentation skills through the use of prompts for self-explaining examples. J. Educ. Psychol. 99, 285–296. doi: 10.1037/0022-0663.99.2.285

Sirum, K., and Humburg, J. (2011). The experimental design ability test (EDAT). Bioscene 37, 8–16.

Staus, N. L., O’Connell, K., and Storksdieck, M. (2021). Addressing the ceiling effect when assessing STEM out-of-school time experiences. Front. Educ. 6:690431. doi: 10.3389/feduc.2021.690431

Sweller, J. (2006). The worked example effect and human cognition. Learn. Instr. 16, 165–169. doi: 10.1016/j.learninstruc.2006.02.005

Sweller, J., Van Merriënboer, J. J. G., and Paas, F. (1998). Cognitive architecture and instructional design. Educ. Psychol. Rev. 10, 251–295. doi: 10.1023/A:1022193728205

Thomas, A. E., and Müller, F. H. (2011). “Skalen zur motivationalen Regulation beim Lernen von Schülerinnen und Schülern. Skalen zur akademischen Selbstregulation von Schüler/innen SRQ-A [G] (überarbeitete Fassung)” in Scales of motivational regulation in student learning. Student academic self-regulation scales SRQ-A [G] (revised version). Wissenschaftliche Beiträge aus dem Institut für Unterrichts- und Schulentwicklung Nr. 5 (Klagenfurt: Alpen-Adria-Universität)

Um, E., Plass, J. L., Hayward, E. O., and Homer, B. D. (2012). Emotional design in multimedia learning. J. Educ. Psychol. 104, 485–498. doi: 10.1037/a0026609

Van Gog, T., Kester, L., and Paas, F. (2011). Effects of worked examples, example-problem, and problem- example pairs on novices’ learning. Contemp. Educ. Psychol. 36, 212–218. doi: 10.1016/j.cedpsych.2010.10.004

Van Gog, T., and Paas, G. W. C. (2006). Optimising worked example instruction: different ways to increase germane cognitive load. Learn. Instr. 16, 87–91. doi: 10.1016/j.learninstruc.2006.02.004

Van Harsel, M., Hoogerheide, V., Verkoeijen, P., and van Gog, T. (2019). Effects of different sequences of examples and problems on motivation and learning. Contemp. Educ. Psychol. 58, 260–275. doi: 10.1002/acp.3649

Wachsmuth, C. (2020). Computerbasiertes Lernen mit Aufmerksamkeitsdefizit: Unterstützung des selbstregulierten Lernens durch metakognitive prompts. (Computer-based learning with attention deficit: supporting self-regulated learning through metacognitive prompts) . Chemnitz: Dissertation Technische Universität Chemnitz.

Wahser, I. (2008). Training von naturwissenschaftlichen Arbeitsweisen zur Unterstützung experimenteller Kleingruppenarbeit im Fach Chemie (Training of scientific working methods to support experimental small group work in chemistry) . Dissertation

Walker, J., Gibson, J., and Brown, D. (2007). Selecting fluvial geomorphological methods for river management including catchment scale restoration within the environment agency of England and Wales. Int. J. River Basin Manag. 5, 131–141. doi: 10.1080/15715124.2007.9635313

Wellnitz, N., and Mayer, J. (2013). Erkenntnismethoden in der Biologie – Entwicklung und evaluation eines Kompetenzmodells. (Methods of knowledge in biology - development and evaluation of a competence model). Z. Didaktik Naturwissensch. 19, 315–345.

Willems, A. S., and Lewalter, D. (2011). “Welche Rolle spielt das motivationsrelevante Erleben von Schülern für ihr situationales Interesse im Mathematikunterricht? (What role does students’ motivational experience play in their situational interest in mathematics classrooms?). Befunde aus der SIGMA-Studie” in Erziehungswissenschaftliche Forschung – nachhaltige Bildung. Beiträge zur 5. DGfE-Sektionstagung “Empirische Bildungsforschung”/AEPF-KBBB im Frühjahr 2009 . eds. B. Schwarz, P. Nenninger, and R. S. Jäger (Landau: Verlag Empirische Pädagogik), 288–294.

Keywords: digital media, worked examples, scientific observation, motivation, cognitive load

Citation: Lechner M, Moser S, Pander J, Geist J and Lewalter D (2024) Learning scientific observation with worked examples in a digital learning environment. Front. Educ . 9:1293516. doi: 10.3389/feduc.2024.1293516

Received: 13 September 2023; Accepted: 29 February 2024; Published: 18 March 2024.

Reviewed by:

Copyright © 2024 Lechner, Moser, Pander, Geist and Lewalter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Miriam Lechner, [email protected]

IMAGES

  1. Scientific Method: Definition and Examples

    hypothesis of the scientific method

  2. Formula for Using the Scientific Method

    hypothesis of the scientific method

  3. The scientific method is a process for experimentation

    hypothesis of the scientific method

  4. Scientific Method

    hypothesis of the scientific method

  5. Scientific Method

    hypothesis of the scientific method

  6. Scientific Method Worksheet & Example for Kids

    hypothesis of the scientific method

VIDEO

  1. Basics of Hypothesis, theory and scientific laws

  2. Null and Alternative Hypothesis

  3. In the scientific method, a hypothesis is an a observation b measurement c test d propos

  4. What is the Role of Hypotheses in Scientific Investigations?

  5. Research Hypothesis and its Types with examples /urdu/hindi

  6. Scientific Method Steps Part 3 (Types of Variables)

COMMENTS

  1. The scientific method (article)

    The scientific method. At the core of biology and other sciences lies a problem-solving approach called the scientific method. The scientific method has five basic steps, plus one feedback step: Make an observation. Ask a question. Form a hypothesis, or testable explanation. Make a prediction based on the hypothesis.

  2. Scientific hypothesis

    The formulation and testing of a hypothesis is part of the scientific method, the approach scientists use when attempting to understand and test ideas about natural phenomena. The generation of a hypothesis frequently is described as a creative process and is based on existing scientific knowledge, intuition, or experience.

  3. Scientific Method

    The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories.

  4. Scientific method

    The scientific method is critical to the development of scientific theories, which explain empirical (experiential) laws in a scientifically rational manner.In a typical application of the scientific method, a researcher develops a hypothesis, tests it through various means, and then modifies the hypothesis on the basis of the outcome of the tests and experiments.

  5. Scientific method

    Theory. The scientific method one of the scholarly methods. It is a quantitative approach to research. The ubiquitous element in the scientific method is empiricism. This is in opposition to stringent forms of rationalism: the scientific method embodies the position that reason alone cannot solve a particular scientific problem.

  6. 2.1: The Scientific Method

    Hypothesis Testing and The scientific Method. The scientific method is a process of research with defined steps that include data collection and careful observation. The scientific method was used even in ancient times, but it was first documented by England's Sir Francis Bacon (1561-1626) (Figure \(\PageIndex{5}\)), who set up inductive methods for scientific inquiry.

  7. Scientific Method: Observation, Hypothesis and Experiment

    The scientific method is a detailed, empirical problem-solving process used by biologists and other scientists. This iterative approach involves formulating a question based on observation, developing a testable potential explanation for the observation (called a hypothesis), making and testing predictions based on the hypothesis, and using the findings to create new hypotheses and predictions.

  8. Steps of the Scientific Method

    The scientific method is a system scientists and other people use to ask and answer questions about the natural world. In a nutshell, the scientific method works by making observations, asking a question or identifying a problem, and then designing and analyzing an experiment to test a prediction of what you expect will happen.

  9. What is a scientific hypothesis?

    A scientific hypothesis is a tentative, testable explanation for a phenomenon in the natural world. It's the initial building block in the scientific method.Many describe it as an "educated guess ...

  10. Steps of the Scientific Method

    The six steps of the scientific method include: 1) asking a question about something you observe, 2) doing background research to learn what is already known about the topic, 3) constructing a hypothesis, 4) experimenting to test the hypothesis, 5) analyzing the data from the experiment and drawing conclusions, and 6) communicating the results ...

  11. Scientific Method: Definition and Examples

    The scientific method is a series of steps followed by scientific investigators to answer specific questions about the natural world. It involves making observations, formulating a hypothesis, and conducting scientific experiments. Scientific inquiry starts with an observation followed by the formulation of a question about what has been observed.

  12. 6 Steps of the Scientific Method

    The scientific method is a systematic way of learning about the world around us and answering questions. The key difference between the scientific method and other ways of acquiring knowledge are forming a hypothesis and then testing it with an experiment.

  13. 1.2 The Scientific Methods

    This, in a nutshell, describes the scientific method that scientists employ to decide scientific issues on the basis of evidence from observation and experiment. An investigation often begins with a scientist making an observation. The scientist observes a pattern or trend within the natural world.

  14. The scientific method (article)

    The scientific method. At the core of physics and other sciences lies a problem-solving approach called the scientific method. The scientific method has five basic steps, plus one feedback step: Make an observation. Ask a question. Form a hypothesis, or testable explanation. Make a prediction based on the hypothesis.

  15. Science and the scientific method: Definitions and examples

    The process of generating and testing a hypothesis forms the backbone of the scientific method. When an idea has been confirmed over many experiments, it can be called a scientific theory.

  16. What Is a Hypothesis? The Scientific Method

    A hypothesis (plural hypotheses) is a proposed explanation for an observation. The definition depends on the subject. In science, a hypothesis is part of the scientific method. It is a prediction or explanation that is tested by an experiment. Observations and experiments may disprove a scientific hypothesis, but can never entirely prove one.

  17. What Are The Steps Of The Scientific Method?

    The scientific method is a process that includes several steps: First, an observation or question arises about a phenomenon. Then a hypothesis is formulated to explain the phenomenon, which is used to make predictions about other related occurrences or to predict the results of new observations quantitatively. Finally, these predictions are put to the test through experiments or further ...

  18. Theory vs. Hypothesis: Basics of the Scientific Method

    A scientific hypothesis is a proposed explanation for an observable phenomenon. In other words, a hypothesis is an educated guess about the relationship between multiple variables. A hypothesis is a fresh, unchallenged idea that a scientist proposes prior to conducting research. The purpose of a hypothesis is to provide a tentative explanation ...

  19. What is the Scientific Method: How does it work and why is it important

    The scientific method is a systematic process involving steps like defining questions, forming hypotheses, conducting experiments, and analyzing data. It minimizes biases and enables replicable research, leading to groundbreaking discoveries like Einstein's theory of relativity, penicillin, and the structure of DNA.

  20. The Scientific Method Steps, Uses, and Key Terms

    When conducting research, the scientific method steps to follow are: Observe what you want to investigate. Ask a research question and make predictions. Test the hypothesis and collect data. Examine the results and draw conclusions. Report and share the results. This process not only allows scientists to investigate and understand different ...

  21. The Scientific Method

    This publication describes the method scientists use to conduct research and describe and explain nature, ultimately trying prove or disprove theories. Scientists all over the world conduct research using the Scientific Method. The University of Nevada Cooperative Extension exists to provide unbiased, research-based information on topics ...

  22. Scientific Consensus

    Technically, a "consensus" is a general agreement of opinion, but the scientific method steers us away from this to an objective framework. In science, facts or observations are explained by a hypothesis (a statement of a possible explanation for some natural phenomenon), which can then be tested and retested until it is refuted (or disproved).

  23. Frontiers

    Science education often aims to increase learners' acquisition of fundamental principles, such as learning the basic steps of scientific methods. Worked examples (WE) have proven particularly useful for supporting the development of such cognitive schemas and successive actions in order to avoid using up more cognitive resources than are necessary. Therefore, we investigated the extent to ...

  24. Lean Hypotheses and Effectual Commitments: An Integrative Framework

    Whereas lean startup centers around hypothesis testing, effectuation focuses on cocreative commitments from self-selecting stakeholders. In other words, the former takes markets as exogenous, while the latter explicates how they can be made endogenous and why that matters. ... Scientific method: An historical and philosophical introduction ...