
Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research to Make Curricular & Instructional Decisions

Paula J. Stanovich and Keith E. Stanovich
University of Toronto

Produced by RMC Research Corporation, Portsmouth, New Hampshire

This publication was produced under National Institute for Literacy Contract No. ED-00CO-0093 with RMC Research Corporation. Sandra Baxter served as the contracting officer's technical representative. The views expressed herein do not necessarily represent the policies of the National Institute for Literacy. No official endorsement by the National Institute for Literacy or any product, commodity, service, or enterprise is intended or should be inferred.

The National Institute for Literacy

Sandra Baxter, Interim Executive Director
Lynn Reddy, Communications Director

To order copies of this booklet, contact the National Institute for Literacy at EdPubs, PO Box 1398, Jessup, MD 20794-1398. Call 800-228-8813 or email [email protected].

The National Institute for Literacy, an independent federal organization, supports the development of high quality state, regional, and national literacy services so that all Americans can develop the literacy skills they need to succeed at work, at home, and in the community.

The Partnership for Reading, a project administered by the National Institute for Literacy, is a collaborative effort of the National Institute for Literacy, the National Institute of Child Health and Human Development, the U.S. Department of Education, and the U.S. Department of Health and Human Services to make evidence-based reading research available to educators, parents, policy makers, and others with an interest in helping all people learn to read well.

Editorial support provided by C. Ralph Adler and Elizabeth Goldman, and design/production support provided by Diane Draper and Bob Kozman, all of RMC Research Corporation.

Introduction

In the recent move toward standards-based reform in public education, many reform efforts require schools to demonstrate that they are achieving educational outcomes, with students performing at a required level of achievement. Federal and state legislation, in particular, has codified this standards-based movement and tied funding and other incentives to student achievement.

At first, demonstrating student learning may seem like a simple task, but reflection reveals that it is a complex challenge requiring educators to use specific knowledge and skills. Standards-based reform has many curricular and instructional prerequisites. The curriculum must represent the most important knowledge, skills, and attributes that schools want their students to acquire because these learning outcomes will serve as the basis of assessment instruments. Likewise, instructional methods should be appropriate for the designed curriculum. Teaching methods should lead to students learning the outcomes that are the focus of the assessment standards.

Standards- and assessment-based educational reforms seek to obligate schools and teachers to supply evidence that their instructional methods are effective. But testing is only one of three ways to gather evidence about the effectiveness of instructional methods. Evidence of instructional effectiveness can come from any of the following sources:

  • Demonstrated student achievement in formal testing situations implemented by the teacher, school district, or state;
  • Published findings of research-based evidence that the instructional methods being used by teachers lead to student achievement; or
  • Proof of reason-based practice that converges with a research-based consensus in the scientific literature. This type of justification of educational practice becomes important when direct evidence may be lacking (a direct test of the instructional efficacy of a particular method is absent), but there is a theoretical link to research-based evidence that can be traced.

Each of these methods has its pluses and minuses. While testing seems the most straightforward, it is not necessarily the clear indicator of good educational practice that the public seems to think it is. The meaning of test results is often not immediately clear. For example, comparing averages or other indicators of overall performance from tests across classrooms, schools, or school districts takes no account of the resources and support provided to a school, school district, or individual professional. Poor outcomes do not necessarily indict the efforts of physicians in Third World countries who work with substandard equipment and supplies. Likewise, objective evidence of below-grade or below-standard mean performance of a group of students should not necessarily indict their teachers if essential resources and supports (e.g., curriculum materials, institutional aid, parental cooperation) to support teaching efforts were lacking. However, the extent to which children could learn effectively even in under-equipped schools is not known because evidence-based practices are, by and large, not implemented. That is, there is evidence that children experiencing academic difficulties can achieve more educationally if they are taught with effective methods; sadly, scientific research about what works does not usually find its way into most classrooms.

Testing provides a useful professional calibrator, but it requires great contextual sensitivity in interpretation. It is not the entire solution for assessing the quality of instructional efforts. This is why research-based and reason-based educational practice are also crucial for determining the quality and impact of programs. Teachers thus have the responsibility to be effective users and interpreters of research. Providing a survey and synthesis of the most effective practices for a variety of key curriculum goals (such as literacy and numeracy) would seem to be a helpful idea, but no document could provide all of that information. (Many excellent research syntheses exist, such as the National Reading Panel, 2000; Snow, Burns, & Griffin, 1998; Swanson, 1999, but the knowledge base about effective educational practices is constantly being updated, and many issues remain to be settled.)

As professionals, teachers can become more effective and powerful by developing the skills to recognize scientifically based practice and, when the evidence is not available, use some basic research concepts to draw conclusions on their own. This paper offers a primer for those skills that will allow teachers to become independent evaluators of educational research.

The Formal Scientific Method and Scientific Thinking in Educational Practice

When you go to your family physician with a medical complaint, you expect that the recommended treatment has proven to be effective with many other patients who have had the same symptoms. You may even ask why a particular medication is being recommended for you. The doctor may summarize the background knowledge that led to that recommendation and very likely will cite summary evidence from the drug's many clinical trials and perhaps even give you an overview of the theory behind the drug's success in treating symptoms like yours.

All of this discussion will probably occur in rather simple terms, but that does not obscure the fact that the doctor has provided you with data to support a theory about your complaint and its treatment. The doctor has shared knowledge of medical science with you. And while everyone would agree that the practice of medicine has its "artful" components (for example, the creation of a healing relationship between doctor and patient), we have come to expect and depend upon the scientific foundation that underpins even the artful aspects of medical treatment. Even when we do not ask our doctors specifically for the data, we assume it is there, supporting our course of treatment.

Actually, Vaughn and Dammann (2001) have argued that the correct analogy is to say that teaching is in part a craft, rather than an art. They point out that craft knowledge is superior to alternative forms of knowledge such as superstition and folklore because, among other things, craft knowledge is compatible with scientific knowledge and can be more easily integrated with it. One could argue that in this age of education reform and accountability, educators are being asked to demonstrate that their craft has been integrated with science--that their instructional models, methods, and materials can be likened to the evidence a physician should be able to produce showing that a specific treatment will be effective. As with medicine, constructing teaching practice on a firm scientific foundation does not mean denying the craft aspects of teaching.

Architecture is another professional practice that, like medicine and education, grew from being purely a craft to a craft based firmly on a scientific foundation. Architects wish to design beautiful buildings and environments, but they must also apply many foundational principles of engineering and adhere to structural principles. If they do not, their buildings, however beautiful they may be, will not stand. Similarly, a teacher seeks to design lessons that stimulate students and entice them to learn--lessons that are sometimes a beauty to behold. But if the lessons are not based in the science of pedagogy, they, like poorly constructed buildings, will fail.

Education is informed by formal scientific research through the use of archival research-based knowledge such as that found in peer-reviewed educational journals. Preservice teachers are first exposed to the formal scientific research in their university teacher preparation courses (it is hoped), through the instruction received from their professors, and in their course readings (e.g., textbooks, journal articles). Practicing teachers continue their exposure to the results of formal scientific research by subscribing to and reading professional journals, by enrolling in graduate programs, and by becoming lifelong learners.

Scientific thinking in practice is what characterizes reflective teachers--those who inquire into their own practice and who examine their own classrooms to find out what works best for them and their students. What follows in this document is, first, a "short course" on how to become an effective consumer of the archival literature that results from the conduct of formal scientific research in education and, second, a section describing how teachers can think scientifically in their ongoing reflection about their classroom practice.

Being able to access mechanisms that evaluate claims about teaching methods and to recognize scientific research and its findings is especially important for teachers because they are often confronted with the view that "anything goes" in the field of education--that there is no such thing as best practice in education, that there are no ways to verify what works best, that teachers should base their practice on intuition, or that the latest fad must be the best way to teach, please a principal, or address local school reform. The "anything goes" mentality actually represents a threat to teachers' professional autonomy. It provides a fertile environment for gurus to sell untested educational "remedies" that are not supported by an established research base.

Teachers as independent evaluators of research evidence

One factor that has impeded teachers from being active and effective consumers of educational science has been a lack of orientation and training in how to understand the scientific process and how that process results in the cumulative growth of knowledge that leads to validated educational practice. Educators have only recently attempted to resolve educational disputes scientifically, and teachers have not yet been armed with the skills to evaluate disputes on their own.

Educational practice has suffered greatly because its dominant model for resolving or adjudicating disputes has been more political (with its corresponding factions and interest groups) than scientific. The field's failure to ground practice in the attitudes and values of science has made educators susceptible to the "authority syndrome" as well as fads and gimmicks that ignore evidence-based practice.

When our ancestors needed information about how to act, they would ask their elders and other wise people. Contemporary society and culture are much more complex. Mass communication allows virtually anyone (on the Internet, through self-help books) to proffer advice, to appear to be a "wise elder." The current problem is how to sift through the avalanche of misguided and uninformed advice to find genuine knowledge. Our problem is not information; we have tons of information. What we need are quality control mechanisms.

Peer-reviewed research journals in various disciplines provide those mechanisms. However, even with mechanisms like these in behavioral science and education, it is all too easy to do an "end run" around the quality control they provide. Powerful information dissemination outlets such as publishing houses and mass media frequently do not discriminate between good and bad information. This provides a fertile environment for gurus to sell untested educational "remedies" that are not supported by an established research base and, often, to discredit science, scientific evidence, and the notion of research-based best practice in education. As Gersten (2001) notes, both seasoned and novice teachers are "deluged with misinformation" (p. 45).

We need tools for evaluating the credibility of these many and varied sources of information; the ability to recognize research-based conclusions is especially important. Acquiring those tools means understanding scientific values and learning methods for making inferences from the research evidence that arises through the scientific process. These values and methods were recently summarized by a panel of the National Academy of Sciences convened on scientific inquiry in education (Shavelson & Towne, 2002), and our discussion here will be completely consistent with the conclusions of that NAS panel.

The scientific criteria for evaluating knowledge claims are not complicated and could easily be included in initial teacher preparation programs, but they usually are not (which deprives teachers of an opportunity to become more efficient and autonomous in their work right at the beginning of their careers). These criteria include:

  • the publication of findings in refereed journals (scientific publications that employ a process of peer review),
  • the duplication of the results by other investigators, and
  • a consensus within a particular research community on whether there is a critical mass of studies that point toward a particular conclusion.

In their discussion of the evolution of the American Educational Research Association (AERA) conference and the importance of separating research evidence from opinion when making decisions about instructional practice, Levin and O'Donnell (2000) highlight the importance of enabling teachers to become independent evaluators of research evidence. Being aware of the importance of research published in peer-reviewed scientific journals is only the first step because this represents only the most minimal of criteria. Following is a review of some of the principles of research-based evaluation that teachers will find useful in their work.

Publicly verifiable research conclusions: Replication and Peer Review

Source credibility: the consumer protection of peer-reviewed journals.

The front line of defense for teachers against incorrect information in education is the existence of peer-reviewed journals in education, psychology, and other related social sciences. These journals publish empirical research on topics relevant to classroom practice and human cognition and learning. They are the first place that teachers should look for evidence of validated instructional practices.

As a general quality control mechanism, peer-reviewed journals provide a "first pass" filter that teachers can use to evaluate the plausibility of educational claims. To put it more concretely, one ironclad criterion that will always work for teachers when presented with claims of uncertain validity is the question: Have findings supporting this method been published in recognized scientific journals that use some type of peer review procedure? The answer to this question will almost always separate pseudoscientific claims from the real thing.

In peer review, authors submit a paper to a journal for publication, where it is critiqued by several scientists. The critiques are reviewed by an editor (usually a scientist with an extensive history of work in the specialty area covered by the journal). The editor then decides whether the weight of opinion warrants immediate publication, publication after further experimentation and statistical analysis, or rejection because the research is flawed or does not add to the knowledge base. Most journals carry a statement of editorial policy outlining their exact procedures for publication, so it is easy to check whether a journal is, in fact, peer reviewed.

Peer review is a minimal criterion, not a stringent one. Not all information in peer-reviewed scientific journals is necessarily correct, but it has at the very least undergone a cycle of peer criticism and scrutiny. However, it is because the presence of peer-reviewed research is such a minimal criterion that its absence becomes so diagnostic. The failure of an idea, a theory, an educational practice, behavioral therapy, or a remediation technique to have adequate documentation in the peer-reviewed literature of a scientific discipline is a very strong indication to be wary of the practice.

The mechanisms of peer review vary somewhat from discipline to discipline, but the underlying rationale is the same. Peer review is one way (replication of a research finding is another) that science institutionalizes the attitudes of objectivity and public criticism. Ideas and experimentation undergo a honing process in which they are submitted to other critical minds for evaluation. Ideas that survive this critical process have begun to meet the criterion of public verifiability. The peer review process is far from perfect, but it really is the only external consumer protection that teachers have.

The history of reading instruction illustrates the high cost that is paid when the peer-reviewed literature is ignored, when the normal processes of scientific adjudication are replaced with political debates and rhetorical posturing. A vast literature has been generated on best practices that foster children's reading acquisition (Adams, 1990; Anderson, Hiebert, Scott, & Wilkinson, 1985; Chard & Osborn, 1999; Cunningham & Allington, 1994; Ehri, Nunes, Stahl, & Willows, 2001; Moats, 1999; National Reading Panel, 2000; Pearson, 1993; Pressley, 1998; Pressley, Rankin, & Yokoi, 1996; Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2002; Reading Coherence Initiative, 1999; Snow, Burns, & Griffin, 1998; Spear-Swerling & Sternberg, 2001). Yet much of this literature remains unknown to many teachers, contributing to the frustrating lack of clarity about accepted, scientifically validated findings and conclusions on reading acquisition.

Teachers should also be forewarned about the difference between professional education journals that are magazines of opinion in contrast to journals where primary reports of research, or reviews of research, are peer reviewed. For example, the magazines Phi Delta Kappan and Educational Leadership both contain stimulating discussions of educational issues, but neither is a peer-reviewed journal of original research. In contrast, the American Educational Research Journal (a flagship journal of the AERA) and the Journal of Educational Psychology (a flagship journal of the American Psychological Association) are both peer-reviewed journals of original research. Both are main sources for evidence on validated techniques of reading instruction and for research on aspects of the reading process that are relevant to a teacher's instructional decisions.

This is true, too, of presentations at conferences of educational organizations. Some are data-based presentations of original research. Others are speeches reflecting personal opinion about educational problems. While these talks can be stimulating and informative, they are not a substitute for empirical research on educational effectiveness.

Replication and the importance of public verifiability.

Research-based conclusions about educational practice are public in an important sense: they do not exist solely in the mind of a particular individual but have been submitted to the scientific community for criticism and empirical testing by others. Knowledge considered "special"--the province of the thought of an individual and immune from scrutiny and criticism by others--can never have the status of scientific knowledge. Research-based conclusions, when published in a peer-reviewed journal, become part of the public realm, available to all, in a way that claims of "special expertise" are not.

Replication is the second way that science makes research-based conclusions concrete and "public." In order to be considered scientific, a research finding must be presented to other researchers in the scientific community in a way that enables them to attempt the same experiment and obtain the same results. When the same results occur, the finding has been replicated. This process ensures that a finding is not the result of the errors or biases of a particular investigator. Replicable findings become part of the converging evidence that forms the basis of a research-based conclusion about educational practice.

John Donne told us that "no man is an island." Similarly, in science, no researcher is an island. Each investigator is connected to the research community and its knowledge base. This interconnection enables science to grow cumulatively and research-based educational practice to be built on a convergence of knowledge from a variety of sources. Researchers constantly build on previous knowledge in order to go beyond what is currently known. This process is possible only if research findings are presented in such a way that any investigator can use them to build on.

Philosopher Daniel Dennett (1995) has said that science is "making mistakes in public. Making mistakes for all to see, in the hopes of getting the others to help with the corrections" (p. 380). We might ask those proposing an educational innovation for the evidence that they have in fact "made some mistakes in public." Legitimate scientific disciplines can easily provide such evidence. For example, scientists studying the psychology of reading once thought that reading difficulties were caused by faulty eye movements. This hypothesis has been shown to be in error, as has another that followed it, that so-called visual reversal errors were a major cause of reading difficulty. Both hypotheses were found not to square with the empirical evidence (Rayner, 1998; Share & Stanovich, 1995). The hypothesis that reading difficulties can be related to language difficulties at the phonological level has received much more support (Liberman, 1999; National Reading Panel, 2000; Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2002; Shankweiler, 1999; Stanovich, 2000).

After making a few such "errors" in public, reading scientists have begun, in the last 20 years, to get it right. But the only reason teachers can have confidence that researchers are now "getting it right" is that researchers made it open, public knowledge when they got things wrong. Proponents of untested and pseudoscientific educational practices will never point to cases where they "got it wrong" because they are not committed to public knowledge in the way that actual science is. These proponents do not need, as Dennett says, "to get others to help in making the corrections" because they have no intention of correcting their beliefs and prescriptions based on empirical evidence.

Education is so susceptible to fads and unproven practices because of its tacit endorsement of a personalistic view of knowledge acquisition--one that is antithetical to the scientific value of the public verifiability of knowledge claims. Many educators believe that knowledge resides within particular individuals--with particularly elite insights--who then must be called upon to dispense this knowledge to others. Indeed, some educators reject public, depersonalized knowledge in social science because they believe it dehumanizes people. Science, however, with its conception of publicly verifiable knowledge, actually democratizes knowledge. It frees practitioners and researchers from slavish dependence on authority.

Subjective, personalized views of knowledge degrade the human intellect by creating conditions that subjugate it to an elite whose "personal" knowledge is not accessible to all (Bronowski, 1956, 1977; Dawkins, 1998; Gross, Levitt, & Lewis, 1997; Medawar, 1982, 1984, 1990; Popper, 1972; Wilson, 1998). Empirical science, by generating knowledge and moving it into the public domain, is a liberating force. Teachers can consult the research and decide for themselves whether the state of the literature is as the expert portrays it. All teachers can benefit from some rudimentary grounding in the most fundamental principles of scientific inference. With knowledge of a few uncomplicated research principles, such as control, manipulation, and randomization, anyone can enter the open, public discourse about empirical findings. In fact, with the exception of a few select areas such as the eye movement research mentioned previously, much of the work described in noted summaries of reading research (e.g., Adams, 1990; Snow, Burns, & Griffin, 1998) could easily be replicated by teachers themselves.

There are many ways that the criteria of replication and peer review can be utilized in education to base practitioner training on research-based best practice. Take continuing teacher education in the form of inservice sessions, for example. Teachers and principals who select speakers for professional development activities should ask speakers for the sources of their conclusions in the form of research evidence in peer-reviewed journals. They should ask speakers for bibliographies of the research evidence published on the practices recommended in their presentations.

The science behind research-based practice relies on systematic empiricism

Empiricism is the practice of relying on observation. Scientists find out about the world by examining it. The refusal by some scientists to look into Galileo's telescope is an example of how empiricism has been ignored at certain points in history. It was long believed that knowledge was best obtained through pure thought or by appealing to authority. Galileo claimed to have seen moons around the planet Jupiter. Another scholar, Francesco Sizi, attempted to refute Galileo, not with observations, but with the following argument:

There are seven windows in the head, two nostrils, two ears, two eyes and a mouth; so in the heavens there are two favorable stars, two unpropitious, two luminaries, and Mercury alone undecided and indifferent. From which and many other similar phenomena of nature such as the seven metals, etc., which it were tedious to enumerate, we gather that the number of planets is necessarily seven...ancient nations, as well as modern Europeans, have adopted the division of the week into seven days, and have named them from the seven planets; now if we increase the number of planets, this whole system falls to the ground...moreover, the satellites are invisible to the naked eye and therefore can have no influence on the earth and therefore would be useless and therefore do not exist. (Holton & Roller, 1958, p. 160)

Three centuries of the demonstrated power of the empirical approach give us an edge on poor Sizi. Take away those years of empiricism, and many of us might have been there nodding our heads and urging him on. In fact, the empirical approach is not necessarily obvious, which is why we often have to teach it, even in a society that is dominated by science.

Empiricism pure and simple is not enough, however. Observation itself is fine and necessary, but pure, unstructured observation of the natural world will not lead to scientific knowledge. Write down every observation you make from the time you get up in the morning to the time you go to bed on a given day. When you finish, you will have a great number of facts, but you will not have a greater understanding of the world. Scientific observation is termed systematic because it is structured so that the results of the observation reveal something about the underlying causal structure of events in the world. Observations are structured so that, depending upon the outcome of the observation, some theories of the causes of the outcome are supported and others rejected.

Teachers can benefit by understanding two things about research and causal inferences. The first is the simple (but sometimes obscured) fact that statements about best instructional practices are statements that contain a causal claim. These statements claim that one type of method or practice causes superior educational outcomes. Second, teachers must understand how the logic of the experimental method provides the critical support for making causal inferences.

Science addresses testable questions

Science advances by positing theories to account for particular phenomena in the world, by deriving predictions from these theories, by testing the predictions empirically, and by modifying the theories based on the tests (the sequence is typically theory -> prediction -> test -> theory modification). What makes a theory testable? A theory must have specific implications for observable events in the natural world.

Science deals only with a certain class of problem: the kind that is empirically solvable. That does not mean that the division between solvable and unsolvable problems is fixed forever. Quite the contrary: some problems that are currently unsolvable may become solvable as theory and empirical techniques become more sophisticated. For example, decades ago historians would not have believed that the controversial issue of whether Thomas Jefferson had a child with his slave Sally Hemings was an empirically solvable question. Yet, by 1998, this problem had become solvable through advances in genetic technology, and a paper was published in the journal Nature (Foster, Jobling, Taylor, Donnelly, de Knijff, Mieremet, Zerjal, & Tyler-Smith, 1998) on the question.

The criterion of whether a problem is "testable" is called the falsifiability criterion: a scientific theory must always be stated in such a way that the predictions derived from it can potentially be shown to be false. The falsifiability criterion states that, for a theory to be useful, the predictions drawn from it must be specific. The theory must go out on a limb, so to speak, because in telling us what should happen, the theory must also imply that certain things will not happen. If these latter things do happen, it is a clear signal that something is wrong with the theory. It may need to be modified, or we may need to look for an entirely new theory. Either way, we will end up with a theory that is closer to the truth.

In contrast, if a theory does not rule out any possible observations, then the theory can never be changed, and we are frozen into our current way of thinking with no possibility of progress. A successful theory cannot posit or account for every possible happening. Such a theory robs itself of any predictive power.

What we are talking about here is a certain type of intellectual honesty. In science, the proponent of a theory is always asked to address this question before the data are collected: "What data pattern would cause you to give up, or at least to alter, this theory?" In the same way, the falsifiability criterion is a useful consumer protection for the teacher when evaluating claims of educational effectiveness. Proponents of an educational practice should be asked for evidence; they should also be willing to admit that contrary data will lead them to abandon the practice. True scientific knowledge is held tentatively and is subject to change based on contrary evidence. Educational remedies not based on scientific evidence will often fail to put themselves at risk by specifying what data patterns would prove them false.

Objectivity and intellectual honesty

Objectivity, another form of intellectual honesty in research, means that we let nature "speak for itself" without imposing our wishes on it--that we report the results of experimentation as accurately as we can and that we interpret them as fairly as possible. (The fact that this goal is unattainable for any single human being should not dissuade us from holding objectivity as a value.)

In the language of the general public, open-mindedness means being open to possible theories and explanations for a particular phenomenon. But in science it means that and something more. Philosopher Jonathan Adler (1998) teaches us that science values another aspect of open-mindedness even more highly: "What truly marks an open-minded person is the willingness to follow where evidence leads. The open-minded person is willing to defer to impartial investigations rather than to his own predilections...Scientific method is attunement to the world, not to ourselves" (p. 44).

Objectivity is critical to the process of science, but it does not mean that such attitudes must characterize each and every scientist for science as a whole to work. Jacob Bronowski (1973, 1977) often argued that the unique power of science to reveal knowledge about the world does not arise because scientists are uniquely virtuous (that they are completely objective or that they are never biased in interpreting findings, for example). It arises because fallible scientists are immersed in a process of checks and balances--a process in which scientists are always there to criticize and to root out errors. Philosopher Daniel Dennett (1999/2000) points out that "scientists take themselves to be just as weak and fallible as anybody else, but recognizing those very sources of error in themselves...they have devised elaborate systems to tie their own hands, forcibly preventing their frailties and prejudices from infecting their results" (p. 42). More humorously, psychologist Ray Nickerson (1998) makes the related point that the vanities of scientists are actually put to use by the scientific process, by noting that it is "not so much the critical attitude that individual scientists have taken with respect to their own ideas that has given science its success...but more the fact that individual scientists have been highly motivated to demonstrate that hypotheses that are held by some other scientists are false" (p. 32). These authors suggest that the strength of scientific knowledge comes not because scientists are virtuous, but from the social process in which scientists constantly cross-check each other's knowledge and conclusions.

The public criteria of peer review and replication of findings exist in part to keep checks on the objectivity of individual scientists. Individuals cannot hide bias and nonobjectivity by personalizing their claims and keeping them from public scrutiny. Science does not accept findings that have failed the tests of replication and peer review precisely because it wants to ensure that all findings in science are in the public domain, as defined above. Purveyors of pseudoscientific educational practices fail the test of objectivity and are often identifiable by their attempts to do an "end run" around the public mechanisms of science by avoiding established peer review mechanisms and the information-sharing mechanisms that make replication possible. Instead, they attempt to promulgate their findings directly to consumers, such as teachers.

The principle of converging evidence

The principle of converging evidence has been well illustrated in the controversies surrounding the teaching of reading. The methods of systematic empiricism employed in the study of reading acquisition are many and varied. They include case studies, correlational studies, experimental studies, narratives, quasi-experimental studies, surveys, epidemiological studies and many others. The results of many of these studies have been synthesized in several important research syntheses (Adams, 1990; Ehri et al., 2001; National Reading Panel, 2000; Pressley, 1998; Rayner et al., 2002; Reading Coherence Initiative, 1999; Share & Stanovich, 1995; Snow, Burns, & Griffin, 1998; Snowling, 2000; Spear-Swerling & Sternberg, 2001; Stanovich, 2000). These studies were used in a process of establishing converging evidence, a principle that governs the drawing of the conclusion that a particular educational practice is research-based.

The principle of converging evidence is applied in situations requiring a judgment about where the "preponderance of evidence" points. Most areas of science contain competing theories. The extent to which a particular study can be seen as uniquely supporting one particular theory depends on whether other competing explanations have been ruled out. A particular experimental result is never equally relevant to all competing theories. An experiment may be a very strong test of one or two alternative theories but a weak test of others. Thus, research is considered highly convergent when a series of experiments consistently supports a given theory while collectively eliminating the most important competing explanations. Although no single experiment can rule out all alternative explanations, taken collectively, a series of partially diagnostic experiments can lead to a strong conclusion if the data converge.

Contrast this idea of converging evidence with the mistaken view that a problem in science can be solved with a single, crucial experiment, or that a single critical insight can advance theory and overturn all previous knowledge. This view of scientific progress fits nicely with the operation of the news media, in which history is tracked by presenting separate, disconnected "events" in bite-sized units. This is a gross misunderstanding of scientific progress and, if taken too seriously, leads to misconceptions about how conclusions are reached about research-based practices.

One experiment rarely decides an issue, supporting one theory and ruling out all others. Issues are most often decided when the community of scientists gradually begins to agree that the preponderance of evidence supports one alternative theory rather than another. Scientists do not evaluate data from a single experiment that has finally been designed in the perfect way. They most often evaluate data from dozens of experiments, each containing some flaws but providing part of the answer.

Although there are many ways in which an experiment can go wrong (or become confounded), a scientist with experience working on a particular problem usually has a good idea of what most of the critical factors are, and there are usually only a few. The idea of converging evidence tells us to examine the pattern of flaws running through the research literature because the nature of this pattern can either support or undermine the conclusions that we might draw.

For example, suppose that the findings from a number of different experiments were largely consistent in supporting a particular conclusion. Given the imperfect nature of experiments, we would evaluate the extent and nature of the flaws in these studies. If all the experiments were flawed in a similar way, this circumstance would undermine confidence in the conclusions drawn from them because the consistency of the outcome may simply have resulted from a particular, consistent flaw. On the other hand, if all the experiments were flawed in different ways, our confidence in the conclusions increases because it is less likely that the consistency in the results was due to a contaminating factor that confounded all the experiments. As Anderson and Anderson (1996) note, "When a conceptual hypothesis survives many potential falsifications based on different sets of assumptions, we have a robust effect." (p. 742).

Suppose that five different theoretical summaries (call them A, B, C, D, and E) of a given set of phenomena exist at one time and are investigated in a series of experiments. Suppose that one set of experiments represents a strong test of theories A, B, and C, and that the data largely refute theories A and B and support C. Imagine also that another set of experiments is a particularly strong test of theories C, D, and E, and that the data largely refute theories D and E and support C. In such a situation, we would have strong converging evidence for theory C. Not only do we have data supportive of theory C, but we have data that contradict its major competitors. Note that no one experiment tests all the theories, but taken together, the entire set of experiments allows a strong inference.

In contrast, if the two sets of experiments each represent strong tests of B, C, and E, and the data strongly support C and refute B and E, the overall support for theory C would be less strong than in our previous example. The reason is that, although data supporting theory C have been generated, there is no strong evidence ruling out two viable alternative theories (A and D). Thus research is highly convergent when a series of experiments consistently supports a given theory while collectively eliminating the most important competing explanations. Although no single experiment can rule out all alternative explanations, taken collectively, a series of partially diagnostic experiments can lead to a strong conclusion if the data converge in the manner of our first example.
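
The bookkeeping in this strong-inference logic can be made concrete with a small sketch. The following Python snippet is illustrative only; the theories A through E and the two "experiments" are the hypothetical ones from the first scenario above. It tallies which theories survive when each experiment rules out the alternatives it strongly tested but did not support:

```python
# Toy bookkeeping for the strong-inference scenario above. Each invented
# experiment records which theories it strongly tested and which of those
# its data supported.
experiments = [
    {"tested": {"A", "B", "C"}, "supported": {"C"}},
    {"tested": {"C", "D", "E"}, "supported": {"C"}},
]

theories = {"A", "B", "C", "D", "E"}
refuted = set()
for exp in experiments:
    # A theory that was strongly tested but not supported is ruled out.
    refuted |= exp["tested"] - exp["supported"]

print("ruled out:", sorted(refuted))            # ['A', 'B', 'D', 'E']
print("surviving:", sorted(theories - refuted)) # ['C']

# In the weaker second scenario (both experiments test only B, C, and E),
# A and D would remain unrefuted alongside C, and the support for C would
# be correspondingly less convergent.
```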

Increasingly, the combining of evidence from disparate studies to form a conclusion is being done more formally by the use of the statistical technique termed meta-analysis (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunter & Schmidt, 1990; Rosenthal, 1995; Schmidt, 1992; Swanson, 1999), which has been used extensively to establish whether various medical practices are research based. In a medical context, meta-analysis:

involves adding together the data from many clinical trials to create a single pool of data big enough to eliminate much of the statistical uncertainty that plagues individual trials...The great virtue of meta-analysis is that clear findings can emerge from a group of studies whose findings are scattered all over the map. (Plotkin,1996, p. 70)

Meta-analysis is used to determine the research validation of educational practices in just the same way as in medicine. The effects obtained when one practice is compared against another are expressed in a common statistical metric that allows comparison of effects across studies. The findings are then statistically amalgamated in some standard ways (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Swanson, 1999), and a conclusion about differential efficacy is reached if the amalgamation process passes certain statistical criteria. In some cases, of course, no conclusion can be drawn with confidence, and the result of the meta-analysis is inconclusive.
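
As a concrete illustration, here is a minimal sketch of the fixed-effect, inverse-variance pooling that underlies one common form of meta-analysis. The study names, effect sizes, and variances are invented for illustration, and real meta-analyses involve many further steps (coding studies, testing for heterogeneity, checking for publication bias):

```python
import math

# Hypothetical effect sizes (Cohen's d) and sampling variances from five
# invented studies comparing an instructional method against a control.
studies = [
    ("Study 1", 0.45, 0.02),
    ("Study 2", 0.30, 0.05),
    ("Study 3", 0.62, 0.04),
    ("Study 4", 0.18, 0.03),
    ("Study 5", 0.51, 0.06),
]

# Fixed-effect (inverse-variance) pooling: studies with less sampling
# uncertainty receive proportionally more weight.
weights = [1.0 / v for _, _, v in studies]
pooled_d = sum(w * d for (_, d, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate and a 95% confidence interval.
se = math.sqrt(1.0 / sum(weights))
low, high = pooled_d - 1.96 * se, pooled_d + 1.96 * se

print(f"pooled d = {pooled_d:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

The point of the common metric is visible here: once each study's result is expressed as an effect size with a variance, findings "scattered all over the map" can be weighted and combined into a single estimate with a quantifiable degree of uncertainty.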

More and more commentators on the educational research literature are calling for a greater emphasis on meta-analysis as a way of dampening the contentious disputes about conflicting studies that plague education and other behavioral sciences (Kavale & Forness, 1995; Rosnow & Rosenthal, 1989; Schmidt, 1996; Stanovich, 2001; Swanson, 1999). The method is useful for ending disputes that seem to be nothing more than a "he-said, she-said" debate. An emphasis on meta-analysis has often revealed that we actually have more stable and useful findings than is apparent from a perusal of the conflicts in our journals.

The National Reading Panel (2000) found just this in their meta-analysis of the evidence surrounding several issues in reading education. For example, they concluded that the results of a meta-analysis of the results of 66 comparisons from 38 different studies indicated "solid support for the conclusion that systematic phonics instruction makes a bigger contribution to children's growth in reading than alternative programs providing unsystematic or no phonics instruction" (p. 2-84). In another section of their report, the National Reading Panel reported that a meta-analysis of 52 studies of phonemic awareness training indicated that "teaching children to manipulate the sounds in language helps them learn to read. Across the various conditions of teaching, testing, and participant characteristics, the effect sizes were all significantly greater than chance and ranged from large to small, with the majority in the moderate range. Effects of phonemic awareness training on reading lasted well beyond the end of training" (p. 2-5).

A statement by a task force of the American Psychological Association (Wilkinson, 1999) on statistical methods in psychology journals provides an apt summary for this section. The task force stated that investigators should not "interpret a single study's results as having importance independent of the effects reported elsewhere in the relevant literature" (p. 602). Science progresses by convergence upon conclusions. The outcomes of one study can only be interpreted in the context of the present state of the convergence on the particular issue in question.

The logic of the experimental method

Scientific thinking is based on the ideas of comparison, control, and manipulation. In a true experimental study, these characteristics of scientific investigation must be arranged to work in concert.

Comparison alone is not enough to justify a causal inference. In methodology texts, correlational investigations (which involve comparison only) are distinguished from true experimental investigations that warrant much stronger causal inferences because they involve comparison, control, and manipulation. The mere existence of a relationship between two variables does not guarantee that changes in one are causing changes in the other. Correlation does not imply causation.

There are two potential problems with drawing causal inferences from correlational evidence. The first is called the third-variable problem. It occurs when the correlation between the two variables does not indicate a direct causal path between them but arises because both variables are related to a third variable that has not even been measured.

The second problem is called the directionality problem. It creates potential interpretive difficulties because even if two variables have a direct causal relationship, the direction of that relationship is not indicated by the mere presence of the correlation. In short, a correlation between variables A and B could arise because changes in A are causing changes in B or because changes in B are causing changes in A. The mere presence of the correlation does not allow us to decide between these two possibilities.
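
A short simulation makes the first of these problems, the third-variable problem, concrete. In this sketch (the variable names are hypothetical stand-ins chosen for illustration), a third variable Z causally drives both A and B; A and B end up substantially correlated even though neither has any effect on the other:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)

# A third variable Z (imagine family income) causally drives both
# A (books in the home) and B (reading score). A and B never influence
# each other directly in this simulation.
z = [random.gauss(0, 1) for _ in range(5000)]
a = [zi + random.gauss(0, 1) for zi in z]
b = [zi + random.gauss(0, 1) for zi in z]

# Yet A and B come out substantially correlated (about .5 here),
# entirely because both depend on Z.
print(f"corr(A, B) = {pearson(a, b):.2f}")
```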

The heart of the experimental method lies in manipulation and control. In contrast to a correlational study, where the investigator simply observes whether the natural fluctuation in two variables displays a relationship, the investigator in a true experiment manipulates the variable thought to be the cause (the independent variable) and looks for an effect on the variable thought to be the effect (the dependent variable) while holding all other variables constant by control and randomization. This method removes the third-variable problem, which arises because, in the natural world, many different things are related. The experimental method may be viewed as a way of prying apart these naturally occurring relationships. It does so by isolating one particular variable (the hypothesized cause), manipulating it, and holding everything else constant (control).

When manipulation is combined with a procedure known as random assignment (in which the subjects themselves do not determine which experimental condition they will be in but, instead, are randomly assigned to one of the experimental groups), scientists can rule out alternative explanations of data patterns. By using manipulation, experimental control, and random assignment, investigators construct stronger comparisons so that the outcome eliminates alternative theories and explanations.
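
This logic can be illustrated with a toy simulation of a randomized experiment. All the numbers here are invented: students differ in a pre-existing ability score, a hypothetical instructional method adds five points, and random assignment makes the two groups comparable before the manipulation, so the difference in group means recovers the causal effect:

```python
import random
from statistics import mean

random.seed(2)

# Each simulated student has a pre-existing ability score -- a potential
# confound that random assignment must balance across groups.
students = [random.gauss(100, 15) for _ in range(1000)]

# Random assignment: the students themselves do not choose their group.
random.shuffle(students)
treatment, control = students[:500], students[500:]

# Manipulation: a hypothetical instructional method adds 5 points,
# applied to the treatment group only (the independent variable).
treatment = [score + 5 for score in treatment]

# Because assignment was random, the groups were comparable beforehand,
# so the difference in mean outcomes estimates the causal effect (about 5).
print(f"treatment mean:   {mean(treatment):.1f}")
print(f"control mean:     {mean(control):.1f}")
print(f"estimated effect: {mean(treatment) - mean(control):.1f}")
```

Had the students chosen their own groups (say, the most able volunteering for the new method), the group difference would mix the instructional effect with pre-existing ability, and no clean causal inference would be possible.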

The need for both correlational methods and true experiments

As strong as they are methodologically, studies employing true experimental logic are not the only type that can be used to draw conclusions. Correlational studies have value. The results from many different types of investigation, including correlational studies, can be amalgamated to derive a general conclusion, and the basis for that conclusion rests on the convergence observed from the variety of methods used. This is most certainly true in classroom and curriculum research. It is necessary to amalgamate the results not only from experimental investigations but also from correlational studies, nonequivalent control group studies, time series designs, and various other quasi-experimental and multivariate correlational designs. All have their strengths and weaknesses. For example, it is often (but not always) the case that experimental investigations are high in internal validity but limited in external validity, whereas correlational studies are often high in external validity but low in internal validity.

Internal validity concerns whether we can infer a causal effect for a particular variable. The more a study employs the logic of a true experiment (i.e., includes manipulation, control, and randomization), the more we can make a strong causal inference. External validity concerns the generalizability of the conclusion to the population and setting of interest. Internal and external validity are often traded off across different methodologies. Experimental laboratory investigations are high in internal validity but may not fully address concerns about external validity. Field classroom investigations, on the other hand, are often quite high in external validity but because of the logistical difficulties involved in carrying them out, they are often quite low in internal validity. That is why we need to look for a convergence of results, not just consistency from one method. Convergence increases our confidence in the external and internal validity of our conclusions.

Again, this underscores why correlational studies can contribute to knowledge. First, some variables simply cannot be manipulated for ethical reasons (for instance, human malnutrition or physical disabilities). Other variables, such as birth order, sex, and age, are inherently correlational because they cannot be manipulated, and therefore the scientific knowledge concerning them must be based on correlational evidence. Finally, logistical difficulties in classroom and curriculum research often make it impossible to achieve the logic of the true experiment. However, this circumstance is not unique to educational or psychological research. Astronomers obviously cannot manipulate all the variables affecting the objects they study, yet they are able to arrive at conclusions.

Complex correlational techniques are essential in the absence of experimental research because statistics such as multiple regression, path analysis, and structural equation modeling allow for the partial control of third variables when those variables can be measured. These statistics allow us to recalculate the correlation between two variables after the influence of other variables is removed. If a potential third variable can be measured, complex correlational statistics can help us determine whether that third variable is determining the relationship. These correlational statistics and designs help to rule out certain causal hypotheses, even if they cannot demonstrate the true causal relation definitively.
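
Continuing the earlier third-variable sketch, a first-order partial correlation shows how measuring a third variable lets us statistically remove its influence. Again, the data are invented for illustration, and a real analysis would use a statistics package rather than hand-rolled formulas:

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def partial_corr(xs, ys, zs):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

random.seed(3)
# As in the earlier sketch, a measured third variable Z drives both A and B.
z = [random.gauss(0, 1) for _ in range(5000)]
a = [zi + random.gauss(0, 1) for zi in z]
b = [zi + random.gauss(0, 1) for zi in z]

# The raw correlation is sizable, but controlling for Z removes it,
# pointing to Z rather than a direct A-B causal path.
print(f"raw corr(A, B)         = {pearson(a, b):.2f}")        # about .5
print(f"partial corr(A, B | Z) = {partial_corr(a, b, z):.2f}") # about 0
```

The partial correlation dropping to near zero is exactly the pattern that would lead an investigator to rule out a direct causal hypothesis in favor of the third variable, though only when that third variable has actually been measured.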

Stages of scientific investigation: The Role of Case Studies and Qualitative Investigations

The educational literature includes many qualitative investigations that focus less on issues of causal explanation and variable control and more on thick description, in the manner of the anthropologist (Geertz, 1973, 1979). The context of a person's behavior is described as much as possible from the standpoint of the participant. Many different fields (e.g., anthropology, psychology, education) contain case studies where the focus is detailed description and contextualization of the situation of a single participant (or very few participants).

The usefulness of case studies and qualitative investigations is strongly determined by how far scientific investigation has advanced in a particular area. The insights gained from case studies or qualitative investigations may be quite useful in the early stages of an investigation of a certain problem. They can help us determine which variables deserve more intense study by drawing attention to heretofore unrecognized aspects of a person's behavior and by suggesting how understanding of behavior might be sharpened by incorporating the participant's perspective.

However, when we move from the early stages of scientific investigation, where case studies may be very useful, to the more mature stages of theory testing--where adjudicating between causal explanations is the main task--the situation changes drastically. Case studies and qualitative description are not useful at the later stages of scientific investigation because they cannot be used to confirm or disconfirm a particular causal theory. They lack the comparative information necessary to rule out alternative explanations.

Where qualitative investigations are useful relates strongly to a distinction in philosophy of science between the context of discovery and the context of justification. Qualitative research, case studies, and clinical observations support a context of discovery where, as Levin and O'Donnell (2000) note in an educational context, such research must be regarded as "preliminary/exploratory, observational, hypothesis generating" (p. 26). They rightly point to the essential importance of qualitative investigations because "in the early stages of inquiry into a research topic, one has to look before one can leap into designing interventions, making predictions, or testing hypotheses" (p. 26). The orientation provided by qualitative investigations is critical in such cases. Even more important, the results of quantitative investigations--which must sometimes abstract away some of the contextual features of a situation--are often contextualized by the thick situational description provided by qualitative work.

However, in the context of justification, variables must be measured precisely, large groups must be tested to make sure the conclusion generalizes and, most importantly, many variables must be controlled because alternative causal explanations must be ruled out. Gersten (2001) summarizes the value of qualitative research accurately when he says that "despite the rich insights they often provide, descriptive studies cannot be used as evidence for an intervention's efficacy...descriptive research can only suggest innovative strategies to teach students and lay the groundwork for development of such strategies" (p. 47). Qualitative research does, however, help to identify fruitful directions for future experimental studies.

Nevertheless, here is why sole reliance on qualitative techniques to determine the effectiveness of curricula and instructional strategies is problematic. A researcher wants to do one of two things.

Objective A

The researcher wishes to make some type of statement about a relationship, however minimal. That is, the researcher at least wants to use terms like greater than, less than, or equal to--to say that such and such an educational program or practice is better than another. "Better than" and "worse than" are, of course, quantitative statements--and, in the context of issues about what leads to or fosters greater educational achievement, they are causal statements as well. As quantitative causal statements, the support for such claims obviously must be found in the experimental logic that has been outlined above. To justify such statements, the researcher must adhere to the canons of quantitative research logic.

Objective B

The researcher follows an exclusively qualitative path that abjures statements about relationships and never uses comparative terms of magnitude. The investigator simply engages in thick description of a domain--description that may well prompt hypotheses when later work moves on to the quantitative methods necessary to justify a causal inference.

Investigators pursuing Objective B are doing essential work. They provide qualitative information and suggestions for richer hypotheses to study. In education, however, investigators sometimes claim to be pursuing Objective B but slide over into Objective A without realizing they have made a crucial switch. They want to make comparative, or quantitative, statements, but have not carried out the proper types of investigation to justify them. They want to say that a certain educational program is better than another (that is, it causes better school outcomes). They want to give educational strictures that are assumed to hold for a population of students, not just for the single or few individuals who were the objects of the qualitative study. They want to condemn an educational practice (and, by inference, deem an alternative quantitatively and causally better). But instead of taking the necessary course of pursuing Objective A, they carry out their investigation in the manner of Objective B.

Let's recall why the use of single case or qualitative description as evidence in support of a particular causal explanation is inappropriate. The idea of alternative explanations is critical to an understanding of theory testing. The goal of experimental design is to structure events so that support of one particular explanation simultaneously disconfirms other explanations. Scientific progress can occur only if the data that are collected rule out some explanations. Science sets up conditions for the natural selection of ideas. Some survive empirical testing and others do not.

This is the honing process by which ideas are sifted so that those that contain the most truth are found. But there must be selection in this process: data collected as support for a particular theory must not leave many other alternative explanations as equally viable candidates. For this reason, scientists construct control or comparison groups in their experimentation. These groups are formed so that, when their results are compared with those from an experimental group, some alternative explanations are ruled out.
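A small illustration, again with invented numbers, of why the comparison group does the ruling out. In this toy model every child improves simply by getting a year older; a case-study reading of the program group alone would credit the program, while the control group exposes the maturation explanation.

```python
import random
import statistics

random.seed(2)

def year_of_growth(pretest):
    # Assume every child gains about 10 points in a year regardless of
    # program -- the "maturation" alternative explanation.
    return [score + random.gauss(10, 3) for score in pretest]

pre_program = [random.gauss(50, 8) for _ in range(25)]
pre_control = [random.gauss(50, 8) for _ in range(25)]

post_program = year_of_growth(pre_program)
post_control = year_of_growth(pre_control)

gain_program = statistics.mean(post_program) - statistics.mean(pre_program)
gain_control = statistics.mean(post_control) - statistics.mean(pre_control)

# Case-study logic sees only the first number and credits the program;
# experimental logic compares the two and rules the program out.
print(f"program group gain: {gain_program:.1f}")
print(f"control group gain: {gain_control:.1f}")
print(f"effect over and above maturation: "
      f"{gain_program - gain_control:.1f}")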

Case studies and qualitative description lack the comparative information necessary to prove that a particular theory or educational practice is superior, because they fail to test an alternative; they rule nothing out. Take, for example, the seminal work of Jean Piaget. His case studies were critical in pointing developmental psychology in new and important directions, but many of his theoretical conclusions and causal explanations did not hold up in controlled experiments (Bjorklund, 1995; Goswami, 1998; Siegler, 1991).

In summary, as educational psychologist Richard Mayer (2000) notes, "the domain of science includes both some quantitative and qualitative methodologies" (p. 39), and the key is to use each where it is most effective (see Kamil, 1995). Likewise, in their recent book on research-based best practices in comprehension instruction, Block and Pressley (2002) argue that future progress in understanding how comprehension works will depend on a healthy interaction between qualitative and quantitative approaches. They point out that getting an initial idea of the comprehension processes involved in hypertext and Web-based environments will involve detailed descriptive studies using think-alouds and assessments of qualitative decision making. Qualitative studies of real reading environments will set the stage for more controlled investigations of causal hypotheses.

The progression to more powerful methods

A final useful concept is the progression to more powerful research methods ("more powerful" in this context meaning more diagnostic of a causal explanation). Research on a particular problem often proceeds from weaker methods (ones less likely to yield a causal explanation) to ones that allow stronger causal inferences. For example, interest in a particular hypothesis may originally emerge from a particular case study of unusual interest. This is the proper role for case studies: to suggest hypotheses for further study with more powerful techniques and to motivate scientists to apply more rigorous methods to a research problem. Thus, following the case studies, researchers often undertake correlational investigations to verify whether the link between variables is real rather than the result of the peculiarities of a few case studies. If the correlational studies support the relationship between relevant variables, then researchers will attempt experiments in which variables are manipulated in order to isolate a causal relationship between the variables.
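This progression can be mimicked in a toy simulation (hypothetical variables; purely illustrative). In the observational stage, a hidden third variable, home support, produces a correlation between program enrollment and reading scores even though the program itself does nothing; random assignment in the experimental stage breaks that link.

```python
import random
import statistics

random.seed(3)

def corr(xs, ys):
    # Pearson correlation, computed from scratch for self-containment.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 500
home_support = [random.gauss(0, 1) for _ in range(n)]  # hidden variable

# Observational stage: supportive homes both choose the program more
# often and produce better readers; the program itself does nothing.
enrolled = [1 if h + random.gauss(0, 1) > 0 else 0 for h in home_support]
score = [70 + 8 * h + random.gauss(0, 5) for h in home_support]
print(f"correlational r(enrolled, score): {corr(enrolled, score):.2f}")

# Experimental stage: random assignment severs the link to home support,
# so the group difference collapses toward zero.
assigned = [random.randint(0, 1) for _ in range(n)]
score_rct = [70 + 8 * h + random.gauss(0, 5) for h in home_support]
treated = [s for s, a in zip(score_rct, assigned) if a == 1]
untreated = [s for s, a in zip(score_rct, assigned) if a == 0]
print(f"experimental difference: "
      f"{statistics.mean(treated) - statistics.mean(untreated):.2f}")
```

The correlational study correctly flags a relationship worth pursuing; only the manipulation reveals that the relationship is not causal.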

Summary of principles that support research-based inferences about best practice

Our sketch of the principles that support research-based inferences about best practice in education has revealed that:

  • Science progresses by investigating solvable, or testable, empirical problems.
  • To be testable, a theory must yield predictions that could possibly be shown to be wrong.
  • The concepts in the theories in science evolve as evidence accumulates. Scientific knowledge is not infallible knowledge, but knowledge that has at least passed some minimal tests. The theories behind research-based practice can be proven wrong, and therefore they contain a mechanism for growth and advancement.
  • Theories are tested by systematic empiricism. The data obtained from empirical research are in the public domain in the sense that they are presented in a manner that allows replication and criticism by other scientists.
  • Data and theories in science are considered in the public domain only after publication in peer-reviewed scientific journals.
  • Empiricism is systematic because it strives for the logic of control and manipulation that characterizes a true experiment.
  • Correlational techniques are helpful when the logic of an experiment cannot be approximated, but because these techniques only help rule out hypotheses, they are considered weaker than true experimental methods.
  • Researchers use many different methods to arrive at their conclusions, and the strengths and weaknesses of these methods vary. Most often, conclusions are drawn only after a slow accumulation of data from many studies.

Scientific thinking in educational practice: Reason-based practice in the absence of direct evidence

Some areas in educational research, to date, lack a research-based consensus, for a number of reasons. Perhaps the problem or issue has not been researched extensively. Perhaps research into the issue is in the early stages of investigation, where descriptive studies are suggesting interesting avenues, but no controlled research justifying a causal inference has been completed. Perhaps many correlational studies and experiments have been conducted on the issue, but the research evidence has not yet converged in a consistent direction.

Even teachers who know the principles of scientific evaluation described earlier will sometimes find that the research literature fails to give them clear direction. They will have to fall back on their own reasoning processes, informed by their own teaching experiences. In those cases, teachers still have many ways of reasoning scientifically.

Tracing the link from scientific research to scientific thinking in practice

Scientific thinking in practice can be carried out in several ways. Earlier we discussed different types of professional publications that teachers can read to improve their practice. The most important defining feature of these outlets is whether they are peer reviewed. Another defining feature is whether the publication contains primary research rather than presenting opinion pieces or essays on educational issues. If a journal presents primary research, we can evaluate the research using the formal scientific principles outlined above.

If the journal is presenting opinion pieces about what constitutes best practice, we need to trace the link between those opinions and archival peer-reviewed research. We would look to see whether the authors have based their opinions on peer-reviewed research by reading the reference list. Do the authors provide a significant amount of original research citations (is their opinion based on more than one study)? Do the authors cite work other than their own (have the results been replicated)? Are the cited journals peer-reviewed? For example, in the case of best practice for reading instruction, if we came across an article in an opinion-oriented journal such as Intervention in School and Clinic, we might look to see if the authors have cited work that has appeared in such peer-reviewed journals as the Journal of Educational Psychology, Elementary School Journal, Journal of Literacy Research, Scientific Studies of Reading, or the Journal of Learning Disabilities.
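These questions amount to a checklist, and a checklist can be written down explicitly. The sketch below is only a schematic illustration: the journal set and the article data are invented, and no such heuristic could substitute for reading the cited work itself.

```python
# Hypothetical checklist for vetting an opinion piece's reference list.
# The journal list and the example article are invented for illustration.

PEER_REVIEWED = {
    "Journal of Educational Psychology",
    "Elementary School Journal",
    "Journal of Literacy Research",
    "Scientific Studies of Reading",
    "Journal of Learning Disabilities",
}

def vet_reference_list(citations, author):
    """citations: list of (cited_author, journal) tuples."""
    return {
        "cites more than one study": len(citations) > 1,
        "cites work other than the author's own":
            any(cited != author for cited, _ in citations),
        "cites peer-reviewed journals":
            any(journal in PEER_REVIEWED for _, journal in citations),
    }

example = [
    ("Smith", "Journal of Educational Psychology"),
    ("Jones", "Scientific Studies of Reading"),
    ("Doe", "Unrefereed Newsletter"),
]
for check, passed in vet_reference_list(example, "Doe").items():
    print(f"{check}: {'yes' if passed else 'no'}")
```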

These same evaluative criteria can be applied to presenters at professional development workshops or papers given at conferences. Are they conversant with primary research in the area on which they are presenting? Can they provide evidence for their methods and does that evidence represent a scientific consensus? Do they understand what is required to justify causal statements? Are they open to the possibility that their claims could be proven false? What evidence would cause them to shift their thinking?

An important principle of scientific evaluation--the connectivity principle (Stanovich, 2001)--can be generalized to scientific thinking in the classroom. Suppose a teacher comes upon a new teaching method, curriculum component, or process. The method is advertised as totally new, which conveniently explains why no direct empirical evidence for it exists. The lack of direct evidence should be grounds for suspicion, but it should not immediately rule the method out. The principle of connectivity means that the teacher now has another question to ask: "OK, there is no direct evidence for this method, but how is the theory behind it (the causal model of the effects it has) connected to the research consensus in the literature surrounding this curriculum area?" Even in the absence of direct empirical evidence on a particular method or technique, a theoretical link to the consensus in the existing literature could support the method.

For further tips on translating research into classroom practice, see Warby, Greene, Higgins, & Lovitt (1999). They present a format for selecting, reading, and evaluating research articles, and then importing the knowledge gained into the classroom.

Let's take an imaginary example from the domain of treatments for children with extreme reading difficulties. Imagine two treatments have been introduced to a teacher. No direct empirical tests of efficacy have been carried out using either treatment. The first, Treatment A, is a training program to facilitate the awareness of the segmental nature of language at the phonological level. The second, Treatment B, involves giving children training in vestibular sensitivity by having them walk on balance beams while blindfolded. Treatments A and B are equal in one respect--neither has had a direct empirical test of its efficacy, which reflects badly on both. Nevertheless, one of the treatments has the edge when it comes to the principle of connectivity. Treatment A makes contact with a broad consensus in the research literature that children with extraordinary reading difficulties are hampered because of insufficiently developed awareness of the segmental structure of language. Treatment B is not connected to any corresponding research literature consensus. Reason dictates that Treatment A is a better choice, even though neither has been directly tested.
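The reasoning here is a simple decision rule, sketched below for illustration only (the labels are ours, not a validated instrument): direct evidence trumps everything; absent direct evidence, connectivity to the research consensus is the tiebreaker.

```python
# A hedged sketch of the connectivity principle as a decision rule.
# The "evidence" flags are invented labels, not real effect sizes.

def appraise(method, direct_evidence, connected_to_consensus):
    if direct_evidence:
        return f"{method}: judge it on the direct evidence"
    if connected_to_consensus:
        return (f"{method}: no direct test yet, but its causal model "
                f"fits the research consensus -- a reasoned best bet")
    return (f"{method}: no direct test and no link to any research "
            f"consensus -- treat claims with suspicion")

print(appraise("Treatment A (phonological awareness)", False, True))
print(appraise("Treatment B (blindfolded balance beams)", False, False))
```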

Direct connections with research-based evidence and use of the connectivity principle when direct empirical evidence is absent give us necessary cross-checks on some of the pitfalls that arise when we rely solely on personal experience. Drawing upon personal experience is necessary and desirable in a veteran teacher, but it is not sufficient for making critical judgments about the effectiveness of an instructional strategy or curriculum. The insufficiency of personal experience becomes clear if we consider that the educational judgments--even of veteran teachers--often are in conflict. That is why we have to adjudicate conflicting knowledge claims using the scientific method.

Let us consider two further examples that demonstrate why we need controlled experimentation to verify even the most seemingly definitive personal observations. In the 1990s, considerable media and professional attention were directed at a method for aiding the communicative capacity of autistic individuals. This method is called facilitated communication. Autistic individuals who had previously been nonverbal were reported to have typed highly literate messages on a keyboard when their hands and arms were supported over the typewriter by a so-called facilitator. These startlingly verbal performances by autistic children who had previously shown very limited linguistic behavior raised incredible hopes among many parents of autistic children.

Unfortunately, claims for the efficacy of facilitated communication were disseminated by many media outlets before any controlled studies had been conducted. Since then, many studies have appeared in journals in speech science, linguistics, and psychology, and each study has unequivocally demonstrated the same thing: the autistic child's performance is dependent upon tactile cueing from the facilitator. In the experiments, it was shown that when both child and facilitator were looking at the same drawing, the child typed the correct name of the drawing. When the viewing was occluded so that the child and the facilitator were shown different drawings, the child typed the name of the facilitator's drawing, not the one that the child herself was looking at (Beck & Pirovano, 1996; Burgess, Kirsch, Shane, Niederauer, Graham, & Bacon, 1998; Hudson, Melita, & Arnold, 1993; Jacobson, Mulick, & Schwartz, 1995; Wheeler, Jacobson, Paglieri, & Schwartz, 1993). The experimental studies directly contradicted the extensive case studies of the facilitators' experiences. These individuals invariably deny that they have inadvertently cued the children. Their personal experience, honest and heartfelt though it is, suggests the wrong model for explaining this outcome. The case study evidence told us something about the social connections between the children and their facilitators, but that is different from what the controlled experimental studies provided: direct tests of the claim that the technique unlocks hidden linguistic skills in these children. Even if the claim had turned out to be true, its verification would have come not from case studies or personal experience but from the necessary controlled studies.
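The logic of the occluded-viewing design can be simulated in a few lines (invented trial counts; a caricature of the real procedure, not a reproduction of any cited study). If the facilitator is the true source of the messages, the typed word tracks the facilitator's drawing whenever the two views differ, which is exactly the pattern the controlled studies reported.

```python
import random

random.seed(4)
PICTURES = ["dog", "cup", "tree", "ball"]

def trial(same_view, facilitator_is_source=True):
    child_sees = random.choice(PICTURES)
    if same_view:
        facilitator_sees = child_sees
    else:
        facilitator_sees = random.choice(
            [p for p in PICTURES if p != child_sees])
    # The occluded condition reveals who actually authors the message.
    typed = facilitator_sees if facilitator_is_source else child_sees
    return typed == child_sees, typed == facilitator_sees

n = 1000
for same_view in (True, False):
    child_hits = fac_hits = 0
    for _ in range(n):
        c, f = trial(same_view)
        child_hits += c
        fac_hits += f
    label = "same drawing      " if same_view else "different drawings"
    print(f"{label}: matches child {child_hits}/{n}, "
          f"matches facilitator {fac_hits}/{n}")
```

Note the design choice: only the trials where the two views differ are diagnostic, because only there do the two causal explanations make different predictions.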

Another example of the need for controlled experimentation to test the insights gleaned from personal experience is provided by the concept of learning styles--the idea that various modality preferences (or variants of this theme in terms of analytic/holistic processing or "learning styles") will interact with instructional methods, allowing teachers to individualize learning. The idea seems to "feel right" to many of us and has some face validity, but it has never been demonstrated to work in practice. Its modern incarnation (see Gersten, 2001; Spear-Swerling & Sternberg, 2001) takes a particularly harmful form, one in which students identified as auditory learners are matched with phonics instruction and visual and/or kinesthetic learners are matched with holistic instruction. The newest form is particularly troublesome because the major syntheses of reading research demonstrate that many children can benefit from phonics-based instruction, not just "auditory" learners (National Reading Panel, 2000; Rayner et al., 2002; Stanovich, 2000). Excluding students identified as "visual/kinesthetic" learners from effective phonics instruction is a bad instructional practice--bad because it is not merely unsupported by research but actually contradicted by it.

A thorough review of the literature by Arter and Jenkins (1979) found no consistent evidence for the idea that modality strengths and weaknesses could be identified in a reliable and valid way that warranted differential instructional prescriptions. A review of the research evidence by Tarver and Dawson (1978) found likewise that the idea of modality preferences did not hold up to empirical scrutiny. They concluded, "This review found no evidence supporting an interaction between modality preference and method of teaching reading" (p. 17). Kampwirth and Bates (1980) confirmed the conclusions of the earlier reviews, although they stated their conclusions a little more baldly: "Given the rather general acceptance of this idea, and its common-sense appeal, one would presume that there exists a body of evidence to support it. Unfortunately...no such firm evidence exists" (p. 598).

More recently, the idea of modality preferences (also referred to as learning styles, holistic versus analytic processing styles, and right versus left hemispheric processing) has again surfaced in the reading community. The recent implementations focus more on teaching to strengths than on remediating weaknesses (the latter being more the focus of the earlier efforts in the learning disabilities field). The research of the 1980s was summarized in an article by Steven Stahl (1988). His conclusions are largely negative because his review of the literature indicates that the methods used in actual implementations of the learning styles idea have not been validated. Stahl concludes: "As intuitively appealing as this notion of matching instruction with learning style may be, past research has turned up little evidence supporting the claim that different teaching methods are more or less effective for children with different reading styles" (p. 317).

Obviously, such research reviews cannot prove that there is no possible implementation of the idea of learning styles that could work. However, the burden of proof in science rests on the investigator who is making a new claim about the nature of the world. It is not incumbent upon critics of a particular claim to show that it "couldn't be true." The question teachers might ask is, "Have the advocates for this new technique provided sufficient proof that it works?" Their burden of responsibility is to provide proof that their favored methods work. Teachers should not allow curricular advocates to avoid this responsibility by introducing confusion about where the burden of proof lies. For example, it is totally inappropriate and illogical to ask "Has anyone proved that it can't work?" One does not "prove a negative" in science. Instead, hypotheses are stated, and then must be tested by those asserting the hypotheses.

Reason-based practice in the classroom

Effective teachers engage in scientific thinking in their classrooms in a variety of ways: when they assess and evaluate student performance, develop Individual Education Plans (IEPs) for their students with disabilities, reflect on their practice, or engage in action research. For example, consider the assessment and evaluation activities in which teachers engage. The scientific mechanisms of systematic empiricism--iterative testing of hypotheses that are revised after the collection of data--can be seen when teachers plan for instruction: they evaluate their students' previous knowledge, develop hypotheses about the best methods for attaining lesson objectives, develop a teaching plan based on those hypotheses, observe the results, and base further instruction on the evidence collected.

This assessment cycle looks even more like the scientific method when teachers (as part of a multidisciplinary team) are developing and implementing an IEP for a student with a disability. The team must assess and evaluate the student's learning strengths and difficulties, develop hypotheses about the learning problems, select curriculum goals and objectives, base instruction on the hypotheses and the goals selected, teach, and evaluate the outcomes of that teaching. If the teaching is successful (goals and objectives are attained), the cycle continues with new goals. If the teaching has been unsuccessful (goals and objectives have not been achieved), the cycle begins again with new hypotheses. We can also see the principle of converging evidence here. No one piece of evidence might be decisive, but collectively the evidence might strongly point in one direction.
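The cycle can even be written as a loop. The toy model below is purely schematic (the skills, numbers, and thresholds are all invented): the teacher's first hypothesis about a student's reading difficulty fails to produce gains, so the data force a revised hypothesis, which succeeds.

```python
import random

random.seed(5)

# Toy model: the student's true bottleneck is decoding, but the teacher
# does not know that and must discover it by iterative hypothesis testing.
student = {"decoding": 0.30, "vocabulary": 0.70}

def assess(student):
    # A noisy probe of overall reading, limited by the weakest skill.
    return min(student.values()) + random.gauss(0, 0.01)

def teach(student, focus):
    student[focus] = min(1.0, student[focus] + 0.15)

goal = 0.60
focus = "vocabulary"  # the teacher's initial (wrong) hypothesis

for cycle in range(1, 7):
    before = assess(student)
    teach(student, focus)
    after = assess(student)
    print(f"cycle {cycle}: taught {focus:10s} gain {after - before:+.2f}")
    if after >= goal:
        print("goal reached; set new goals and continue the cycle")
        break
    if after - before < 0.05:
        # The data do not support the hypothesis: revise it.
        focus = "decoding" if focus == "vocabulary" else "vocabulary"
```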

Scientific thinking in practice occurs when teachers engage in action research. Action research is research into one's own practice that has, as its main aim, the improvement of that practice. Stokes (1997) discusses how many advances in science came about as a result of "use-inspired research" which draws upon observations in applied settings. According to McNiff, Lomax, and Whitehead (1996), action research shares several characteristics with other types of research: "it leads to knowledge, it provides evidence to support this knowledge, it makes explicit the process of enquiry through which knowledge emerges, and it links new knowledge with existing knowledge" (p. 14). Notice the links to several important concepts: systematic empiricism, publicly verifiable knowledge, converging evidence, and the connectivity principle.

Teachers and researchers: Commonality in a "what works" epistemology

Many educational researchers have drawn attention to the epistemological commonalities between researchers and teachers (Gersten, Vaughn, Deshler, & Schiller, 1997; Stanovich, 1993/1994). A "what works" epistemology is a critical source of underlying unity in the world views of educators and researchers (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). Empiricism, broadly construed (as opposed to the caricature of white coats, numbers, and test tubes that is often used to discredit scientists), is about watching the world, manipulating it when possible, observing outcomes, and trying to associate outcomes with features observed and with manipulations. This is what the best teachers do. And this is true despite the grain of truth in the statement that "teaching is an art." As Berliner (1987) notes: "No one I know denies the artistic component to teaching. I now think, however, that such artistry should be research-based. I view medicine as an art, but I recognize that without its close ties to science it would be without success, status, or power in our society. Teaching, like medicine, is an art that also can be greatly enhanced by developing a close relationship to science" (p. 4).

In his review of the work of the Committee on the Prevention of Reading Difficulties for the National Research Council of the National Academy of Sciences (Snow, Burns, & Griffin, 1998), Pearson (1999) warned educators that resisting evaluation by hiding behind the "art of teaching" defense will eventually threaten teacher autonomy. Teachers need creativity, but they also need to demonstrate that they know what evidence is, and that they recognize that they practice in a profession based in behavioral science. While making it absolutely clear that he opposes legislative mandates, Pearson (1999) cautions:

We have a professional responsibility to forge best practice out of the raw materials provided by our most current and most valid readings of research...If professional groups wish to retain the privileges of teacher prerogative and choice that we value so dearly, then the price we must pay is constant attention to new knowledge as a vehicle for fine-tuning our individual and collective views of best practice. This is the path that other professions, such as medicine, have taken in order to maintain their professional prerogative, and we must take it, too. My fear is that if the professional groups in education fail to assume this responsibility squarely and openly, then we will find ourselves victims of the most onerous of legislative mandates (p. 245).

Those hostile to a research-based approach to educational practice like to imply that the insights of teachers and those of researchers conflict. Nothing could be farther from the truth. Take reading, for example. Teachers often do observe exactly what the research shows--that most of their children who are struggling with reading have trouble decoding words. In an address to the Reading Hall of Fame at the 1996 meeting of the International Reading Association, Isabel Beck (1996) illustrated this point by reviewing her own intellectual history (see Beck, 1998, for an archival version). She relates her surprise upon coming as an experienced teacher to the Learning Research and Development Center at the University of Pittsburgh and finding "that there were some people there (psychologists) who had not taught anyone to read, yet they were able to describe phenomena that I had observed in the course of teaching reading" (Beck, 1996, p. 5). In fact, what Beck was observing was the triangulation of two empirical approaches to the same issue--two perspectives on the same underlying reality. And she also came to appreciate how these two perspectives fit together: "What I knew were a number of whats--what some kids, and indeed adults, do in the early course of learning to read. And what the psychologists knew were some whys--why some novice readers might do what they do" (pp. 5-6).

Beck speculates on why the disputes about early reading instruction have dragged on so long without resolution and posits that it is due to the power of a particular kind of evidence--evidence from personal observation. The determination of whole language advocates is no doubt sustained because "people keep noticing the fact that some children or perhaps many children--in any event a subset of children--especially those who grow up in print-rich environments, don't seem to need much more of a boost in learning to read than to have their questions answered and to point things out to them in the course of dealing with books and various other authentic literacy acts" (Beck, 1996, p. 8). But Beck points out that it is equally true that proponents of the importance of decoding skills are also fueled by personal observation: "People keep noticing the fact that some children or perhaps many children--in any event a subset of children--don't seem to figure out the alphabetic principle, let alone some of the intricacies involved without having the system directly and systematically presented" (p. 8). But clearly we have lost sight of the basic fact that the two observations are not mutually exclusive--one doesn't negate the other. This is just the type of situation for which the scientific method was invented: a situation requiring a consensual view, triangulated across differing observations by different observers.

Teachers, like scientists, are ruthless pragmatists (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). They believe that some explanations and methods are better than others. They think there is a real world out there--a world in flux, obviously--but still one that is trackable by triangulating observations and observers. They believe that there are valid, if fallible, ways of finding out which educational practices are best. Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom.

  • Adams, M. J. (1990). Beginning to read: Thinking and learning about print . Cambridge, MA: MIT Press.
  • Adler, J. E. (1998, January). Open minds and the argument from ignorance. Skeptical Inquirer , 22 (1), 41-44.
  • Anderson, C. A., & Anderson, K. B. (1996). Violent crime rate studies in philosophical context: A destructive testing approach to heat and Southern culture of violence effects. Journal of Personality and Social Psychology , 70 , 740-756.
  • Anderson, R. C., Hiebert, E. H., Scott, J., & Wilkinson, I. (1985). Becoming a nation of readers . Washington, D. C.: National Institute of Education.
  • Arter, A., & Jenkins, J. (1979). Differential diagnosis-prescriptive teaching: A critical appraisal. Review of Educational Research , 49 , 517-555.
  • Beck, A. R., & Pirovano, C. M. (1996). Facilitated communications' performance on a task of receptive language with children and youth with autism. Journal of Autism and Developmental Disorders , 26 , 497-512.
  • Beck, I. L. (1996, April). Discovering reading research: Why I didn't go to law school . Paper presented at the Reading Hall of Fame, International Reading Association, New Orleans.
  • Beck, I. (1998). Understanding beginning reading: A journey through teaching and research. In J. Osborn & F. Lehr (Eds.), Literacy for all: Issues in teaching and learning (pp. 11-31). New York: Guilford Press.
  • Berliner, D. C. (1987). Knowledge is power: A talk to teachers about a revolution in the teaching profession. In D. C. Berliner & B. V. Rosenshine (Eds.), Talks to teachers (pp. 3-33). New York: Random House.
  • Bjorklund, D. F. (1995). Children's thinking: Developmental function and individual differences (Second Edition) . Pacific Grove, CA: Brooks/Cole.
  • Block, C. C., & Pressley, M. (Eds.). (2002). Comprehension instruction: Research-based best practices . New York: Guilford Press.
  • Bronowski, J. (1956). Science and human values . New York: Harper & Row.
  • Bronowski, J. (1973). The ascent of man . Boston: Little, Brown.
  • Bronowski, J. (1977). A sense of the future . Cambridge: MIT Press.
  • Burgess, C. A., Kirsch, I., Shane, H., Niederauer, K., Graham, S., & Bacon, A. (1998). Facilitated communication as an ideomotor response. Psychological Science , 9 , 71-74.
  • Chard, D. J., & Osborn, J. (1999). Phonics and word recognition in early reading programs: Guidelines for accessibility. Learning Disabilities Research & Practice , 14 , 107-117.
  • Cooper, H. & Hedges, L. V. (Eds.), (1994). The handbook of research synthesis . New York: Russell Sage Foundation.
  • Cunningham, P. M., & Allington, R. L. (1994). Classrooms that work: They all can read and write . New York: HarperCollins.
  • Dawkins, R. (1998). Unweaving the rainbow . Boston: Houghton Mifflin.
  • Dennett, D. C. (1995). Darwin's dangerous idea: Evolution and the meanings of life . New York: Simon & Schuster.
  • Dennett, D. C. (1999/2000, Winter). Why getting it right matters. Free Inquiry , 20 (1), 40-43.
  • Ehri, L. C., Nunes, S., Stahl, S., & Willows, D. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel's Meta-Analysis. Review of Educational Research , 71 , 393-447.
  • Foster, E. A., Jobling, M. A., Taylor, P. G., Donnelly, P., de Knijff, P., Mieremet, R., Zerjal, T., & Tyler-Smith, C. (1998). Jefferson fathered slave's last child. Nature , 396 , 27-28.
  • Fraenkel, J. R., & Wallen, N. R. (1996). How to design and evaluate research in education (Third Edition). New York: McGraw-Hill.
  • Geertz, C. (1973). The interpretation of cultures . New York: Basic Books.
  • Geertz, C. (1979). From the native's point of view: On the nature of anthropological understanding. In P. Rabinow & W. Sullivan (Eds.), Interpretive social science (pp. 225-242). Berkeley: University of California Press.
  • Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities: Research & Practice , 16 (1), 45-50.
  • Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities , 33 (5), 445-457.
  • Gersten, R., & Dimino, J. (2001). The realities of translating research into classroom practice. Learning Disabilities: Research & Practice , 16 (2), 120-130.
  • Gersten, R., Vaughn, S., Deshler, D., & Schiller, E. (1997). What we know about using research findings: Implications for improving special education practice. Journal of Learning Disabilities , 30 (5), 466-476.
  • Goswami, U. (1998). Cognition in children . Hove, England: Psychology Press.
  • Gross, P. R., Levitt, N., & Lewis, M. (1997). The flight from science and reason . New York: New York Academy of Science.
  • Hedges, L. V., & Olkin, I. (1985). Statistical Methods for Meta-Analysis . New York: Academic Press.
  • Holton, G., & Roller, D. (1958). Foundations of modern physical science . Reading, MA: Addison-Wesley.
  • Hudson, A., Melita, B., & Arnold, N. (1993). A case study assessing the validity of facilitated communication. Journal of Autism and Developmental Disorders , 23 , 165-173.
  • Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings . Newbury Park, CA: Sage.
  • Jacobson, J. W., Mulick, J. A., & Schwartz, A. A. (1995). A history of facilitated communication: Science, pseudoscience, and antiscience. American Psychologist , 50 , 750-765.
  • Kamil, M. L. (1995). Some alternatives to paradigm wars in literacy research. Journal of Reading Behavior , 27 , 243-261.
  • Kampwirth, R., & Bates, E. (1980). Modality preference and teaching method: A review of the research. Academic Therapy , 15 , 597-605.
  • Kavale, K. A., & Forness, S. R. (1995). The nature of learning disabilities: Critical elements of diagnosis and classification . Mahweh, NJ: Lawrence Erlbaum Associates.
  • Levin, J. R., & O'Donnell, A. M. (2000). What to do about educational research's credibility gaps? Issues in Education: Contributions from Educational Psychology , 5 , 1-87.
  • Liberman, A. M. (1999). The reading researcher and the reading teacher need the right theory of speech. Scientific Studies of Reading , 3 , 95-111.
  • Magee, B. (1985). Philosophy and the real world: An introduction to Karl Popper . LaSalle, IL: Open Court.
  • Mayer, R. E. (2000). What is the place of science in educational research? Educational Researcher , 29 (6), 38-39.
  • McNiff, J., Lomax, P., & Whitehead, J. (1996). You and your action research project . London: Routledge.
  • Medawar, P. B. (1982). Pluto's republic . Oxford: Oxford University Press.
  • Medawar, P. B. (1984). The limits of science . New York: Harper & Row.
  • Medawar, P. B. (1990). The threat and the glory . New York: Harper Collins.
  • Moats, L. (1999). Teaching reading is rocket science . Washington, DC: American Federation of Teachers.
  • National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups . Washington, DC: National Institute of Child Health and Human Development.
  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology , 2 , 175-220.
  • Pearson, P. D. (1993). Teaching and learning to read: A research perspective. Language Arts , 70 , 502-511.
  • Pearson, P. D. (1999). A historically based review of preventing reading difficulties in young children. Reading Research Quarterly , 34 , 231-246.
  • Plotkin, D. (1996, June). Good news and bad news about breast cancer. Atlantic Monthly , 53-82.
  • Popper, K. R. (1972). Objective knowledge . Oxford: Oxford University Press.
  • Pressley, M. (1998). Reading instruction that works: The case for balanced teaching . New York: Guilford Press.
  • Pressley, M., Rankin, J., & Yokol, L. (1996). A survey of the instructional practices of outstanding primary-level literacy teachers. Elementary School Journal , 96 , 363-384.
  • Rayner, K. (1998). Eye movements in reading and information processing: 20 Years of research. Psychological Bulletin , 124 , 372-422.
  • Rayner, K., Foorman, B. R., Perfetti, C. A., Pesetsky, D., & Seidenberg, M. S. (2002, March). How should reading be taught? Scientific American , 286 (3), 84-91.
  • Reading Coherence Initiative. (1999). Understanding reading: What research says about how children learn to read . Austin, TX: Southwest Educational Development Laboratory.
  • Rosenthal, R. (1995). Writing meta-analytic reviews. Psychological Bulletin , 118 , 183-192.
  • Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist , 44 , 1276-1284.
  • Shankweiler, D. (1999). Words to meaning. Scientific Studies of Reading , 3 , 113-127.
  • Share, D. L., & Stanovich, K. E. (1995). Cognitive processes in early reading development: Accommodating individual differences into a model of acquisition. Issues in Education: Contributions from Educational Psychology , 1 , 1-57.
  • Shavelson, R. J., & Towne, L. (Eds.) (2002). Scientific research in education . Washington, DC: National Academy Press.
  • Siegler, R. S. (1991). Children's thinking (Second Edition) . Englewood Cliffs, NJ: Prentice Hall.
  • Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children . Washington, DC: National Academy Press.
  • Snowling, M. (2000). Dyslexia (Second Edition) . Oxford: Blackwell.
  • Spear-Swerling, L., & Sternberg, R. J. (2001). What science offers teachers of reading. Learning Disabilities: Research & Practice , 16 (1), 51-57.
  • Stahl, S. (1988, December). Is there evidence to support matching reading styles and initial reading methods? Phi Delta Kappan , 317-327.
  • Stanovich, K. E. (1993/1994). Romance and reality. The Reading Teacher , 47 (4), 280-291.
  • Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers . New York: Guilford Press.
  • Stanovich, K. E. (2001). How to think straight about psychology (Sixth Edition). Boston: Allyn & Bacon.
  • Stokes, D. E. (1997). Pasteur's quadrant: Basic science and technological innovation . Washington, DC: Brookings Institution Press.
  • Swanson, H. L. (1999). Interventions for students with learning disabilities: A meta-analysis of treatment outcomes . New York: Guilford Press.
  • Tarver, S. G., & Dawson, E. (1978). Modality preference and the teaching of reading: A review. Journal of Learning Disabilities , 11 , 17-29.
  • Vaughn, S., & Dammann, J. E. (2001). Science and sanity in special education. Behavioral Disorders , 27, 21-29.
  • Warby, D. B., Greene, M. T., Higgins, K., & Lovitt, T. C. (1999). Suggestions for translating research into classroom practices. Intervention in School and Clinic , 34 (4), 205-211.
  • Wheeler, D. L., Jacobson, J. W., Paglieri, R. A., & Schwartz, A. A. (1993). An experimental assessment of facilitated communication. Mental Retardation , 31 , 49-60.
  • Wilkinson, L. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist , 54 , 595-604.
  • Wilson, E. O. (1998). Consilience: The unity of knowledge . New York: Knopf.


How to Write the Rationale of the Study in Research (Examples)


What is the Rationale of the Study?

The rationale of the study is the justification for taking on a given study. It explains the reason the study was conducted or should be conducted. This means the study rationale should explain to the reader or examiner why the study is/was necessary. It is also sometimes called the “purpose” or “justification” of a study. While this is not difficult to grasp in itself, you might wonder how the rationale of the study is different from your research question or from the statement of the problem of your study, and how it fits into the rest of your thesis or research paper. 

The rationale of the study links the background of the study to your specific research question and justifies the need for the latter on the basis of the former. In brief, you first provide and discuss existing data on the topic, and then you tell the reader, based on the background evidence you just presented, where you identified gaps or issues and why you think it is important to address those. The problem statement, lastly, is the formulation of the specific research question you choose to investigate, following logically from your rationale, and the approach you are planning to use to do that.

How to Write a Rationale for a Research Paper

The basis for writing a research rationale is preliminary data or a clear description of an observation. If you are doing basic/theoretical research, then a literature review will help you identify gaps in current knowledge. In applied/practical research, you base your rationale on an existing issue with a certain process (e.g., vaccine proof registration) or practice (e.g., patient treatment) that is well documented and needs to be addressed. By presenting the reader with earlier evidence or observations, you can (and have to) convince them that you are not just repeating what other people have already done or said and that your ideas are not coming out of thin air. 

Once you have explained where you are coming from, you should justify the need for doing additional research–this is essentially the rationale of your study. Finally, when you have convinced the reader of the purpose of your work, you can end your introduction section with the statement of the problem of your research that contains clear aims and objectives and also briefly describes (and justifies) your methodological approach. 

When is the Rationale for Research Written?

The author can present the study rationale both before and after the research is conducted. 

  • Before conducting research: The study rationale is a central component of the research proposal. It represents the plan of your work, constructed before the study is actually executed.
  • Once research has been conducted: After the study is completed, the rationale is presented in a research article or PhD dissertation to explain why you focused on this specific research question. When writing the study rationale for this purpose, the author should link the rationale of the research to the aims and outcomes of the study.

What to Include in the Study Rationale

Although every study rationale is different and discusses different specific elements of a study’s method or approach, there are some elements that should be included to write a good rationale. Make sure to touch on the following:

  • A summary of conclusions from your review of the relevant literature
  • What is currently unknown (gaps in knowledge)
  • Inconclusive or contested results from previous studies on the same or similar topic
  • The necessity to improve or build on previous research, such as to improve methodology or utilize newer techniques and/or technologies

There are different types of limitations that you can use to justify the need for your study. In applied/practical research, the justification for investigating something is always that an existing process/practice has a problem or is not satisfactory. Suppose, for example, that people in a certain country, city, or community commonly complain about hospital care on weekends (not enough staff, not enough attention, no decisions being made). Looking into it, you realize that nobody has ever investigated whether these perceived problems are based on objective shortages or non-availability of care, or whether the lower number of patients treated during weekends is commensurate with the services provided.

In this case, “lack of data” is your justification for digging deeper into the problem. Or, if it is obvious that there is a shortage of staff and provided services on weekends, you could decide to investigate which of the usual procedures are skipped during weekends as a result and what the negative consequences are. 

In basic/theoretical research, lack of knowledge is of course a common and accepted justification for additional research—but make sure that it is not your only motivation. “Nobody has ever done this” is only a convincing reason for a study if you explain to the reader why you think we should know more about this specific phenomenon. If there is earlier research but you think it has limitations, then those can usually be classified into “methodological”, “contextual”, and “conceptual” limitations. To identify such limitations, you can ask specific questions and let those questions guide you when you explain to the reader why your study was necessary:

Methodological limitations

  • Did earlier studies try but fail to measure/identify a specific phenomenon?
  • Was earlier research based on incorrect conceptualizations of variables?
  • Were earlier studies based on questionable operationalizations of key concepts?
  • Did earlier studies use questionable or inappropriate research designs?

Contextual limitations

  • Have recent changes in the studied problem made previous studies irrelevant?
  • Are you studying a new/particular context that previous findings do not apply to?

Conceptual limitations

  • Do previous findings only make sense within a specific framework or ideology?

Study Rationale Examples

Let’s look at an example from one of our earlier articles on the statement of the problem to clarify how your rationale fits into your introduction section. This is a very short introduction for a practical research study on the challenges of online learning. Your introduction might be much longer (especially the context/background section), and this example does not contain any sources (which you will have to provide for all claims you make and all earlier studies you cite)—but please pay attention to how the background presentation, rationale, and problem statement blend into each other in a logical way so that the reader can follow and has no reason to question your motivation or the foundation of your research.

Background presentation

Since the beginning of the Covid pandemic, most educational institutions around the world have transitioned to a fully online study model, at least during peak times of infections and social distancing measures. This transition has not been easy and even two years into the pandemic, problems with online teaching and studying persist (reference needed) . 

While the increasing gap between those with access to technology and equipment and those without access has been determined to be one of the main challenges (reference needed) , others claim that online learning offers more opportunities for many students by breaking down barriers of location and distance (reference needed) .  

Rationale of the study

Since teachers and students cannot wait for circumstances to go back to normal, the measures that schools and universities have implemented during the last two years, their advantages and disadvantages, and the impact of those measures on students’ progress, satisfaction, and well-being need to be understood so that improvements can be made and demographics that have been left behind can receive the support they need as soon as possible.

Statement of the problem

To identify what changes in the learning environment were considered the most challenging and how those changes relate to a variety of student outcome measures, we conducted surveys and interviews among teachers and students at ten institutions of higher education in four different major cities, two in the US (New York and Chicago), one in South Korea (Seoul), and one in the UK (London). Responses were analyzed with a focus on different student demographics and how they might have been affected differently by the current situation.

How long is a study rationale?

In a research article bound for journal publication, your rationale should not be longer than a few sentences (no longer than one brief paragraph). A dissertation or thesis usually allows for a longer description; depending on the length and nature of your document, this could be up to a couple of paragraphs in length. A completely novel or unconventional approach might warrant a longer and more detailed justification than an approach that slightly deviates from well-established methods and approaches.

Consider Using Professional Academic Editing Services

Now that you know how to write the rationale of the study for a research proposal or paper, you should make use of our free AI grammar checker, Wordvice AI, or receive professional academic proofreading services from Wordvice, including research paper editing services and manuscript editing services, to polish your submitted research documents.

You can also find many more articles, for example on writing the other parts of your research paper, on choosing a title, or on making sure you understand and adhere to the author instructions before you submit to a journal, on the Wordvice academic resources pages.


Using Science as Evidence in Public Policy (2012)


3 The Use of Research Knowledge: Current Scholarship

With the arrival of big social science and the growth of the policy enterprise, the federal investment in social science brought attention to whether the knowledge being produced was being used. Research on what was labeled “knowledge utilization” got under way. We address that research under three headings: decisionism and its critique, the metaphor of two communities (researchers and policy makers), and the evidence-based policy and practice initiative.

As an introduction to these issues we take brief note of the characteristics of our three central topics—social science, policy, using science—that challenge any attempt at a comprehensive account of the when, how, and why of science use in policy.

A CHALLENGING LANDSCAPE

Scholarship on what happens at the interface of science and policy has to contend with two phenomena—policy making and use—that are particularly difficult to define. To begin with, investigations of these phenomena are launched in different disciplines, including anthropology, political science, psychology, and sociology and their myriad subfields and cross-fields, from science and technology studies to political psychology, from behavioral economics to historical sociology. Each of these fields has its own established principles of evidence and inference. They use different methods—experimental, analytic, quantitative, and qualitative. They work at different levels of analysis—from individual behavioral decision theory to systems theory. They focus on different processes: from structural determinism and constrained probabilities at one end of a continuum to willful effort and chance happenings at the other. They draw on epistemologies as varied as positivism, critical realism, and postmodernism. Individual social scientists bring different motivations to their work—from expansion of theoretical knowledge to practical problem solving, from mapping policy options to advocacy of particular policies. Social scientists bring their expertise to universities, think tanks, the media, advocacy groups, corporations, and government agencies. This range—across fields of study and individual motivations and career lines—produces a lot of variability, which, of course, determines the way the science-policy nexus is framed.

Complicating matters is the absence of a generally accepted explanatory model of policy making. Instead, multiple descriptive policy process models offer ways to understand how policy is made and how science might enter into that process. There are, for example, rational models—including linear, cycle or stage, incrementalism, and interactive. There are models that question rational model assumptions, including behavioral economics, path dependency, and bureaucratic inertia. There are political models, including policy networks, agenda setting, policy narratives, advocacy coalition frameworks, punctuated equilibrium theory, and deliberative analysis models (see Baumgartner and Jones, 1993; Hajer and Wagenaar, 2003; Kingdon, 1984; Lindblom, 1968; Neilson, 2001; Sabatier, 2007; Sabatier and Jenkins-Smith, 1993; Stone, Maxwell, and Keating, 2001).

There are models that focus on different stages of the policy process and thus on different ways that social science can contribute, including: descriptive analyses that present conditions needing policy attention, such as a slowdown in small business start-ups; social indicators that document long-term trends, such as gender differences in pay scales; social experiments on alternative policy designs, such as school vouchers; and evaluation research on the effectiveness of a policy, such as neighborhood policing.¹

Political science is the discipline that has devoted the most attention to the policy process. On the issue of use, it has reached a general conclusion (Henig, in press):

[T]he main thrust of the political science literature serves as a warning against idealized visions of pure data being applied in depoliticized arenas. Although generalizations about an entire discipline inevitably are oversimplifications, the center of gravity within the field encourages skepticism about proposals for a rational, comprehensive, science of public policy making and regards data and information as sources of power first and foremost.

________________________

¹ For a careful discussion of how evidence is used at different stages of the policy process, see McDonnell and Weatherford (2012).

It is difficult to assess how widely this characterization is accepted outside of political science, but it is clear that the various models and frameworks do not coalesce into anything remotely resembling a powerfully predictive, coherent theory of policy making. Lacking that, it is improbable and perhaps impossible to reach a widely agreed-upon understanding of the use of science in policy making. “Use” itself, consequently, is elusive, seen differently depending on the perspectives brought to it and the policy and institutional arenas in which it is investigated (Neilson, 2001; Webber, 1991; Weiss, 1991). A political psychologist at the Central Intelligence Agency concerned with what transforms an angry, unemployed teenager into a terrorist uses research evidence very differently from an economist at the RAND Corporation designing a randomized controlled field trial (RCFT) on classroom size and school performance. Many researchers underscore the conceptual confusion about use and conclude that different definitions of use are needed and appropriate for different purposes (e.g., Oh, 1997; Rich, 1997; Weiss, 1979).

This conclusion is consistent with the fact that policy choices are context dependent. A school district deciding whether to establish charter schools is less interested in a comparative study of charter and public schools across the country than in wanting to know how well a charter school will perform under its conditions, which differ depending on whether the district is in the central city or suburb, with a homogenous or diverse population, with a historically competent or incompetent school administration. The usefulness of research is not assessed in terms of variance explained from a large sample of schools, but whether it is informative about a very specific choice.

Given the context-dependent nature of the use of science, typologies are a common way of mapping the landscape (for a summary, see Nutley et al., 2007; see also Bogenschneider and Corbett, 2010; Renn, 1995). A frequently cited typology is that of Weiss (1979, 1998; see also Weiss et al., 2005):

•   Instrumental uses occur when research knowledge is directly applied to decision making to address particular problems.

•   Conceptual uses occur when research influences or informs how policy makers and practitioners think about issues, problems, or potential solutions.

•   Tactical uses involve strategic and symbolic actions, such as calling on research evidence to support or challenge a specific idea or program, for example, a legislative proposal or a reform effort.

•   Imposed uses (perhaps a variant of instrumental uses) describe mandates to apply research knowledge, such as a requirement that government budgeting be based on whether agencies have adopted programs backed by evidence.

Other scholars add a fifth category, symbolic or ritualistic use—that is, the organizational practice of collecting information with no real intent to take it seriously, except to persuade others of a predetermined position or even to delay action (Leviton and Hughes, 1981; Shulha and Cousins, 1997). It is a frequent complaint among scientists that policy makers use scientific evidence as confirmation of prior beliefs. This complaint, however, overlooks the fact that, when policy makers argue on the basis of evidence, it is more difficult for their opponents to ignore that evidence, or to leave it unchallenged. “My science versus your science” has the merit of putting science in play, and over time opens more space for policy arguments that include scientific evidence.
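For readers who find code helpful, the typology can be rendered as a simple classification scheme. The sketch below is purely illustrative (nothing like it appears in the knowledge utilization literature), and, as the next paragraphs argue, real episodes of use rarely fit a single label:

```python
from enum import Flag, auto

class Use(Flag):
    """Weiss's four uses of research, plus the fifth category added by later
    scholars. A Flag (rather than a plain Enum) because a single episode of
    use often warrants several labels at once."""
    INSTRUMENTAL = auto()  # research applied directly to a pending decision
    CONCEPTUAL = auto()    # research reshapes how a problem is framed
    TACTICAL = auto()      # research invoked to support or attack a position
    IMPOSED = auto()       # a mandate requires evidence-backed programs
    SYMBOLIC = auto()      # evidence collected with no real intent to act on it

# A hypothetical episode: a legislator cites a study to defend a bill (tactical),
# and the study also changes how she frames the problem (conceptual).
episode = Use.TACTICAL | Use.CONCEPTUAL
print(Use.INSTRUMENTAL in episode)  # False; boundaries blur and labels overlap
```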

Weiss emphasizes that each of the four uses (and the same holds for the fifth noted above) can be found in particular situations, but that no one of them offers a complete picture. Scholars who debate typologies of use generally conclude that, although typologies are heuristically valuable, they are not easily applied empirically. Boundaries are blurred, and access to users’ cognitive processes is unattainable. In fact, it is unlikely that users themselves can make sharp distinctions in explaining how they use knowledge (Contandriopoulos et al., 2010). The empirical application of typologies in research is difficult because use is “a dynamic, complex and mediated process, which is shaped by formal and informal structures, by multiple actors and bodies of knowledge, and by the relationships and play of politics and power that run through the wider policy context” (Nutley et al., 2007, p. 111).

Typologies of use fail to meet the standard criteria of scientific typologies, in which each category consists of an internally coherent set of variables, with the value of each variable predictably correlating with the values of each of the other variables in that particular category. In the periodic table of chemical elements, for example, hydrogen is distinguished from other chemical elements by its atomic weight, its specific gravity, its bonding properties, the temperature at which it freezes and boils, and other traits. Each of these traits differs consistently and predictably from those same traits in helium or in any other chemical element (see Stinchcombe, 1987). In the social world it is impossible, in any practical sense, to construct typologies that meet this standard. Typologies of social conflict, ethnic or racial groups, or government corruption are never going to have categories with internally coherent variables whose values covary in completely predictable ways. It is unrealistic to expect a clear and unambiguous typology for a phenomenon as complex as the use of science in policy.

To address the charge given to this committee—to understand the use of science in policy—is thus to simultaneously deal with three elusive phenomena:

•   Scientific findings that come from multiple sources and are at times contradictory;

•   A policy-making process that is variable along many dimensions; and

•   A phenomenon, “use,” that changes its meaning depending on the perspective brought to it and one’s location in the complex space where policy is made.

With this challenging landscape in mind, we turn to the recent scholarship on knowledge utilization.

DECISIONISM AND ITS CRITIQUE

The scholarship on knowledge utilization has, virtually from its beginnings, been skeptical of rational models of the relationship between research and policy. Rational models assume that decisions unfold through five stages (Nutley and Webb, 2000, p. 25):

1.   A policy problem requiring action is identified and goals, values, and objectives are clearly set forth;

2.   All significant ways of addressing the problem and achieving the goals or objectives are enumerated;

3.   The consequences of each alternative are predicted;

4.   The consequences are then compared with the goals and objectives; and

5.   A strategy is selected in which consequences most closely match the goals and objectives.

Weiss and Bucuvalas (1980, p. 263) summarized the essence of this model: “a decision is pending, research provides information that is lacking, and with the information in hand the decision maker makes a decision.” Rational models have also been characterized as “decisionism”—“a limited number of political actors engaged in making calculated choices among clearly conceived alternatives” (Majone, 1989, p. 12; see also Rein and White, 1977; Rich, 1997).
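To make “decisionism” concrete, the five stages can be written as a single calculated choice. The following is a minimal, purely illustrative sketch; the options, predictions, and scoring rule are all hypothetical, not drawn from the report:

```python
# A minimal sketch of the rational ("decisionist") model: one actor enumerates
# alternatives, predicts consequences, compares them with a goal, and selects
# the closest match. All names and numbers are hypothetical.

def rational_choice(alternatives, predict, goal, score):
    """Stages 2-5: enumerate, predict consequences, compare, select."""
    return max(alternatives, key=lambda option: score(predict(option), goal))

# Stage 1: the problem and objective are set, here as one measurable target.
options = ["expand program", "pilot program", "status quo"]
predicted_outcome = {"expand program": 0.9, "pilot program": 0.55, "status quo": 0.2}
target = 0.6

best = rational_choice(options, predicted_outcome.get, target,
                       score=lambda outcome, goal: -abs(outcome - goal))
print(best)  # -> pilot program
```

The critiques that follow target exactly what this sketch takes for granted: a single actor, complete information, an exhaustive list of alternatives, and consequences that can be predicted and scored on one dimension.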

Criticisms of this model have focused on several significant defects: for example, that it assumes decisions are optimal, that is, based on complete information and an examination of all possible alternative courses of action (see the work of Simon [1957], who introduced satisficing as a replacement for maximizing); or that the model is a normative rather than descriptive account of policy making (see the work of Braybrooke and Lindblom [1963] and Lindblom [1959], who substitute incrementalism for rational models). Other critics argue that rational models underemphasize or ignore the important role that value judgments play in policy arguments (Brewer and deLeon, 1983), or that linear problem solving is “wildly optimistic,” because it “takes an extraordinary concatenation of circumstances for research to influence policy decisions directly” (Weiss, 1979, p. 428).

More recent examinations of the relationship between research and policy making echo these concerns. For example, Gormley (2011, pp. 978-979) notes:

A hypodermic needle theory of scientific impact on policy, which anticipates direct, immediate, and powerful effects, is flawed for several reasons. First, scientific research is one of many inputs into the policy process.… Second, scientific knowledge accumulates through multiple studies, some of which reach different conclusions.… Third, the applicability of a given study to a particular policy choice is a matter of judgment.… Fourth, scientific research is translated, condensed, repackaged, and reinterpreted before it is used. Fifth, the use of scientific information by public officials, when it occurs, is more likely to involve justification (reinforcement of a prior opinion) than persuasion (conversion to a new opinion).

Although we share Gormley’s view, there are situations in which discrete decisions are directly triggered by the use of some specific scientific knowledge—for example, the direct, even formulaic translation of census results into congressional apportionment or formula-based fund allocations that are legislatively required. There also are situations in which a user is considered sovereign in her or his capacity to mobilize evidence and, consequently, to modify her or his behavior on the basis of that evidence—for example, the choice of a preferred clinical treatment (Contandriopoulos et al., 2010). But these examples are exceptions to the rule, and uncommon at that. It is estimated that evidence-based programs accounted for less than 0.2 percent of nonmilitary discretionary spending in fiscal 2011. 2
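The census-to-apportionment case is genuinely formulaic: federal law prescribes the Huntington-Hill (“equal proportions”) method, under which each seat is awarded by a fixed priority-value calculation. Here is a minimal illustrative sketch; the report contains no such code, and the populations below are hypothetical:

```python
import math

def apportion(populations, total_seats=435):
    """Huntington-Hill apportionment. Every state starts with one seat; each
    remaining seat goes to the state with the highest priority value
    P / sqrt(n * (n + 1)), where n is that state's current seat count."""
    seats = {state: 1 for state in populations}
    for _ in range(total_seats - len(populations)):
        winner = max(populations, key=lambda s:
                     populations[s] / math.sqrt(seats[s] * (seats[s] + 1)))
        seats[winner] += 1
    return seats

# A hypothetical three-state census with ten seats to fill
print(apportion({"A": 5_000_000, "B": 3_000_000, "C": 1_000_000}, total_seats=10))
# -> {'A': 6, 'B': 3, 'C': 1}
```

Once the census counts are fixed, no judgment enters the calculation, which is precisely why this kind of use is the exception rather than the rule.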

In almost all decision-making situations, the use of science takes place in “systems characterized by high levels of interdependency and interconnectedness among participants” (Contandriopoulos et al., 2010, p. 447). No single decision maker has the independent power to translate and apply research knowledge. Rather, multiple decision makers are embedded in systemic relations in which use not only depends on the available information, but also involves coalition building, rhetoric and persuasion, accommodation of conflicting values, and others’ expectations.

In criticizing rational models and decisionist thinking, Weiss and others suggest that use is less a matter of straightforward application of scientific findings to discrete decisions and more a matter of framing issues or influencing debate (Weiss, 1978, p. 77):

Social science research does not so much solve problems as provide an intellectual setting of concepts, propositions, orientations and empirical generalizations.… Over a span of time and much research, ideas … filter into the consciousness of policy-making officials and attentive publics. They come to play a part in how policy makers define problems and the options they examine for coping with them.

2 The George W. Bush administration piloted a program linking federal financing to clear demonstration of program effectiveness. These evidence-based programs “accounted for about $1.2 billion out of a $670 billion budget for nonmilitary discretionary programs in the 2011 fiscal year” (Lowrey, 2011).

Although Weiss suggested that this enlightenment model is perhaps the way science is most frequently used in policy making, she did not claim it was the way it ought to happen. “Many of the social science understandings that gain currency are partial, oversimplified, inadequate, or wrong.… The indirect diffusion process is vulnerable to oversimplification and distortion, and it may come to resemble ‘endarkenment’ as much as enlightenment” (Weiss, 1979, p. 430).

In sum, the research on knowledge utilization reflects a consensus about what should be ruled out: (1) that the science/policy nexus can be uniformly understood in terms of rational decision-making models; (2) the assumption of a specified single actor with freedom to achieve goals formulated through a careful process of rational analysis characterized by a complete, objective study of all relevant information and options; and (3) the definition of use as problem solving in the sense of a direct application of evidence from a specific set of studies to a pending decision. Although evidence may occasionally be used in such narrow ways, these depictions of “use” do not accurately reflect the full realities of policy making.

Knowledge utilization research, in agreement about what is ruled out, is less clear about what should be ruled in. It has, however, pointed to the importance of closing the distance between the “two communities” of scientists and policy makers.

THE TWO COMMUNITIES METAPHOR

Viewing use from the perspective of two communities has been a recurring motif in knowledge utilization studies (see Caplan, 1979). The basic idea is refreshingly simple. Scientists and policy makers are separated by their languages, values, norms, reward systems, and social and professional affiliations. The primary goal of scientists is the systematic search for a reliable and accurate understanding of the world; the primary goal of policy makers is a practical response to a particular public policy issue.

Like any binary distinction, this one oversimplifies, though there is a crude truth to several distinctions rooted in the different tasks facing researchers and policy makers. They differ in the outcomes they value—knowledge about the world in all its complexities versus knowledge helpful in reaching feasible solutions to pressing problems—and in the incentives, rewards, and cultural assumptions associated with these different outcomes. They also differ in habits of expression—probabilistic versus certain statements about conditions or people. And they differ even in modes of thought—deductive and general versus inductive and particular (Szanton, 2001, p. 64). This contrast has been described as “research think” versus “political think”: the “culture of the researcher tends to add complexity and resist closure. The culture of the political actor tends to demand straightforward and easily communicated lessons that will lead to some kind of action” (Henig, 2009, p. 144).

Differences between the two communities are associated with a contrasting list of supply-side and demand-side problems (Bogenschneider and Corbett, 2010; Fuhrman, 1994; Nutley et al., 2007; Rosenblatt and Tseng, 2010). On the supply side are researchers who fail to focus on policy-relevant issues and problems, cannot deliver research in the time frame generally necessary for effective policy making, do not relate findings from specific studies to the broad context of a policy issue, ineffectively communicate their findings, depend on technical arguments that are inaccessible to policy makers, and lack credibility because of perceived career interests or even partisan biases. On the demand side are policy makers who fail to spell out objectives in researchable terms, have few incentives to use science, and do not take time to understand research findings relevant to pending policy choices.

This framing of the use problem offers little guidance as to which of the long list of factors, from either side, best explains variance in use, let alone how the factors interact and whether they apply only in specific settings or have general applicability (Bogenschneider and Corbett, 2010; Johnson et al., 2009). Although the two communities framework has been helpful in understanding the differing expectations of researchers and policy makers and problems of communication between them, it has not been able to offer a systematic explanation of use. Thinking about how best to bridge the gap between the two communities has, however, led to practices of translation and brokering and to more intensive interactions between researchers and policy makers.

Translation

Translation is a supply-side solution to the use problem. It was developed in clinical diagnostic, preventive, and therapeutic practices. The idea is simple: basic science is translated into clinical efficacy, efficacy is translated into clinical effectiveness, and effectiveness is translated into everyday health care delivery (Drolet and Lorenzi, 2011). The oft-invoked catchphrase is “bench to bedside.” One important sign of the seriousness with which translation is taken is the U.S. Department of Health and Human Services initiative, the Translating Research into Practice (TRIP) Program, which focuses on implementation techniques and factors associated with successfully translating research findings into diverse applied health care settings (see Agency for Healthcare Research and Quality, 2012).

Translational strategies have now moved beyond health care, introducing additional and somewhat differently focused activities. One is evidence-based registries, compilations of scientifically proven interventions that are considered tools to improve practice in various fields, including social services, criminal justice, and education. A different initiative is the Campbell Collaboration, 3 an international organization conducting systematic reviews of the effects of social interventions.

The translation strategy is well institutionalized in education. The U.S. Department of Education’s Institute of Education Sciences (IES) was established in part to develop the science that could be translated into strategies to change education practice in public schools. The What Works Clearinghouse of the IES aims to provide educators, policy makers, and the public with an independent and trusted source of scientific knowledge relevant to education policies and practices. 4 IES also supports 10 regional educational laboratories, whose role is similar to that of extension agents in agriculture: taking research results and putting them into practice in school districts and classrooms (see U.S. Department of Education, 2012).

The movement toward evidence-based approaches in practice settings began more than 40 years ago in medical practice. Archibald Cochrane (1972) railed against ineffective and sometimes harmful therapies that persisted despite randomized clinical trials showing that better treatments were available. In response to his call for systematic reviews of such trials, the Cochrane Collaboration 5 was established. Its rigorous model of research synthesis has been adopted in other fields, including the above-noted Campbell Collaboration and the What Works Clearinghouse.

Although translation strategies have largely been applied to practices, the logic of translation is applicable to questions of using science in policy. Begin with a dependable, valid scientific base that provides evidence about what works so that policy makers can readily grasp its relevance to the decision or task at hand, and make that science available in the form of research summaries or lists of demonstrably effective social interventions. The research record, however, is far from clear on whether translation (of either social or medical science research) works and is an effective strategy for enhancing use (see, e.g., Glasgow and Emmons, 2007; Green and Seifert, 2005; Lavis, 2006; Slavin, 2006).

3 See the Campbell Collaboration: What Helps? What Harms? Based on What Evidence?, available: http://www.campbellcollaboration.org/ [August 2012].

4 For example, see the IES guides in education, such as “Turning Around Chronically Low-Performing Schools” (May 2008), available: http://ies.ed.gov/ncee/wwc/practiceguide.aspx?sid=7 [July 2012].

5 See the Cochrane Collaboration, available: http://www.cochrane.org/index.htm [August 2012].

Brokering

While translation is primarily a matter of repackaging technical findings in terms more readily consumable by policy makers, brokering is a two-way conversation aided or mediated by a third party. Brokering involves filtering, synthesizing, summarizing, and disseminating research findings in user-friendly packages. It is generally seen as the task of intermediary organizations, such as think tanks, evaluation firms, and policy-oriented organizations, including those focusing on specific target populations or specific social issues as well as those organized around particular political persuasions. These organizations (Bogenschneider and Corbett, 2010, p. 94):

do research and evaluation, but they also have one foot in the policy world. They see policymakers as their primary clients. In addition to producing knowledge, they also see their role as translating extant research and analysis in ways that enhance their utility for those doing public policy.… To greater and lesser degrees, these firms bridge the knowledge-producing and knowledge consuming worlds.

Science and technology studies describe brokering as occurring in boundary organizations occupying a territory between research and policy making (Guston, 2000). 6 In contrast to translation strategies that generally are one-way efforts in dissemination, brokering involves interaction and two-way communication. Intermediary organizations and knowledge brokers are increasingly being viewed as critical in promoting the capacity for evidence-based, or evidence-informed, decision making (e.g., Dobbins et al., 2009a).

6 In this view, the National Research Council can be viewed as a brokering organization, synthesizing research in a consensus-based process and then presenting it in a form intended to contribute to improved policy making.

If brokering occurs, use is not something that happens when experts “here” hand off research to policy makers “there.” A brokering model views use as emerging from multidirectional communication and ongoing negotiation among researchers, policy makers, planners, managers, service providers, and even the public. Often this interactive process will involve consideration of more than one stream of research as relevant to a given policy (e.g., Sudsawad, 2007).

To bridge the gap between the differing cultures of the producers and consumers of scientific knowledge will require, according to some scholars, cultural changes in each community. Bogenschneider and Corbett (2010, pp. 299 ff.) write that the culture of research should change, perhaps through education and training in how to do more policy-relevant research, incentives for doing such research, and opportunities to work with policy makers. The user or consumer culture should also change, perhaps through institutional innovations that improve policy makers’ access to research, help them communicate their policy needs to researchers, and provide forums to discuss research agendas. In more ambitious formulations, the research literacy of the general public should be improved through education (see also Carr et al., 2007; Gigerenzer et al., 2008).

An Interaction Model

Closing the distance between the two communities has taken an additional step in what is labeled the interaction model (Contandriopoulos et al., 2010; Greenhalgh et al., 2004). This model goes beyond transfer, diffusion, and dissemination, and even beyond translation and brokering. The interaction label covers a family of ideas directed to systemic changes in the means and opportunities for relationships between researchers and policy makers (Bogenschneider and Corbett, 2010). It holds that the relation between researchers and users is not only nonlinear but iterative and even “disorderly” (Landry et al., 2001, p. 335).

One source for an interest in interaction is science and technology studies documenting the co-evolution of social and technological systems (Jasanoff, 2004; Jasanoff et al., 1995). Another source is the use of systems thinking to better understand the complex adaptive systems involved in diagnosing and solving public health problems and the interactions among the design of prevention interventions, testing their efficacy and effectiveness, and disseminating innovations in community practices. A third is the emphasis on practical reasoning, the argumentative turn in policy analysis discussed in the next chapter (Fischer and Forester, 1993; Hajer and Wagenaar, 2003; Hoppe, 1999).

Research that works in close proximity to practice settings illustrates the interaction framework. First noted in corporate research (Pelz and Andrews, 1976) and later in the life sciences (Louis et al., 1989), this approach gained visibility with the publication of Pasteur’s Quadrant (Stokes, 1997), with its emphasis on use-inspired research. This research influenced how the National Academy of Education (1999) set research priorities, including its interest in how to hold policy specialists, researchers, professional educators, program developers, and curriculum specialists collectively accountable for educational outcomes. Collaborations of this kind formed the basic design concept for the Strategic Education Research Partnership, which connects researchers to teachers and brings in research communities, school administrations, and educational policy makers (see National Research Council, 1999a; Smith and Smith, 2009). The Carnegie Foundation for the Advancement of Teaching is also promoting a framework for research and development labeled improvement research (Bryk et al., 2011), which synthesizes the work of researchers and practitioners.

In this spirit, the Institute of Medicine (IOM) created a Roundtable on Evidence-Based Medicine, which then became the Roundtable on Value & Science-Driven Health Care, to foster interaction among stakeholders interested in building a continuously learning health care system in which science, information technology, incentives, and culture are aligned to bring together evidence-based practice and practice-based evidence (see Green, 2006). This effort and its attendant workshops (Institute of Medicine, 2007, 2010b, 2011a, 2011b) stress the importance of rigorous science and applying the best evidence available. The goal is understanding how health care can be restructured to develop knowledge from science and from the health care process and to then apply it on many fronts: health care delivery and health improvement, patient and public engagement, health professional training, infrastructure development, measurement, costs and incentives, and policy. The IOM’s reports on these activities draw attention to active collaboration, exchange, and appraisal of research and policy and to what is known by researchers and users of research about practice—drawn from the life-cycle of therapies, their development, testing, introduction, and evaluation.

As attractive as these initiatives are, there are cautionary voices. Political time, policy time, and research time differ, and one should take care not to mistake one for another (Henig, 2009, p. 153):

The pressure for fast, simple, and confident conclusions, however, is generated by the needs of politicians—not necessarily the needs of the policy. Political time is defined by election cycles, scheduled reauthorization debates, and the need to respond to short-term crises or sudden shifts in public attention. But a consideration of the history of public policy suggests that societal learning about complex problems and large-scale policy responses takes place on a much more gradual curve.

Interaction models offer an insight into what the use of science means in practice. Evidence from science is not simply there for the taking. It emerges and is made sense of in the particular circumstances that give rise to a policy argument (see Chapter 4 for discussion of policy argument). “Making sense” is iterative. It involves negotiating what kind of situation-specific knowledge is relevant to a policy choice, whether it is firmly established and available under the constraints of time and budget, and what political consequences might follow from using it. In this framework, formal linkages and frequent exchanges among researchers, policy makers, and service providers occur at all steps between knowledge production and knowledge use (Huberman and Cox, 1990). What emerges is a social as well as a technical exercise. Conklin et al. (2008, p. 7) explain this framework:

Strategic interactions (between human actors within and between organizations) therefore address both sides of the research-policy interface. On the one hand, decision-makers highlight policy relevant research priorities; on the other hand, researchers can interpret research findings in local contexts. In so doing, a common understanding of a policy problem, and its possible solutions, is built between different actors in the two communities.…

Spillane and Miele (2007) underscore the point in observing that what information is noticed in a particular decision-making environment, whether it is understood as evidence pertaining to some problem, and how it is eventually used all depend on the cognitions of the individuals operating in that environment. Furthermore, what these actors notice and make sense of is determined in part by the circumstances of their practice environment. Examining use, then, also requires examining “the practice of sense making, viewing it as distributed across an interactive web of actors and key aspects of their situation—including tools and organizational routines” (p. 49). It also introduces the idea that research might “be interpreted and reconstructed—alongside other forms of knowledge—in the process of its use” (Nutley et al., 2007, p. 304).

Focusing on understanding institutional arrangements—how the agencies, departments, and political institutions involved in policy making operate and relate to one another—may be what matters most in improving the connection between science and policy making. For example, a study of drug misuse policy in government agencies in Scotland and England (Nutley et al., 2002) suggests that three aspects of microinstitutional arrangements within and between the agencies mattered a great deal in understanding how research evidence was (or was not) used:

1.   How different agencies integrated research with other forms of evidence,

2.   How agencies collectively dealt with the fragmentation of research evidence resulting from different agencies producing different types of evidence given their respective research cultures, and

3.   What mechanisms were in place to integrate evidence and policy making (co-location of research and policy staff, cross-government work groups, establishment of quasi-policy bodies that specialize in the substance of a policy domain, etc.).

Nutley et al. (2007, pp. 319-320) conclude:

[T]here is now at least some credible evidence to underpin [their view] … that interactive, social, and interpretive models of research use—models that acknowledge and engage with context, models that admit roles for other types of knowledge, and models that see research use being more than just about individual behavior—are more likely to help us when it comes to understanding how research actually gets used, and to assist us in intervening to get research used more.…

If this conclusion holds up, it is a step toward accumulating what the committee believes is lacking: understanding institutional arrangements that facilitate the use of science in policy.

There is an important cautionary observation about efforts to overcome the “two communities” challenge. There are tensions between scientific engagement with practical policy problems and the long-standing assumption that science maintains its authority by virtue of its independence from politics (Jasanoff, 1990; Jasanoff et al., 1995). Persons working to bring scientists and policy makers closer need to be mindful that this tension is never far from how scientists think about and engage the policy uses of their work.

EVIDENCE-BASED POLICY AND PRACTICE

Current discussions about the use of research knowledge are heavily influenced by “evidence-based policy and practice.” The goal is realizing better and more defensible policy decisions by grounding them in the conscientious, explicit, and judicious use of the best available scientific evidence (Davies et al., 2000). The initiative explicitly rejects habit, tradition, ideology, and personal experience as a basis for policy choices: they are to be replaced with a more dependable foundation of “what works,” that is, what the evidence shows about the consequences of a proposed policy or practice. With access to an evidence base, argue the proponents, policy makers will make better decisions about the direction, adoption, continuation, modification, or termination of policies and practices. Dunworth et al. (2008, p. 7) note:

[W]hile scientific evidence cannot help solve every problem or fix every program, it can illuminate the path to more effective public policy.… [T]he costs and lost opportunities of running public programs without rigorous monitoring and disinterested evaluation are high … without objective measurements of reach, impact, cost effectiveness, and unplanned side effects, how can government know when it’s time to pull the plug, regroup, or, in business lingo, “ramp up?”

The use of science is, of course, not a logical or inevitable outcome of having the science. In fact, the normative claim that policy should be grounded in an evidence base “is itself based on surprisingly weak evidence” (Sutherland et al., 2012, p. 4).

The approach of evidence-based policy and practice assumes that there is agreement among policy makers and researchers on what the desired ends of policy should be. “The main contribution of social science research is to help identify and select the appropriate means to reach the goal” (Weiss, 1979, p. 427). This, in turn, depends on the quality of the science providing evidence to the policy maker, and thus the evidence-based approach places a premium on improving policy-relevant research, often through the use of RCFTs.

In the settings in which they are carried out, RCFTs provide a strong, if not the strongest, form of scientific evidence of cause and effect. Circumstances may permit such experiments in a desired setting, such as when scarce resources are allocated by lottery, for example with admission to magnet schools or charter schools or the allocation of health care resources. An example of the latter is the Oregon Health Insurance Experiment in which names were drawn by lottery for the state’s Medicaid program for low-income, uninsured adults (Finkelstein et al., 2012).

Even when RCFTs are conducted in one setting, inferences from them may be applied to other settings or contexts, provided information is also collected on the variables or factors that differ across settings and may influence the results. So-called substitutes for randomized trials, however, such as “natural” experiments and “quasi-experiments,” as Sims (2010) argues, are not actually experiments. They are often invoked as a way to avoid confronting “the complexities and ambiguities that inevitably arise in nonexperimental inference.” For these situations, and even in conjunction with randomized experiments, there are nonexperimental methods of drawing causal inferences and model-based methods for adjusting experimental results for inherent biases. Appendix A provides a review of some of these research methods and sets them in the context of the varied statistical methods for research and evaluation.
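To illustrate the last point about model-based adjustment (a sketch of one standard technique, not a method specifically endorsed by the report), simulated data from a randomized experiment show how regressing the outcome on both the treatment and a background covariate recovers the same underlying effect as the raw difference in means, with less noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
x = rng.normal(size=n)                      # background factor that varies across settings
t = rng.integers(0, 2, size=n)              # randomized treatment assignment
y = 2.0 * t + 1.5 * x + rng.normal(size=n)  # outcome; the true treatment effect is 2.0

# Unadjusted estimate: simple difference in group means
unadjusted = y[t == 1].mean() - y[t == 0].mean()

# Model-based adjustment: regress the outcome on treatment and the covariate
X = np.column_stack([np.ones(n), t, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]  # coefficient on the treatment indicator

print(f"unadjusted: {unadjusted:.2f}, adjusted: {adjusted:.2f}")
```

Because assignment is randomized, both estimators are unbiased; adjustment mainly shrinks the variance. The nonexperimental methods mentioned above require far stronger assumptions.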

The active debate regarding the appropriate methodology for a given research question promotes attention in the policy community to the desirability of producing the best possible evidence under a given set of circumstances, especially the strongest evidence that bears on policy implementation and policy consequences. Bringing attention to the importance of strong evidence in policy making advances the goal of using science even though the specific formulation of an evidence-based policy approach offers little insight into the conditions that bring about its use.

Despite their considerable value in other respects, studies of knowledge utilization have not advanced understanding of the use of evidence in the policy process much beyond the decades-old National Research Council (1978) report. The family of suggestive concepts, typologies, and frameworks has yet to show with any reasonable certainty what changes have occurred in the nature, scope, and magnitude of the use of science as a result of different communication strategies or different forms of researcher-user collaborations (Dobbins et al., 2009b; Mitton et al., 2007). There is little assessment of whether innovations said to increase the use of science in policy have had or are having their desired effects.

A recent study reporting the results of a collaborative procedure among 52 participants covering a range of experiences in both science and policy identified 40 (!) key unanswered questions on the relationship between science and policy—this despite nearly four decades of research on the question of “use” (Sutherland et al., 2012). One extensive review of the literature reaches the striking conclusion that knowledge use is “so deeply embedded in organizational, policy, and institutional contexts that externally valid evidence pertaining to the efficacy of specific knowledge exchange strategies is unlikely to be forthcoming” (Contandriopoulos et al., 2010, p. 468 [italics added]).

Our conclusion is not that pessimistic. If “use” is broadly understood to mean that science—or, more specifically, in the language of evidence-based policy and practice, scientific evidence of the effectiveness of interventions—is incorporated into policy arguments, we agree that there probably will never be a definitive explanation of what strategies best facilitate or ensure that incorporation. But this conclusion does not rule out the possibility that new approaches in the study of the science-policy nexus might reveal factors or conditions that have thus far been missed. Perhaps the preoccupation with defining use, identifying factors that influence it, and determining how to increase it has detracted from the search for alternative ways in which social science can contribute to understanding the use of science in policy. That possibility is the subject of Chapter 4.



How to Engage Undergraduates in Research

"Why not have all incoming students join with the faculty right away as young scholars in the discovery of knowledge, in the integration of knowledge, in the application of knowledge, and in the communication of knowledge? Why not have these four dimensions of scholarship become the four essential goals of undergraduate education?" (Boyer, 1997, p. 79).

Like research itself, planning undergraduate research experiences involves creativity, attention to process, and flexibility. As a form of inquiry-based learning, undergraduate research experiences require faculty to prioritize facilitating the discovery of knowledge on the part of students over imparting existing knowledge directly. Here are some key steps in creating a successful undergraduate research experience for you and your students.


Identify Learning Objectives

Undergraduate research experiences engage students in the creation of knowledge, which to some may be motivation alone for such experiences. As a pedagogical practice, though, undergraduate research has an added benefit: it can be a tool for achieving a number of learning objectives, and in a way that often demonstrates their relationships to one another. Because there are so many forms and levels of undergraduate research experiences, to choose among them you must reflect on what you want your students to learn and the weight you wish to give to each learning goal. For instance, in an intermediate-level course, how much attention do you want to give to having students identify research questions and hypotheses on their own relative to collecting and interpreting data to test those hypotheses? Decide on your learning objectives before you decide on the form and intensity of an undergraduate research experience.


Choose the Form and Intensity of the Undergraduate Research Experience

Undergraduate research experiences can vary in both form and intensity. When they are structured properly, class-based activities (naturalistic observation, surveys, quantitative writing assignments, and experiments) can be undergraduate research experiences. So can class-based research projects (term papers, service learning, community-based and campus-based learning), capstone experiences (senior and honors theses), and out-of-class student/faculty collaborative research (like summer research experiences). For each of these forms of undergraduate research experiences, you can use your identified learning objectives to determine the intensity of both the overall research experience and of each of its parts.


Determine Project Needs

Some undergraduate research projects require or benefit from special materials and resources. For instance, field work projects may involve off-campus travel. For campus- and community-based research projects, students may better understand the context and relevance of assignments when they have opportunities to meet and interview the organizations and communities who stand to be affected or served by the research. Other projects may require the use of laboratory equipment. Some projects may be best managed when class size is limited and/or when the class meets once a week for a long period rather than three times a week for shorter periods. Not all needs come at a financial cost; for those that do, internal or external funding options may be available.


Set Expectations: Yours and Theirs

Since undergraduate research teaches disciplinary practice, it is as critical to prepare students to both expect and tackle the real-world challenges of the research process as it is to set expectations about outcomes. Undergraduate research requires students to deal with ill-structured problems. While faculty may have lots of experience dealing with ill-structured problems in their own research, students rarely see evidence of this in traditional chalk-and-talk classroom environments. To maximize the benefits of an undergraduate research experience, it's important to condition students to the fact that research is an iterative process that involves grappling with uncertainty.

For faculty, the process of developing an undergraduate research experience for students can often feel like an ill-structured problem of its own. For undergraduate research experiences to be successful, it is critical that faculty and students identify and communicate what is different about this learning experience.


Structure the Critical Elements

While experiencing triumphs and pitfalls is common to doing research (and learning that fact can be a goal of its own), research in your discipline does follow a process, and this process suggests a variety of ways to offer structure to an undergraduate research experience so it can be successful.


Provide the Right Support

While it is important to the success of an undergraduate research experience to set expectations about the real-world challenges of the research process, it is also important to ensure that students have the right support for dealing with those challenges. Furthermore, undergraduate research involves student-faculty collaboration and sometimes also student-student collaboration, and these types of partnerships may be new to students. For a given research project, you'll want to identify the parts of the experience for which students are most likely to need support and design ways to make the sources of that support transparent and accessible to them. Fortunately, there are many existing support practices and support structures that you can use.


Assess the Experience

You need to assess the undergraduate research experience at several stages and in several ways to fully understand the nature and extent of student learning and to reflect on and refine the way you use this pedagogy. If your students are participating in a work-in-progress, you'll want to determine the quality of and progress on the project to date so you can explore the best ways to extend and improve it. From short-step assignments to research proposals, papers, and informal written reflection pieces, there are a number of ways to assess the overall undergraduate research experience as well as individual parts of it.


Further the Experience

There are many ways for both you and your students to further your involvement in undergraduate research. Perhaps you both wish to disseminate or extend your work. Professionals present and publish their research, and so can students, even beyond their own campuses! There are numerous resources for students and faculty who wish to publish or present research. Also, for undergraduate research experiences that are parts of a work-in-progress, one can think about ways to advance the project by dealing with its next stages in other learning environments. For instance, a student who started an undergraduate research project in a service-learning course could finish it in an honors or independent study experience in a subsequent semester. There are a number of organizations, opportunities, and programs available to support and inform faculty and students engaged in undergraduate research.


4.3 Why Do You Learn to Research?

Jemma Llewellyn; Erin Kelly; Sara Humphreys; Tina Bebbington; Nancy Ami; and Natalie Boldt

As a student, you’re likely to hear the term “research” in the context of an assignment for a class. That is, you might be told that the assignment you need to complete will require “research.” This word might seem intimidating and mysterious. You might have seen announcements around campus for research presentations, conferences, symposiums or roundtables. This is where faculty (many of whom are also researchers) talk about their current research, which may include working with books and journal articles in the library, but also (depending on their field of expertise) might involve observing the life-cycle of fruit flies, interviewing hospital patients, running computer models to solve problems, or examining the night sky. In all of these cases, research can seem separate from your everyday life.

But that assumption falls apart when you recognize that research is the term we all use to describe a systematic process for learning more about a topic or, more colloquially, looking stuff up. Let’s start by talking about the research you might have performed to buy a phone. Did you simply walk up to a kiosk in a mall and buy the first phone you put your hands on? Probably not. Maybe you asked friends about their experiences with their phones, using their recommendations to eliminate some choices. Possibly, you went online to read reviews of the latest phone from a company several of your friends recommended. These reviews might help you narrow your list to a top three. Maybe you also did a bit of searching—and perhaps visited or called some shops—to see if your preferred phones were in stock or, better yet, on sale. And only then did you make a purchase. That’s all a form of research.

Sometimes, as when you are buying something relatively expensive (like a phone), research is a way of guiding a choice. Research can help you make choices about which political party to support or whether a proposed law aligns with your values. It might lead you to change your behaviour; for example, learning more about the health impacts of smoking could lead you to quit, while finding out that partner dancing improves long-term health outcomes could motivate you to learn how to tango. And sometimes you might want to research a topic simply because you find it fascinating. Perhaps you are interested in Ava DuVernay’s work: how many movies has she directed? And how did she get her start in film making (see Fig. 4.1)? And what does a film director do anyway?

Fig. 4.1: Top Google search results for “Ava DuVernay”: her Wikipedia entry, her IMDb profile, and her Twitter profile, alongside photos of DuVernay and an excerpt from her Wikipedia entry. (Google and the Google logo are registered trademarks of Google LLC, used with permission.)

Search engines, such as Google, can help you to answer a question when you are looking for everyday information. As an aside, Google lists ads first in a search and also collects your data—this is how they make their money. [1] Be very careful about what you click on when you are using Google and many other search engines (except if you search using the Firefox address bar in the Firefox browser)—a link might be an ad selling you something rather than the source you wanted. By the way, you can use a pseudonym when you sign up for Google or most online services listed as “free.” You do not owe any company your personal information for using their services. Also note that most university and college libraries have excellent search engines; please do use them rather than Google. Now, back to our discussion about everyday research and post-secondary research.

In Fig. 4.1 you can see two of the top choices for reading about Ava DuVernay’s work. Neither is scholarly, but both are sites with fairly good reputations (Wikipedia and the Internet Movie Database, respectively). These sites are perfectly fine when you are simply curious and want to look something up. They are even fine when you are just starting to think about a research topic. But maybe they aren’t so fine when you need peer-reviewed or even more reputable, reliable sources to support claims you are making in a research paper.

There is a difference between the everyday research you perform and the research that goes on in a university or college setting. Whether in assignments for classes or when scholars on campus perform lab experiments, well-designed human studies, or exploration of archival materials, post-secondary research has to do with standards of and systems for getting to reliable answers. This more academic type of research assumes that some sources are more reliable than others. There are more and less ethical ways of gathering, analyzing, and representing information (see section 4.2 Knowledges and Traditions). And there are approaches to research that are more or less likely to lead to an accurate answer. Furthermore, each discipline has its own conventions, standards, methods, and language, which help situate that research within scholarly conversations.

To explain by way of example, when you are looking to buy a new pair of sunglasses, it’s okay to do research by seeing what brand most of your favourite singers wear when they are snapped by the paparazzi. Worst case scenario, you buy a too-expensive pair of sunglasses that don’t fit you well. But a medical researcher who is trying to figure out whether most sunglasses currently on the market offer sufficient UV protection to help prevent cataracts must look at different evidence and analyze it, because public health is at stake.

In other words, it’s a misconception that academic research is only for graduate students and faculty members. The research skills you develop while you are in university courses will be transferable to your profession or any other schooling you might want to continue with. Actually, there aren’t many professional positions that don’t ask for some kind of research skills and activity. And the habits of critical thinking about sources and information you develop through research projects will serve you well whenever you need to teach yourself something, figure out a problem, or determine a course of action. By and large most North American universities and colleges are committed to supporting undergraduate research. Many have clear statements about the benefits of undergraduate research, such as these from the University of Montana and the University of Victoria, respectively:

Research allows you to pursue your interests, to learn something new, to hone your problem-solving skills and to challenge yourself in new ways. Working on a faculty-initiated research project gives you the opportunity to work closely with a mentor–a faculty member or other experienced researcher. [2]

In addition to the opportunity to create knowledge, research will develop your analytical skills and boost your success in course work and career achievement. Participating in research may inspire you to pursue a particular academic discipline, further your education with graduate studies or focus you on a fascinating career path. [3]

So now you know how important research really is to your everyday life, your academic life, and then, very likely, down the line in your chosen profession. Much of the research you will perform is online, but that wasn’t always the case. During the pandemic, all of us in academia have faced the need to perform even more research, teaching, and learning online. This way of working can seem rather discombobulating, but library research, by contrast, has been slowly moving online over the past twenty years. Perhaps we all know more about online learning than we think?

Research Then and Now

Not that long ago, there was no internet. There were only print-based indexes and journals, no research databases, no online journals, no ebooks, and no video chats with librarians. Students had to learn how to use these tools and resources to find an article using an index, how to find the article in order to read it, and how to read the article effectively. Students then learned how to analyze citations to find related information, look up definitions and facts in reference books, and order print materials from other libraries when they had no other options.

The card catalogue pictured in Fig. 4.2 was the key tool students and instructors alike used to look up the research materials they needed. Believe us, there were very long lines at photocopiers in libraries in those days.

Fig. 4.2: A close-up of a library card catalogue showing dozens of drawers, each containing hundreds of reference cards for library materials.

Moving all of these resources online was a lengthy process, and the resources were refined and updated along the way; the result was a major shift in how scholars and students do research. Universities adapted to and embraced these new tools and methods. Even now, not everything is online. But card catalogues have gone by the wayside, and online library search engines are now the means by which you will find what you need at your university or college library.

In the pre-pandemic world, you could stroll into the library, chat with a librarian about your project, and get the help you needed. All those services are now fully available online (as they have been for the last decade or so). While you can't fully access physical libraries, with their variety of services and print sources, during the pandemic, you certainly can chat with librarians online or by phone, find resources in a number of library databases, and download the materials you need without hitting paywalls (barriers that require payment for access).

However, the skills you need to do research remain the same as they were in that print-only era:

  • curiosity, planning, and critical thinking;
  • a willingness to engage in scholarly conversation;
  • reading skills, note taking, and information literacy.

Your library support team is still here to help you at all stages of research and writing, and can advise on how to navigate research when you’re relying on online sources. Research tools and methodologies are always evolving and adapting as new technologies and needs arise. Scholars adapt and evolve with them and so will you!

  • Shoshana Zuboff, "Shoshana Zuboff: 'Surveillance Capitalism is an Assault on Human Autonomy'," interview by Joanna Kavenna, The Guardian, October 4, 2019, https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy .
  • "Why Do Research?" Undergraduate Research Experiential Learning and Career Success, University of Montana, accessed September 28, 2020, http://www.umt.edu/ugresearch/research/why-research.php .
  • "Student Research Opportunities," Research, University of Victoria, accessed October 9, 2020, https://www.uvic.ca/research/learnabout/home/studentopps/index.php .

Why Write? A Guide for Students in Canada Copyright © 2020 by Jemma Llewellyn; Erin Kelly; Sara Humphreys; Tina Bebbington; Nancy Ami; and Natalie Boldt is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.


How a Student Uses Knowledge as a Resource to Solve Scientific Problems: A Case Study on Science Learning as Rediscovery

  • Published: 21 May 2022
  • Volume 33, pages 213–247 (2024)

Phil Seok Oh (ORCID: orcid.org/0000-0001-9385-9428)

Inspired by a theoretical view of knowledge as a resource, this study explored in detail how a student used knowledge as a resource when she engaged in problem-solving about rocks and what she learned as a result of the practice of solving scientific problems. The context of the study was an inquiry project conducted in an earth science course for preservice elementary teachers offered in a university in Korea. In the project, students were given three different rocks and asked to figure out how each rock had been formed. Data included a student’s written report about her inquiry and interviews with her, which were analyzed qualitatively. It was revealed that to solve the rock problems, the student used different types of knowledge as resources—everyday knowledge, personally internalized disciplinary knowledge, and externally retrieved scientific knowledge. The ways the student utilized the knowledge involved three distinctive processes—activation, adaptation, and evaluation of the resources, each of which had subordinate processes of using knowledge—elicitation, search, refinement, combination, selection, and deactivation of the resources. As a result of the scientific problem-solving, the student learned the usefulness and value of knowledge, enhanced previous knowledge, and realized both the limitations of her knowledge and the need for learning new knowledge. Based on these findings, the study proposed a new framework that conceptualized science learning as rediscovery (SLR) and discussed science pedagogies suitable to the SLR framework.




Author information

Authors and Affiliations

Department of Science Education, Gyeongin National University of Education, Sammak-ro 155, Manan-gu, Anyang, Gyeonggi-do, 13910, Republic of Korea

Phil Seok Oh

Corresponding author: Phil Seok Oh

Ethics declarations

Conflict of interest

The author declares no conflict of interest.


About this article

Oh, P.S. How a Student Uses Knowledge as a Resource to Solve Scientific Problems: A Case Study on Science Learning as Rediscovery. Sci & Educ 33 , 213–247 (2024). https://doi.org/10.1007/s11191-022-00350-2


Accepted: 27 April 2022

Published: 21 May 2022

Issue Date: February 2024


Organizing Your Social Sciences Research Assignments

Writing a Research Proposal

The goal of a research proposal is twofold: to present and justify the need to study a research problem, and to present the practical ways in which the proposed study should be conducted. The design elements and procedures for conducting research are governed by standards of the predominant discipline in which the problem resides; therefore, the guidelines for research proposals are more exacting and less formal than those for a general project proposal. Research proposals contain extensive literature reviews and must provide persuasive evidence that a need exists for the proposed study. In addition to providing a rationale, a proposal describes a detailed methodology for conducting the research consistent with the requirements of the professional or academic field, and a statement on anticipated outcomes and benefits derived from the study's completion.

Krathwohl, David R. How to Prepare a Dissertation Proposal: Suggestions for Students in Education and the Social and Behavioral Sciences . Syracuse, NY: Syracuse University Press, 2005.

How to Approach Writing a Research Proposal

Your professor may assign the task of writing a research proposal for the following reasons:

  • Develop your skills in thinking about and designing a comprehensive research study;
  • Learn how to conduct a comprehensive review of the literature to determine that the research problem has not been adequately addressed or has been answered ineffectively and, in so doing, become better at locating pertinent scholarship related to your topic;
  • Improve your general research and writing skills;
  • Practice identifying the logical steps that must be taken to accomplish one's research goals;
  • Critically review, examine, and consider the use of different methods for gathering and analyzing data related to the research problem; and,
  • Nurture a sense of inquisitiveness within yourself and help you see yourself as an active participant in the process of conducting scholarly research.

A proposal should contain all the key elements involved in designing a completed research study, with sufficient information that allows readers to assess the validity and usefulness of your proposed study. The only elements missing from a research proposal are the findings of the study and your analysis of those findings. Finally, an effective proposal is judged on the quality of your writing and, therefore, it is important that your proposal is coherent, clear, and compelling.

Regardless of the research problem you are investigating and the methodology you choose, all research proposals must address the following questions:

  • What do you plan to accomplish? Be clear and succinct in defining the research problem and what it is you are proposing to investigate.
  • Why do you want to do the research? In addition to detailing your research design, you also must conduct a thorough review of the literature and provide convincing evidence that it is a topic worthy of in-depth study. A successful research proposal must answer the "So What?" question.
  • How are you going to conduct the research? Be sure that what you propose is doable. If you're having difficulty formulating a research problem to propose investigating, review strategies for developing a problem to study.

Common Mistakes to Avoid

  • Failure to be concise . A research proposal must be focused and not be "all over the map" or diverge into unrelated tangents without a clear sense of purpose.
  • Failure to cite landmark works in your literature review . Proposals should be grounded in foundational research that establishes the development and scope of the topic and its relevance.
  • Failure to delimit the contextual scope of your research [e.g., time, place, people, etc.]. As with any research paper, your proposed study must inform the reader how and in what ways the study will frame the problem.
  • Failure to develop a coherent and persuasive argument for the proposed research . This is critical. In many workplace settings, the research proposal is a formal document intended to argue for why a study should be funded.
  • Sloppy or imprecise writing, or poor grammar . Although a research proposal does not represent a completed research study, there is still an expectation that it is well-written and follows the style and rules of good academic writing.
  • Too much detail on minor issues, but not enough detail on major issues . Your proposal should focus on only a few key research questions in order to support the argument that the research needs to be conducted. Minor issues, even if valid, can be mentioned but they should not dominate the overall narrative.

Procter, Margaret. The Academic Proposal.  The Lab Report. University College Writing Centre. University of Toronto; Sanford, Keith. Information for Students: Writing a Research Proposal. Baylor University; Wong, Paul T. P. How to Write a Research Proposal. International Network on Personal Meaning. Trinity Western University; Writing Academic Proposals: Conferences, Articles, and Books. The Writing Lab and The OWL. Purdue University; Writing a Research Proposal. University Library. University of Illinois at Urbana-Champaign.

Structure and Writing Style

Beginning the Proposal Process

As with most college-level academic papers, research proposals are generally organized the same way throughout most social science disciplines. The text of a proposal generally varies in length from ten to thirty-five pages, followed by the list of references. However, before you begin, read the assignment carefully and, if anything seems unclear, ask your professor whether there are any specific requirements for organizing and writing the proposal.

A good place to begin is to ask yourself a series of questions:

  • What do I want to study?
  • Why is the topic important?
  • How is it significant within the subject areas covered in my class?
  • What problems will it help solve?
  • How does it build upon [and hopefully go beyond] research already conducted on the topic?
  • What exactly should I plan to do, and can I get it done in the time available?

In general, a compelling research proposal should document your knowledge of the topic and demonstrate your enthusiasm for conducting the study. Approach it with the intention of leaving your readers feeling like, "Wow, that's an exciting idea and I can’t wait to see how it turns out!"

Most proposals should include the following sections:

I.  Introduction

In the real world of higher education, a research proposal is most often written by scholars seeking grant funding for a research project or it's the first step in getting approval to write a doctoral dissertation. Even if this is just a course assignment, treat your introduction as the initial pitch of an idea based on a thorough examination of the significance of a research problem. After reading the introduction, your readers should not only have an understanding of what you want to do, but they should also be able to gain a sense of your passion for the topic and to be excited about the study's possible outcomes. Note that most proposals do not include an abstract [summary] before the introduction.

Think about your introduction as a narrative written in two to four paragraphs that succinctly answers the following four questions :

  • What is the central research problem?
  • What is the topic of study related to that research problem?
  • What methods should be used to analyze the research problem?
  • Answer the "So What?" question by explaining why this research is important, what its significance is, and why someone reading the proposal should care about the outcomes of the proposed study.

II.  Background and Significance

This is where you explain the scope and context of your proposal and describe in detail why it's important. It can be melded into your introduction or you can create a separate section to help with the organization and narrative flow of your proposal. Approach writing this section with the thought that you can’t assume your readers will know as much about the research problem as you do. Note that this section is not an essay going over everything you have learned about the topic; instead, you must choose what is most relevant in explaining the aims of your research.

To that end, while there are no prescribed rules for establishing the significance of your proposed study, you should attempt to address some or all of the following:

  • State the research problem and give a more detailed explanation about the purpose of the study than what you stated in the introduction. This is particularly important if the problem is complex or multifaceted .
  • Present the rationale of your proposed study and clearly indicate why it is worth doing; be sure to answer the "So What?" question [i.e., why should anyone care?].
  • Describe the major issues or problems examined by your research. This can be in the form of questions to be addressed. Be sure to note how your proposed study builds on previous assumptions about the research problem.
  • Explain the methods you plan to use for conducting your research. Clearly identify the key sources you intend to use and explain how they will contribute to your analysis of the topic.
  • Describe the boundaries of your proposed research in order to provide a clear focus. Where appropriate, state not only what you plan to study, but what aspects of the research problem will be excluded from the study.
  • If necessary, provide definitions of key concepts, theories, or terms.

III.  Literature Review

Connected to the background and significance of your study is a section of your proposal devoted to a more deliberate review and synthesis of prior studies related to the research problem under investigation . The purpose here is to place your project within the larger whole of what is currently being explored, while at the same time, demonstrating to your readers that your work is original and innovative. Think about what questions other researchers have asked, what methodological approaches they have used, and what is your understanding of their findings and, when stated, their recommendations. Also pay attention to any suggestions for further research.

Since a literature review is information dense, it is crucial that this section is intelligently structured to enable a reader to grasp the key arguments underpinning your proposed study in relation to the arguments put forth by other researchers. A good strategy is to break the literature into "conceptual categories" [themes] rather than systematically or chronologically describing groups of materials one at a time. Note that conceptual categories generally reveal themselves after you have read most of the pertinent literature on your topic so adding new categories is an on-going process of discovery as you review more studies. How do you know you've covered the key conceptual categories underlying the research literature? Generally, you can have confidence that all of the significant conceptual categories have been identified if you start to see repetition in the conclusions or recommendations that are being made.

NOTE: Do not shy away from challenging the conclusions made in prior research as a basis for supporting the need for your proposal. Assess what you believe is missing and state how previous research has failed to adequately examine the issue that your study addresses. Highlighting problematic conclusions strengthens your proposal.

To help frame your proposal's review of prior research, consider the "five C’s" of writing a literature review:

  • Cite , so as to keep the primary focus on the literature pertinent to your research problem.
  • Compare the various arguments, theories, methodologies, and findings expressed in the literature: what do the authors agree on? Who applies similar approaches to analyzing the research problem?
  • Contrast the various arguments, themes, methodologies, approaches, and controversies expressed in the literature: what are the major areas of disagreement, controversy, or debate among scholars?
  • Critique the literature: Which arguments are more persuasive, and why? Which approaches, findings, and methodologies seem most reliable, valid, or appropriate, and why? Pay attention to the verbs you use to describe what an author says/does [e.g., asserts, demonstrates, argues, etc.].
  • Connect the literature to your own area of research and investigation: how does your own work draw upon, depart from, synthesize, or add a new perspective to what has been said in the literature?

IV.  Research Design and Methods

This section must be well-written and logically organized because you are not actually doing the research; nevertheless, your reader must have confidence that your plan is worth pursuing . The reader will never have a study outcome from which to evaluate whether your methodological choices were the correct ones. Thus, the objective here is to convince the reader that your overall research design and proposed methods of analysis will correctly address the problem, and that the methods will provide the means to effectively interpret the potential results. Your design and methods should be unmistakably tied to the specific aims of your study.

Describe the overall research design by building upon and drawing examples from your review of the literature. Consider not only methods that other researchers have used, but methods of data gathering that have not been used but perhaps could be. Be specific about the methodological approaches you plan to undertake to obtain information, the techniques you would use to analyze the data, and the tests of external validity to which you commit yourself [i.e., the trustworthiness by which you can generalize from your study to other people, places, events, and/or periods of time].

When describing the methods you will use, be sure to cover the following:

  • Specify the research process you will undertake and the way you will interpret the results obtained in relation to the research problem. Don't just describe what you intend to achieve from applying the methods you choose, but state how you will spend your time while applying these methods [e.g., coding text from interviews to find statements about the need to change school curriculum; running a regression to determine if there is a relationship between campaign advertising on social media sites and election outcomes in Europe ].
  • Keep in mind that the methodology is not just a list of tasks; it is a deliberate argument as to why techniques for gathering information add up to the best way to investigate the research problem. This is an important point because the mere listing of tasks to be performed does not demonstrate that, collectively, they effectively address the research problem. Be sure you clearly explain this.
  • Anticipate and acknowledge any potential barriers and pitfalls in carrying out your research design and explain how you plan to address them. No method applied to research in the social and behavioral sciences is perfect, so you need to describe where you believe challenges may exist in obtaining data or accessing information. It's always better to acknowledge this than to have it brought up by your professor!

V.  Preliminary Suppositions and Implications

The fact that you don't have to actually conduct the study and analyze the results doesn't mean you can skip discussing the analytical process and potential implications . The purpose of this section is to argue how and in what ways you believe your research will refine, revise, or extend existing knowledge in the subject area under investigation. Depending on the aims and objectives of your study, describe how the anticipated results will impact future scholarly research, theory, practice, forms of interventions, or policy making. Note that such discussions may have either substantive [a potential new policy], theoretical [a potential new understanding], or methodological [a potential new way of analyzing] significance. When thinking about the potential implications of your study, ask the following questions:

  • What might the results mean in regards to challenging the theoretical framework and underlying assumptions that support the study?
  • What suggestions for subsequent research could arise from the potential outcomes of the study?
  • What will the results mean to practitioners in the natural settings of their workplace, organization, or community?
  • Will the results influence programs, methods, and/or forms of intervention?
  • How might the results contribute to the solution of social, economic, or other types of problems?
  • Will the results influence policy decisions?
  • In what way do individuals or groups benefit should your study be pursued?
  • What will be improved or changed as a result of the proposed research?
  • How will the results of the study be implemented and what innovations or transformative insights could emerge from the process of implementation?

NOTE:   This section should not delve into idle speculation, opinion, or be formulated on the basis of unclear evidence . The purpose is to reflect upon gaps or understudied areas of the current literature and describe how your proposed research contributes to a new understanding of the research problem should the study be implemented as designed.

ANOTHER NOTE : This section is also where you describe any potential limitations to your proposed study. While it is impossible to highlight all potential limitations because the study has yet to be conducted, you still must tell the reader where and in what form impediments may arise and how you plan to address them.

VI.  Conclusion

The conclusion reiterates the importance or significance of your proposal and provides a brief summary of the entire study . This section should be only one or two paragraphs long, emphasizing why the research problem is worth investigating, why your research study is unique, and how it should advance existing knowledge.

Someone reading this section should come away with an understanding of:

  • Why the study should be done;
  • The specific purpose of the study and the research questions it attempts to answer;
  • Why the chosen research design and methods were selected over other options;
  • The potential implications emerging from your proposed study of the research problem; and
  • A sense of how your study fits within the broader scholarship about the research problem.

VII.  Citations

As with any scholarly research paper, you must cite the sources you used . In a standard research proposal, this section can take two forms, so consult with your professor about which one is preferred.

  • References -- a list of only the sources you actually used in creating your proposal.
  • Bibliography -- a list of everything you used in creating your proposal, along with additional citations to any key sources relevant to understanding the research problem.

In either case, this section should testify to the fact that you did enough preparatory work to ensure the project will complement and not just duplicate the efforts of other researchers. It demonstrates to the reader that you have a thorough understanding of prior research on the topic.

Most proposal formats have you start a new page and use the heading "References" or "Bibliography" centered at the top of the page. Cited works should always use a standard format that follows the writing style advised by the discipline of your course [e.g., education=APA; history=Chicago] or that is preferred by your professor. This section normally does not count towards the total page length of your research proposal.

Develop a Research Proposal: Writing the Proposal. Office of Library Information Services. Baltimore County Public Schools; Heath, M. Teresa Pereira and Caroline Tynan. "Crafting a Research Proposal." The Marketing Review 10 (Summer 2010): 147-168; Jones, Mark. "Writing a Research Proposal." In MasterClass in Geography Education: Transforming Teaching and Learning. Graham Butt, editor. (New York: Bloomsbury Academic, 2015), pp. 113-127; Juni, Muhamad Hanafiah. "Writing a Research Proposal." International Journal of Public Health and Clinical Sciences 1 (September/October 2014): 229-240; Krathwohl, David R. How to Prepare a Dissertation Proposal: Suggestions for Students in Education and the Social and Behavioral Sciences. Syracuse, NY: Syracuse University Press, 2005; Procter, Margaret. The Academic Proposal. The Lab Report. University College Writing Centre. University of Toronto; Punch, Keith and Wayne McGowan. "Developing and Writing a Research Proposal." In From Postgraduate to Social Scientist: A Guide to Key Skills. Nigel Gilbert, ed. (Thousand Oaks, CA: Sage, 2006), 59-81; Wong, Paul T. P. How to Write a Research Proposal. International Network on Personal Meaning. Trinity Western University; Writing Academic Proposals: Conferences, Articles, and Books. The Writing Lab and The OWL. Purdue University; Writing a Research Proposal. University Library. University of Illinois at Urbana-Champaign.



Knowledge, attitudes, and barriers toward research: The perspectives of undergraduate medical and dental students

Htoo Htoo Kyaw Soe

Department of Community Medicine, Melaka-Manipal Medical College, Melaka, Malaysia

Nan Nitra Than

Mila Nu Nu Htay, Khine Lynn Phyu

Department of Pediatrics, Melaka-Manipal Medical College, Melaka, Malaysia

Adinegara Lutfi Abas

CONTEXT:

Scientific research not only promotes individual health and combats disease; it can also strengthen the effectiveness of health systems. Hence, an understanding of scientific methods is a crucial component of the medical profession.

AIMS:

This study was conducted to assess the knowledge, attitudes, and barriers toward research among undergraduate medical and dental students.

SETTINGS AND DESIGN:

This cross-sectional study was conducted among 295 undergraduate Bachelor of Medicine and Bachelor of Surgery (MBBS) and Bachelor of Dental Surgery (BDS) students from a private medical college in Malaysia.

MATERIALS AND METHODS:

We purposively selected 360 students attending the 3rd, 4th, and 5th years of the MBBS and BDS courses in September 2015. A total of 295 students who were willing to provide written informed consent were included in this study. We collected data using a validated, self-administered, structured questionnaire that included 20 questions about knowledge of scientific research, 21 attitude items regarding scientific research, a list of 10 barriers to conducting medical research, and 5 questions on confidence in conducting medical research.

STATISTICAL ANALYSIS USED:

Data were analyzed using descriptive statistics, independent t-test, ANOVA, and multiple linear regression.

RESULTS:

Among the students, 56.9% had moderate knowledge, while the majority (83.3%) had a moderate attitude toward scientific research. The most commonly cited barriers were lack of time (79.9%), lack of knowledge and skills (72.1%), lack of funding (72.0%) and facilities (63.6%), and lack of rewards (55.8%). Knowledge of research was significantly associated with age and academic year: the older age group and the 4th- and 5th-year students had higher knowledge scores. Students with higher attitude scores also had better perceived-barrier scores toward research, with a regression coefficient of 0.095 (95% confidence interval 0.032–0.159).

CONCLUSIONS:

Even though the students had positive attitudes toward scientific research, a supportive and positive environment is needed to improve research knowledge and skills and to overcome the barriers to conducting scientific research.

Introduction

Research is important to scientific progress,[ 1 ] and it is also crucial to understanding the problems that affect the health of individuals, communities, and health systems.[ 2 ] Research involves systematic investigation or experimentation to discover new knowledge[ 2 ] and to revise current knowledge.[ 3 ] In 2007, developed countries had 3655.8 researchers per million inhabitants, compared with only 580.3 in the developing world.[ 4 ] The total number of scientific publications in developing countries was 315,742, roughly half the output of developed countries. In Malaysia, the ratio of gross domestic expenditure on research and development to GDP increased from 0.49% in 2000 to 0.64% in 2006.[ 4 ] As a result, publication output rose rapidly in the past decade, from 805 scientific publications in the year 2000 to 2712 in the year 2008, of which 300 were in medical research and 535 in clinical medicine.[ 4 ]

Research is vital to developmental activities and is carried out in all academic and developmental institutions.[ 5 ] In medical education, health research training is an essential component in helping to develop physicians' research skills,[ 6 ] including literature search, critical appraisal, independent learning, and writing research papers.[ 7 ] Training in research skills and early research experience in the medical profession are associated with continued professional academic work and may also help residents' career decisions.[ 7 ] While many undergraduate programs include a research methodology course,[ 8 ] medical training in many developing countries does not emphasize its importance in medical practice, and the course is not included in the medical curriculum.[ 7 ] A compulsory research course, along with a mandatory research project, has a positive impact on students' knowledge of and attitudes toward research.[ 9 , 10 ] Moreover, it provides skills necessary for future research in their careers[ 3 ] and strengthens lifelong learning.[ 11 ] Finally, research by students can significantly affect the published output of the institution and, by extension, of the country.[ 7 , 12 ]

Several studies have been carried out in many countries to evaluate knowledge and attitudes toward scientific research among health professionals and medical students.[ 9 , 10 , 13 , 14 , 15 , 16 ] Evidence also shows that the existence of barriers creates a gap between the theory of scientific research and the practice of conducting it.[ 17 ] Lack of skills training, infrastructure and facilities, mentorship, time, and motivation have been cited as the major hurdles.[ 6 , 9 , 13 , 14 , 18 , 19 ] In Malaysia, knowledge, attitudes, and barriers toward the conduct of medical research and evidence-based medicine have been investigated among health professionals such as doctors, specialists, pharmacists, nurses, and physiotherapists;[ 20 , 21 , 22 , 23 , 24 ] however, there is limited information on this topic among undergraduate medical and dental students.

Research is mandatory and forms part of the core curriculum in the Bachelor of Medicine and Bachelor of Surgery (MBBS) and Bachelor of Dental Surgery (BDS) courses in our private medical college. The aim of the course is to introduce the principles of scientific research and biostatistics through various problem-solving exercises and to provide skills that can contribute effectively to institutional research projects. In the final year, students are required to perform research projects mentored by faculty members. While conducting these projects, students learn to identify the research question, generate research hypotheses, critically appraise literature, design the study, collect and analyze the data, and write a detailed project report. Although it is not compulsory, students are also encouraged to publish their research in medical journals and to present at conferences. This study aims to assess knowledge and attitudes toward scientific research and to identify the barriers to participation in scientific research among undergraduate medical and dental students, with the expectation of promoting research skills and increasing the published output of the college.

Materials and Methods

We conducted a cross-sectional study among undergraduate medical and dental students in September 2015 in the private medical college in Malaysia. Approximately 800 students were attending the MBBS and BDS programs. The sample size was calculated using the formula for a single population proportion, with a 5% margin of error, a 95% confidence level,[ 25 ] and an expected proportion of 80.2% with moderate knowledge.[ 6 ] The minimum sample size required was 245; however, we purposively selected 360 students attending the 3rd, 4th, and 5th years of the MBBS and BDS courses. Students who were willing to provide written informed consent were included in this study.
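For concreteness, the sample size calculation described above can be reproduced with the standard single-population-proportion formula, n = z²p(1−p)/d². This is a minimal sketch, not the authors' code:

```python
# Minimal sketch (not the authors' code) of the single-population-
# proportion sample size formula: n = z^2 * p * (1 - p) / d^2.
import math

z = 1.96   # critical value for a 95% confidence level
p = 0.802  # expected proportion (moderate knowledge, from a prior study)
d = 0.05   # margin of error

n = z ** 2 * p * (1 - p) / d ** 2
print(math.ceil(n))  # -> 245, matching the minimum sample size reported
```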

We collected data over a span of 4 months using a self-administered, structured questionnaire adapted from previous studies.[ 9 , 14 , 26 ] The questionnaire covered sociodemographics, previous experience of scientific research, knowledge of and attitudes toward research, and perceived barriers to conducting research. Twenty questions, including single-best-answer multiple-choice questions and true/false questions, were used to assess knowledge. A five-point Likert scale (strongly agree, agree, neutral, disagree, and strongly disagree) was used to assess attitudes toward research. The attitudes part consisted of 21 items: 11 positive and 10 negative statements. Ten perceived-barrier items with a three-point Likert scale (agree, disagree, and undecided) were also included. After modifying the questionnaire, we carried out a pilot study with 30 students to check the validity, reliability, clarity, and comprehensibility of the questionnaire. Content validity was checked with experts, and face validity was checked for clarity and understanding. The Cronbach's alpha coefficient was 0.648 for the knowledge questions, 0.824 for the attitude questions, and 0.683 for the barrier questions.
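Reliability coefficients like those above follow the standard Cronbach's alpha formula, α = k/(k−1)·(1 − Σσ²ᵢ/σ²ₜ). The sketch below is illustrative only; the matrix of item scores is fabricated, not the study's pilot data:

```python
# Illustrative Cronbach's alpha computation (data are made up, not the
# study's pilot responses).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Fabricated pilot matrix: 30 respondents x 21 Likert items scored 1-5.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(30, 21))
print(round(cronbach_alpha(demo), 3))  # near 0 for random, uncorrelated
                                       # items; real scales score higher
```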

After checking and coding the questionnaires, we used Microsoft Excel for data entry and SPSS version 12 (SPSS Inc., Chicago, IL) for data analysis. For knowledge, a correct answer was scored one and a wrong or not-sure answer was scored zero (a higher score indicates better knowledge). For attitudes, positive items were scored from five (strongly agree) down to one (strongly disagree), while negative items were reverse-scored from one (strongly agree) up to five (strongly disagree); a higher score indicates a better attitude. For perceived barriers, disagree was scored three, undecided two, and agree one (a higher score indicates fewer perceived barriers). Each total score was computed as the sum of its item scores. We categorized knowledge and attitudes into three levels: good (>80% of the maximum possible total score), moderate (60%–80% of the maximum possible total score), and poor (<60% of the maximum possible total score).
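As a concrete illustration of this scoring scheme, the sketch below implements the attitude scoring with reverse-scored negative items and the three-level banding. The item order for the negative statements is an assumption made for the example, not the study's actual questionnaire layout:

```python
# Sketch of the scoring and banding scheme described above (the item
# order for negative statements is assumed for illustration).
LIKERT = {"strongly agree": 5, "agree": 4, "neutral": 3,
          "disagree": 2, "strongly disagree": 1}

def attitude_score(responses, negative_items):
    """Sum 5-point Likert responses, reverse-scoring negative items."""
    total = 0
    for i, answer in enumerate(responses):
        score = LIKERT[answer]
        if i in negative_items:
            score = 6 - score  # reverse: strongly agree -> 1, etc.
        total += score
    return total

def band(total, max_possible):
    """Categorize a total score as good / moderate / poor."""
    pct = 100 * total / max_possible
    return "good" if pct > 80 else "moderate" if pct >= 60 else "poor"

# 21 items: assume the first 11 are positive and the last 10 negative.
responses = ["agree"] * 21
total = attitude_score(responses, negative_items=set(range(11, 21)))
print(total, band(total, max_possible=21 * 5))  # -> 64 moderate
```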

For quantitative variables, means and standard deviations (SD) were calculated; for qualitative variables, frequencies and percentages were reported. We used independent-samples t-tests and one-way ANOVA to compare knowledge, attitude, and perceived-barrier scores across age groups, genders, ethnicities, and academic years. We also performed multiple linear regression to examine the relationship between knowledge, attitudes, and perceived barriers after adjusting for other covariates. All statistical tests were two-sided, and the level of significance was set at 0.05.
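A rough sketch of this analysis pipeline is shown below. The original analysis used SPSS; this Python equivalent uses standard statistics libraries, and the file name and column names are hypothetical stand-ins, not the study's actual variables:

```python
# Sketch of the analysis pipeline (the study used SPSS; this is a
# Python equivalent). File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("scored_responses.csv")  # hypothetical scored data

# Independent-samples t-test: e.g., knowledge score by gender.
male = df.loc[df["gender"] == "male", "knowledge"]
female = df.loc[df["gender"] == "female", "knowledge"]
print(stats.ttest_ind(male, female))

# One-way ANOVA: knowledge score across academic years.
print(stats.f_oneway(*(g["knowledge"] for _, g in df.groupby("year"))))

# Multiple linear regression: perceived barriers on knowledge and
# attitude, adjusting for the covariates named in the text.
model = smf.ols("barriers ~ knowledge + attitude + age + C(gender)"
                " + C(ethnicity) + C(year)", data=df).fit()
print(model.summary())
```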

Before data collection, the purpose of the study was explained to the respondents. Participation was strictly voluntary and autonomy of the respondents was respected. Written informed consent was taken from each participant. Confidentiality was maintained and anonymity of respondents was ensured. In addition, data were kept secured and available only to the statistician. Approval for this study was taken from Research Committee of our college.

Results

A total of 295 students participated in this study, for a response rate of 81.94%. Among them, 62.4% were from the MBBS program and 37.6% were BDS students, while 59.3% of the participants were from the 3rd year, 33.9% from the 4th year, and 6.8% from the 5th year. The mean age of the participants was 22.99 years (SD 1.05), and the majority (97.3%) were Malaysian nationals. Female students made up 59.5% of the sample [ Table 1 ].

Table 1: Demographic characteristics among medical and dental students (n = 295)

In this study, only 4% of the students had good knowledge, while 56.9% had moderate knowledge. The majority (83.3%) had a moderate attitude, and 11.3% of the students had a good attitude. The mean perceived-barriers score was 17.84 out of 30 (a higher score indicates fewer perceived barriers). Overall, 13.4% of the students had given presentations at conferences and 5.8% had published research articles [ Table 2 ]. The percentage of answers on the attitude questions among undergraduates is shown in Table 3 .

Table 2: Knowledge, attitudes, perceived barriers toward research, and previous research experience among medical and dental students

Table 3: Percentage of answers on attitude questions among medical and dental students

In regard to barriers, the majority of students cited lack of time (79.9%), lack of knowledge/skills (72.1%), and lack of funding (72%). Other barriers were lack of facilities (63.6%), lack of rewards (55.8%), lack of access to relevant medical and other electronic databases (44.0%), lack of interest (37.5%), faculty staff inefficient in delivering the necessary knowledge and skills (26.6%), lack of proper mentoring (20.5%), and lack of opportunity to conduct research (20.2%) [ Table 4 ].

Table 4: Perceived barriers to participation in research among medical and dental students

Most of the students had limited or only moderate confidence in creating a clinical question (37.6% limited, 47.4% somewhat), searching the literature (34.8% limited, 45.5% somewhat), critical appraisal (38.1% limited, 44.3% somewhat), accessing clinical expertise from an instructor (41% limited, 40.7% somewhat), and using evidence-based processes (38.3% limited, 40.3% somewhat) [ Table 5 ].

Table 5: Confidence in doing research activities among undergraduate students

Table 6 shows a significant association between age, academic year, and knowledge. Students aged >23 years had a higher mean knowledge score than those aged 22–23 years and <20 years. As for academic year, 4th- and 5th-year students had higher knowledge scores than 3rd-year students. There was also a significant relationship between ethnicity and attitudes toward research, with Indian students scoring higher on attitude than Chinese and Malay students. Moreover, age was significantly associated with perceived barriers, and students aged >23 years had the highest mean barriers score of all age groups. There was no significant relationship between gender or ethnicity and knowledge; no significant association between age, gender, or academic year and attitudes; and no significant relationship between gender, ethnicity, or academic year and perceived barriers toward research [ Table 6 ].

Table 6: The relationship between demographic characteristics and knowledge, attitudes, and perceived barriers toward research

We performed multiple linear regression to determine the relationship between knowledge, attitudes, and perceived barriers toward research after adjusting for the other covariates (age, gender, ethnicity, and academic year). We assessed model fit and found linearity, independence of residuals, homoscedasticity, and no evidence of multicollinearity, and the assumption of normality was met. The multiple linear regression model was statistically significant, with F(10, 240) = 3.337, P < 0.001, and adjusted R² = 0.078. There was no significant relationship between knowledge and barriers toward research. However, there was a significant positive relationship between attitudes and barriers, with a regression coefficient of 0.095 (95% confidence interval 0.032–0.159), P = 0.003 [ Table 7 ].
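As a quick back-of-the-envelope check (ours, not the paper's), the reported coefficient, confidence interval, residual degrees of freedom, and P-value are mutually consistent:

```python
# Back-of-the-envelope check that the reported estimate, 95% CI, and
# P-value agree (residual df of 240 taken from the reported F(10, 240)).
from scipy import stats

b, lo, hi, df_resid = 0.095, 0.032, 0.159, 240
se = (hi - lo) / (2 * stats.t.ppf(0.975, df_resid))  # ~0.0322
t = b / se                                           # ~2.95
p = 2 * stats.t.sf(abs(t), df_resid)                 # ~0.003, as reported
print(round(se, 4), round(t, 2), round(p, 4))
```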

Table 7: Multiple linear regression analysis of the relationship between knowledge, attitudes, and perceived barriers toward research

Discussion

We conducted this cross-sectional study to assess the knowledge, attitudes, and barriers toward the conduct of scientific research among undergraduate medical and dental students in our private medical college. An understanding of scientific methods is a crucial component of the medical profession. Although not every health professional is inspired to perform research to acquire new knowledge, he or she should know the principles of scientific research.[ 3 , 14 ] In this study, we found that 56.9% of the students had a moderate and 39.1% a poor level of knowledge, with a mean score of 12.14 out of a maximum of 20. Previous studies among undergraduate students showed a poor-to-moderate level of knowledge toward health research.[ 9 , 10 , 14 , 19 ] Similarly, a study among postgraduate trainees revealed a poor level of knowledge toward research, as 80.2% of them scored in the first two quartiles of the knowledge score.[ 6 ]

Attitude toward health research is one of the important predictors of evidence-based practice and health care research utilization.[ 27 , 28 , 29 , 30 ] A systematic review of attitudes to science in medicine revealed that 49.5% of students had positive attitudes,[ 31 ] and observational studies among health professionals and medical students have also demonstrated moderate and positive attitudes toward research.[ 6 , 10 , 14 , 19 , 27 ] In this study, the response rate was 81.94%, which itself reflects students' positive attitudes toward scientific research: 83.3% of the students had a moderate attitude and 11.3% a good attitude. The students also demonstrated positive attitudes toward science and scientific methodology in medicine. As in other studies,[ 6 , 9 , 10 ] many of our participants showed somewhat-to-extensive confidence in conducting research activities such as creating clinical questions and searching and appraising literature, but many expressed limited confidence in accessing clinical expertise from the instructor and in utilizing evidence-based processes. Health professionals who perceive themselves as able to perform research activities are more likely to become involved in medical research.[ 27 ] In our college, the research methodology course is compulsory in the medical curriculum and is given in the 3rd year of the MBBS and the 4th year of the BDS program. A previous study by Bonner and Sando also showed that health professionals who had not taken research training or a research course felt they lacked the skills to perform research.[ 13 ]

The relationship between individual characteristics such as age, gender, and year of education and knowledge of and attitudes toward health research has been studied previously.[9,10,13,14,15] Knowledge was negatively correlated with students' age in one study,[9] and the number of years spent in medical college was significantly associated with knowledge of research after adjusting for age.[10] In our study, the mean scores of both knowledge and attitude were highest in the oldest age group, and age was significantly related to knowledge but not to attitudes. Academic year was likewise significantly associated with knowledge but not with attitudes. Apart from the compulsory research methodology course, final-year students are also required to carry out research projects mentored by faculty members. While conducting these projects, students learn to identify a research question, generate research hypotheses, critically appraise the literature, design the study, collect and analyze the data, and write a detailed project report. Similar to the study conducted among licensed nurses by Bonner and Sando,[13] we found that senior-year students had a better understanding, higher mean knowledge scores, and more positive attitudes. Intensive training in research principles and mandatory participation in research activities can significantly improve content knowledge and have a positive impact on attitudes toward future research.[9,10,15] Although previous findings on the relationship between gender and knowledge have been inconsistent,[9,10,16] in this study female students had higher knowledge scores and male students had higher attitude scores, but neither difference was significant. There was no significant association between ethnicity and knowledge, but ethnicity was significantly associated with attitudes toward research. The mean scores on both the knowledge and attitude scales were highest among Indian students, followed by others, Chinese, and Malay students. These findings differ from a previous Malaysian study, which showed that Malays had a higher level of interest in science-, technology-, and innovation-related issues and more positive attitudes toward scientific research than Chinese and Indians.[32]

Although emphasis is placed on promoting scientific research, the presence of barriers creates a gap between theory and practice.[17] Barriers to participation in scientific research can be classified as extrinsic,[33] such as lack of training in research methodology, lack of time due to the burden of educational activities, lack of rewards and incentives, lack of infrastructure and facilities, inadequate support from the organization or institute, limited access to libraries and publications, and inadequate supervision and mentorship,[5,6,9,13,14,17,18,19,20] or intrinsic,[33] including lack of motivation and lack of appropriate knowledge and skills in scientific methods and statistics.[9,14,17,19,20,21] In this study, the most common obstacles reported by the students were lack of time, lack of knowledge and skills, lack of funding and facilities, and lack of rewards. Moreover, the students mentioned that limited access to relevant medical and other electronic databases made it difficult for them to identify knowledge gaps and initiate research activities. We also found that students with higher attitude scores had better perceived-barriers scores toward research. Previous studies have shown that attitudes toward research involvement and utilization are essential in evidence-based medicine[27] and that negative attitudes can hinder the implementation of scientific research.[17] It is widely accepted that a supportive, positive environment fosters successful researchers and affects research output, including publications.[3] We found that 13.4% of the students had given oral or poster presentations, and some had received awards at national student conferences. However, only 5.8% of the students had published research papers in indexed journals, a rate lower than in previous studies.[6,7,31]

This study has some limitations. The response rate of the final-year students was the lowest (6.8%) because they were on a study break at the time of data collection. The study was conducted in a single private medical institution, so the findings may not be generalizable to other institutions with different environments. Because the design was cross-sectional, we could neither observe changes over time nor infer causality. We also did not examine other barriers, such as organizational, strategic, and policy barriers, communication barriers, and cultural and language barriers; further studies should explore these aspects. A qualitative approach would also offer a better understanding of the obstacles to research participation among undergraduate students. As faculty members are the most important resource, their perceptions of scientific research and of student research should also be studied.

Conclusions

The undergraduate medical and dental students had a moderate level of knowledge and positive attitudes toward the conduct of medical research. Lack of time, skills, funding, and facilities, together with limited access to relevant medical journals and databases, were the major barriers. These barriers should be addressed by providing proper supervision, good mentorship, research funding and awards, and access to electronic databases, so that undergraduate students are encouraged to participate in research activities. We recommend organizing research workshops, frequent research presentations, and journal clubs to give medical students the knowledge and skills they will need to carry out scientific research in the future.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

Acknowledgment

We would like to thank all the students who participated in this study and the student volunteers who helped with data collection. We thank Professor Dr. Soe Moe for her valuable suggestions on the concept and design of this study. We also thank the management of Melaka-Manipal Medical College (MMMC) and the MMMC Research Ethics Committee for granting approval for this study.


CAREER FEATURE
08 April 2024

Ready or not, AI is coming to science education — and students have opinions

Sarah Wells

Sarah Wells is an independent science journalist based in Washington DC.




Leo Wu, an economics student at Minerva University in San Francisco, California, founded a group to discuss how AI tools can help in education. Credit: AI Consensus

The world had never heard of ChatGPT when Johnny Chang started his undergraduate programme in computer engineering at the University of Illinois Urbana–Champaign in 2018. All that the public knew then about assistive artificial intelligence (AI) was that the technology powered joke-telling smart speakers or the somewhat fitful smartphone assistants.

But, by his final year in 2023, Chang says, it became impossible to walk through campus without catching glimpses of generative AI chatbots lighting up classmates’ screens.

“I was studying for my classes and exams and as I was walking around the library, I noticed that a lot of students were using ChatGPT,” says Chang, who is now a master’s student at Stanford University in California. He studies computer science and AI, and is a student leader in the discussion of AI’s role in education. “They were using it everywhere.”



ChatGPT is one example of the large language model (LLM) tools that have exploded in popularity over the past two years. These tools work by taking user inputs in the form of written prompts or questions and generating human-like responses using the Internet as their catalogue of knowledge. As such, generative AI produces new data based on the information it has already seen.
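To make that prompt-in, text-out loop concrete, here is a minimal sketch assuming the openai Python client; the model name and prompt are illustrative, and an API key is assumed to be available in the environment.

```python
# Minimal sketch of querying an LLM (openai client assumed; model name illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize photosynthesis in two sentences."}],
)
print(response.choices[0].message.content)  # human-like text generated from the prompt
```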

However, these newly generated data — from works of art to university papers — often lack accuracy and creative integrity, ringing alarm bells for educators. Across academia, universities have been quick to place bans on AI tools in classrooms to combat what some fear could be an onslaught of plagiarism and misinformation. But others caution against such knee-jerk reactions.

Victor Lee, who leads Stanford University’s Data Interactions & STEM Teaching and Learning Lab, says that data suggest that levels of cheating in secondary schools did not increase with the roll-out of ChatGPT and other AI tools. He says that part of the problem facing educators is the fast-paced changes brought on by AI. These changes might seem daunting, but they’re not without benefit.

Educators must rethink the model of written assignments “painstakingly produced” by students using “static information”, says Lee. “This means many of our practices in teaching will need to change — but there are so many developments that it is hard to keep track of the state of the art.”

Despite these challenges, Chang and other student leaders think that blanket AI bans are depriving students of a potentially revolutionary educational tool. “In talking to lecturers, I noticed that there’s a gap between what educators think students do with ChatGPT and what students actually do,” Chang says. For example, rather than asking AI to write their final papers, students might use AI tools to make flashcards based on a video lecture. “There were a lot of discussions happening [on campus], but always without the students.”


Computer-science master’s student Johnny Chang started a conference to bring educators and students together to discuss the responsible use of AI. Credit: Howie Liu

To help bridge this communications gap, Chang founded the AI x Education conference in 2023 to bring together secondary and university students and educators to have candid discussions about the future of AI in learning. The virtual conference included 60 speakers and more than 5,000 registrants. This is one of several efforts set up and led by students to ensure that they have a part in determining what responsible AI will look like at universities.

Over the past year, at events in the United States, India and Thailand, students have spoken up to share their perspectives on the future of AI tools in education. Although many students see benefits, they also worry about how AI could damage higher education.

Enhancing education

Leo Wu, an undergraduate student studying economics at Minerva University in San Francisco, California, co-founded a student group called AI Consensus. Wu and his colleagues brought together students and educators in Hyderabad, India, and in San Francisco for discussion groups and hackathons to collect real-world examples of how AI can assist learning.

From these discussions, students agreed that AI could be used to disrupt the existing learning model to make it more accessible for students with different learning styles or who face language barriers. For example, Wu says that students shared stories about using multiple AI tools to summarize a lecture or a research paper and then turn the content into a video or a collection of images. Others used AI to transform data points collected in a laboratory class into an intuitive visualization.



For people studying in a second language, Wu says that “the language barrier [can] prevent students from communicating ideas to the fullest”. Using AI to translate these students’ original ideas or rough drafts crafted in their first language into an essay in English could be one solution to this problem, he says. Wu acknowledges that this practice could easily become problematic if students relied on AI to generate ideas, and the AI returned inaccurate translations or wrote the paper altogether.

Jomchai Chongthanakorn and Warisa Kongsantinart, undergraduate students at Mahidol University in Salaya, Thailand, presented their perspectives at the UNESCO Round Table on Generative AI and Education in Asia–Pacific last November. They point out that AI can have a role as a custom tutor to provide instant feedback for students.

“Instant feedback promotes iterative learning by enabling students to recognize and promptly correct errors, improving their comprehension and performance,” wrote Chongthanakorn and Kongsantinart in an e-mail to Nature. “Furthermore, real-time AI algorithms monitor students’ progress, pinpointing areas for development and suggesting pertinent course materials in response.”

Although private tutors could provide the same learning support, some AI tools offer a free alternative, potentially levelling the playing field for students with low incomes.


Jomchai Chongthanakorn gave his thoughts on AI at a UNESCO round table in Bangkok. Credit: UNESCO/Jessy & Thanaporn

Despite the possible benefits, students also express wariness about how using AI could negatively affect their education and research. ChatGPT is notorious for ‘hallucinating’ — producing incorrect information but confidently asserting it as fact. At Carnegie Mellon University in Pittsburgh, Pennsylvania, physicist Rupert Croft led a workshop on responsible AI alongside physics graduate students Patrick Shaw and Yesukhei Jagvaral to discuss the role of AI in the natural sciences.

“In science, we try to come up with things that are testable — and to test things, you need to be able to reproduce them,” Croft says. But, he explains, it’s difficult to know whether things are reproducible with AI because the software operations are often a black box. “If you asked [ChatGPT] something three times, you will get three different answers because there’s an element of randomness.”
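Croft's point is easy to demonstrate: because completions are sampled token by token, identical prompts can yield different answers. The hypothetical sketch below (again assuming the openai client, with an illustrative model name) counts distinct answers at two temperature settings; lowering the temperature makes the output nearly, though not perfectly, deterministic.

```python
# Hypothetical demonstration of sampling randomness (openai client assumed).
from openai import OpenAI

client = OpenAI()
prompt = "In one sentence, why is reproducibility hard with AI tools?"

for temperature in (1.0, 0.0):
    answers = {
        client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        ).choices[0].message.content
        for _ in range(3)
    }
    print(f"temperature={temperature}: {len(answers)} distinct answer(s) out of 3")
```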

And because AI systems are prone to hallucinations and can give answers only on the basis of data they have already seen, truly new information, such as research that has not yet been published, is often beyond their grasp.



Croft agrees that AI can assist researchers, for example, by helping astronomers to find planetary research targets in a vast array of data. But he stresses the need for critical thinking when using the tools. To use AI responsibly, Croft argued in the workshop, researchers must understand the reasoning that led to an AI’s conclusion. To take a tool’s answer simply on its word alone would be irresponsible.

“We’re already working at the edge of what we understand” in scientific enquiry, Shaw says. “Then you’re trying to learn something about this thing that we barely understand using a tool we barely understand.”

These lessons also apply to undergraduate science education, but Shaw says that he’s yet to see AI play a large part in the courses he teaches. At the end of the day, he says, AI tools such as ChatGPT “are language models — they’re really pretty terrible at quantitative reasoning”.

Shaw says it’s obvious when students have used an AI on their physics problems, because they are more likely to have either incorrect solutions or inconsistent logic throughout. But as AI tools improve, those tells could become harder to detect.

Chongthanakorn and Kongsantinart say that one of the biggest lessons they took away from the UNESCO round table was that AI is a “double-edged sword”. Although it might help with some aspects of learning, they say, students should be wary of over-reliance on the technology, which could reduce human interaction and opportunities for learning and growth.

“In our opinion, AI has a lot of potential to help students learn, and can improve the student learning curve,” Chongthanakorn and Kongsantinart wrote in their e-mail. But “this technology should be used only to assist instructors or as a secondary tool”, and not as the main method of teaching, they say.

Equal access

Tamara Paris is a master’s student at McGill University in Montreal, Canada, studying ethics in AI and robotics. She says that students should also carefully consider the privacy issues and inequities created by AI tools.

Some academics avoid using certain AI systems owing to privacy concerns about whether AI companies will misuse or sell user data, she says. Paris notes that widespread use of AI could create “unjust disparities” between students if knowledge or access to these tools isn’t equal.


Tamara Paris says not all students have equal access to AI tools. Credit: McCall Macbain Scholarship at McGill

“Some students are very aware that AIs exist, and others are not,” Paris says. “Some students can afford to pay for subscriptions to AIs, and others cannot.”

One way to address these concerns, says Chang, is to teach students and educators about the flaws of AI and its responsible use as early as possible. “Students are already accessing these tools through [integrated apps] like Snapchat” at school, Chang says.

In addition to learning about hallucinations and inaccuracies, students should also be taught how AI can perpetuate the biases already found in our society, such as discriminating against people from under-represented groups, Chang says. These issues are exacerbated by the black-box nature of AI — often, even the engineers who built these tools don’t know exactly how an AI makes its decisions.

Beyond AI literacy, Lee says that proactive, clear guidelines for AI use will be key. At some universities, academics are carving out these boundaries themselves, with some banning the use of AI tools for certain classes and others asking students to engage with AI for assignments. Scientific journals are also implementing guidelines for AI use in writing papers and peer reviews, ranging from outright bans to an emphasis on transparent use.

Lee says that instructors should clearly communicate to students when AI can and cannot be used for assignments and, importantly, signal the reasons behind those decisions. “We also need students to uphold honesty and disclosure — for some assignments, I am completely fine with students using AI support, but I expect them to disclose it and be clear how it was used.”

For instance, Lee says he’s OK with students using AI in courses such as digital fabrication — AI-generated images are used for laser-cutting assignments — or in learning-theory courses that explore AI’s risks and benefits.

For now, the application of AI in education is a constantly moving target, and the best practices for its use will be as varied and nuanced as the subjects it is applied to. The inclusion of student voices will be crucial to help those in higher education work out where those boundaries should be and to ensure the equitable and beneficial use of AI tools. After all, they aren’t going away.

“It is impossible to completely ban the use of AIs in the academic environment,” Paris says. “Rather than prohibiting them, it is more important to rethink courses around AIs.”

Nature 628, 459–461 (2024)

doi: https://doi.org/10.1038/d41586-024-01002-x


