Information Processing: The Language and Analytical Tools for Cognitive Psychology in the Information Age

The information age can be dated to the work of Norbert Wiener and Claude Shannon in the 1940s. Their work on cybernetics and information theory, and many subsequent developments, had a profound influence on reshaping the field of psychology from what it was prior to the 1950s. Contemporaneously, advances also occurred in experimental design and inferential statistical testing stemming from the work of Ronald Fisher, Jerzy Neyman, and Egon Pearson. These interdisciplinary advances from outside of psychology provided the conceptual and methodological tools for what is often called the cognitive revolution but is more accurately described as the information-processing revolution. Cybernetics set the stage with the idea that everything ranging from neurophysiological mechanisms to societal activities can be modeled as structured control systems with feedforward and feedback loops. Information theory offered a way to quantify entropy and information, and promoted theorizing in terms of information flow. Statistical theory provided means for making scientific inferences from the results of controlled experiments and for conceptualizing human decision making. With those three pillars, a cognitive psychology adapted to the information age evolved. The growth of technology in the information age has resulted in human lives being increasingly interwoven with the cyber environment, making cognitive psychology an essential part of interdisciplinary research on such interweaving. Continued engagement in interdisciplinary research at the forefront of technology development provides a chance for psychologists not only to refine their theories but also to play a major role in the advent of a new age of science.

"Information is information, not matter or energy" (Wiener, 1952, p. 132)

Introduction

The period of human history in which we live is frequently called the information age, and it is often dated to the work of Wiener (1894–1964) and Shannon (1916–2001) on cybernetics and information theory. Each of these individuals has been dubbed the “father of the information age” (Conway and Siegelman, 2005; Nahin, 2013). Wiener’s and Shannon’s work quantitatively described the fundamental phenomena of communication, and subsequent developments linked to that work had a profound influence on re-shaping many fields, including cognitive psychology, from what they were prior to the 1950s (Cherry, 1957; Edwards, 1997, p. 222). Another closely related influence during that same period is the statistical hypothesis testing of Ronald Fisher (1890–1962), the father of modern statistics and experimental design (Dawkins, 2010), and of Jerzy Neyman (1894–1981) and Egon Pearson (1895–1980). In the U.S., during the first half of the 20th century, the behaviorist approach dominated psychology (Mandler, 2007). In the 1950s, though, based mainly on the progress made in communication system engineering, as well as statistics, the human information-processing approach emerged in what is often called the cognitive revolution (Gardner, 1985; Miller, 2003).

The information age has had, and continues to have, a great impact on psychology and society at large. Since the 1950s, science and technology have progressed with each passing day. The promise of the information-processing approach was to bring knowledge of the human mind to a level at which cognitive mechanisms could be modeled to explain the processes between people’s perception and action. This promise, though far from completely fulfilled, has been increasingly realized. However, as with any period in human history, the information age will come to an end at some future time and be replaced by another age. We are not claiming that information will become obsolete in the new age, just that it will become necessary but not sufficient for understanding people and society in the new era. Comprehending how and why the information-processing revolution in psychology occurred should prepare psychologists to deal with the changes that accompany the new era.

In the present paper, we consider the information age from a historical viewpoint and examine its impact on the emergence of contemporary cognitive psychology. Our analysis of the historical origins of cognitive psychology reveals that applied research incorporating multiple disciplines provided conceptual and methodological tools that advanced the field. An implication, which we explore briefly, is that interdisciplinary research oriented toward solving applied problems is likely to be the source of the next advance in conceptual and methodological tools that will enable a new age of psychology. In the following sections, we examine milestones of the information age and link them to the specific language and methodology for conducting psychological studies. We illustrate how the research methods and theory evolved over time and provide hints for developing research tools in the next age for cognitive psychology.

Cybernetics and Information Theory

Wiener and Cybernetics

Norbert Wiener is an individual whose impact on the field of psychology has not been acknowledged adequately. Wiener, a mathematician and philosopher, was a child prodigy who earned his Ph.D. from Harvard University at age 18. He is best known for establishing what he labeled Cybernetics (Wiener, 1948b), which is also known as control theory, although he made many other contributions of note. A key feature of Wiener’s intellectual development and scientific work is its interdisciplinary nature (Montagnini, 2017b).

Prior to college, Wiener was influenced by Harvard physiologist Walter B. Cannon (Conway and Siegelman, 2005), who later, in 1926, devised the term homeostasis, “the tendency of an organism or a cell to regulate its internal conditions, usually by a system of feedback controls…” (Biology Online Dictionary, 2018). During his undergraduate and graduate education, Wiener was inspired by several Harvard philosophers (Montagnini, 2017a), including William James (pragmatism), George Santayana (positivistic idealism), and Josiah Royce (idealism and the scientific method). Motivated by Royce, Wiener committed himself to the study of logic and completed his dissertation on mathematical logic. Following graduate school, Wiener traveled on a postdoctoral fellowship to pursue his study of mathematics and logic, working with philosopher/logician Bertrand Russell and mathematician/geneticist Godfrey H. Hardy in England, mathematicians David Hilbert and Edmund Landau in Europe, and philosopher/psychologist John Dewey in the U.S.

Wiener’s career was characterized by a commitment to apply mathematics and logic to real-world problems, which was sparked by his working for the U.S. Army. According to Hulbert (2018, p. 50),

    He returned to the United States in 1915 to figure out what he might do next, at 21 jumping among jobs… His stint in 1918 at the U.S. Army’s Aberdeen Proving Ground was especially rewarding…. Busy doing invaluable work on antiaircraft targeting with fellow mathematicians, he found the camaraderie and the independence he yearned for. Soon, in a now-flourishing postwar academic market for the brainiacs needed in a science-guided era, Norbert found his niche. At MIT, social graces and pedigrees didn’t count for much, and wartime technical experience like his did. He got hired. The latest mathematical tools were much in demand as electronic communication technology took off in the 1920s.

Wiener began his early research in applied mathematics on stochastic noise processes (i.e., Brownian motion; Wiener, 1921). The Wiener process, named in his honor, has been widely used in engineering, finance, the physical sciences, and, as described later, psychology. From the mid 1930s until 1953, Wiener also was actively involved in a series of interdisciplinary seminars and conferences with a group of researchers that included mathematicians (John von Neumann, Walter Pitts), engineers (Julian Bigelow, Claude Shannon), physiologists (Warren McCulloch, Arturo Rosenblueth), and psychologists (Wolfgang Köhler, Joseph C. R. Licklider, Duncan Luce). “Models of the human brain” was one topic discussed at those meetings, and concepts proposed during those conferences had significant influence on research in information technologies and the human sciences (Heims, 1991).
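
Returning to the Wiener process mentioned above: it can be made concrete with a short simulation. The following Python sketch is our illustration (the seed, time step, and number of steps are arbitrary choices, not from any source discussed here):

```python
import numpy as np

# Minimal sketch of a discretized Wiener process (Brownian motion).
# Over a time step dt, increments are independent Gaussian draws with
# mean 0 and variance dt; the sample path is their running sum.
rng = np.random.default_rng(seed=0)
dt, n_steps = 0.01, 1000
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
w = np.concatenate([[0.0], np.cumsum(increments)])  # W(0) = 0
```

In psychology, this same process later supplied the noise component of diffusion models of choice reaction time.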

One of Wiener’s major contributions was in World War II, when he applied mathematics to electronics problems and developed a statistical prediction method for fire control theory. This method predicted the position in space where an enemy aircraft would be located in the future so that an artillery shell fired from a distance would hit the aircraft (Conway and Siegelman, 2005). As told by Conway and Siegelman, “Wiener’s focus on a practical real-world problem had led him into that paradoxical realm of nature where there was no certainty, only probabilities, compromises, and statistical conclusions…” (p. 113). Advances in probability and statistics provided a tool for Wiener and others to investigate this paradoxical realm. Early in 1942 Wiener wrote a classified report for the National Defense Research Committee (NDRC), “Extrapolation, Interpolation, and Smoothing of Stationary Time Series,” which was published as a book in 1949. This report is credited as the founding work in communications engineering, in which Wiener concluded that communication in all fields is in terms of information. In his words,

    The proper field of communication engineering is far wider than that generally assigned to it. Communication engineering concerns itself with the transmission of messages. For the existence of a message, it is indeed essential that variable information be transmitted. The transmission of a single fixed item of information is of no communicative value. We must have a repertory of possible messages, and over this repertory a measure determining the probability of these messages (Wiener, 1949, p. 2).

Wiener went on to say “such information will generally be of a statistical nature” (p. 10).

From 1942 onward, Wiener developed his ideas of control theory more broadly in Cybernetics, as described in a Scientific American article (Wiener, 1948a):

    It combines under one heading the study of what in a human context is sometimes loosely described as thinking and in engineering is known as control and communication. In other words, cybernetics attempts to find the common elements in the functioning of automatic machines and of the human nervous system, and to develop a theory which will cover the entire field of control and communication in machines and in living organisms (p. 14).

Wiener (1948a) made apparent in that article that the term cybernetics was chosen to emphasize the concept of a feedback mechanism. The example he used was one of human action:

    Suppose that I pick up a pencil. To do this I have to move certain muscles. Only an expert anatomist knows what all these muscles are, and even an anatomist could hardly perform the act by a conscious exertion of the will to contract each muscle concerned in succession. Actually, what we will is not to move individual muscles but to pick up the pencil. Once we have determined on this, the motion of the arm and hand proceeds in such a way that we may say that the amount by which the pencil is not yet picked up is decreased at each stage. This part of the action is not in full consciousness (p. 14; see also p. 7 of Wiener, 1961).

Note that in this example, Wiener hits on the central idea behind contemporary theorizing in action selection – the choice of action is with reference to a distal goal (Hommel et al., 2001; Dignath et al., 2014). Wiener went on to say,

    To perform an action in such a manner, there must be a report to the nervous system, conscious or unconscious, of the amount by which we have failed to pick up the pencil at each instant. The report may be visual, at least in part, but it is more generally kinesthetic, or, to use a term now in vogue, proprioceptive (p. 14; see also p. 7 of Wiener, 1961).

That is, Wiener emphasizes the role of negative feedback in control of the motor system, as in theories of motor control (Adams, 1971; Schmidt, 1975).

Wiener (1948b) developed his views more thoroughly and mathematically in his master work, Cybernetics or Control and Communication in the Animal and in the Machine, which was extended in a second edition published in 1961. In this book, Wiener devoted considerable coverage to psychological and sociological phenomena, emphasizing a systems view that takes into account feedback mechanisms. Although he was interested in sensory physiology and neural functioning, he later noted, “The need of including psychologists had indeed been obvious from the beginning. He who studies the nervous system cannot forget the mind, and he who studies the mind cannot forget the nervous system” (Wiener, 1961, p. 18).

Later in the Cybernetics book, Wiener indicated the value of viewing society as a control system, stating “Of all of these anti-homeostatic factors in society, the control of the means of communication is the most effective and most important” (p. 160). This statement is followed immediately by a focus on information processing of the individual: “One of the lessons of the present book is that any organism is held together in this action by the possession of means for the acquisition, use, retention, and transmission of information” (Wiener, 1961, p. 160).

Cybernetics, or the study of control and communication in machines and living things, is a general approach to understanding self-regulating systems. The basic unit of cybernetic control is the negative feedback loop, whose function is to reduce the sensed deviations from an expected outcome to maintain a steady state. Specifically, a present condition is perceived by the input function and then compared against a point of reference through a mechanism called a comparator. If there is a discrepancy between the present state and the reference value, an action is taken. This arrangement thus constitutes a closed loop of control, the overall purpose of which is to minimize deviations from the standard of comparison (reference point). Reference values are typically provided by superordinate systems, which output behaviors that constitute the setting of standards for the next lower level.
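
As a minimal sketch of that closed loop (our illustration, with an arbitrary proportional gain), the perceive-compare-act cycle can be written as:

```python
def negative_feedback_step(state, reference, gain=0.1):
    """One pass through the basic cybernetic unit: perceive the present
    condition, compare it with the reference value (comparator), and
    act to reduce the sensed discrepancy."""
    error = reference - state    # comparator: deviation from the standard
    return state + gain * error # corrective action shrinks the deviation

state = 0.0
for _ in range(50):              # closed loop: state converges on the reference
    state = negative_feedback_step(state, reference=10.0)
```

Here the reference value (10.0) plays the role of the standard supplied by a superordinate system; the loop's only job is to minimize the deviation from it.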

Cybernetics thus illustrates one of the most valuable characteristics of mathematics: to identify a common feature (feedback) across many domains and then study it abstracted from those domains. This abstracted study draws the domains closer together and often enables results from one domain to be extended to the other. From its birth, Wiener conceived of cybernetics as an interdisciplinary field, and control theory has had a major impact on diverse areas of work, such as biology, psychology, engineering, and computer science. Besides its mathematical nature, cybernetics has also been characterized as the science of complex probabilistic systems (Beer, 1959). In other words, cybernetics is a science of combined constant flows of communication and self-regulating systems.

Shannon and Information Theory

With backgrounds in electrical engineering and mathematics, Claude Shannon obtained his Ph.D. in mathematics at MIT in 1940. Shannon is known within psychology primarily for information theory, but prior to his contribution on that topic, in his Master’s thesis, he showed how to design switching circuits according to Boole’s symbolic logic. The use of combinations of switches that represent binary values provides the foundation of modern computers and telecommunication systems (O’Regan, 2012). In the 1940s, Shannon’s work on digital circuit theory opened doors for him and allowed him to make connections with great scientists of the day, including von Neumann, Albert Einstein, and Alan Turing. These connections, along with his work on cryptography, affected his thoughts about communication theory.

With regard to information theory, or what he called communication theory, Shannon (1948a) stated the essential problem of communication in the first page of his classic article:

    The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point… The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function (p. 379).

Shannon (1948a) characterized an information system as having five elements: (1) an information source; (2) a transmitter; (3) a channel; (4) a receiver; and (5) a destination. Note the similarity of Figure 1, taken from his article, to the human information-processing models of cognitive psychology. Shannon provided mathematical analyses of each element for three categories of communication systems: discrete, continuous, and mixed. A key measure in information theory is entropy, which Shannon defined as the amount of uncertainty involved in the value of a random variable or the outcome of a random process. Shannon also introduced the concepts of encoding and decoding for the transmitter and receiver, respectively. His main concern was to find explicit methods, also called codes, to increase the efficiency and reduce the error rate during data communication over noisy channels to near the channel capacity.

Figure 1. Shannon’s schematic diagram of a general communication system (Shannon, 1948a).
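
In Shannon's formulation, the entropy of a discrete source that emits message i with probability p_i is

$$H = -\sum_{i=1}^{n} p_i \log_2 p_i \ \text{bits},$$

which is maximal (log2 n bits) when the n messages are equally likely and zero when one message is certain.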

Shannon explicitly acknowledged Wiener’s influence on the development of information theory:

    Communication theory is heavily indebted to Wiener for much of its basic philosophy and theory. His classic NDRC report, The Interpolation, Extrapolation, and Smoothing of Stationary Time Series (Wiener, 1949), contains the first clear-cut formulation of communication theory as a statistical problem, the study of operations on time series. This work, although chiefly concerned with the linear prediction and filtering problem, is an important collateral reference in connection with the present paper. We may also refer here to Wiener’s Cybernetics (Wiener, 1948b), dealing with the general problems of communication and control (Shannon and Weaver, 1949, p. 85).

Although Shannon and Weaver (1949) developed similar measures of information independently of Wiener, they approached the same problem from different angles. Wiener developed the statistical theory of communication, equated information with negative entropy, and applied it to the problems of prediction and filtering while he worked on designing anti-aircraft fire-control systems (Galison, 1994). Shannon, working primarily on cryptography at Bell Labs, drew an analogy between a secrecy system and a noisy communication system through coding messages into signals to transmit information in the presence of noise (Shannon, 1945). According to Shannon, the amount of information and channel capacity were expressed in terms of positive entropy. With regard to the difference in sign for entropy in his and Wiener’s formulations, Shannon wrote to Wiener:

    I do not believe this difference has any real significance but is due to our taking somewhat complementary views of information. I consider how much information is produced when a choice is made from a set – the larger the set the more the information. You consider the larger uncertainty in the case of a larger set to mean less knowledge of the situation and hence less information (Shannon, 1948b).

A key element of the mathematical theory of communication developed by Shannon is that it omits “the question of interpretation” (Shannon and Weaver, 1949). In other words, it separates information from the “psychological factors” involved in the ordinary use of information and establishes a neutral or non-specific human meaning of the information content (Luce, 2003). In that sense, consistent with cybernetics, information theory affirmed a neutral meaning common to systems of machines, human beings, or combinations of them. The view that information refers not to “what” you send but what you “can” send, based on probability and statistics, opened a new science that used the same methods to study machines, humans, and their interactions.

Inference Revolution

Although it is often overlooked, a related impact on psychological research during roughly the same period was that of using statistical thinking and methodology for small-sample experiments. The two approaches that have been most influential in psychology, the null hypothesis significance testing of Ronald Fisher and the more general hypothesis testing view of Jerzy Neyman and Egon Pearson, resulted in what Gigerenzer and Murray (1987) called the inference revolution.

Fisher, Information, Inferential Statistics, and Experimental Design

Ronald Fisher received his degree in mathematics from Cambridge University, where he spent another year studying statistical mechanics and quantum theory (Yates and Mather, 1963). He has been described as “a genius who almost single-handedly created the foundations for modern statistical science” (Hald, 2008, p. 147) and “the single most important figure of 20th century statistics” (Efron, 1998, p. 95). Fisher is also “rightly regarded as the founder of the modern methods of design and analysis of experiments” (Yates, 1964, p. 307). In addition to his work on statistics and experimental design, Fisher made significant scientific contributions to genetics and evolutionary biology. Indeed, Dawkins (2010), the famous biologist, called Fisher the greatest biologist since Darwin, saying:

    Not only was he the most original and constructive of the architects of the neo-Darwinian synthesis. Fisher also was the father of modern statistics and experimental design. He therefore could be said to have provided researchers in biology and medicine with their most important research tools.

Our interest in this paper is, of course, with the research tools and logic that Fisher provided, along with their application to scientific content.

Fisher began his early research as a statistician at Rothamsted Experimental Station in Harpenden, England (1919–1933). There, he was hired to develop statistical methods that could be applied to interpret the cumulative results of agricultural experiments (Russell, 1966, p. 326). Besides dealing with past data, he became involved with ongoing experiments and developed methods to improve them (Lehmann, 2011). Fisher’s hands-on work with experiments is a necessary feature of his background for understanding his positions regarding statistics and experimental design. Fisher (1962, p. 529) essentially said as much in an address published posthumously:

    There is, frankly, no easy substitute for the educational discipline of whole time personal responsibility for the planning and conduct of experiments, designed for the ascertainment of fact, or the improvement of Natural Knowledge. I say “educational discipline” because such experience trains the mind and deepens the judgment for innumerable ancillary decisions, on which the value or cogency of an experimental program depends.

The analysis of variance (ANOVA; Fisher, 1925) and an emphasis on experimental design (Fisher, 1937) were both outcomes of Fisher’s work in response to the experimental problems posed by the agricultural research performed at Rothamsted (Parolini, 2015).

Fisher’s work synthesized mathematics with practicality and reshaped the scientific tools and practice for conducting and analyzing experiments. In the preface to the first edition of his textbook Statistical Methods for Research Workers, Fisher (1925, p. vii) made clear that his main concern was application:

    Daily contact with the statistical problems which present themselves to the laboratory worker has stimulated the purely mathematical researches upon which are based the methods here presented. Little experience is sufficient to show that the traditional machinery of statistical processes is wholly unsuited to the needs of practical research.

Although prior statisticians developed probabilistic methods to estimate errors of experimental data [e.g., Student’s (Gosset’s) t-test; Student, 1908], Fisher carried the work a step further, developing the concept of null hypothesis testing using ANOVA (Fisher, 1925, 1935). Fisher demonstrated that by proposing a null hypothesis (usually no effect of an independent variable over a population), a researcher could evaluate whether a difference between conditions was sufficiently unlikely to occur by chance to allow rejection of the null hypothesis. Fisher proposed that tests of significance with a low p-value can be taken as evidence against the null hypothesis. The following quote from Fisher (1937, pp. 15–16) captures his position well:

    It is usual and convenient for experimenters to take 5 per cent. as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results.

While Fisher recommended using the 0.05 probability level as a criterion to decide whether to reject the null hypothesis, his general position was that researchers should set the critical level of significance at a sufficiently low probability so as to limit the chance of concluding that an independent variable has an effect when the null hypothesis is true. Therefore, the criterion of significance does not necessarily have to be 0.05 (see also Lehmann, 1993), and Fisher’s main point was that failing to reject the null hypothesis, regardless of what criterion is used, does not warrant accepting it (see Fisher, 1956, pp. 4 and 42).
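
A minimal sketch of this logic in modern software, using SciPy's one-way ANOVA on made-up illustration data (the values and the 0.05 criterion are our choices for the example, not Fisher's prescription):

```python
from scipy.stats import f_oneway

# Hypothetical scores from three experimental conditions (illustrative only).
condition_a = [4.1, 3.8, 4.5, 4.0, 4.2]
condition_b = [4.9, 5.1, 4.7, 5.3, 4.8]
condition_c = [4.0, 4.3, 3.9, 4.4, 4.1]

f_stat, p_value = f_oneway(condition_a, condition_b, condition_c)
if p_value < 0.05:   # a conventional, not mandatory, criterion
    print(f"Reject H0: F = {f_stat:.2f}, p = {p_value:.4f}")
else:
    print("Fail to reject H0, which does not warrant accepting it")
```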

Fisher (1925, 1935) also showed how different sources of variance can be partitioned to allow tests of separate and combined effects of two or more independent variables. Prior to this work, much experimental research in psychology, though not all, used designs in which a single independent variable was manipulated. Fisher made a case that designs with two or more independent variables were more informative than multiple experiments using different single independent variables because they allowed determination of whether the variables interacted. Fisher’s application of the ANOVA to data from factorial experimental designs, coupled with his emphasis on always extracting the maximum amount of information (likelihood) conveyed by a statistic (see later), apparently influenced both Wiener and Shannon (Wiener, 1948a, p. 10).

In “The Place of Design of Experiments in the Logic of Scientific Inference,” Fisher (1962) linked experimental design, the application of correct statistical methods, and the subsequent extraction of a valid conclusion through the concept of information (known as Fisher information). Fisher information measures the amount of information that an obtained random sample of data carries about the parameter of interest (Ly et al., 2017). It is the expected squared derivative (i.e., the second moment of the score) of the log-likelihood function, where the likelihood is the probability density of the obtained data viewed as a function of the parameter. Equivalently, it is the variance of the score, and it measures the sensitivity of the likelihood function to changes in the parameter under study. Furthermore, Fisher argued that experimenters should be interested not only in minimizing loss of information in the process of statistical reduction (e.g., using ANOVA to summarize evidence in a way that preserves the relevant information in the data; Fisher, 1925, pp. 1 and 7) but also in the deliberate study of experimental design, for example, by introducing randomization or control, to maximize the amount of information provided by estimates derived from the resulting experimental data (see Fisher, 1947). Therefore, Fisher unified experimental design and statistical analysis through information (Seidenfeld, 1992; Aldrich, 2007), an approach that resonates with the system view of cybernetics.
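
In modern notation, for data X with density f(x; θ), the Fisher information is

$$I(\theta) = \mathrm{E}\left[\left(\frac{\partial}{\partial \theta}\,\log f(X;\theta)\right)^{2}\right] = -\,\mathrm{E}\left[\frac{\partial^{2}}{\partial \theta^{2}}\,\log f(X;\theta)\right],$$

where the second equality holds under standard regularity conditions. A larger I(θ) means the log-likelihood changes more sharply with θ, so the data are more informative about the parameter.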

Neyman-Pearson Approach

Jerzy Neyman obtained his doctorate from the University of Warsaw in 1924 with a thesis based on his statistical work at the Agricultural Institute in Bydgoszcz, Poland. Egon Pearson received his undergraduate degree in mathematics and continued his graduate study in astronomy at Cambridge University until 1921. In 1926, Neyman and Pearson began their collaboration and raised a question with regard to Fisher’s method: why test only the null hypothesis? They proposed a solution in which not only the null hypothesis but also a class of possible alternatives are considered, and the decision is one of accepting or rejecting the null hypothesis. This decision yields probabilities of two kinds of error: false rejection of the null hypothesis (Type I, or alpha) and false acceptance of it when an alternative is true (Type II, or beta; Neyman and Pearson, 1928, 1933). They suggested that the best test was the one that minimized the Type II error subject to a bound on the Type I error, i.e., the significance level of the test. Thus, instead of classifying the null hypothesis as rejected or not, the central consideration of the Neyman-Pearson approach was that one must specify not only the null hypothesis but also the alternative hypotheses against which it is tested. With this symmetric decision approach, statistical power (1 – Type II error rate) becomes an issue. Fisher (1947) also realized the importance and necessity of power but argued that it is a qualitative concern addressed during experimental design to increase the sensitivity of the experiment, not part of the statistical decision process. In other words, Fisher thought that researchers should “conduct experimental and observational inquiries so as to maximize the information obtained for a given expenditure” (Fisher, 1951, p. 54), but did not see that as being part of statistics.
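
To make the Type I/Type II trade-off concrete, here is a short sketch (our example, not from the original papers) computing the power of a one-sided, one-sample z-test with known unit variance:

```python
from scipy.stats import norm

alpha = 0.05    # bound on the Type I error rate (significance level)
effect = 0.5    # assumed true standardized effect size (hypothetical)
n = 30          # sample size

z_crit = norm.ppf(1 - alpha)          # rejection threshold under H0
shift = effect * n ** 0.5             # mean of the test statistic under H1
power = 1 - norm.cdf(z_crit - shift)  # power = 1 - Type II error rate
print(f"power = {power:.3f}")         # about 0.86 for these values
```

Raising n or the assumed effect size increases power, which is exactly the sensitivity question Fisher assigned to experimental design rather than to the decision rule.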

The rule of behavior for Neyman-Pearson hypothesis testing was to reject or accept the null hypothesis, rather than to disregard results that do not allow rejection of it. Thus, their approach assumed that a decision made from the statistical analysis was sufficient to draw a conclusion as to whether the null or alternative hypothesis was most likely, and it did not put emphasis on the need for non-statistical inductive inference to understand the problems. In other words, tests of significance were interpreted as means to make decisions in an acceptance procedure (also see Wald, 1950) but not specifically as tools for research workers to gain a better understanding of the experimental material. Also, the Neyman-Pearson approach interpreted probability (or the value of a significance level) as a realization of long-run frequency in a sequence of repetitions under constant conditions. Their view was that if a sequence of independent events was obtained with probability p of success, then the long-run success frequency will be close to p (this is known as frequentist probability; Neyman, 1977). Fisher (1956) vehemently disagreed with this frequentist position.

Fisherian vs. Frequentist Approaches

In a nutshell, the differences between Fisherians and frequentists are mostly about research philosophy and how to interpret the results (Fisher, 1956). In particular, Fisher emphasized that in scientific research, failure to reject the null hypothesis should not be interpreted as acceptance of it, whereas Neyman and Pearson portrayed the process as a decision between accepting or rejecting the null hypothesis. Nevertheless, the usual practice for statistical testing in psychology is based on a hybrid of Fisher’s and Neyman-Pearson’s approaches (Gigerenzer and Murray, 1987). In practice, when behavioral researchers speak of the results of research, they are primarily referring to the statistically significant results and less often to null effects and the effect size estimates associated with those p-values.

The reliance on the results of significance testing has been explained from two perspectives: (1) neither experienced behavioral researchers nor experienced statisticians have a good intuitive feel for the practical meaning of effect size estimates (e.g., Rosenthal and Rubin, 1982); and (2) researchers rely on a reject-or-accept dichotomous decision procedure, in which the differences between p levels are taken to be trivial relative to the difference between exceeding or failing to exceed a 0.05 or some other accepted level of significance (Nelson et al., 1986). The reject-accept procedure follows the Neyman-Pearson approach and is compatible with the view that information is binary (Shannon, 1948a). Nevertheless, even if an accurate statistical power analysis is conducted, a properly powered replication study can produce results that are consistent with the effect size of interest or consistent with absolutely no effect (Nelson et al., 1986; Maxwell et al., 2015). Therefore, instead of relying solely on hypothesis testing, or on whether an effect is true or false, a report of the actual p level obtained, along with an estimate of the effect size, should be considered.

Fisher emphasized in his writings that an essential ingredient in the research process is the judgment of the researcher, who must decide by how much the obtained results have advanced a particular theoretical proposition (that is, how meaningful the results are). This decision is based in large part on decisions made during experimental design. The statistical significance test is just a useful tool to inform such decisions during the process to allow the researcher to be confident that the results are likely not due to chance. Moreover, he wanted this statistical decision in scientific research to be independent of a priori probabilities or estimates because he did not think these could be made accurately. Consequently, Fisher considered that only a statistically significant effect in an exact test for which the null hypothesis can be rejected should be open to subsequent interpretation by the researcher.

Acree (1978, pp. 397–398) conducted a thorough evaluation of statistical inference in psychological research that for the most part captures why Fisher’s views had greater impact on the practice of psychological researchers than those of Neyman and Pearson (emphasis ours):

    On logical grounds, Neyman and Pearson had decidedly the better theory; but Fisher’s claims were closer to the ostensible needs of psychological research. The upshot is that psychologists have mostly followed Fisher in their thinking and practice: in the use of the hypothetical infinite population to justify probabilistic statements about a single data set; in treating the significance level evidentially; in setting it after the experiment is performed; in never accepting the null hypothesis; in disregarding power…. Yet the rationale for all our statistical methods, insofar as it is presented, is that of Neyman and Pearson, rather than Fisher.

Although Neyman and Pearson may have had “decidedly the better theory” for statistical decisions in general, Fisher’s approach provides a better theory for scientific inferences from controlled experiments.

Interim Summary

The work we described in Sections “Cybernetics and Information Theory” and “Inference Revolution” identifies three crucial pillars of research that were developed mainly in the period from 1940 to 1955: cybernetics/control theory, information/communication theory, and inferential statistical theory. Moreover, our analysis revealed a correspondence among those pillars. Specifically, cybernetics/control theory corresponds to experimental design, both of which provide the framework for cognitive psychology; information theory corresponds to statistical testing, both of which provide quantitative evidence for qualitative assumptions.

These pillars were identified as early as 1952 in the preface to the proceedings of a conference called Information Theory in Biology, in which the editor, Quastler (1953, p. 1), said:

    The “new movement” [what we would call information-processing theory] is based on evaluative concepts (R. A. Fisher’s experimental design, A. Wald’s statistical decision function, J. von Neumann’s theory of games), on the development of a measure of information (R. Hartley, D. Gabor, N. Wiener, C. Shannon), on studies of control mechanisms, and the analysis and design of large systems (W. S. McCulloch and W. Pitts’s “neurons,” J. von Neumann’s theory of complicated automata, N. Wiener’s cybernetics).

The pillars undergirded not only the new movement in biology but also the new movement in psychology. The concepts introduced in the dawning information age of 1940–1955 had tremendous impact on applied and basic research in experimental psychology that transformed psychological research into a form that has developed to the present.

Human Information Processing

As noted in earlier sections, psychologists and neurophysiologists were involved in the cybernetics, information theory, and inferential statistics movements from the earliest days. Each of these movements was crucial to the ascension of the information-processing approach in psychology and the emergence of cognitive science, which are often dated to 1956. In this section, we review developments in cognitive psychology linked to each of the three pillars, starting with the most fundamental one, cybernetics.

The Systems Viewpoint of Cybernetics

George A. Miller explicitly credited cybernetics as being seminal in 1979, stating, “I have picked September 11, 1956 [the date of the second MIT symposium on Information Theory] as the birthday of cognitive science – the day that cognitive science burst from the womb of cybernetics and became a recognizable interdisciplinary adventure in its own right” (quoted by Elias, 1994, p. 24; emphasis ours). With regard to the development of human factors (ergonomics) in the United Kingdom, Waterson (2011, pp. 1126–1127) remarks similarly:

    During the 1960s, the ‘systems approach’ within ergonomics took on a precedence which has lasted until the present day, and a lot of research was informed from cybernetics and general systems theory. In many respects, a concern in applying a systemic approach to ergonomic issues could be said to be one of the factors which ‘glues’ together all of the elements and sub-disciplines within ergonomics.

This seminal role for cybernetics is due to its fundamental idea that various levels of processing in humans and non-humans can be viewed as control systems with interconnected stages and feedback loops. It should be apparent that the human information-processing approach, which emphasizes the human as a processing system with feedback loops, stems directly from cybernetics, as does the human-machine system view that underlies contemporary human factors and ergonomics.

We will provide a few more specific examples of the impact of cybernetics. McCulloch and Pitts (1943), members of the cybernetics movement, are given credit for developing “the first conceptual model of an artificial neural network” (Shiffman, 2012) and “the first modern computational theory of mind and brain” (Piccinini, 2004). The McCulloch and Pitts model treated neurons as logical decision elements defined by on and off states, which are the basis for building brain-like machines. Since then, Boolean functions, together with feedback through neurons, have been used extensively to quantify theorizing in relation to both neural and artificial intelligent systems (Piccinini, 2004). Thus, computational modeling of brain processes was part of the cybernetics movement from the outset.
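
A McCulloch-Pitts unit reduces to a threshold test on summed binary inputs. The sketch below is our simplification (the original formulation distinguishes excitatory from inhibitory inputs rather than using arbitrary weights); it shows how such all-or-none units realize Boolean functions:

```python
def mp_neuron(inputs, weights, threshold):
    """Fires (1) iff the weighted sum of binary inputs reaches the
    threshold; otherwise stays off (0) - an all-or-none decision."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Boolean logic from threshold units:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)
```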

Franklin Taylor, a noted engineering psychologist, reviewed Wiener’s (1948b) Cybernetics book, calling it “a curious and provocative book” (Taylor, 1949, p. 236). Taylor noted, “The author’s most important assertion for psychology is his suggestion, often repeated, that computers, servos, and other machines may profitably be used as models of human and animal behavior” (p. 236), and “It seems that Dr. Wiener is suggesting that psychologists should borrow the theory and mathematics worked out for machines and apply them to the behavior of men” (p. 237). Psychologists have clearly followed this suggestion, making ample use of the theory and mathematics of control systems. Craik (1947, 1948) in the UK had in fact already started to take a control theory approach to human tracking performance, stating that his analysis “puts the human operator in the class of ‘intermittent definite correction servos’” (Craik, 1948, p. 148).

Wiener’s work seemingly had considerable impact on Taylor, as reflected in the opening paragraphs of a famous article by Birmingham and Taylor (1954) on human performance of tracking tasks and design of manual control systems:

    The cardinal purpose of this report is to discuss a principle of control system design based upon considerations of engineering psychology. This principle will be found to advocate design practices for man-operated systems similar to those customarily employed by engineers with fully automatic systems…. In many control systems the human acts as the error detector… During the last decade it has become clear that, in order to develop control systems with maximum precision and stability, human response characteristics have to be taken into account. Accordingly, the new discipline of engineering psychology was created to undertake the study of man from an engineering point of view (p. 1748).

Control theory continues to provide a quantitative means for modeling basic and applied human performance (Jagacinski and Flach, 2003; Flach et al., 2015).

Colin Cherry, who performed the formative study on auditory selective attention, studied with Wiener and Jerome Wiesner at MIT in 1952. It was during this time that he conducted his classic experiments on the cocktail party problem – the question of how we identify what one person is saying when others are speaking at the same time (Cherry, 1953). His detailed investigations of selective listening, including attention switching, provided the basis for much research on the topic in the next decade that laid the foundation for contemporary studies of attention. The initial models explored the features and locus of a “limited-capacity processing channel” (Broadbent, 1958; Deutsch and Deutsch, 1963). Subsequent landmark studies of attention include the attenuation theory of Treisman (1960; also see Moray, 1959); capacity models that conceive of attention as a resource to be flexibly allocated to various stages of human information processing (Kahneman, 1973; Posner, 1978); the distinction between controlled and automatic processing (Shiffrin and Schneider, 1977); and the feature-integration theory of visual search (Treisman and Gelade, 1980).

As noted, Miller (2003) and others identified the year 1956 as a critical one in the development of contemporary psychology (Newell and Simon, 1972; Mandler, 2007). Mandler lists two events that year that ignited the field, in both of which Allen Newell and Herbert Simon participated. The first is the meeting of the Special Group on Information Theory of the Institute of Electrical and Electronics Engineers, which included papers by linguist Noam Chomsky (who argued against an information theory approach to language in favor of his transformational-generative grammar) and psychologist Miller (on avoiding the short-term memory bottleneck), in addition to Newell and Simon (on their Logic Theorist “thinking machine”) and others (Miller, 2003). The other event is the Dartmouth Summer Seminar on Artificial Intelligence (AI), which was organized by John McCarthy, who had coined the term AI the previous year. It included Shannon, Oliver Selfridge (who discussed initial ideas that led to his Pandemonium model of human pattern recognition, described in the next paragraph), and Marvin Minsky (a pioneer of AI, who turned to symbolic AI after earlier work on neural nets; Moor, 2006), among others. A presentation by Newell and Simon at that seminar is regarded as essential in the birth of AI, and their work on human problem solving exploited concepts from work on AI.

Newell applied a combination of experimental and theoretical research during his work at the RAND Corporation from 1950 (Simon, 1997). For example, in 1952, he and his colleagues designed and conducted laboratory experiments on a full-scale simulation of an Air Force Early Warning Station to study the decision-making and information-handling processes of the station crews. Central to the research was recording and analyzing the crews’ interactions with their radar screens, with interception aircraft, and with each other. From these studies, Newell came to believe that information processing is the central activity in organizations (systems).

Selfridge (1959) laid the foundation for a cognitive theory of letter perception with his Pandemonium model, in which letter identification is achieved by way of hierarchically organized layers of feature and letter detectors. Inspired by Selfridge’s work on Pandemonium, Newell started to converge on the idea that systems can be created that contain intelligence and have the ability to adapt. Based on his understanding of computers, heuristics, information processing in organizations (systems), and cybernetics, Newell (1955) delineated the design of a computer program to play chess in “The Chess Machine: An Example of Dealing with a Complex Task by Adaptation.” After that, for Newell, the investigation of organizations (systems) became the examination of the mind, and he committed himself to understanding human learning and thinking through computer simulations.

In the study of problem solving, think-aloud protocols in laboratory settings revealed that means-end analysis is a key heuristic mechanism. Specifically, the current situation is compared to the desired goal state, and mental or physical actions are taken to reduce the gap. Newell, Simon, and Cliff Shaw developed the General Problem Solver, a computer program that could solve problems in various domains if given a problem space (domain representation), possible actions to move between states of the space, and information about which actions would reduce the gap between the current and goal states (see Ernst and Newell, 1969, for a detailed treatment, and Newell and Simon, 1972, for an overview; a sketch of the basic loop follows below). The control structure built into the program underlined its importance for solving problems, reflecting a combination of cybernetics and information theory.
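
The means-end loop itself is simple to express. The following Python sketch is our toy illustration of difference reduction in a numeric "problem space," not the actual General Problem Solver, which also reasoned about which operator reduces which kind of difference:

```python
def means_end_analysis(state, goal, operators, max_steps=100):
    """Compare the current state with the goal; while a difference
    remains, apply the operator whose result is closest to the goal."""
    for _ in range(max_steps):
        if state == goal:
            return state  # no difference left: problem solved
        state = min((op(state) for op in operators),
                    key=lambda s: abs(goal - s))  # difference reduction
    return state

# Toy problem space: reach 14 from 0 with the moves +1, -1, and *2.
ops = [lambda s: s + 1, lambda s: s - 1, lambda s: s * 2]
print(means_end_analysis(0, 14, ops))  # -> 14
```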

Besides using cybernetics, neuroscientists further developed it to explain anticipation in biological systems. Although closed-loop feedback can perform online corrections in a determinate machine, it does not give any direction (Ashby, 1956, pp. 224–225). Therefore, a feedforward loop was proposed in cybernetics that could improve control over systems through anticipation of future actions (Ashby, 1956; MacKay, 1956). Generally, the feedforward mechanism is constructed as another input pathway parallel to the actual input, which enables comparison between the actual and anticipated inputs before they are processed by the system (Ashby, 1960; Pribram, 1976, p. 309). In other words, a self-organized system is not only capable of self-adjusting its own behavior (feedback) but is also able to change its own internal organization in such a way as to select, from among the random responses that it attempts, the response that eliminates a disturbance from the outside (Ashby, 1960). Therefore, the feedforward loop “nudges” the inputs based on predefined parameters in an automatic manner to account for cognitive adaptation, indicating higher-level action planning. Moreover, unlike error-based feedback control, knowledge-based feedforward control cannot be further adjusted once the feedforward input has been processed. Feedforward control from cybernetics has been used by psychologists to understand human action control at behavioral, motoric, and neural levels (for a review, see Basso and Olivetti Belardinelli, 2006).

Therefore, both feedback and feedforward are critical to a control system, in which feedforward control is valuable and can improve performance when feedback control is not sufficient. A control system with feedforward and feedback loops allows interaction between top-down and bottom-up information processing. Consequently, the main function of a control system is not to create “behavior” but to create and maintain the anticipation of a specific desired condition, which constitutes its reference value or standard of comparison.
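
A minimal sketch of the combination (our illustration, with an arbitrary gain): the feedforward term cancels a disturbance the system anticipates, while the feedback term corrects whatever error remains.

```python
def control_step(state, reference, disturbance, predicted_disturbance, gain=0.2):
    """One step of combined control: anticipation (feedforward) plus
    error correction (feedback)."""
    feedforward = -predicted_disturbance     # act before the error occurs
    feedback = gain * (reference - state)    # act on the sensed error
    return state + feedback + feedforward + disturbance

state = 0.0
for _ in range(50):
    state = control_step(state, reference=10.0,
                         disturbance=1.0, predicted_disturbance=1.0)
# With perfect anticipation the state settles at the reference (10.0);
# with predicted_disturbance = 0.0 it settles at 15.0 instead,
# showing the steady-state error that feedback alone cannot remove here.
```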

Cognitive psychology and neuroscience suggest that a combination of anticipatory and hierarchical structures is involved in human action learning and control (Herbort et al., 2005). Specifically, anticipatory mechanisms lead to direct action selection in inverse models and to effective filtering mechanisms in forward models, both of which are based on sensorimotor contingencies acquired through people’s interaction with the environment (the ideomotor principle; Greenwald, 1970; James, 1890). Therefore, the feedback loop included in cybernetics, as well as the feedforward loop, is essential to these learning processes.

We conclude this section with mention of one of the milestone books in cognitive psychology, Plans and the Structure of Behavior, by Miller et al. (1960). In the prologue to the book, the authors indicate that they worked on it together for a year at the Center for Advanced Study in the Behavioral Sciences in California. As indicated by the title, the central idea motivating the book was that of a plan, or program, that guides behavior. But, the authors said:

    Our fundamental concern, however, was to discover whether the cybernetic ideas have any relevance for psychology…. There must be some way to phrase the new ideas [of cybernetics] so that they can contribute to and profit from the science of behavior that psychologists have created. It was the search for that favorable intersection that directed the course of our year-long debate (p. 3, emphasis ours).

In developing the central concept of the test-operate-test-exit (TOTE) unit in the book, Miller et al. (1960) stated, “The interpretation to which the argument builds is one that has been called the ‘cybernetic hypothesis,’ namely that the fundamental building block of the nervous system is the feedback loop” (pp. 26–27). As noted by Edwards (1997), the TOTE concept “is the same principle upon which Wiener, Rosenblueth, and Bigelow had based ‘Behavior, Purpose, and Teleology”’ (p. 231). Thus, although Miller later gave 1956 as the date that cognitive science “burst from the womb of cybernetics,” even after the birth of cognitive science, the genes inherited from cybernetics continued to influence its development.
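
In procedural terms, a TOTE unit is a loop that operates until its test is satisfied. A minimal sketch (our notation, not Miller et al.'s):

```python
def tote(test, operate, state):
    """Test-Operate-Test-Exit: keep operating on the situation until
    the test no longer detects an incongruity, then exit."""
    while not test(state):      # Test: does the state match the standard?
        state = operate(state)  # Operate: act to remove the incongruity
    return state                # Exit

# Toy version of the book's hammering example: strike until the nail is flush.
flush = tote(test=lambda depth: depth <= 0,
             operate=lambda depth: depth - 1,
             state=5)
```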

Information and Uncertainty

Information theory, a useful way to quantify psychological and behavioral concepts, had possibly a more direct impact than cybernetics on psychological research. No articles were retrieved from the PsycINFO database prior to 1950 when we entered “information theory” as an unrestricted field search term on May 3, 2018. But, from 1950 to 1956 there were 37 entries with “information theory” in the title and 153 entries with the term in some field. Two articles applying information theory to speech communication appeared in 1950: a general theoretical article by Fano (1950) of the Research Laboratory of Electronics at MIT, and an empirical article by Licklider (1950) of the Acoustics Laboratory, also at MIT. Licklider (1950) presented two methods of reducing the frequencies of speech without destroying its intelligibility, using the Shannon-Weaver information formula based on first-order probability.

Given Licklider’s background in cybernetics and information theory, it is not too surprising that he played a major role in establishing the ARPAnet, which was later replaced by the Internet:

    His 1968 paper called “The Computer as a Communication Device” illustrated his vision of network applications and predicted the use of computer networks for communications. Until then, computers had generally been thought of as mathematical devices for speeding up computations (Internet Hall of Fame, 2016).

Licklider worked from 1943 to 1950 at the Psycho-Acoustics Laboratory (PAL) of Harvard University, headed by S. S. Stevens. Edwards (1997, p. 212) noted, “The PAL played a crucial role in the genesis of postwar information processing psychologies.” He pointed out, “A large number of those who worked at the lab… helped to develop computer models and metaphors and to introduce information theory into human experimental psychology” (p. 212). Among those were George Miller and Wendell Garner, who did much to promulgate information theory in psychology (Garner and Hake, 1951; Miller, 1953), as well as Licklider, Galanter, and Pribram. Much of PAL’s research was based on solving engineering problems for the military and industry.

One of the most influential applications of information theory to the exploration of human information-processing limitations was that of Hick (1952) and Hyman (1953), who used it to explain increases in reaction time as a function of uncertainty regarding the potential stimulus-response alternatives. Their analyses showed that reaction time increased as a logarithmic function of the number of equally likely alternatives and as a function of the amount of information computed from differential probabilities of occurrence and sequential effects. This relation, called Hick’s law or the Hick-Hyman law (formalized below), has continued to be a source of research to the present and is considered to be a fundamental law of human-computer interaction (Proctor and Schneider, 2018). Fitts and Seeger (1953) showed that uncertainty was not the only factor influencing reaction time. They examined performance of an eight-choice task for all combinations of three spatial-location stimulus and response arrays. Responses were faster and more accurate when the response array corresponded to that of the stimulus array than when it did not, which Fitts and Seeger called a stimulus-response compatibility effect. The main point of their demonstration was that correspondence of the spatial codes for the stimulus and response alternatives was crucial, and this led to detailed investigations of compatibility effects that continue to the present (Proctor and Vu, 2006, 2016).
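
In its simplest form, for N equally likely stimulus-response alternatives, the Hick-Hyman law states that mean reaction time grows linearly with the information transmitted:

$$RT = a + b \log_2 N,$$

where a and b are empirical constants; with unequal or sequentially dependent stimulus probabilities, log2 N is replaced by the average information H of the stimulus ensemble.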

Even more influential has been Fitts’s law, which describes movement time in tasks in which people make discrete aimed movements to targets or series of repetitive movements between two targets. Fitts (1954) defined the index of difficulty as –log2(W/2A) bits/response, where W is the target width and A is the amplitude (or distance) of the movement. The resulting movement time is a linear function of the index of difficulty, with the slope differing for different movement types. Fitts’s law continues to be the subject of basic and applied research to the present (Glazebrook et al., 2015; Velasco et al., 2017).
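
Expressed in consistent notation, Fitts's law predicts movement time from the index of difficulty:

$$MT = a + b \cdot ID, \qquad ID = -\log_2\!\left(\frac{W}{2A}\right) = \log_2\!\left(\frac{2A}{W}\right)\ \text{bits},$$

so halving the target width W, or doubling the movement amplitude A, adds one bit of difficulty and a constant increment b to the predicted movement time.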

Information theory was applied to a range of other topics during the 1950s, including intelligence tests ( Hick, 1951 ), memory ( Aborn and Rubenstein, 1952 ; Miller, 1956 ), perception ( Attneave, 1959 ), skilled performance ( Kay, 1957 ), music ( Meyer, 1957 ), and psychiatry ( Brosin, 1953 ). However, the key concept of information theory, entropy, or uncertainty, was found not to provide an adequate basis for theories of human performance (e.g., Ambler et al., 1977 ; Proctor and Schneider, 2018 ).

Shannon’s (1948a) advocacy of information theory for electronic communication was built mainly on there being a mature understanding of the structured pattern of information transmission within electromagnetic systems at that time. Although cognitive research has greatly expanded our knowledge about how humans select, store, manipulate, recover, and output information, the fundamental mechanisms of those information processes remain under investigation (Neisser, 1967, p. 8). Thus, although information theory provided a useful mathematical metric, it did not provide a comprehensive account of events between the stimulus and response, which is what most psychologists were interested in (Broadbent, 1959). With the requirement that information be applicable to a vast array of psychological issues, “information” was expanded from a measure of the informativeness of stimuli and responses to a framework for describing the mental or neural events between stimuli and responses in cognitive psychology (Collins, 2007). Therefore, the more enduring impact of information theory was through getting cognitive psychologists to focus on the nature of human information processing, such that Lachman et al. (1979) titled their introduction to the field Cognitive Psychology and Information Processing.

Along with information theory, the arrival of the computer provided one of the most viable models to help researchers understand the human mind. Computers grew from a desire to make machines smart (Laird et al., 1987), on the assumption that stored knowledge inside a machine can be applied to the world in much the way that people apply knowledge, constituting intelligence (e.g., intelligent machine, Turing, 1937; AI, Minsky, 1968; McCarthy et al., 2006). The core idea of the computer metaphor is that the mind functions like a digital computer, in which mental states are computational states and mental processes are computational processes. The use of the computer as a tool for thinking about how the mind handles information has been highly influential in cognitive psychology. For example, the PsycINFO database returned no articles prior to 1950 when “encoding” was entered as an unrestricted field search term on May 3, 2018. From 1950 to 1956 there was 1 entry with “encoding” in the title and 4 entries with the term in some field. But, from 1956 to 1973, there were 214 entries with “encoding” in the title and 578 entries with the term in some field, including the famous encoding specificity principle of Tulving and Thomson (1973). Some models in cognitive psychology were directly inspired by how the memory system of a computer works, for example, the multi-store memory (Atkinson and Shiffrin, 1968) and working memory (Baddeley and Hitch, 1974) models.
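
The flavor of the metaphor can be conveyed by a deliberately crude sketch (ours, not Atkinson and Shiffrin’s actual model) that treats memory as encode, store, and retrieve operations on a capacity-limited store:

```python
from collections import OrderedDict

class MultiStoreMemory:
    """Toy illustration of the computer metaphor: a limited-capacity
    short-term store plus an unlimited long-term store."""

    def __init__(self, stm_capacity=7):  # cf. the "magical number seven"
        self.short_term = OrderedDict()
        self.long_term = {}
        self.stm_capacity = stm_capacity

    def encode(self, cue, item):
        """New items enter the short-term store; the oldest item is
        displaced when capacity is exceeded."""
        self.short_term[cue] = item
        if len(self.short_term) > self.stm_capacity:
            self.short_term.popitem(last=False)

    def rehearse(self, cue):
        """Rehearsal copies an item into the long-term store."""
        if cue in self.short_term:
            self.long_term[cue] = self.short_term[cue]

    def retrieve(self, cue):
        """Retrieval checks the short-term store, then the long-term store."""
        return self.short_term.get(cue, self.long_term.get(cue))
```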

Although cybernetics is the origin of early AI (Kline, 2011), and the computer metaphor and cybernetics share similar concepts (e.g., representation), they are fundamentally different at the conceptual level. The computer metaphor represents a genuine simplification: Terms like “encoding” and “retrieving” can be used to describe human behavior analogously to machine operation but without specifying a precise mapping between the analogical “computer” and the target “human” domain (Gentner and Grudin, 1985). In contrast, cybernetics provides a powerful framework to help people understand the human mind, which holds that, whether the system is human or machine, it is necessary and possible to achieve goals through correcting action using feedback and adapting to the external environment using feedforward. Recent breakthroughs in AI (e.g., AlphaGo beating professional Go players) rely on training the machine, using a large number of examples and an artificial neural network (ANN), to learn how to perform tasks at a level not seen before, without human guidance. This unsupervised learning allows the machine to determine on its own whether a certain function should be executed. The development of ANNs has been greatly influenced by consideration of the dynamic properties of cybernetics (Cruse, 2009), to achieve the goal of self-organization or self-regulation.

Statistical Inference and Decisions

Statistical inference and decision theory also had a substantial impact. Engineering psychologists were among the leaders in promulgating use of the ANOVA, with Chapanis and Schachter (1945; Schachter and Chapanis, 1945) using it in research on depth perception through distorted glass, conducted in the latter part of World War II and presented in technical reports. As noted by Rucci and Tweney (1980), “Following the war, these [engineering] psychologists entered the academic world and began to publish in regular journals, using ANOVA” (p. 180).

Factorial experiments and use of the ANOVA were slow to take hold in psychology. Rucci and Tweney (1980) counted the frequency with which the t-test and ANOVA were used in major psychology journals from 1935 to 1952. They described the relation as, “Use of both t and ANOVA increased gradually prior to World War II, declined during the war, and increased immediately thereafter” (p. 172). Rucci and Tweney concluded, “By 1952 it [ANOVA] was fully established as the most frequently used technique in experimental research” (p. 166). They emphasized that this increased use of ANOVA reflected a radical change in experimental design, and noted that although one could argue that the statistical technique caused the change in psychological research, “It is just as plausible that the discipline had developed in such a way that the time was ripe for adoption of the technique” (p. 167). Note that the rise in use of null hypothesis testing and ANOVA paralleled that of cybernetics and information theory, which suggests that the time was indeed ripe for the use of probability theory, multiple independent variables, and formal scientific decision making through hypothesis testing that is embodied in the factorial design and ANOVA.

The first half of the 1950s also saw the introduction of signal detection theory, a variant of statistical decision theory, for analyzing human perception and performance. Initial articles by Peterson et al. (1954) and Van Meter and Middleton (1954) were published in a journal of the Institute of Radio Engineers (IRE, a predecessor of the Institute of Electrical and Electronics Engineers, IEEE), but psychologists were quick to realize the importance of the approach. This point is evident in the first sentence of Swets et al.’s (1961) article describing signal detection theory in detail:

About 5 years ago, the theory of statistical decision was translated into a theory of signal detection. Although the translation was motivated by problems in radar, the detection theory that resulted is a general theory… The generality of the theory suggested to us that it might also be relevant to the detection of signals by human observers… The detection theory seemed to provide a framework for a realistic description of the behavior of the human observer in a variety of perceptual tasks (p. 301).

Signal detection theory has proved to be an invaluable tool because it dissociates influences of the evidence on which decisions are based from the criteria applied to that evidence. This way of conceiving decisions is useful not only for perceptual tasks but for a variety of tasks in which choices on the basis of noisy information are required, including recognition memory ( Kellen et al., 2012 ). Indeed, Wixted (2014) states, “Signal-detection theory is one of psychology’s most notable achievements, but it is not a theory about typical psychological phenomena such as memory, attention, vision or psychopathology (even though it applies to all of those areas and more). Instead, it is a theory about how we use evidence to make decisions.”
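
In the equal-variance Gaussian form of the theory, this dissociation can be computed directly from hit and false-alarm rates; a minimal sketch (illustrative rates) using only the Python standard library:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def sdt_measures(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian model: sensitivity d' = z(H) - z(FA);
    response criterion c = -(z(H) + z(FA)) / 2."""
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2
    return d_prime, criterion

# Two observers with nearly the same sensitivity but different bias:
print(sdt_measures(0.84, 0.16))  # d' ~ 1.99, c ~ 0.00 (neutral criterion)
print(sdt_measures(0.69, 0.07))  # d' ~ 1.97, c ~ 0.49 (conservative criterion)
```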

In the 1960s, Sternberg (1969) formalized the additive factors method of analyzing reaction-time data to identify different information-processing stages. Specifically, a factorial experiment is conducted, and if two independent variables affect different processing stages, the two variables do not interact. If, on the other hand, there is a significant interaction, then the variables can be assumed to affect at least one processing stage in common. Note that the subtitle of Sternberg’s article is “Extension of Donders’ Method,” which is a reference to the research reported by F. C. Donders 100 years earlier, in which he estimated the time for various processing stages by subtracting the reaction time obtained for a task that did not have an additional processing stage from the reaction time for a task that did. A limitation of Donders’s (1868/1969) subtraction method is that the stages had to be assumed and could not be identified. Sternberg’s extension, which provided a means for identifying the stages, did not occur until both the language of information processing and the factorial ANOVA were available as tools for analyzing reaction-time data. The additive factors method formed a cornerstone for much research in cognitive psychology for the following two decades, and its logic is still often applied to interpret empirical results, often without explicit acknowledgment.
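
The additive-factors logic can be illustrated with hypothetical cell means from a 2 × 2 factorial reaction-time experiment:

```python
# Hypothetical mean RTs (ms) for factors A (e.g., stimulus quality)
# and B (e.g., response complexity), each with levels 0 and 1.
additive    = {(0, 0): 400, (0, 1): 450, (1, 0): 480, (1, 1): 530}
interactive = {(0, 0): 400, (0, 1): 450, (1, 0): 480, (1, 1): 580}

def interaction_contrast(rt):
    """(A1B1 - A1B0) - (A0B1 - A0B0): zero when the factor effects add."""
    return (rt[1, 1] - rt[1, 0]) - (rt[0, 1] - rt[0, 0])

print(interaction_contrast(additive))     # 0  -> factors affect separate stages
print(interaction_contrast(interactive))  # 50 -> at least one stage in common
```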

In psychology, mechanisms of how people make decisions in perceptual and cognitive tasks have often been proposed on the basis of sequential sampling to explain the pattern of obtained reaction time (RT) and percentage error. The study of such mechanisms addresses one of the fundamental questions in psychology, namely, how the central nervous system translates perception into action and how this translation depends on the interactions and expectations of individuals. Like signal detection theory, the theory of sequential sampling starts from the premise that perceptual and cognitive decisions are statistical in nature. It also follows the widely accepted assumption that sensory and cognitive systems are inherently noisy and time-varying. In practice, the study of a given sequential sampling model reduces to the study of a stochastic process, which represents the cumulative information available to the decision at a given time. A Wiener process forms the basis of Ratcliff’s (1978) influential diffusion model of reaction times, in which noisy information accumulates continuously over time from a starting point to response thresholds (Ratcliff and Smith, 2004). Recently, Srivastava et al. (2017) extended this diffusion model to make the Wiener process time dependent. More generally, Shalizi (2007, p. 126) makes the point, “The fully general theory of stochastic calculus considers integration with respect to a very broad range of stochastic processes, but the original case, which is still the most important, is integration with respect to the Wiener process.”
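
A minimal simulation (ours, with arbitrary parameter values) of a symmetric two-boundary Wiener diffusion conveys the idea:

```python
import random

def diffusion_trial(drift=0.3, threshold=1.0, dt=0.001, noise_sd=1.0):
    """Euler-Maruyama simulation of a Wiener diffusion: evidence x starts
    at 0 and accumulates until it crosses +threshold (response 'A') or
    -threshold (response 'B'). Returns (choice, decision time in s)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise_sd * random.gauss(0.0, dt ** 0.5)
        t += dt
    return ("A" if x > 0 else "B"), t

trials = [diffusion_trial() for _ in range(500)]
p_a = sum(choice == "A" for choice, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(round(p_a, 3), round(mean_rt, 3))  # choice probability and mean time
```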

In parallel to use of the computer metaphor to understand the human mind, use of the laws of probability as metaphors of the mind also has had a profound influence on physiology and psychology (Gigerenzer, 1991). Gregory (1968) regarded seeing an object from an image as an inference from a hypothesis (see also the “unconscious inference” of Helmholtz, 1866/1925). According to Gregory (1980), in spite of differences between perception and science, the cognitive procedures carried out by perceptual neural processes are essentially the same as the processes of predictive hypotheses of science. In particular, Gregory emphasized the importance of, and distinction between, bottom–up and top–down procedures in perception. For normal perception and for perceptual illusions, the bottom–up procedures filter and structure the input, and the top–down procedures refer to stored knowledge or assumptions that can work downwards to parcel signals and data into objects.

A recent development of the statistics metaphor is the Bayesian brain hypothesis, which has been used to model perception and decision making since the 1990s (Friston, 2012). Rao and Ballard (1997) described a hierarchical neural network model of visual recognition, in which both input-driven bottom–up signals and expectation-driven top–down signals were used to predict the current recognition state. They showed that feedback from a higher layer to the input layer carries predictions of expected inputs, and the feedforward connections convey the errors in prediction, which are used to correct the estimation. Rao (2004) illustrated how the Bayesian model could be implemented in neural networks with feedforward and recurrent connections, showing that for both perception and decision-making tasks the resulting network exhibits direction selectivity and computes posterior error corrections.
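
The core of such models is a precision-weighted prediction-error update; the sketch below (a drastic simplification, not Rao and Ballard’s network) shows Bayes’ rule for a single Gaussian prior and observation:

```python
def gaussian_posterior(prior_mean, prior_var, observation, obs_var):
    """Bayes' rule for Gaussians: the posterior mean moves from the
    top-down prior prediction toward the bottom-up data by a
    precision-weighted gain applied to the prediction error."""
    gain = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + gain * (observation - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# A strong prior and noisy data: the estimate moves only partway
# toward the sensory input.
print(gaussian_posterior(prior_mean=0.0, prior_var=1.0,
                         observation=2.0, obs_var=4.0))  # (0.4, 0.8)
```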

We would like to highlight that, unlike the analogy to computers, the cybernetics view is essential for the Bayesian brain hypothesis. This reliance on cybernetics arises because the Bayesian brain hypothesis models the interaction between prior knowledge (top–down) and sensory evidence (bottom–up) quantitatively. Therefore, the success of Bayesian brain modeling is due to both the framework from cybernetics and the computation of probability. Seth (2015) explicitly acknowledges this relation in his article, The Cybernetic Bayesian Brain.

Meanwhile, the external information format (or representation) on which Bayesian inferences and statistical reasoning operate has been investigated. For example, Gigerenzer and Hoffrage (1995) varied mathematically equivalent representations of information in percentage or frequency formats for various problems (e.g., the mammography problem, the cab problem) and found that frequency formats enabled participants’ inferences to conform to Bayes’ theorem without any teaching or instruction.
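
For the mammography problem, the standard numbers (assumed here: 1% base rate, 80% hit rate, 9.6% false-positive rate) recast as natural frequencies make the Bayesian answer nearly transparent:

```python
# Natural-frequency rendering of the standard mammography problem.
population = 1000
with_cancer = 10                                   # 1% base rate
true_positives = 8                                 # 80% of those with cancer
false_positives = round(0.096 * (population - with_cancer))  # 95 of 990

p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(round(p_cancer_given_positive, 3))  # ~0.078: about 8 of 103 positives
```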

The Information Age follows periods of human history that are called the Stone Age, Bronze Age, Iron Age, and Industrial Age. These labels indicate that eras of human history are often characterized by the dominant tool materials and technologies that humans used. The formation of the Information Age is inseparable from the interdisciplinary work on cybernetics, information theory, and statistical inference, which together generated a cognitive psychology adapted to the age. Each of these three pillars has been acknowledged separately by other authors, and contemporary scientific approaches to motor control and cognitive processing have continued to be inspired by cybernetics, information theory, and statistical inference.

Kline (2015 , p. 1) aptly summarized the importance of cybernetics in the founding of the information age:

During contentious meetings filled with brilliant arguments, rambling digressions, and disciplinary posturing, the cybernetics group shaped a language of feedback, control, and information that transformed the idiom of the biological and social sciences, sparked the invention of information technologies, and set the intellectual foundation for what came to be called the information age. The premise of cybernetics was a powerful analogy: that the principles of information-feedback machines, which explained how a thermostat controlled a household furnace, for example, could also explain how all living things—from the level of the cell to that of society—behaved as they interacted with their environment.

Pezzulo and Cisek (2016) extended the cybernetic principles of feedback and feedforward control to understanding cognition. In particular, they proposed hierarchical feedback control, indicating that adaptive action selection is influenced not only by prediction of immediate outcomes but also by prediction of new opportunities afforded by those outcomes. Scott (2016) highlighted the use of sensory feedback, after a person becomes familiar with performing a perceptual-motor task, to drive goal-directed motor control, reducing the role of top–down control through utilizing bottom–up sensory feedback.

Although less expansive than Kline (2015) , Fan (2014 , p. 2) emphasized the role of information theory. He stated, “Information theory has a long and distinguished role in cognitive science and neuroscience. The ‘cognitive revolution’ of the 1950s, as spearheaded by Miller (1956) and Broadbent (1958) , was highly influenced by information theory.” Likewise, Gigerenzer (1991 , p. 255) said, “Inferential statistics… provided a large part of the new concepts for mental processes that have fueled the so called cognitive revolution since the 1960s.” The separate treatment of the three pillars by various authors indicates that the pillars have distinct emphases, which are sometimes treated as in opposition ( Verschure, 2016 ). However, we have highlighted the convergent aspects of the three that were critical to the founding of cognitive psychology and its continued development to the present. An example of a contemporary approach utilizing information theory and statistics in computational and cognitive neuroscience is the study of activity of neuronal populations to understand how the brain processes information. Quian Quiroga and Panzeri (2009) reviewed methods based on statistical decoding and information theory, and concluded, “Decoding and information theory describe complementary aspects of knowledge extraction… A more systematic joint application of both methodologies may offer additional insights” (p. 183).

Leahey (1992) claimed that it is incorrect to say that there was a “cognitive revolution” in the 1950s, but he acknowledged “that information-processing psychology has had world-wide influence…” (p. 315). Likewise, Mandler (2007) pointed out that the term “cognitive revolution” for the changes that occurred in the 1950s is a misnomer because, although behaviorism was dominant in the United States, much psychology outside of the United States prior to that time could be classified as “cognitive.” However, he also said, after reviewing the 1956 meetings and ones in 1958, that “the 1950s surely were ready for the emergence of the new information-processing psychology—the new cognitive psychology” (p. 187). Ironically, although both Leahey and Mandler identified the change as being one of information processing, neither author acknowledged the implication of their analyses, which is that there was a transformation that is more aptly labeled the information-processing revolution rather than the cognitive revolution. The concepts provided by the advances in cybernetics, information theory, and inferential statistical theory together provided the language and methodological tools that enabled a significant leap forward in theorizing.

Wootton (2015) says of his book The Invention of Science: A New History of the Scientific Revolution, “We can state one of its core premises quite simply: a revolution in ideas requires a revolution in language” (p. 48). That language is what the concepts of communications systems engineering and inferential statistical theory provided for psychological research. Assessing the early influence of cybernetics and information theory on cognitive psychology, Broadbent (1959) stated, “It is in fact, the cybernetic approach above all others which has provided a clear language for discussing those various internal complexities which make the nervous system differ from a simple channel” (p. 113). He also identified a central feature of information theory as being crucial: The information conveyed by a stimulus is dependent on the stimuli that might have occurred but did not.

Posner (1986) , in his introduction to the Information Processing section of the Handbook of Perception and Human Performance , highlights more generally that the language of information processing affords many benefits to cognitive psychologists. He states, “Information processing language provides an alternative way of discussing internal mental operations intermediate between subjective experience and activity of neurons” (p. V-3). Later in the chapter, he elaborates:

The view of the nervous system in terms of information flow provided a common language in which both conscious and unconscious events might be discussed. Computers could be programmed to simulate exciting tasks heretofore only performed by human beings without requiring any discussion of consciousness. By analogies with computing systems, one could deal with the format (code) in which information is presented to the senses and the computations required to change code (recodings) and for storage and overt responses. These concepts brought a new unity to areas of psychology and a way of translating between psychological and physiological processes. The presence of the new information processing metaphor reawakened interest in internal mental processes beyond that of simple sensory and motor events and brought cognition back to a position of centrality in psychology (p. V-7).

Posner also notes, “The information processing approach has a long and respected relationship with applications of experimental psychology to industrial and military settings” (V-7). The reason, as emphasized years earlier by Wiener, is that it allows descriptions of humans to be integrated with those of the nonhuman parts of the system. Again, from our perspective, there was a revolution, but it was specifically an information-processing revolution.

Our main points can be summarized as follows:

(1) The information age originated in interdisciplinary research of an applied nature.
(2) Cybernetics and information theory played pivotal roles, with the former being more fundamental than the latter through its emphasis on a systems approach.
(3) These roles of communication systems theory were closely linked to developments in inferential statistical theory and applications.
(4) The three pillars of cybernetics, information theory, and inferential statistical theory undergirded the so-called cognitive revolution in psychology, which is more appropriately called the information-processing revolution.
(5) Those pillars, rooted in solving real-world problems, provided the language and methodological tools that enabled growth of the basic and applied fields of psychology.
(6) The experimental design and inferential statistics adopted in cognitive psychology, with emphasis on rejecting null hypotheses, originated in the applied statistical analyses of the scientist Ronald Fisher and were influential because of their compatibility with scientific research conducted using controlled experiments.

Simon (1969) , in an article entitled “Designing Organizations for an Information-Rich World,” pointed out the problems created by the wealth of information:

Now, when we speak of an information-rich world, we may expect, analogically, that the wealth of information means a dearth of something else – a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it (manuscript pp. 6-7).

What Simon described is valid and even more evident in the current smart device and Internet era, where the amount of information is overwhelming. Phenomena as disparate as accidents caused by talking on a cellphone while driving ( Banducci et al., 2016 ) and difficulty assessing the credibility of information reported on the Internet or other media ( Chen et al., 2015 ) can be attributed to the overload. Moreover, the rapid rate at which information is encountered may have a negative impact on maintaining a prolonged focus of attention ( Microsoft Canada, 2015 ). Therefore, knowing how people process information and allocate attention is increasingly essential in the current explosion of information.

As noted, the predominant method of cognitive psychology in the information age has been that of drawing theoretical inferences from the statistical results of small-scale sets of data collected in controlled experimental settings (e.g., laboratory). The progress in psychology is tied to progress in statistics as well as technological developments that improve our ability to measure and analyze human behavior. Outside of the lab, with the continuing development of the Internet of Things (IoT), especially the implementation of AI, human physical lives are becoming increasingly interweaved into the cyber world. Ubiquitous records of human behavior, or “big data,” offer the potential to examine cognitive mechanisms at an escalated scale and level of ecological validity that cannot be achieved in the lab. This opportunity seems to require another significant transformation of cognitive psychology to use those data effectively to push forward understanding of the human mind and ensure seamless integration with cyber physical systems.

In a posthumous article, Brunswik (1956) noted that psychology should have the goal of broadening the study of perception and learning by including interactions with a probabilistic environment. He insisted that psychology “must link behavior and environment statistically in bivariate or multivariate correlation rather than with the predominant emphasis on strict law…” (p. 158). As part of this proposal, Brunswik indicated a need to relate psychology more closely to disciplines that “use autocorrelation and intercorrelation, as theoretically stressed especially by Wiener (1949), for probability prediction” (p. 160). With the ubiquitous data being collected within cyber physical systems, more extensive use of sophisticated correlational methods to extract the information embedded within the data will likely be necessary.

Using Stokes’s (1997) two dimensions of scientific research (considerations of use; quest for fundamental understanding), the work of pioneers of the Information Age, including Wiener, Shannon, and Fisher, falls within Pasteur’s Quadrant of use-inspired basic research. They were motivated by the need to solve immediate applied problems, and through their research they advanced our fundamental understanding of nature. Likewise, in seizing the opportunity to use big data to inform cognitive psychology, psychologists need to increase their involvement in interdisciplinary research targeted at real-world problems. In seeking to mine the information from big data, a new age is likely to emerge for cognitive psychology and related disciplines.

Although we reviewed the history of the information-processing revolution and subsequent developments in this paper, our ultimate concern is with the future of cognitive psychology. So, it is fitting to end, as we began, with a quote from Wiener (1951, p. 68):

To respect the future, we must be aware of the past.

Author Contributions

AX and RP contributed jointly and equally to the paper.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

• Aborn M., Rubenstein H. (1952). Information theory and immediate recall. J. Exp. Psychol. 44 260–266. 10.1037/h0061660
• Acree M. C. (1978). Theories of Statistical Inference in Psychological Research: A Historico-Critical Study. Doctoral dissertation, Clark University, Worcester, MA.
• Adams J. A. (1971). A closed-loop theory of motor learning. J. Mot. Behav. 3 111–150. 10.1080/00222895.1971.10734898
• Aldrich J. (2007). Information and economics in Fisher’s design of experiments. Int. Stat. Rev. 75 131–149. 10.1111/j.1751-5823.2007.00020.x
• Ambler B. A., Fisicaro S. A., Proctor R. W. (1977). Information reduction, internal transformations, and task difficulty. Bull. Psychon. Soc. 10 463–466. 10.3758/BF03337698
• Ashby W. R. (1956). An Introduction to Cybernetics. London: Chapman & Hall. 10.5962/bhl.title.5851
• Ashby W. R. (1960). Design for a Brain: The Origin of Adaptive Behavior. New York, NY: Wiley & Sons. 10.1037/11592-000
• Atkinson R. C., Shiffrin R. M. (1968). “Human memory: a proposed system and its control processes,” in The Psychology of Learning and Motivation, Vol. 2, eds Spence K. W., Spence J. T. (New York, NY: Academic Press), 89–195.
• Attneave F. (1959). Applications of Information Theory to Psychology: A Summary of Basic Concepts, Methods, and Results. New York, NY: Henry Holt.
• Baddeley A. D., Hitch G. (1974). “Working memory,” in The Psychology of Learning and Motivation, Vol. 8, ed. Bower G. H. (New York, NY: Academic Press), 47–89.
• Banducci S. E., Ward N., Gaspar J. G., Schab K. R., Crowell J. A., Kaczmarski H., et al. (2016). The effects of cell phone and text message conversations on simulated street crossing. Hum. Factors 58 150–162. 10.1177/0018720815609501
• Basso D., Olivetti Belardinelli M. (2006). The role of the feedforward paradigm in cognitive psychology. Cogn. Process. 7 73–88. 10.1007/s10339-006-0034-1
• Beer S. (1959). Cybernetics and Management. New York, NY: John Wiley & Sons.
• Biology Online Dictionary (2018). Homeostasis. Available at: https://www.biology-online.org/dictionary/Homeostasis
• Birmingham H. P., Taylor F. V. (1954). A design philosophy for man-machine control systems. Proc. Inst. Radio Eng. 42 1748–1758. 10.1109/JRPROC.1954.274775
• Broadbent D. E. (1958). Perception and Communication. London: Pergamon Press. 10.1037/10037-000
• Broadbent D. E. (1959). Information theory and older approaches in psychology. Acta Psychol. 15 111–115. 10.1016/S0001-6918(59)80030-5
• Brosin H. W. (1953). “Information theory and clinical medicine (psychiatry),” in Current Trends in Information Theory, ed. Patton R. A. (Pittsburgh, PA: University of Pittsburgh Press), 140–188.
• Brunswik E. (1956). Historical and thematic relations of psychology to other sciences. Sci. Mon. 83 151–161.
• Chapanis A., Schachter S. (1945). Depth Perception through a P-80 Canopy and through Distorted Glass. Memorandum Rep. TSEAL-69S-48N. Dayton, OH: Aero Medical Laboratory.
• Chen Y., Conroy N. J., Rubin V. L. (2015). News in an online world: the need for an “automatic crap detector”. Proc. Assoc. Inform. Sci. Technol. 52 1–4. 10.1002/pra2.2015.145052010081
• Cherry E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25 975–979. 10.1121/1.1907229
• Cherry E. C. (1957). On Human Communication. New York, NY: John Wiley.
• Collins A. (2007). From H = log s^n to conceptual framework: a short history of information. Hist. Psychol. 10 44–72. 10.1037/1093-4510.10.1.44
• Conway F., Siegelman J. (2005). Dark Hero of the Information Age. New York, NY: Basic Books.
• Craik K. J. (1947). Theory of the human operator in control systems. I. The operator as an engineering system. Br. J. Psychol. 38 56–61.
• Craik K. J. (1948). Theory of the human operator in control systems. II. Man as an element in a control system. Br. J. Psychol. 38 142–148.
• Cruse H. (2009). Neural Networks as Cybernetic Systems, 3rd Edn. Bielefeld: Brain, Minds, and Media.
• Dawkins R. (2010). Who is the Greatest Biologist Since Darwin? Why? Available at: https://www.edge.org/3rd_culture/leroi11/leroi11_index.html#dawkins
• Deutsch J. A., Deutsch D. (1963). Attention: some theoretical considerations. Psychol. Rev. 70 80–90. 10.1037/h0039515
• Dignath D., Pfister R., Eder A. B., Kiesel A., Kunde W. (2014). Representing the hyphen in action–effect associations: automatic acquisition and bidirectional retrieval of action–effect intervals. J. Exp. Psychol. Learn. Mem. Cogn. 40 1701–1712. 10.1037/xlm0000022
• Donders F. C. (1868/1969). “On the speed of mental processes,” in Attention and Performance II, ed. Koster W. G. (Amsterdam: North Holland Publishing Company), 412–431.
• Edwards P. N. (1997). The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press.
• Efron B. (1998). R. A. Fisher in the 21st century: invited paper presented at the 1996 R. A. Fisher lecture. Stat. Sci. 13 95–122.
• Elias P. (1994). “The rise and fall of cybernetics in the US and USSR,” in The Legacy of Norbert Wiener: A Centennial Symposium, eds Jerison D., Singer I. M., Stroock D. W. (Providence, RI: American Mathematical Society), 21–30.
• Ernst G. W., Newell A. (1969). GPS: A Case Study in Generality and Problem Solving. New York, NY: Academic Press.
• Fan J. (2014). An information theory account of cognitive control. Front. Hum. Neurosci. 8:680. 10.3389/fnhum.2014.00680
• Fano R. M. (1950). The information theory point of view in speech communication. J. Acoust. Soc. Am. 22 691–696. 10.1121/1.1906671
• Fisher R. A. (1925). Statistical Methods for Research Workers. London: Oliver & Boyd.
• Fisher R. A. (1935). The Design of Experiments. London: Oliver & Boyd.
• Fisher R. A. (1937). The Design of Experiments, 2nd Edn. London: Oliver & Boyd.
• Fisher R. A. (1947). “Development of the theory of experimental design,” in Proceedings of the International Statistical Conferences, Vol. 3, Poznań, 434–439.
• Fisher R. A. (1951). “Statistics,” in Scientific Thought in the Twentieth Century, ed. Heath A. E. (London: Watts).
• Fisher R. A. (1956). Statistical Methods and Scientific Inference. Edinburgh: Oliver & Boyd.
• Fisher R. A. (1962). “The place of the design of experiments in the logic of scientific inference,” in Fisher: Collected Papers Relating to Statistical and Mathematical Theory and Applications, Vol. 110 (Paris: Centre National de la Recherche Scientifique), 528–532.
• Fitts P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 47 381–391. 10.1037/h0055392
• Fitts P. M., Seeger C. M. (1953). S-R compatibility: spatial characteristics of stimulus and response codes. J. Exp. Psychol. 46 199–210. 10.1037/h0062827
• Flach J. M., Bennett K. B., Woods D. D., Jagacinski R. J. (2015). “Interface design: a control theoretic context for a triadic meaning processing approach,” in The Cambridge Handbook of Applied Perception Research, Vol. II, eds Hoffman R. R., Hancock P. A., Scerbo M. W., Parasuraman R., Szalma J. L. (New York, NY: Cambridge University Press), 647–668.
• Friston K. (2012). The history of the future of the Bayesian brain. Neuroimage 62 1230–1233. 10.1016/j.neuroimage.2011.10.004
• Galison P. (1994). The ontology of the enemy: Norbert Wiener and the cybernetic vision. Crit. Inq. 21 228–266. 10.1086/448747
• Gardner H. E. (1985). The Mind’s New Science: A History of the Cognitive Revolution. New York, NY: Basic Books.
• Garner W. R., Hake H. W. (1951). The amount of information in absolute judgments. Psychol. Rev. 58 446–459. 10.1037/h0054482
• Gentner D., Grudin J. (1985). The evolution of mental metaphors in psychology: a 90-year retrospective. Am. Psychol. 40 181–192. 10.1037/0003-066X.40.2.181
• Gigerenzer G. (1991). From tools to theories: a heuristic of discovery in cognitive psychology. Psychol. Rev. 98 254–267. 10.1037/0033-295X.98.2.254
• Gigerenzer G., Hoffrage U. (1995). How to improve Bayesian reasoning without instruction: frequency formats. Psychol. Rev. 102 684–704. 10.1037/0033-295X.102.4.684
• Gigerenzer G., Murray D. J. (1987). Cognition as Intuitive Statistics. Mahwah, NJ: Lawrence Erlbaum.
• Glazebrook C. M., Kiernan D., Welsh T. N., Tremblay L. (2015). How one breaks Fitts’s law and gets away with it: moving further and faster involves more efficient online control. Hum. Mov. Sci. 39 163–176. 10.1016/j.humov.2014.11.005
• Greenwald A. G. (1970). Sensory feedback mechanisms in performance control: with special reference to the ideo-motor mechanism. Psychol. Rev. 77 73–99. 10.1037/h0028689
• Gregory R. L. (1968). Perceptual illusions and brain models. Proc. R. Soc. Lond. B Biol. Sci. 171 279–296. 10.1098/rspb.1968.0071
• Gregory R. L. (1980). Perceptions as hypotheses. Philos. Trans. R. Soc. Lond. B 290 181–197. 10.1098/rstb.1980.0090
• Hald A. (2008). A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713–1935. Copenhagen: Springer Science & Business Media.
• Heims S. J. (1991). The Cybernetics Group. Cambridge, MA: Massachusetts Institute of Technology.
• Helmholtz H. (1866/1925). Handbuch der Physiologischen Optik [Treatise on Physiological Optics], Vol. 3, ed. Southall J. (Rochester, NY: Optical Society of America).
• Herbort O., Butz M. V., Hoffmann J. (2005). “Towards an adaptive hierarchical anticipatory behavioral control system,” in From Reactive to Anticipatory Cognitive Embodied Systems: Papers from the AAAI Fall Symposium, eds Castelfranchi C., Balkenius C., Butz M. V., Ortony A. (Menlo Park, CA: AAAI Press), 83–90.
• Hick W. E. (1951). Information theory and intelligence tests. Br. J. Math. Stat. Psychol. 4 157–164. 10.1111/j.2044-8317.1951.tb00317.x
• Hick W. E. (1952). On the rate of gain of information. Q. J. Exp. Psychol. 4 11–26. 10.1080/17470215208416600
• Hommel B., Müsseler J., Aschersleben G., Prinz W. (2001). The theory of event coding (TEC): a framework for perception and action planning. Behav. Brain Sci. 24 849–878. 10.1017/S0140525X01000103
• Hulbert A. (2018). Prodigies’ Progress: Parents and Superkids, Then and Now. Cambridge, MA: Harvard Magazine, 46–51.
• Hyman R. (1953). Stimulus information as a determinant of reaction time. J. Exp. Psychol. 45 188–196. 10.1037/h0056940
• Internet Hall of Fame (2016). Internet Hall of Fame Pioneer J.C.R. Licklider: Posthumous Recipient. Available at: https://www.internethalloffame.org/inductees/jcr-licklider
• Jagacinski R. J., Flach J. M. (2003). Control Theory for Humans: Quantitative Approaches to Modeling Performance. Mahwah, NJ: Lawrence Erlbaum.
• James W. (1890). The Principles of Psychology. New York, NY: Dover.
• Kahneman D. (1973). Attention and Effort. Englewood Cliffs, NJ: Prentice Hall.
• Kay H. (1957). Information theory in the understanding of skills. Occup. Psychol. 31 218–224.
• Kellen D., Klauer K. C., Singmann H. (2012). On the measurement of criterion noise in signal detection theory: the case of recognition memory. Psychol. Rev. 119 457–479. 10.1037/a0027727
• Kline R. R. (2011). Cybernetics, automata studies, and the Dartmouth conference on artificial intelligence. IEEE Ann. Hist. Comput. 33 5–16. 10.1109/MAHC.2010.44
• Kline R. R. (2015). The Cybernetics Moment: Or Why We Call Our Age the Information Age. Baltimore, MD: Johns Hopkins University Press.
• Lachman R., Lachman J. L., Butterfield E. C. (1979). Cognitive Psychology and Information Processing: An Introduction. Hillsdale, NJ: Lawrence Erlbaum.
• Laird J., Newell A., Rosenbloom P. (1987). SOAR: an architecture for general intelligence. Artif. Intell. 33 1–64. 10.1016/0004-3702(87)90050-6
• Leahey T. H. (1992). The mythical revolutions of American psychology. Am. Psychol. 47 308–318. 10.1037/0003-066X.47.2.308
• Lehmann E. L. (1993). The Fisher, Neyman-Pearson theories of testing hypotheses: one theory or two? J. Am. Stat. Assoc. 88 1242–1249. 10.1080/01621459.1993.10476404
• Lehmann E. L. (2011). Fisher, Neyman, and the Creation of Classical Statistics. New York, NY: Springer Science & Business Media. 10.1007/978-1-4419-9500-1
• Licklider J. R. (1950). The intelligibility of amplitude-dichotomized, time-quantized speech waves. J. Acoust. Soc. Am. 22 820–823. 10.1121/1.1906695
• Luce R. D. (2003). Whatever happened to information theory in psychology? Rev. Gen. Psychol. 7 183–188. 10.1037/1089-2680.7.2.183
• Ly A., Marsman M., Verhagen J., Grasman R. P., Wagenmakers E. J. (2017). A tutorial on Fisher information. J. Math. Psychol. 80 40–55. 10.1016/j.jmp.2017.05.006
• MacKay D. M. (1956). Towards an information-flow model of human behaviour. Br. J. Psychol. 47 30–43. 10.1111/j.2044-8295.1956.tb00559.x
• Mandler G. (2007). A History of Modern Experimental Psychology: From James and Wundt to Cognitive Science. Cambridge, MA: MIT Press.
• Maxwell S. E., Lau M. Y., Howard G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? Am. Psychol. 70 487–498. 10.1037/a0039400
• McCarthy J., Minsky M. L., Rochester N., Shannon C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 27 12–14.
• McCulloch W., Pitts W. (1943). A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5 115–133. 10.1007/BF02478259
• Meyer L. B. (1957). Meaning in music and information theory. J. Aesthet. Art Crit. 15 412–424.
• Microsoft Canada (2015). Attention Spans Research Report. Available at: https://www.scribd.com/document/317442018/microsoft-attention-spans-research-report-pdf
• Miller G. A. (1953). What is information measurement? Am. Psychol. 8 3–11. 10.1037/h0057808
• Miller G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol. Rev. 63 81–97. 10.1037/h0043158
• Miller G. A. (2003). The cognitive revolution: a historical perspective. Trends Cogn. Sci. 7 141–144. 10.1016/S1364-6613(03)00029-9
• Miller G. A., Galanter E., Pribram K. H. (1960). Plans and the Structure of Behavior. New York, NY: Holt. 10.1037/10039-000
• Minsky M. (ed.) (1968). Semantic Information Processing. Cambridge, MA: The MIT Press.
• Montagnini L. (2017a). Harmonies of Disorder: Norbert Wiener: A Mathematician-Philosopher of Our Time. Roma: Springer.
• Montagnini L. (2017b). Interdisciplinarity in Norbert Wiener, a mathematician-philosopher of our time. Biophys. Chem. 229 173–180. 10.1016/j.bpc.2017.06.009
• Moor J. (2006). The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 27 87–91.
• Moray N. (1959). Attention in dichotic listening: affective cues and the influence of instructions. Q. J. Exp. Psychol. 11 56–60. 10.1080/17470215908416289
• Nahin P. J. (2013). The Logician and the Engineer: How George Boole and Claude Shannon Created the Information Age. Princeton, NJ: Princeton University Press.
• Neisser U. (1967). Cognitive Psychology. New York, NY: Appleton-Century-Crofts.
• Nelson N., Rosenthal R., Rosnow R. L. (1986). Interpretation of significance levels and effect sizes by psychological researchers. Am. Psychol. 41 1299–1301. 10.1037/0003-066X.41.11.1299
• Newell A. (1955). “The chess machine: an example of dealing with a complex task by adaptation,” in Proceedings of the March 1–3, 1955 Western Joint Computer Conference (New York, NY: ACM), 101–108. 10.1145/1455292.1455312
• Newell A., Simon H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
• Neyman J. (1977). Frequentist probability and frequentist statistics. Synthese 36 97–131. 10.1007/BF00485695
• Neyman J., Pearson E. S. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference: Part I. Biometrika 20A 175–240.
• Neyman J., Pearson E. S. (1933). IX. On the problem of the most efficient tests of statistical hypotheses. Philos. Trans. R. Soc. Lond. A 231 289–337. 10.1098/rsta.1933.0009
• O’Regan G. (2012). A Brief History of Computing, 2nd Edn. London: Springer. 10.1007/978-1-4471-2359-0
• Parolini G. (2015). The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919–1933. J. Hist. Biol. 48 301–335. 10.1007/s10739-014-9394-z
• Peterson W. W., Birdsall T. G., Fox W. C. (1954). The theory of signal detectability. Trans. IRE Prof. Group Inform. Theory 4 171–212. 10.1109/TIT.1954.1057460
• Pezzulo G., Cisek P. (2016). Navigating the affordance landscape: feedback control as a process model of behavior and cognition. Trends Cogn. Sci. 20 414–424. 10.1016/j.tics.2016.03.013
• Piccinini G. (2004). The first computational theory of mind and brain: a close look at McCulloch and Pitts’s “Logical calculus of ideas immanent in nervous activity”. Synthese 141 175–215. 10.1023/B:SYNT.0000043018.52445.3e
• Posner M. I. (1978). Chronometric Explorations of Mind. Hillsdale, NJ: Lawrence Erlbaum.
• Posner M. I. (1986). “Overview,” in Handbook of Perception and Human Performance: Cognitive Processes and Performance, Vol. 2, eds Boff K. R., Kaufman L. I., Thomas J. P. (New York, NY: John Wiley), V-1–V-10.
• Pribram K. H. (1976). “Problems concerning the structure of consciousness,” in Consciousness and the Brain: A Scientific and Philosophical Inquiry, eds Globus G. G., Maxwell G., Savodnik I. (New York, NY: Plenum), 297–313.
• Proctor R. W., Schneider D. W. (2018). Hick’s law for choice reaction time: a review. Q. J. Exp. Psychol. 71 1281–1299. 10.1080/17470218.2017.1322622
• Proctor R. W., Vu K. P. L. (2006). Stimulus-Response Compatibility Principles: Data, Theory, and Application. Boca Raton, FL: CRC Press.
• Proctor R. W., Vu K. P. L. (2016). Principles for designing interfaces compatible with human information processing. Int. J. Hum. Comput. Interact. 32 2–22. 10.1080/10447318.2016.1105009
• Quastler H. (ed.) (1953). Information Theory in Biology. Urbana, IL: University of Illinois Press.
• Quian Quiroga R., Panzeri S. (2009). Extracting information from neuronal populations: information theory and decoding approaches. Nat. Rev. Neurosci. 10 173–185. 10.1038/nrn2578
• Rao R. P. (2004). Bayesian computation in recurrent neural circuits. Neural Comput. 16 1–38. 10.1162/08997660460733976
• Rao R. P., Ballard D. H. (1997). Dynamic model of visual recognition predicts neural response properties in the visual cortex. Neural Comput. 9 721–763. 10.1162/neco.1997.9.4.721
• Ratcliff R. (1978). A theory of memory retrieval. Psychol. Rev. 85 59–108. 10.1037/0033-295X.85.2.59
• Ratcliff R., Smith P. L. (2004). A comparison of sequential sampling models for two-choice reaction time. Psychol. Rev. 111 333–367. 10.1037/0033-295X.111.2.333
• Rosenthal R., Rubin D. B. (1982). A simple, general purpose display of magnitude of experimental effect. J. Educ. Psychol. 74 166–169. 10.1037/0022-0663.74.2.166
• Rucci A. J., Tweney R. D. (1980). Analysis of variance and the “second discipline” of scientific psychology: a historical account. Psychol. Bull. 87 166–184. 10.1037/0033-2909.87.1.166
• Russell E. J. (1966). A History of Agricultural Science in Great Britain, 1620–1954. London: George Allen and Unwin.
• Schachter S., Chapanis A. (1945). Distortion in Glass and Its Effect on Depth Perception. Memorandum Report No. TSEAL-695-48B. Dayton, OH: Aero Medical Laboratory.
• Schmidt R. A. (1975). A schema theory of discrete motor skill learning. Psychol. Rev. 82 225–260. 10.1037/h0076770
• Scott S. H. (2016). A functional taxonomy of bottom-up sensory feedback processing for motor actions. Trends Neurosci. 39 512–526. 10.1016/j.tins.2016.06.001
• Seidenfeld T. (1992). “R. A. Fisher on the design of experiments and statistical estimation,” in The Founders of Evolutionary Genetics, ed. Sarkar S. (Dordrecht: Springer), 23–36.
• Selfridge O. G. (1959). “Pandemonium: a paradigm for learning,” in Proceedings of the Symposium on Mechanisation of Thought Processes (London: Her Majesty’s Stationery Office), 511–529.
• Seth A. K. (2015). “The cybernetic Bayesian brain – from interoceptive inference to sensorimotor contingencies,” in Open MIND: 35(T), eds Metzinger T., Windt J. M. (Frankfurt: MIND Group).
• Shalizi C. (2007). Advanced Probability II or Almost None of the Theory of Stochastic Processes. Available at: http://www.stat.cmu.edu/cshalizi/754/notes/all.pdf
• Shannon C. E. (1945). A Mathematical Theory of Cryptography. Technical Report Memoranda 45-110-02. Murray Hill, NJ: Bell Labs.
• Shannon C. E. (1948a). A mathematical theory of communication. Bell Syst. Tech. J. 27 379–423. 10.1002/j.1538-7305.1948.tb01338.x
• Shannon C. E. (1948b). Letter to Norbert Wiener, October 13. In Box 5-85, Norbert Wiener Papers. Cambridge, MA: MIT Archives.
• Shannon C. E., Weaver W. (1949). The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.
• Shiffman D. (2012). The Nature of Code. Available at: http://natureofcode.com/book/
• Shiffrin R. M., Schneider W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychol. Rev. 84 127–190. 10.1037/0033-295X.84.2.127
• Simon H. A. (1969). Designing organizations for an information-rich world. Int. Libr. Crit. Writ. Econ. 70 187–202.
• Simon H. A. (1997). Allen Newell (1927–1992). Biographical Memoir. Washington, DC: National Academies Press.
• Srivastava V., Feng S. F., Cohen J. D., Leonard N. E., Shenhav A. (2017). A martingale analysis of first passage times of time-dependent Wiener diffusion models. J. Math. Psychol. 77 94–110. 10.1016/j.jmp.2016.10.001
• Sternberg S. (1969). The discovery of processing stages: extensions of Donders’ method. Acta Psychol. 30 276–315. 10.1016/0001-6918(69)90055-9
• Stokes D. E. (1997). Pasteur’s Quadrant: Basic Science and Technological Innovation. Washington, DC: Brookings Institution Press.
• Student (1908). The probable error of a mean. Biometrika 6 1–25.
• Swets J. A., Tanner W. P., Jr., Birdsall T. G. (1961). Decision processes in perception. Psychol. Rev. 68 301–340. 10.1037/h0040547
• Taylor F. V. (1949). Review of Cybernetics (or Control and Communication in the Animal and the Machine). Psychol. Bull. 46 236–237. 10.1037/h0051026
• Treisman A. M. (1960). Contextual cues in selective listening. Q. J. Exp. Psychol. 12 242–248. 10.1080/17470216008416732
• Treisman A. M., Gelade G. (1980). A feature-integration theory of attention. Cogn. Psychol. 12 97–136. 10.1016/0010-0285(80)90005-5
• Tulving E., Thomson D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychol. Rev. 80 352–373. 10.1037/h0020071
• Turing A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. s2-42 230–265. 10.1112/plms/s2-42.1.230
• Van Meter D., Middleton D. (1954). Modern statistical approaches to reception in communication theory. Trans. IRE Prof. Group Inform. Theory 4 119–145. 10.1109/TIT.1954.1057471
• Velasco M. A., Clemotte A., Raya R., Ceres R., Rocon E. (2017). Human-computer interaction for users with cerebral palsy based on head orientation. Can cursor’s movement be modeled by Fitts’s law? Int. J. Hum. Comput. Stud. 106 1–9. 10.1016/j.ijhcs.2017.05.002
• Verschure P. F. M. J. (2016). “Consciousness in action: the unconscious parallel present optimized by the conscious sequential projected future,” in The Pragmatic Turn: Toward Action-Oriented Views in Cognitive Science, eds Engel A. K., Friston K. J., Kragic D. (Cambridge, MA: MIT Press).
• Wald A. (1950). Statistical Decision Functions. New York, NY: John Wiley.
• Waterson P. (2011). World War II and other historical influences on the formation of the Ergonomics Research Society. Ergonomics 54 1111–1129. 10.1080/00140139.2011.622796
• Wiener N. (1921). The average of an analytic functional and the Brownian movement. Proc. Natl. Acad. Sci. U.S.A. 7 294–298. 10.1073/pnas.7.10.294
• Wiener N. (1948a). Cybernetics. Sci. Am. 179 14–19. 10.1038/scientificamerican1148-14
• Wiener N. (1948b). Cybernetics or Control and Communication in the Animal and the Machine. New York, NY: John Wiley.
• Wiener N. (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series, with Engineering Applications. Cambridge, MA: Technology Press of the Massachusetts Institute of Technology.
• Wiener N. (1951). Homeostasis in the individual and society. J. Franklin Inst. 251 65–68. 10.1016/0016-0032(51)90897-6
• Wiener N. (1952). Cybernetics or Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press.
• Wiener N. (1961). Cybernetics or Control and Communication in the Animal and the Machine, 2nd Edn. Cambridge, MA: MIT Press. 10.1037/13140-000
• Wixted J. T. (2014). Signal Detection Theory. Hoboken, NJ: Wiley. 10.1002/9781118445112.stat06743
• Wootton D. (2015). The Invention of Science: A New History of the Scientific Revolution. New York, NY: Harper.
• Yates F. (1964). Sir Ronald Fisher and the design of experiments. Biometrics 20 307–321.
• Yates F., Mather K. (1963). Ronald Aylmer Fisher. Biogr. Mem. Fellows R. Soc. Lond. 9 91–120. 10.1098/rsbm.1963.0006
Open access | Published: 09 March 2020

Rubrics to assess critical thinking and information processing in undergraduate STEM courses

Gil Reynders, Juliette Lantz, Suzanne M. Ruder, Courtney L. Stanford & Renée S. Cole (ORCID: orcid.org/0000-0002-2807-1500)

International Journal of STEM Education, volume 7, Article number: 9 (2020)


Abstract

Process skills such as critical thinking and information processing are commonly stated outcomes for STEM undergraduate degree programs, but instructors often do not explicitly assess these skills in their courses. Students are more likely to develop these crucial skills if there is constructive alignment between an instructor’s intended learning outcomes, the tasks that the instructor and students perform, and the assessment tools that the instructor uses. Rubrics for each process skill can enhance this alignment by creating a shared understanding of process skills between instructors and students. Rubrics can also enable instructors to reflect on their teaching practices with regard to developing their students’ process skills, and they facilitate feedback that helps students identify areas for improvement.

Here, we provide rubrics that can be used to assess critical thinking and information processing in STEM undergraduate classrooms and to provide students with formative feedback. As part of the Enhancing Learning by Improving Process Skills in STEM (ELIPSS) Project, rubrics were developed to assess these two skills in STEM undergraduate students’ written work. The rubrics were implemented in multiple STEM disciplines, class sizes, course levels, and institution types to ensure they were practical for everyday classroom use. Instructors reported via surveys that the rubrics supported assessment of students’ written work in multiple STEM learning environments. Graduate teaching assistants also indicated that they could effectively use the rubrics to assess student work and that the rubrics clarified the instructor’s expectations for how they should assess students. Students reported that they understood the content of the rubrics and could use the feedback provided by the rubric to change their future performance.

The ELIPSS rubrics allowed instructors to explicitly assess the critical thinking and information processing skills that they wanted their students to develop in their courses. The instructors were able to clarify their expectations for both their teaching assistants and students and provide consistent feedback to students about their performance. Supporting the adoption of active-learning pedagogies should also include changes to assessment strategies to measure the skills that are developed as students engage in more meaningful learning experiences. Tools such as the ELIPSS rubrics provide a resource for instructors to better align assessments with intended learning outcomes.

Introduction

Why assess process skills?

Process skills, also known as professional skills (ABET Engineering Accreditation Commission, 2012), transferable skills (Danczak et al., 2017), or cognitive competencies (National Research Council, 2012), are commonly cited as critical for students to develop during their undergraduate education (ABET Engineering Accreditation Commission, 2012; American Chemical Society Committee on Professional Training, 2015; National Research Council, 2012; Singer et al., 2012; The Royal Society, 2014). Process skills such as problem-solving, critical thinking, information processing, and communication are widely applicable to many academic disciplines and careers, and they are receiving increased attention in undergraduate curricula (ABET Engineering Accreditation Commission, 2012; American Chemical Society Committee on Professional Training, 2015) and workplace hiring decisions (Gray & Koncz, 2018; Pearl et al., 2019). Recent reports from multiple countries (Brewer & Smith, 2011; National Research Council, 2012; Singer et al., 2012; The Royal Society, 2014) indicate that these skills are emphasized in multiple undergraduate academic disciplines, and annual polls of about 200 hiring managers indicate that employers may place more importance on these skills than on applicants’ content knowledge when making hiring decisions (Deloitte Access Economics, 2014; Gray & Koncz, 2018). The assessment of process skills can provide a benchmark for achievement at the end of an undergraduate program and act as an indicator of student readiness to enter the workforce. Assessing these skills may also enable instructors and researchers to more fully understand the impact of active learning pedagogies on students.

A recent meta-analysis of 225 studies by Freeman et al. (2014) showed that students in active learning environments may achieve higher content learning gains than students in traditional lectures in multiple STEM fields when comparing scores on equivalent examinations. Active learning environments can have many different attributes, but they are commonly characterized by students “physically manipulating objects, producing new ideas, and discussing ideas with others” (Rau et al., 2017) in contrast to students sitting and listening to a lecture. Examples of active learning pedagogies include POGIL (Process Oriented Guided Inquiry Learning) (Moog & Spencer, 2008; Simonson, 2019) and PLTL (Peer-led Team Learning) (Gafney & Varma-Nelson, 2008; Gosser et al., 2001), in which students work in groups to complete activities with varying levels of guidance from an instructor. Despite the clear content learning gains that students can achieve from active learning environments (Freeman et al., 2014), the non-content gains (including improvements in process skills) in these learning environments have not been explored to a significant degree. Active learning pedagogies such as POGIL and PLTL place an emphasis on students developing non-content skills in addition to content learning gains, but typically only the content learning is assessed on quizzes and exams, and process skills are not often explicitly assessed (National Research Council, 2012). In order to fully understand the effects of active learning pedagogies on all aspects of an undergraduate course, evidence-based tools must be used to assess students’ process skill development. The goal of this work was to develop resources that could enable instructors to explicitly assess process skills in STEM undergraduate classrooms in order to provide feedback to themselves and their students about the students’ process skill development.

Theoretical frameworks

The incorporation of these rubrics and other currently available tools for use in STEM undergraduate classrooms can be viewed through the lenses of constructive alignment (Biggs, 1996 ) and self-regulated learning (Zimmerman, 2002 ). The theory of constructivism posits that students learn by constructing their own understanding of knowledge rather than acquiring the meaning from their instructor (Bodner, 1986 ), and constructive alignment extends the constructivist model to consider how the alignment between a course’s intended learning outcomes, tasks, and assessments affects the knowledge and skills that students develop (Biggs, 2003 ). Students are more likely to develop the intended knowledge and skills if there is alignment between the instructor’s intended learning outcomes that are stated at the beginning of a course, the tasks that the instructor and students perform, and the assessment strategies that the instructor uses (Biggs, 1996 , 2003 , 2014 ). The nature of the tasks and assessments indicates what the instructor values and where students should focus their effort when studying. According to Biggs ( 2003 ) and Ramsden ( 1997 ), students see assessments as defining what they should learn, and a misalignment between the outcomes, tasks, and assessments may hinder students from achieving the intended learning outcomes. In the case of this work, the intended outcomes are improved process skills. In addition to aligning the components of a course, it is also critical that students receive feedback on their performance in order to improve their skills. Zimmerman’s theory of self-regulated learning (Zimmerman, 2002 ) provides a rationale for tailoring assessments to provide feedback to both students and instructors.

Zimmerman’s theory of self-regulated learning defines three phases of learning: forethought/planning, performance, and self-reflection. According to Zimmerman, individuals ideally should progress through these three phases in a cycle: they plan a task, perform the task, and reflect on their performance, then they restart the cycle on a new task. If a student is unable to adequately progress through the phases of self-regulated learning on their own, then feedback provided by an instructor may enable the students to do so (Butler & Winne, 1995 ). Thus, one of our criteria when creating rubrics to assess process skills was to make the rubrics suitable for faculty members to use to provide feedback to their students. Additionally, instructors can use the results from assessments to give themselves feedback regarding their students’ learning in order to regulate their teaching. This theory is called self-regulated learning because the goal is for learners to ultimately reflect on their actions to find ways to improve. We assert that, ideally, both students and instructors should be “learners” and use assessment data to reflect on their actions, although with different aims. Students need consistent feedback from an instructor and/or self-assessment throughout a course to provide a benchmark for their current performance and identify what they can do to improve their process skills (Black & Wiliam, 1998 ; Butler & Winne, 1995 ; Hattie & Gan, 2011 ; Nicol & Macfarlane-Dick, 2006 ). Instructors need feedback on the extent to which their efforts are achieving their intended goals in order to improve their instruction and better facilitate the development of process skills through course experiences.

In accordance with the aforementioned theoretical frameworks, tools used to assess undergraduate STEM student process skills should be tailored to fit the outcomes that are expected for undergraduate students and be able to provide formative assessment and feedback to both students and faculty about the students’ skills. These tools should also be designed for everyday classroom use to enable students to regularly self-assess and faculty to provide consistent feedback throughout a semester. Additionally, it is desirable for assessment tools to be broadly generalizable to measure process skills in multiple STEM disciplines and institutions in order to increase the rubrics’ impact on student learning. Current tools exist to assess these process skills, but they each lack at least one of the desired characteristics for providing regular feedback to STEM students.

Current tools to assess process skills

Current tests available to assess critical thinking include the Critical Thinking Assessment Test (CAT) (Stein & Haynes, 2011), California Critical Thinking Skills Test (Facione, 1990a, 1990b), and Watson Glaser Critical Thinking Appraisal (Watson & Glaser, 1964). These commercially available, multiple-choice tests are not designed to provide regular, formative feedback throughout a course and have not been implemented for this purpose. Instead, they are designed to provide summative feedback with a focus on assessing this skill at a programmatic or university level rather than for use in the classroom to provide formative feedback to students. Rather than using tests, rubrics could be used to assess process skills. Rubrics are effective assessment tools because they can be quick and easy to use, they provide feedback to both students and instructors, and they can evaluate individual aspects of a skill to give more specific feedback (Brookhart & Chen, 2014; Smit & Birri, 2014). Rubrics for assessing critical thinking are available, but they have not been used to provide feedback to undergraduate STEM students, nor were they designed to do so (Association of American Colleges and Universities, 2019; Saxton et al., 2012). The Critical Thinking Analytic Rubric is designed specifically to assess K-12 students to enhance college readiness and has not been broadly tested in collegiate STEM courses (Saxton et al., 2012). The critical thinking rubric developed by the Association of American Colleges and Universities (AAC&U) as part of its Valid Assessment of Learning in Undergraduate Education (VALUE) Institute and Liberal Education and America’s Promise (LEAP) initiative (Association of American Colleges and Universities, 2019) is intended for programmatic assessment rather than specifically giving feedback to students throughout a course. As with tests for assessing critical thinking, current rubrics to assess critical thinking are not designed to act as formative assessments and give feedback to STEM faculty and undergraduates at the course or task level. Another issue with the assessment of critical thinking is the degree to which the construct is measurable. A National Research Council report (National Research Council, 2011) has suggested that there is little evidence of a consistent, measurable definition for critical thinking and that it may not be different from one’s general cognitive ability. Despite this issue, we have found that critical thinking is consistently listed as a programmatic outcome in STEM disciplines (American Chemical Society Committee on Professional Training, 2015; The Royal Society, 2014), so we argue that it is necessary to support instructors as they attempt to assess this skill.

Current methods for evaluating students’ information processing include discipline-specific tools such as a rubric to assess physics students’ use of graphs and equations to solve work-energy problems (Nguyen et al., 2010 ) and assessments of organic chemistry students’ ability to “[manipulate] and [translate] between various representational forms” including 2D and 3D representations of chemical structures (Kumi et al., 2013 ). Although these assessment tools can be effectively used for their intended context, they were not designed for use in a wide range of STEM disciplines or for a variety of tasks.

Despite the many tools that exist to measure process skills, none has been designed and tested to facilitate frequent, formative feedback to STEM undergraduate students and faculty throughout a semester. The rubrics described here have been designed by the Enhancing Learning by Improving Process Skills in STEM (ELIPSS) Project (Cole et al., 2016 ) to assess undergraduate STEM students’ process skills and to facilitate feedback at the classroom level with the potential to track growth throughout a semester or degree program. The rubrics described here are designed to assess critical thinking and information processing in student written work. Rubrics were chosen as the format for our process skill assessment tools because the highest level of each category in rubrics can serve as an explicit learning outcome that the student is expected to achieve (Panadero & Jonsson, 2013 ). Rubrics that are generalizable to multiple disciplines and institutions can enable the assessment of student learning outcomes and active learning pedagogies throughout a program of study and provide useful tools for a greater number of potential users.

Research questions

This work sought to answer the following research questions for each rubric:

1. Does the rubric adequately measure relevant aspects of the skill?
2. How well can the rubrics provide feedback to instructors and students?
3. Can multiple raters use the rubrics to give consistent scores?

Methods

This work received Institutional Review Board approval prior to any data collection involving human subjects. The sources of data used to construct the process skill rubrics and answer these research questions were (1) peer-reviewed literature on how each skill is defined, (2) feedback from content experts in multiple STEM disciplines via surveys and in-person group discussions regarding the appropriateness of the rubrics for each discipline, (3) interviews with students whose work was scored with the rubrics and teaching assistants who scored the student work, and (4) results of applying the rubrics to samples of student work.

Defining the scope of the rubrics

The rubrics described here and the other rubrics in development by the ELIPSS Project are intended to measure process skills, which are desired learning outcomes identified by the STEM community in recent reports (National Research Council, 2012 ; Singer et al., 2012 ). In order to measure these skills in multiple STEM disciplines, operationalized definitions of each skill were needed. These definitions specify which aspects of student work (operations) would be considered evidence for the student using that skill and establish a shared understanding of each skill by members of each STEM discipline. The starting point for this work was the process skill definitions developed as part of the POGIL project (Cole et al., 2019a ). The POGIL community includes instructors from a variety of disciplines and institutions and represented the intended audience for the rubrics: faculty who value process skills and want to more explicitly assess them. The process skills discussed in this work were defined as follows:

Critical thinking is analyzing, evaluating, or synthesizing relevant information to form an argument or reach a conclusion supported with evidence.

Information processing is evaluating, interpreting, and manipulating or transforming information.

Examples of critical thinking include the tasks that students are asked to perform in a laboratory course. When students are asked to analyze the data they collected, combine data from different sources, and generate arguments or conclusions about their data, we see this as critical thinking. However, when students simply follow the so-called “cookbook” laboratory instructions that require them to confirm pre-determined conclusions, we do not think students are engaging in critical thinking. One example of information processing is when organic chemistry students are required to re-draw molecules in different formats. The students must evaluate and interpret various pieces of one representation, and then they recreate the molecule in another representation. However, if students are asked to simply memorize facts or algorithms to solve problems, we do not see this as information processing.

Iterative rubric development

The development process was the same for the information processing rubric and the critical thinking rubric. After defining the scope of the rubric, an initial version was drafted based upon the definition of the target process skill and how each aspect of the skill is defined in the literature. A more detailed discussion of the literature that informed each rubric category is included in the “Results and Discussion” section. This initial version then underwent iterative testing in which the rubric was reviewed by researchers, practitioners, and students. The rubric was first evaluated by the authors and a group of eight faculty from multiple STEM disciplines who made up the ELIPSS Project’s primary collaborative team (PCT). The PCT was a group of faculty members with experience in discipline-based education research who employ active-learning pedagogies in their classrooms. This initial round of evaluation was intended to ensure that the rubric measured relevant aspects of the skill and was appropriate for each PCT member’s discipline. The evaluation included both in-person and email discussions of how well the rubrics aligned with each instructor’s understanding of the process skill, and these discussions continued until the group reached consensus that each rubric category could be applied to student work in courses within their disciplines. There has been an ongoing debate regarding the role of disciplinary knowledge in critical thinking and the extent to which critical thinking is subject-specific (Davies, 2013; Ennis, 1990). This work focuses on the creation of rubrics to measure process skills in different domains, but we have not performed cross-discipline comparisons. This initial round of review was also intended to ensure that the rubrics were ready for classroom testing by instructors in each discipline. Next, each rubric was tested over three semesters in multiple classroom environments, illustrated in Table 1. The rubrics were applied to student work chosen by each PCT member. The PCT members chose the student work based on their views of how the assignments required students to engage in process skills and show evidence of those skills. The information processing and critical thinking rubrics shown in this work were each tested in at least three disciplines, course levels, and institutions.

After each semester, the feedback was collected from the faculty testing the rubric, and further changes to the rubric were made. Feedback was collected in the form of survey responses along with in-person group discussions at annual project meetings. After the first iteration of completing the survey, the PCT members met with the authors to discuss how they were interpreting each survey question. This meeting helped ensure that the surveys were gathering valid data regarding how well the rubrics were measuring the desired process skill. Questions in the survey such as “What aspects of the student work provided evidence for the indicated process skill?” and “Are there edits to the rubric/descriptors that would improve your ability to assess the process skill?” allowed the authors to determine how well the rubric scores were matching the student work and identify necessary changes to the rubric. Further questions asked about the nature and timing of the feedback given to students in order to address the question of how well the rubrics provide feedback to instructors and students. The survey questions are included in the Supporting Information . The survey responses were analyzed qualitatively to determine themes related to each research question.

In addition to the surveys given to faculty rubric testers, twelve students were interviewed in fall 2016 and fall 2017. In the United States of America, the fall semester typically runs from August to December and is the first semester of the academic year. Each student participated in one interview which lasted about 30 min. These interviews were intended to gather further data to answer questions about how well the rubrics were measuring the identified process skills that students were using when they completed their assignments and to ensure that the information provided by the rubrics made sense to students. The protocol for these interviews is included in the Supporting Information . In fall 2016, the students interviewed were enrolled in an organic chemistry laboratory course for non-majors at a large, research-intensive university in the United States. Thirty students agreed to have their work analyzed by the research team, and nine students were interviewed. However, the rubrics were not a component of the laboratory course grading. Instead, the first author assessed the students’ reports for critical thinking and information processing, and then the students were provided electronic copies of their laboratory reports and scored rubrics in advance of the interview. The first author had recently been a graduate teaching assistant for the course and was familiar with the instructor’s expectations for the laboratory reports. During the interview, the students were given time to review their reports and the completed rubrics, and then they were asked about how well they understood the content of the rubrics and how accurately each category score represented their work.

In fall 2017, students enrolled in a physical chemistry thermodynamics course for majors were interviewed. The physical chemistry course took place at the same university as the organic laboratory course, but there was no overlap between participants. Three students and two graduate teaching assistants (GTAs) were interviewed. The course included daily group work, and process skill assessment was an explicit part of the instructor’s curriculum. At the end of each class period, students assessed their groups using portions of ELIPSS rubrics, including the two process skill rubrics included in this paper. About every 2 weeks, the GTAs assessed the student groups with a complete ELIPSS rubric for a particular skill, then gave the groups their scored rubrics with written comments. The students’ individual homework problem sets were assessed once with rubrics for three skills: critical thinking, information processing, and problem-solving. The students received the scored rubric with written comments when the graded problem set was returned to them. In the last third of the semester, the students and GTAs were interviewed about how rubrics were implemented in the course, how well the rubric scores reflected the students’ written work, and how the use of rubrics affected the teaching assistants’ ability to assess the student skills. The protocols for these interviews are included in the Supporting Information .

Gathering evidence for utility, validity, and reliability

The utility, validity, and reliability of the rubrics were measured throughout the development process. The utility is the degree to which the rubrics are perceived as practical to experts and practitioners in the field. Through multiple meetings, the PCT faculty determined that early drafts of the rubric seemed appropriate for use in their classrooms, which represented multiple STEM disciplines. Rubric utility was reexamined multiple times throughout the development process to ensure that the rubrics would remain practical for classroom use. Validity can be defined in multiple ways. For example, the Standards for Educational and Psychological Testing (Joint Committee on Standards for Educational Psychological Testing, 2014 ) defines validity as “the degree to which all the accumulated evidence supports the intended interpretation of test scores for the proposed use.” For the purposes of this work, we drew on the ways in which two distinct types of validity were examined in the rubric literature: content validity and construct validity. Content validity is the degree to which the rubrics cover relevant aspects of each process skill (Moskal & Leydens, 2000 ). In this case, the process skill definition and a review of the literature determined which categories were included in each rubric. The literature review was finished once the data was saturated: when no more new aspects were found. Construct validity is the degree to which the levels of each rubric category accurately reflect the process that students performed (Moskal & Leydens, 2000 ). Evidence of construct validity was gathered via the faculty surveys, teaching assistant interviews, and student interviews. In the student interviews, students were given one of their completed assignments and asked to explain how they completed the task. Students were then asked to explain how well each category applied to their work and if any changes were needed to the rubric to more accurately reflect their process. Due to logistical challenges, we were not able to obtain evidence for convergent validity, and this is further discussed in the “Limitations” section.

Adjacent agreement, also known as “interrater agreement within one,” was chosen as the measure of interrater reliability due to its common use in rubric development projects (Jonsson & Svingby, 2007 ). The adjacent agreement is the percentage of cases in which two raters agree on a rating or are different by one level (i.e., they give adjacent ratings to the same work). Jonsson and Svingby ( 2007 ) found that most of the rubrics they reviewed had adjacent agreement scores of 90% or greater. However, they noted that the agreement threshold varied based on the number of possible levels of performance for each category in the rubric, with three and four being the most common numbers of levels. Since the rubrics discussed in this report have six levels (scores of zero through five) and are intended for low-stakes assessment and feedback, the goal of 80% adjacent agreement was selected. To calculate agreement for the critical thinking and information processing rubrics, two researchers discussed the scoring criteria for each rubric and then independently assessed the organic chemistry laboratory reports.
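
To make these two measures concrete, the following sketch (our illustration, not code from the ELIPSS Project; the score lists are invented) shows how exact and adjacent agreement could be computed for two raters scoring the same set of reports on the rubrics’ 0–5 scale.

```python
# Minimal sketch of exact and adjacent ("within one") interrater agreement
# for rubric scores on a 0-5 scale. The score lists below are hypothetical.

def agreement(rater_a, rater_b, tolerance=0):
    """Fraction of paired ratings that differ by at most `tolerance` levels."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must score the same set of artifacts")
    hits = sum(abs(a - b) <= tolerance for a, b in zip(rater_a, rater_b))
    return hits / len(rater_a)

# Two raters scoring ten laboratory reports on one rubric category.
rater_a = [5, 4, 3, 5, 2, 4, 4, 3, 5, 1]
rater_b = [5, 3, 3, 4, 2, 4, 5, 3, 4, 2]

print(f"exact:    {agreement(rater_a, rater_b, tolerance=0):.0%}")  # 50%
print(f"adjacent: {agreement(rater_a, rater_b, tolerance=1):.0%}")  # 100%
```

Because a six-level scale makes adjacent agreement considerably more forgiving than exact agreement, the 80% target described above applies to the tolerance-of-one calculation, not to exact matches.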

Results and discussion

The process skill rubrics to assess critical thinking and information processing in student written work were completed after multiple rounds of revision based on feedback from various sources. These sources include feedback from instructors who tested the rubrics in their classrooms, TAs who scored student work with the rubrics, and students who were assessed with the rubrics. The categories for each rubric will be discussed in terms of the evidence that the rubrics measure the relevant aspects of the skill and how they can be used to assess STEM undergraduate student work. Each category discussion will begin with a general explanation of the category followed by more specific examples from the organic chemistry laboratory course and physical chemistry lecture course to demonstrate how the rubrics can be used to assess student work.

Information processing rubric

The definition of information processing and the focus of the rubric presented here (Fig. 1 ) are distinct from cognitive information processing as defined by the educational psychology literature (Driscoll, 2005 ). The rubric shown here is more aligned with the STEM education construct of representational competency (Daniel et al., 2018 ).

Figure 1. Rubric for assessing information processing

Evaluating

When solving a problem or completing a task, students must evaluate the provided information for relevance or importance to the task (Hanson, 2008 ; Swanson et al., 1990 ). All the information provided in a prompt (e.g., homework or exam questions) may not be relevant for addressing all parts of the prompt. Students should ideally show evidence of their evaluation process by identifying what information is present in the prompt/model, indicating what information is relevant or not relevant, and indicating why information is relevant. Responses with these characteristics would earn high rubric scores for this category. Although students may not explicitly state what information is necessary to address a task, the information they do use can act as indirect evidence of the degree to which they have evaluated all of the available information in the prompt. Evidence for students inaccurately evaluating information for relevance includes the inclusion of irrelevant information or the omission of relevant information in an analysis or in completing a task. When evaluating the organic chemistry laboratory reports, the focus for the evaluating category was the information students presented when identifying the chemical structure of their products. For students who received a high score, this information included their measured value for the product’s melting point, the literature (expected) value for the melting point, and the peaks in a nuclear magnetic resonance (NMR) spectrum. NMR spectroscopy is a commonly used technique in chemistry to obtain structural information about a compound. Lower scores were given if students omitted any of the necessary information or if they included unnecessary information. For example, if a student discussed their reaction yield when discussing the identity of their product, they would receive a low Evaluating score because the yield does not help them determine the identity of their product; the yield, in this case, would be unnecessary information. In the physical chemistry course, students often did not show evidence that they determined which information was relevant to answer the homework questions and thus earned low evaluating scores. These omissions will be further addressed in the “Interpreting” section.

Interpreting

In addition to evaluating, students must often interpret information using their prior knowledge to explain the meaning of something, make inferences, match data to predictions, and extract patterns from data (Hanson, 2008; Nakhleh, 1992; Schmidt et al., 1989; Swanson et al., 1990). Students earn high scores for this category if they assign correct meaning to labeled information (e.g., text, tables, graphs, diagrams), extract specific details from information, explain information in their own words, and determine patterns in information. For the organic chemistry laboratory reports, students received high scores if they accurately interpreted their measured values and NMR peaks. Almost every student obtained melting point values that differed from the expected values due to measurement error or impurities in their products, so they needed to describe what types of impurities could cause such discrepancies. Also, each NMR spectrum contained one peak that corresponded to the solvent used to dissolve the students’ product, so the students needed to use their prior knowledge of NMR spectroscopy to recognize that this peak did not correspond to part of their product.

In physical chemistry, the graduate teaching assistant often gave students low scores for inaccurately explaining changes to chemical systems such as changes in pressure or entropy. The graduate teaching assistant who assessed the student work used the rubric to identify both the evaluating and interpreting categories as weaknesses in many of the students’ homework submissions. However, the students often earned high scores for the manipulating and transforming categories, so the GTA was able to give students specific feedback on their areas for improvement while also highlighting their strengths.

Manipulating and transforming (extent and accuracy)

In addition to evaluating and interpreting information, students may be asked to manipulate and transform information from one form to another. These transformations should be complete and accurate (Kumi et al., 2013 ; Nguyen et al., 2010 ). Students may be required to construct a figure based on written information, or conversely, they may transform information in a figure into words or mathematical expressions. Two categories for manipulating and transforming (i.e., extent and accuracy) were included to allow instructors to give more specific feedback. It was often found that students would either transform little information but do so accurately, or transform much information and do so inaccurately; the two categories allowed for differentiated feedback to be provided. As stated above, the organic chemistry students were expected to transform their NMR spectral data into a table and provide a labeled structure of their final product. Students were given high scores if they converted all of the relevant peaks from their spectrum into the table format and were able to correctly match the peaks to the hydrogen atoms in their products. Students received lower scores if they were only able to convert the information for a few peaks or if they incorrectly matched the peaks to the hydrogen atoms.
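
As an illustration of how these category-level scores might be recorded and turned into targeted feedback, the sketch below is ours rather than the ELIPSS Project’s: the category names mirror the information processing rubric in Fig. 1, but the scores and the feedback threshold are invented.

```python
# Hypothetical sketch: recording 0-5 scores for the information processing
# rubric categories and flagging low-scoring categories for feedback.

IP_CATEGORIES = (
    "Evaluating",
    "Interpreting",
    "Manipulating and transforming (extent)",
    "Manipulating and transforming (accuracy)",
)

def areas_for_improvement(scores, threshold=3):
    """Return the categories scored below `threshold` on the 0-5 scale."""
    for category, level in scores.items():
        if category not in IP_CATEGORIES or not 0 <= level <= 5:
            raise ValueError(f"bad entry: {category} = {level}")
    return [c for c, level in scores.items() if level < threshold]

# One student's (invented) scores on a laboratory report.
report_scores = {
    "Evaluating": 5,
    "Interpreting": 2,
    "Manipulating and transforming (extent)": 4,
    "Manipulating and transforming (accuracy)": 3,
}
print(areas_for_improvement(report_scores))  # ['Interpreting']
```

Keeping extent and accuracy as separate entries is what lets a report that transforms little information accurately be distinguished from one that transforms much information inaccurately, matching the differentiated feedback described above.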

Critical thinking rubric

Critical thinking can be broadly defined in different contexts, but we found that the categories included in the rubric (Fig. 2 ) represented commonly accepted aspects of critical thinking (Danczak et al., 2017 ) and suited the needs of the faculty collaborators who tested the rubric in their classrooms.

Figure 2. Rubric for assessing critical thinking

Evaluating

When completing a task, students must evaluate the relevance of information that they will ultimately use to support a claim or conclusions (Miri et al., 2007; Zohar et al., 1994). An evaluating category is included in both the critical thinking and information processing rubrics because evaluation is a key aspect of both skills. From our previous work developing a problem-solving rubric (manuscript in preparation) and our review of the literature for this work (Danczak et al., 2017; Lewis & Smith, 1993), we saw overlap among information processing, critical thinking, and problem-solving. Additionally, while the Evaluating category in the information processing rubric assesses a student’s ability to determine the importance of information to complete a task, the evaluating category in the critical thinking rubric places a heavier emphasis on using the information to support a conclusion or argument.

When scoring student work with the evaluating category, students receive high scores if they indicate what information is likely to be most relevant to the argument they need to make, determine the reliability of the source of their information, and determine the quality and accuracy of the information itself. The information used to assess this category can be indirect as with the Evaluating category in the information processing rubric. In the organic chemistry laboratory reports, students needed to make an argument about whether they successfully produced the desired product, so they needed to discuss which information was relevant to their claims about the product’s identity and purity. Students received high scores for the evaluating category when they accurately determined that the melting point and nearly all peaks except the solvent peak in the NMR spectrum indicated the identity of their product. Students received lower scores for evaluating when they left out relevant information because this was seen as evidence that the student inaccurately evaluated the information’s relevance in supporting their conclusion. They also received lower scores when they incorrectly stated that a high yield indicated a pure product. Students were given the opportunity to demonstrate their ability to evaluate the quality of information when discussing their melting point. Students sometimes struggled to obtain reliable melting point data due to their inexperience in the laboratory, so the rubric provided a way to assess the student’s ability to critique their own data.

Analyzing

In tandem with evaluating information, students also need to analyze that same information to extract meaningful evidence to support their conclusions (Bailin, 2002; Lai, 2011; Miri et al., 2007). The analyzing category provides an assessment of a student’s ability to discuss information and explore the possible meaning of that information, extract patterns from data/information that could be used as evidence for their claims, and summarize information that could be used as evidence. For example, in the organic chemistry laboratory reports, students needed to compare the information they obtained to the expected values for a product. Students received high scores for the analyzing category if they could extract meaningful structural information from the NMR spectrum and their two melting points (observed and expected) for each reaction step.

Synthesizing

Often, students are asked to synthesize or connect multiple pieces of information in order to draw a conclusion or make a claim (Huitt, 1998 ; Lai, 2011 ). Synthesizing involves identifying the relationships between different pieces of information or concepts, identifying ways that different pieces of information or concepts can be combined, and explaining how the newly synthesized information can be used to reach a conclusion and/or support an argument. While performing the organic chemistry laboratory experiments, students obtained multiple types of information such as the melting point and NMR spectrum in addition to other spectroscopic data such as an infrared (IR) spectrum. Students received high scores for this category when they accurately synthesized these multiple data types by showing how the NMR and IR spectra could each reveal different parts of a molecule in order to determine the molecule’s entire structure.

Forming arguments (structure and validity)

The final key aspect of critical thinking is forming a well-structured and valid argument (Facione, 1984 ; Glassner & Schwarz, 2007 ; Lai, 2011 ; Lewis & Smith, 1993 ). It was observed that students can earn high scores for evaluating, analyzing, and synthesizing, but still struggle to form arguments. This was particularly common in assessing problem sets in the physical chemistry course.

As with the manipulating and transforming categories in the information processing rubric, two forming arguments categories were included to allow instructors to give more specific feedback. Some students may be able to include all of the expected structural elements of their arguments but use faulty information or reasoning. Conversely, some students may be able to make scientifically valid claims but not necessarily support them with evidence. The two forming arguments categories are intended to accurately assess both of these scenarios. For the forming arguments (structure) category, students earn high scores if they explicitly state their claim or conclusion, list the evidence used to support the argument, and provide reasoning to link the evidence to their claim/conclusion. Students who do not make a claim or who provide little evidence or reasoning receive lower scores.

For the forming arguments (validity) category, students earn high scores if their claim is accurate and their reasoning is logical and clearly supports the claim with provided evidence. Organic chemistry students earned high scores for the forms and supports arguments categories if they made explicit claims about the identity and purity of their product and provided complete and accurate evidence for their claim(s) such as the melting point values and positions of NMR peaks that correspond to their product. Additionally, the students provided evidence for the purity of their products by pointing to the presence or absence of peaks in their NMR spectrum that would match other potential side products. They also needed to provide logical reasoning for why the peaks indicated the presence or absence of a compound. As previously mentioned, the physical chemistry students received lower scores for the forming arguments categories than for the other aspects of critical thinking. These students were asked to make claims about the relationships between entropy and heat and then provide relevant evidence to justify these claims. Often, the students would make clearly articulated claims but would provide little evidence to support them. As with the information processing rubric, the critical thinking rubric allowed the GTAs to assess aspects of these skills independently and identify specific areas for student improvement.

Validity and reliability

The goal of this work was to create rubrics that can accurately assess student work (validity) and be consistently implemented by instructors or researchers within multiple STEM fields (reliability). The evidence for validity includes the alignment of the rubrics with literature-based descriptions of each skill, review of the rubrics by content experts from multiple STEM disciplines, interviews with undergraduate students whose work was scored using the rubrics, and interviews of the GTAs who scored the student work.

The definitions for each skill, along with multiple iterations of the rubrics, underwent review by STEM content experts. As noted earlier, the instructors who were testing the rubrics were given a survey at the end of each semester and were invited to offer suggested changes to the rubric to better help them assess their students. After multiple rubric revisions, survey responses from the instructors indicated that the rubrics accurately represented the breadth of each process skill as seen in each expert’s content area and that each category could be used to measure multiple levels of student work. By the end of the rubrics’ development, instructors were writing responses such as “N/A” or “no suggestions” to indicate that the rubrics did not need further changes.

Feedback from the faculty also indicated that the rubrics were measuring the intended constructs by the ways they responded to the survey item “What aspects of the student work provided evidence for the indicated process skill?” For example, one instructor noted that for information processing, she saw evidence of the manipulating and transforming categories when “students had to transform their written/mathematical relationships into an energy diagram.” Another instructor elicited evidence of information processing during an in-class group quiz: “A question on the group quiz was written to illicit [sic] IP [information processing]. Students had to transform a structure into three new structures and then interpret/manipulate the structures to compare the pKa values [acidity] of the new structures.” For this instructor, the structures written by the students revealed evidence of their information processing by showing what information they omitted in the new structures or inaccurately transformed. For critical thinking, an instructor assessed short research reports with the critical thinking rubric and “looked for [the students’] ability to use evidence to support their conclusions, to evaluate the literature studies, and to develop their own judgements by synthesizing the information.” Another instructor used the critical thinking rubric to assess their students’ abilities to choose an instrument to perform a chemical analysis. According to the instructor, the students provided evidence of their critical thinking because “in their papers, they needed to justify their choice of instrument. This justification required them to evaluate information and synthesize a new understanding for this specific chemical analysis.”

Analysis of student work indicates multiple levels of achievement for each rubric category (illustrated in Fig. 3 ), although there may have been a ceiling effect for the evaluating and the manipulating and transforming (extent) categories in information processing for organic chemistry laboratory reports because many students earned the highest possible score (five) for those categories. However, other implementations of the ELIPSS rubrics (Reynders et al., 2019 ) have shown more variation in student scores for the two process skills.

Figure 3. Student rubric scores from an organic chemistry laboratory course. The two rubrics were used to evaluate different laboratory reports; thirty students were assessed for information processing and 28 for critical thinking.

To provide further evidence that the rubrics were measuring the intended skills, students in the physical chemistry course were interviewed about their thought processes and how well the rubric scores reflected the work they performed. During these interviews, students described how they used various aspects of information processing and critical thinking skills. The students first described how they used information processing during a problem set where they had to answer questions about a diagram of systolic and diastolic blood pressure. Students described how they evaluated and interpreted the graph to make statements such as “diastolic [pressure] is our y-intercept” and “volume is the independent variable.” The students then demonstrated their ability to transform information from one form to another, from a graph to a mathematical equation, by recognizing “it’s a linear relationship so I used Y equals MX plus B” and “integrated it cause it’s the change, the change in V [volume].” For critical thinking, students described their process on a different problem set. In this problem set, the students had to explain why the change in Helmholtz energy and the change in Gibbs free energy were equivalent under a certain given condition. Students first demonstrated how they evaluated the relevant information and analyzed what would and would not change in their system. One student said, “So to calculate the final pressure, I think I just immediately went to the ideal gas law because we know the final volume and the number of moles won’t change and neither will the temperature in this case. Well, I assume that it wouldn’t.” Another student showed evidence of their evaluation by writing out all the necessary information in one place and stating, “Whenever I do these types of problems, I always write what I start with, which is why I always have this line of information I’m given.” After evaluating and analyzing, students had to form an argument by claiming that the two energy values were equal and then defending that claim. Students explained that they were not always as clear as they could be when justifying their claim. For instance, one student said, “Usually I just write out equations and then hope people understand what I’m doing mathematically” but they “probably could have explained it a little more.”
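
The transformation the students described can be written out explicitly; the symbols and integration limits below are our illustration, since the actual values in the problem set were not reported. Taking volume V as the independent variable and the diastolic pressure P_d as the y-intercept gives the linear model the students described, which integrates to the change they computed:

$$P(V) = mV + P_d, \qquad \int_{V_1}^{V_2} P(V)\,dV = \frac{m}{2}\left(V_2^{2} - V_1^{2}\right) + P_d\left(V_2 - V_1\right)$$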

Student feedback throughout the organic chemistry course and near the end of the physical chemistry course indicated that the rubric scores were accurate representations of the students’ work with a few exceptions. For example, some students felt like they should have received either a lower or higher score for certain categories, but they did say that the categories themselves applied well to their work. Most notably, one student reported that the forms and supports arguments categories in the critical thinking rubric did not apply to her work because she “wasn’t making an argument” when she was demonstrating that the Helmholtz and Gibbs energy values were equal in her thermodynamics assignment. We see this as an instance where some students and instructors may define argument in different ways. The process skill definitions and the rubric categories are meant to articulate intended learning outcomes from faculty members to their students, so if a student defines the skills or categories differently than the faculty member, then the rubrics can serve to promote a shared understanding of the skill.

As previously mentioned, reliability was measured by two researchers assessing ten laboratory reports independently to ensure that multiple raters could use the rubrics consistently. The average adjacent agreement scores were 92% for critical thinking and 93% for information processing. The exact agreement scores were 86% for critical thinking and 88% for information processing. Additionally, two different raters assessed a statistics assignment that was given to sixteen first-year undergraduates. The average pairwise adjacent agreement scores were 89% for critical thinking and 92% for information processing for this assignment. However, the exact agreement scores were much lower: 34% for critical thinking and 36% for information processing. In this case, neither rater was an expert in the content area. While the exact agreement scores for the statistics assignment are much lower than desirable, the adjacent agreement scores do meet the threshold for reliability as seen in other rubrics (Jonsson & Svingby, 2007 ) despite the disparity in expertise. Based on these results, it may be difficult for multiple raters to give exactly the same scores to the same work if they have varying levels of content knowledge, but it is important to note that the rubrics are primarily intended for formative assessment that can facilitate discussions between instructors and students about the ways for students to improve. The high level of adjacent agreement scores indicates that multiple raters can identify the same areas to improve in examples of student work.

Instructor and teaching assistant reflections

The survey responses from faculty members determined the utility of the rubrics. Faculty members reported that when they used the rubrics to define their expectations and be more specific about their assessment criteria, the students seemed to be better able to articulate the areas in which they needed improvement. As one instructor put it, “having the rubrics helped open conversations and discussions” that were not happening before the rubrics were implemented. We see this as evidence of the clear intended learning outcomes that are an integral aspect of achieving constructive alignment within a course. The instructors’ specific feedback to the students, and the students’ increased awareness of their areas for improvement, may enable the students to better regulate their learning throughout a course. Additionally, the survey responses indicated that the faculty members were changing their teaching practices and becoming more cognizant of how assignments did or did not elicit the process skill evidence that they desired. After using the rubrics, one instructor said, “I realize I need to revise many of my activities to more thoughtfully induce process skill development.” We see this as evidence that the faculty members were using the rubrics to regulate their teaching by reflecting on the outcomes of their practices and then planning for future teaching. These activities represent the reflection and forethought/planning aspects of self-regulated learning on the part of the instructors. Graduate teaching assistants in the physical chemistry course indicated that the rubrics gave them a way to clarify the instructor’s expectations when they were interacting with the students. As one GTA said, “It’s giving [the students] feedback on direct work that they have instead of just right or wrong. It helps them to understand like ‘Okay how can I improve? What areas am I lacking in?’” A more detailed account of how the instructors and teaching assistants implemented the rubrics has been reported elsewhere (Cole et al., 2019a ).

Student reflections

Students in both the organic and physical chemistry courses reported that they could use the rubrics to engage in the three phases of self-regulated learning: forethought/planning, performing, and reflecting. In an organic chemistry interview, one student was discussing how they could improve their low score for the synthesizing category of critical thinking by saying “I could use the data together instead of trying to use them separately,” thus demonstrating forethought/planning for their later work. Another student described how they could use the rubric while performing a task: “I could go through [the rubric] as I’m writing a report…and self-grade.” Finally, one student demonstrated how they could use the rubrics to reflect on their areas for improvement by saying that “When you have the five column [earn a score of five], I can understand that I’m doing something right” but “I really need to work on revising my reports.” We see this as evidence that students can use the rubrics to regulate their own learning, although classroom facilitation can have an effect on the ways in which students use the rubric feedback (Cole et al., 2019b ).

Limitations

The process skill definitions presented here represent a consensus understanding among members of the POGIL community and the instructors who participated in this study, but these skills are often defined in multiple ways by various STEM instructors, employers, and students (Danczak et al., 2017 ). One issue with critical thinking, in particular, is the broadness of how the skill is defined in the literature. Through this work, we have evidence via expert review to indicate that our definitions represent common understandings among a set of STEM faculty. Nonetheless, we cannot claim that all STEM instructors or researchers will share the skill definitions presented here.

There is currently a debate in the STEM literature (National Research Council, 2011) about whether the critical thinking construct is domain-general or domain-specific, that is, whether or not one’s critical thinking ability in one discipline can be applied to another discipline. We cannot make claims about the generality of the construct based on the data presented here because the same students were not tested across multiple disciplines or courses. Additionally, we did not gather evidence for convergent validity, which is “the degree to which an operationalized construct is similar to other operationalized constructs that it theoretically should be similar to” (National Research Council, 2011). In other words, evidence for convergent validity would be the comparison of multiple measures of information processing or critical thinking. However, none of the instructors who used the ELIPSS rubrics also used a secondary measure of the constructs. Although the rubrics were examined by a multidisciplinary group of collaborators, this group consisted primarily of chemists and included only eight faculty members from other disciplines, so the content validity of the rubrics may be somewhat limited.

Finally, the generalizability of the rubrics is limited by the relatively small number of students who were interviewed about their work. During their interviews, the students in the organic and physical chemistry courses each said that they could use the rubric scores as feedback to improve their skills. Additionally, as discussed in the “Validity and Reliability” section, the processes described by the students aligned with the content of the rubric and provided evidence of the validity of the rubric scores. However, the data gathered from the student interviews represent the views of only a subset of students in the courses, and further study is needed to determine the most appropriate contexts in which the rubrics can be implemented.

Conclusions and implications

Two rubrics were developed to assess and provide feedback on undergraduate STEM students’ critical thinking and information processing. Faculty survey responses indicated that the rubrics measured the relevant aspects of each process skill in the disciplines that were examined. Faculty survey responses, TA interviews, and student interviews over multiple semesters indicated that the rubric scores accurately reflected the evidence of process skills that the instructors wanted to see and the processes that the students performed when they were completing their assignments. The rubrics showed high inter-rater agreement scores, indicating that multiple raters could identify the same areas for improvement in student work.

In terms of constructive alignment, courses should ideally have alignment between their intended learning outcomes, student and instructor activities, and assessments. By using the ELIPSS rubrics, instructors were able to explicitly articulate the intended learning outcomes of their courses to their students. The instructors were then able to assess and provide feedback to students on different aspects of their process skills. Future efforts will be focused on modifying student assignments to enable instructors to better elicit evidence of these skills. In terms of self-regulated learning, students indicated in the interviews that the rubric scores were accurate representations of their work (performances), could help them reflect on their previous work (self-reflection), and the feedback they received could be used to inform their future work (forethought). Not only did the students indicate that the rubrics could help them regulate their learning, but the faculty members indicated that the rubrics had helped them regulate their teaching. With the individual categories on each rubric, the faculty members were better able to observe their students’ strengths and areas for improvement and then tailor their instruction to meet those needs. Our results indicated that the rubrics helped instructors in multiple STEM disciplines and at multiple institutions reflect on their teaching and then make changes to better align their teaching with their desired outcomes.

Overall, the rubrics can be used in a number of different ways to modify courses or for programmatic assessment. As previously stated, instructors can use the rubrics to define expectations for their students and provide them with feedback on desired skills throughout a course. The rubric categories can be used to give feedback on individual aspects of student process skills to provide specific feedback to each student. If an instructor or department wants to change from didactic lecture-based courses to active learning ones, the rubrics can be used to measure non-content learning gains that stem from the adoption of such pedagogies. Although the examples provided here for each rubric were situated in chemistry contexts, the rubrics were tested in multiple disciplines and institution types. The rubrics have the potential for wide applicability to assess not only laboratory reports but also homework assignments, quizzes, and exams. Assessing these tasks provides a way for instructors to achieve constructive alignment between their intended outcomes and their assessments, and the rubrics are intended to enhance this alignment to improve student process skills that are valued in the classroom and beyond.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AAC&U: American Association of Colleges and Universities

CAT: Critical Thinking Assessment Test

CU: Comprehensive University

ELIPSS: Enhancing Learning by Improving Process Skills in STEM

LEAP: Liberal Education and America’s Promise

NMR: Nuclear Magnetic Resonance

PCT: Primary Collaborative Team

PLTL: Peer-led Team Learning

POGIL: Process Oriented Guided Inquiry Learning

PUI: Primarily Undergraduate Institution

RU: Research University

STEM: Science, Technology, Engineering, and Mathematics

VALUE: Valid Assessment of Learning in Undergraduate Education

ABET Engineering Accreditation Commission. (2012). Criteria for Accrediting Engineering Programs . Retrieved from http://www.abet.org/accreditation/accreditation-criteria/criteria-for-accrediting-engineering-programs-2016-2017/ .

American Chemical Society Committee on Professional Training. (2015). Undergraduate Professional Education in Chemistry: ACS Guidelines and Evaluation Procedures for Bachelor's Degree Programs . Retrieved from https://www.acs.org/content/dam/acsorg/about/governance/committees/training/2015-acs-guidelines-for-bachelors-degree-programs.pdf

Association of American Colleges and Universities. (2019). VALUE Rubric Development Project. Retrieved from https://www.aacu.org/value/rubrics .

Bailin, S. (2002). Critical Thinking and Science Education. Science and Education, 11 , 361–375.


Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32 (3), 347–364.

Biggs, J. (2003). Aligning teaching and assessing to course objectives. Teaching and learning in higher education: New trends and innovations, 2 , 13–17.


Biggs, J. (2014). Constructive alignment in university teaching. HERDSA Review of higher education, 1 (1), 5–22.

Black, P., & Wiliam, D. (1998). Assessment and Classroom Learning. Assessment in Education: Principles, Policy & Practice, 5 (1), 7–74.

Bodner, G. M. (1986). Constructivism: A theory of knowledge. Journal of Chemical Education, 63 (10), 873–878.

Brewer, C. A., & Smith, D. (2011). Vision and change in undergraduate biology education: A call to action . Washington, DC: American Association for the Advancement of Science.

Brookhart, S. M., & Chen, F. (2014). The quality and effectiveness of descriptive rubrics. Educational Review , 1–26.

Butler, D. L., & Winne, P. H. (1995). Feedback and Self-Regulated Learning: A Theoretical Synthesis. Review of Educational Research, 65 (3), 245–281.

Cole, R., Lantz, J., & Ruder, S. (2016). Enhancing Learning by Improving Process Skills in STEM. Retrieved from http://www.elipss.com .

Cole, R., Lantz, J., & Ruder, S. (2019a). PO: The Process. In S. R. Simonson (Ed.), POGIL: An Introduction to Process Oriented Guided Inquiry Learning for Those Who Wish to Empower Learners (pp. 42–68). Sterling, VA: Stylus Publishing.

Cole, R., Reynders, G., Ruder, S., Stanford, C., & Lantz, J. (2019b). Constructive Alignment Beyond Content: Assessing Professional Skills in Student Group Interactions and Written Work. In M. Schultz, S. Schmid, & G. A. Lawrie (Eds.), Research and Practice in Chemistry Education: Advances from the 25 th IUPAC International Conference on Chemistry Education 2018 (pp. 203–222). Singapore: Springer.


Danczak, S., Thompson, C., & Overton, T. (2017). ‘What does the term Critical Thinking mean to you?’A qualitative analysis of chemistry undergraduate, teaching staff and employers' views of critical thinking. Chemistry Education Research and Practice, 18 , 420–434.

Daniel, K. L., Bucklin, C. J., Leone, E. A., & Idema, J. (2018). Towards a Definition of Representational Competence. In Towards a Framework for Representational Competence in Science Education (pp. 3–11). Switzerland: Springer.

Davies, M. (2013). Critical thinking and the disciplines reconsidered. Higher Education Research & Development, 32 (4), 529–544.

Deloitte Access Economics. (2014). Australia's STEM Workforce: a survey of employers. Retrieved from https://www2.deloitte.com/au/en/pages/economics/articles/australias-stem-workforce-survey.html .

Driscoll, M. P. (2005). Psychology of learning for instruction . Boston, MA: Pearson Education.

Ennis, R. H. (1990). The extent to which critical thinking is subject-specific: Further clarification. Educational researcher, 19 (4), 13–16.

Facione, P. A. (1984). Toward a theory of critical thinking. Liberal Education, 70 (3), 253–261.

Facione, P. A. (1990a). The California Critical Thinking Skills Test–College Level . Technical Report #1: Experimental Validation and Content Validity.

Facione, P. A. (1990b). The California Critical Thinking Skills Test–College Level . Technical Report #2: Factors Predictive of CT Skills.

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111 (23), 8410–8415.

Gafney, L., & Varma-Nelson, P. (2008). Peer-led team learning: evaluation, dissemination, and institutionalization of a college level initiative (Vol. 16): Springer Science & Business Media, Netherlands.

Glassner, A., & Schwarz, B. B. (2007). What stands and develops between creative and critical thinking? Argumentation? Thinking Skills and Creativity, 2 (1), 10–18.

Gosser, D. K., Cracolice, M. S., Kampmeier, J. A., Roth, V., Strozak, V. S., & Varma-Nelson, P. (2001). Peer-led team learning: A guidebook . Upper Saddle River, NJ: Prentice Hall.

Gray, K., & Koncz, A. (2018). The key attributes employers seek on students' resumes. Retrieved from http://www.naceweb.org/about-us/press/2017/the-key-attributes-employers-seek-on-students-resumes/ .

Hanson, D. M. (2008). A cognitive model for learning chemistry and solving problems: implications for curriculum design and classroom instruction. In R. S. Moog & J. N. Spencer (Eds.), Process-Oriented Guided Inquiry Learning (pp. 15–19). Washington, DC: American Chemical Society.

Hattie, J., & Gan, M. (2011). Instruction based on feedback. Handbook of research on learning and instruction , 249-271.

Huitt, W. (1998). Critical thinking: an overview. In Educational psychology interactive Retrieved from http://www.edpsycinteractive.org/topics/cogsys/critthnk.html .

Joint Committee on Standards for Educational and Psychological Testing. (2014). Standards for Educational and Psychological Testing . American Educational Research Association.

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2 (2), 130–144.

Kumi, B. C., Olimpo, J. T., Bartlett, F., & Dixon, B. L. (2013). Evaluating the effectiveness of organic chemistry textbooks in promoting representational fluency and understanding of 2D-3D diagrammatic relationships. Chemistry Education Research and Practice, 14 , 177–187.

Lai, E. R. (2011). Critical thinking: a literature review. Pearson's Research Reports, 6 , 40–41.

Lewis, A., & Smith, D. (1993). Defining higher order thinking. Theory into Practice, 32 , 131–137.

Miri, B., David, B., & Uri, Z. (2007). Purposely teaching for the promotion of higher-order thinking skills: a case of critical thinking. Research in Science Education, 37 , 353–369.

Moog, R. S., & Spencer, J. N. (Eds.). (2008). Process oriented guided inquiry learning (POGIL) . Washington, DC: American Chemical Society.

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: validity and reliability. Practical Assessment, Research and Evaluation, 7 , 1–11.

Nakhleh, M. B. (1992). Why some students don't learn chemistry: Chemical misconceptions. Journal of Chemical Education, 69 (3), 191.

National Research Council. (2011). Assessing 21st Century Skills: Summary of a Workshop . Washington, DC: The National Academies Press.

National Research Council. (2012). Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century . Washington, DC: The National Academies Press.

Nguyen, D. H., Gire, E., & Rebello, N. S. (2010). Facilitating Strategies for Solving Work-Energy Problems in Graphical and Equational Representations. 2010 Physics Education Research Conference, 1289 , 241–244.

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31 (2), 199–218.

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: a review. Educational Research Review, 9 , 129–144.

Pearl, A. O., Rayner, G., Larson, I., & Orlando, L. (2019). Thinking about critical thinking: An industry perspective. Industry & Higher Education, 33 (2), 116–126.

Ramsden, P. (1997). The context of learning in academic departments. The experience of learning, 2 , 198–216.

Rau, M. A., Kennedy, K., Oxtoby, L., Bollom, M., & Moore, J. W. (2017). Unpacking “Active Learning”: A Combination of Flipped Classroom and Collaboration Support Is More Effective but Collaboration Support Alone Is Not. Journal of Chemical Education, 94 (10), 1406–1414.

Reynders, G., Suh, E., Cole, R. S., & Sansom, R. L. (2019). Developing student process skills in a general chemistry laboratory. Journal of Chemical Education , 96 (10), 2109–2119.

Saxton, E., Belanger, S., & Becker, W. (2012). The Critical Thinking Analytic Rubric (CTAR): Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical thinking performance assessments. Assessing Writing, 17 , 251–270.

Schmidt, H. G., De Volder, M. L., De Grave, W. S., Moust, J. H. C., & Patel, V. L. (1989). Explanatory Models in the Processing of Science Text: The Role of Prior Knowledge Activation Through Small-Group Discussion. J. Educ. Psychol., 81 , 610–619.

Simonson, S. R. (Ed.). (2019). POGIL: An Introduction to Process Oriented Guided Inquiry Learning for Those Who Wish to Empower Learners . Sterling, VA: Stylus Publishing, LLC.

Singer, S. R., Nielsen, N. R., & Schweingruber, H. A. (Eds.). (2012). Discipline-Based education research: understanding and improving learning in undergraduate science and engineering . Washington D.C.: The National Academies Press.

Smit, R., & Birri, T. (2014). Assuring the quality of standards-oriented classroom assessment with rubrics for complex competencies. Studies in Educational Evaluation, 43 , 5–13.

Stein, B., & Haynes, A. (2011). Engaging Faculty in the Assessment and Improvement of Students' Critical Thinking Using the Critical Thinking Assessment Test. Change: The Magazine of Higher Learning, 43 , 44–49.

Swanson, H. L., Oconnor, J. E., & Cooney, J. B. (1990). An Information-Processing Analysis of Expert and Novice Teachers Problem-Solving. American Educational Research Journal, 27 (3), 533–556.

The Royal Society. (2014). Vision for science and mathematics education . London, England: The Royal Society Science Policy Centre.

Watson, G., & Glaser, E. M. (1964). Watson-Glaser Critical Thinking Appraisal Manual . New York, NY: Harcourt, Brace, and World.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41 (2), 64–70.

Zohar, A., Weinberger, Y., & Tamir, P. (1994). The Effect of the Biology Critical Thinking Project on the Development of Critical Thinking. Journal of Research in Science Teaching, 31 , 183–196.


Acknowledgements

We thank members of our Primary Collaboration Team and Implementation Cohorts for collecting and sharing data. We also thank all the students who have allowed us to examine their work and provided feedback.

Supporting information

• Product rubric survey

• Initial implementation survey

• Continuing implementation survey

This work was supported in part by the National Science Foundation under collaborative grants #1524399, #1524936, and #1524965. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Author information

Authors and affiliations.

Department of Chemistry, University of Iowa, W331 Chemistry Building, Iowa City, Iowa, 52242, USA

Gil Reynders & Renée S. Cole

Department of Chemistry, Virginia Commonwealth University, Richmond, Virginia, 23284, USA

Gil Reynders & Suzanne M. Ruder

Department of Chemistry, Drew University, Madison, New Jersey, 07940, USA

Juliette Lantz

Department of Chemistry, Ball State University, Muncie, Indiana, 47306, USA

Courtney L. Stanford


Contributions

RC, JL, and SR performed an initial literature review that was expanded by GR. All authors designed the survey instruments. GR collected and analyzed the survey and interview data with guidance from RC. GR revised the rubrics with extensive input from all other authors. All authors contributed to reliability measurements. GR drafted all manuscript sections. RC provided extensive comments during manuscript revisions; JL, SR, and CS also offered comments. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Renée S. Cole .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Supporting Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Reynders, G., Lantz, J., Ruder, S.M. et al. Rubrics to assess critical thinking and information processing in undergraduate STEM courses. IJ STEM Ed 7 , 9 (2020). https://doi.org/10.1186/s40594-020-00208-5

Download citation

Received: 01 October 2019

Accepted: 20 February 2020

Published: 09 March 2020

DOI: https://doi.org/10.1186/s40594-020-00208-5


Keywords

  • Constructive alignment
  • Self-regulated learning
  • Process skills
  • Professional skills
  • Critical thinking
  • Information processing


Information Retrieval and Knowledge Extraction for Academic Writing

  • Open Access
  • First Online: 15 September 2023

Fernando Benites

The amount of unstructured scientific data in the form of documents, reports, papers, patents, and the like is increasing exponentially each year. Technological advances and their implementations emerge at a similarly fast pace, making a manual overview of interdisciplinary and relevant studies nearly impossible for many disciplines. Consequently, surveying large corpora of documents without any automation, i.e., information extraction systems, no longer seems feasible. Fortunately, most articles are now accessible through digital channels, enabling automatic information retrieval by large database systems. Popular examples of such systems are Google Scholar and Scopus. As they allow us to rapidly find relevant and high-quality citations and references to previous work, these systems are particularly valuable in academic writing. However, not all users are aware of the mechanisms underlying relevance sorting, which we will address in this chapter. For example, in addition to searching for specific terms, new tools facilitate the discovery of relevant studies by using synonyms as well as similar works/citations. The near future holds even better tools for the creation of surveys, such as automatic summary generation or automatic question-answering systems over large corpora. In this chapter, we will discuss the relevant technologies and systems and their use in the academic writing context.


Keywords

  • Machine learning
  • Human–machine interaction
  • Research for academic writing
  • Language modelling
  • Information retrieval
  • Knowledge extraction

1 Introduction

The creation of texts has accelerated in the last few decades. The number of patents, websites on the internet, and the amount of data in general have increased exponentially. Searching for the right piece of information is a ubiquitous problem (referred to here as general-purpose information searching). Scientific writing is particularly affected: when searching for information, especially for a literature review, many researchers do not know how the search engines work. While journals and renowned conferences help sort articles in a research field and identify the state of the art, individual researchers often struggle to gain a comprehensive overview of all the relevant studies. Not only has the speed of writing and publishing studies been accelerating, but the pressure to publish or perish has also been quantified into numbers and scores, such as the h-index, Footnote 1 further increasing the amount of data to be searched. The idea that digitalization and search engines can nonetheless lead to substantial time gains when surveying a subject in a certain scientific field is appealing, but it often runs into the problem of finding appropriate studies within an overly long list of potentially relevant matches.

In this situation, academics face the same problems that arose with large data and the internet, especially the overflow of information. Information retrieval focuses on developing algorithms for finding a piece of information in one or more large corpora. The problem appeared in the late 1960s and 1970s with the creation of databases and, more specifically, with the storage of large bodies of text in libraries and large institutions. Databases use an index to access data quickly; unfortunately, creating an index over texts is not that easy. For instance, sometimes part of a word is interesting (when looking for graduate, the word undergraduate is relevant), so a simple alphabetic index will not cover basic use cases. Better methods needed to be developed, turning databases into search engines. Textual data, however, is unstructured data, which computers cannot easily process to extract knowledge. Knowledge extraction refers to the field that studies approaches to the challenge of extracting structured information from textual form. Since the beginning of electronic computers, a large amount of data has been embedded in texts, and manually extracting structured information from them is an arduous task. In particular, information retrieval is often the first step of knowledge extraction, so the two fields are closely related.

In the last two decades, the issue of information retrieval has become omnipresent, as illustrated by the competition between search engines such as AltaVista, Yahoo, Microsoft Search (Bing), and Google, with Google ending up with the lion’s share. Even today, there are attempts to break Google’s monopoly with new search engines such as Ecosia and DuckDuckGo. Nevertheless, Google’s core algorithm (we will cover it later) remains the most popular today.

When writing scientific articles, thanks to the rapid digitalization of academic publishing and the rise of search engines, we have access to so much more data and information than before that we are often confronted with the challenge of finding a needle in the haystack. This is where online tools can help, especially those providing access to scientific publications. Hence, academic social network platforms, search engines, and bibliographic databases such as Google Scholar (Halevi et al., 2017), Scopus, Microsoft Academic, ResearchGate, or Academia.edu have become very popular over the last decade (Ortega, 2014; van Noorden, 2014). These specialized search engines are needed and offer a great gain over conventional search engines, since the procedure for academic writing differs considerably from general-purpose information searching (Raamkumar et al., 2017). Most of these online platforms offer more or less detailed search interfaces to retrieve relevant scientific output. Moreover, they provide us with indicators for assessing the relevance of the search results: the number of citations, specific keywords, references to further relevant studies through automatic linking from the citation list, and articles suggested on the basis of previous searches or according to preferences set in one’s profile, amongst others. However, many challenges remain, such as the ontological challenge of finding the right search terms (many terms being ambiguously coined), including all possible designations for a given topic, as well as assessing the quality of the articles presented in the results list.

On top of that, with the rise of academic social-networking activities, the number of potentially interesting and quickly accessible publications surpasses our human capacities. As a result, we depend more and more on algorithms to perform a first selection and extract relevant information which we can turn into knowledge for our scientific writing purpose. In that sense, algorithms provide us with two important services: on one side, information retrieval, which is becoming each day more sophisticated, and on the other side, knowledge extraction, i.e. the access to structured data Footnote 2 allowing us to process the information automatically, e.g. for statistics or surveys. This chapter will present and discuss the methods used to solve these tasks.

2 Information Retrieval

When we use an academic search engine or database to obtain an overview of the relevant articles on a given topic, we come up with a moderate number of words that, in our opinion, sum up the topic, and enter them in the search field. By launching the search, we hand over to the machine and the actual information retrieval process. The main purpose of information retrieval is to find relevant texts in a large collection given the handful of words provided by a human user and, more specifically, to rank these documents by their relevance to the query words. The resulting list of matches is thus created according to various criteria usually not known by the users. Yet, gaining insight into the information retrieval process might help understand and assess the relevance of the displayed search results: especially what ends up at the top of the ranked list and what might get suppressed or ranked down. As we will see later, depending on the search engine, a search term may need to be written exactly, or the engine may provide us with helpful synonyms or links to interesting papers.

The first approach to information retrieval is to break down the query words and analyze the corpora individually, looking for appearances of each of the terms in the texts. The occurrence of a term in a document might increase the relevance of the document to the query, especially if there are many occurrences of the term within the same document. However, if a term is as frequent in the language in general as it is in the corpus, it might be of no help. A metric designed to address this issue is term-frequency inverse-document-frequency (TF-IDF) (Manning & Schütze, 1999), which is often used for an array of natural language problems, amongst others in automatic text classification (e.g., spam recognition, document classification [Benites, 2017]), dialect classification (Benites et al., 2018), but also for research-paper recommender systems (Beel, 2015). This method can find words that are important/specific for a certain document within a collection of documents, by giving a word more weight if it frequently occurs in the document and less weight if it frequently occurs in the collection. Further considerations might help sort the results. If we want to find something about “scientific text writing” in a scientific database of articles on the internet, there will probably be just too many hits. Adding more words will reduce the list of results (since they are aggregated by an AND operation, seldom by an OR), but this implies choosing an adequate term that gives the query more purpose and specificity. For example, adding the word “generation” will break down the result set, but it could be equally helpful to discard some less important query terms, e.g., “text.” Moreover, very large documents might contain all the query words, which would lead to considering them a good match. However, if the terms are scattered throughout different parts of the document and have no vicinity or direct relation to each other, chances are that they belong to disjoint subjects that do not reunite towards the subject of interest. This is why some methods also penalize lengthy documents and prioritize documents showing indicators of centrality, such as the number of citations, to obtain a more relevant set of results. More importantly, these criteria have a direct impact on the ranking order of the results.
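To make the idea concrete, the following sketch shows term-based retrieval with TF-IDF weighting and cosine similarity. It is a minimal illustration using the scikit-learn library; the toy corpus and query are invented for demonstration and do not reconstruct any production search engine.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for a document collection.
corpus = [
    "scientific text writing and text generation",
    "undergraduate chemistry laboratory reports",
    "search engines rank documents by relevance to a query",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)   # one TF-IDF vector per document

query_vector = vectorizer.transform(["scientific text generation"])

# Rank the documents by cosine similarity to the query, best match first.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```

Note that this purely term-based ranking gives the second document a score of zero, even if it happened to be relevant: it shares no vocabulary with the query.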

However, all those aspects do not consider the semantic context of the word. A “bank” can be a piece of furniture, a financial institution, or the land alongside a river. This is why more and more search engines use so-called contextual language models (such as transformers): artificial neural networks (machine learning approaches) trained to predict missing words in sentences from a collection of billions of texts (Devlin et al., 2018). This training procedure is called a self-supervised Footnote 3 task but is also known as pre-training. This approach helps the model memorize which words are used in the vicinity of certain words. After the pre-training phase, these models can be fine-tuned to down-stream tasks, such as document classification, similarity ranking of documents, sentiment analysis (e.g., is a tweet negative or positive), named-entity recognition (e.g., classification of words: Mr. President Obama, Senator Obama, Barack Obama all refer to one entity), language generation (for chatbots, for rephrasing tools), and the list goes on and on. Their range is so broad because they can create document representations Footnote 4 that take the context into account, and they can determine if two documents are relevant to each other even though they might only be connected by synonyms, i.e., they do not use the same exact vocabulary but have a similar meaning. This allows a search that is much more semantically guided and less orthographic (bound to the exact spelling of a word).
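As a contrast to the term-based sketch above, the snippet below illustrates embedding-based (semantic) retrieval. It assumes the sentence-transformers library and the publicly available all-MiniLM-L6-v2 model; the document snippets and the query are invented for illustration.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small public model

documents = [
    "The riverbank eroded after the flood.",
    "The bank raised its interest rates this quarter.",
]
query = "financial institution"

doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity in embedding space: the finance sentence should score
# higher even though it shares no words with the query.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
for doc, score in zip(documents, scores):
    print(f"{float(score):.3f}  {doc}")
```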

After breaking the text into single words and examining them, the next step in providing better ranking is to look not only for single words but to analyze word combinations and see if they constitute a term/construction in the corpus. The TF-IDF approach would only search for n-grams (contiguous sequences of words) of the terms, and to that purpose, it would need to build an index with all the possible word combinations (usually n-grams of 3–7 words). This index can quickly become oversized with the explosion of combinations (multiple hundreds of gigabytes, depending on the corpus size and the diversity of the vocabulary). Newer language models, such as transformers, take a different approach. They dissect the words into subwords and then try to grasp the combination from the whole sentence or paragraph (usually 512 subwords, which can be up to 200–300 words). They use a mechanism called self-attention, which weights a word from different perspectives (one for being a query of other words, one for being a key for other words, and lastly one for being the value searched by the query and key), using a positional encoding for each word. The intuition is that the model can then check correlations between the words, as it takes the whole sentence as input. Moreover, neural networks consider all possible combinations at the same time. This creates a computational problem, which is dealt with by a myriad of heuristics and a massive amount of computational power. Consequently, this produces powerful language models able to grasp context even over long distances in the sentences, enabling, for instance, context-aware coreference resolution (in “the cat ate the mouse, it was hungry,” which animal does “it” refer to?). This can be used by search engines when analyzing search words: are the queried words found in the documents, and if so, are they used as central words in the right context?
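The query/key/value weighting described above can be written compactly as softmax(QKᵀ/√d)·V. The following is a bare-bones NumPy rendering of that formula, stripped of the learned projections, multiple attention heads, and positional encodings of a real transformer; the toy input is an assumption for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d)) V, the core of self-attention."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key affinities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of values

# Toy input: 4 "words", each an 8-dimensional vector. In a real transformer,
# Q, K, and V are separate learned projections of the token embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
contextualized = scaled_dot_product_attention(tokens, tokens, tokens)
print(contextualized.shape)  # (4, 8): one context-aware vector per word
```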

While search terms play a major role in the information retrieval process, most academic search engines also still rely heavily on citations, using them to create graphs. Such graphs can use the PageRank (Page et al., 1999) algorithm Footnote 5 to prioritize works that are highly cited. CiteSeer used a different approach and implemented a “Common Citation Inverse Document Frequency” (Giles et al., 1998). It is also possible to create networks based on the search terms and count only citations that are relevant for the search. The use of citations by Google Scholar was examined in Beel and Gipp (2009). Applied to a citation network, Footnote 6 the PageRank paradigm ranks important seminal papers more highly. As Raamkumar et al. (2017) point out, seminality is critical for a scientific reading list, along with sub-topic relevance, diversity, and recency. These criteria can also be applied to a literature survey and to ranking scientific publications for the use case of scientific writing.
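As an illustration of citation-based ranking, the sketch below runs PageRank over a toy citation graph using the networkx library. The paper names and edges are invented, and real systems combine such scores with many other signals.

```python
import networkx as nx

# An edge A -> B means "paper A cites paper B"; papers cited by many
# (well-ranked) papers accumulate a higher PageRank score.
citations = [
    ("paper_A", "seminal_paper"),
    ("paper_B", "seminal_paper"),
    ("paper_C", "seminal_paper"),
    ("paper_C", "paper_B"),
]
G = nx.DiGraph(citations)

ranks = nx.pagerank(G, alpha=0.85)  # alpha is the usual damping factor
for paper, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {paper}")
```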

In sum, automatic information retrieval is a complex process involving multiple elements such as words, subwords, synonyms, document length, and citations. However, the way these elements are used and combined by the machine to establish a ranked list of matches is generally not displayed along with the results. This is why being aware of such mechanisms can help take a constructive critical stance towards the identified literature.

3 Knowledge Extraction

As the amount of scientific literature grows significantly, the need for systematic literature reviews in specific research fields is also increasing. Human-centered approaches have been developed and established as standards, e.g., the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) method (Page et al., 2021 ). Nevertheless, the overwhelming amount of available literature in some fields calls for automated solutions. Unlike information retrieval, knowledge extraction directly taps into a publication’s content to extract and categorize data.

Constructing structured data, which can be saved in a schematized database and processed automatically, from unstructured data (e.g., a simple text document) is a vast research field. Processing unstructured data, especially documents or articles, automatically is of great practical importance. For example, in medical research, contraindications of a substance or illnesses associated with a certain drug could be found automatically in the literature, thereby guiding the search process and speeding up research even more. Unfortunately, it is not so easy to identify the substances or the relationships that connect them. In the field of natural language processing (NLP), we speak of named entity recognition (the substances) and relation extraction (how the substances relate to each other). Although finding relevant entities seems easy enough, there are many cases where it is quite difficult. For example, the 44th President of the United States of America can be referred to by his name Barack Hussein Obama II, Mr. President (even though he is no longer active in this position), candidate for President, Senator Obama, President Obama, Nobel Peace Prize laureate, and so on. Usually, authors use multiple denominations of the same entities to avoid repetition, rendering the finding and tracking of named entities very difficult for an automatic algorithm. Although many improvements were made in recent years to grasp the semantic context of a word, the understanding and world modelling (of the real world) of NLP algorithms is extremely limited. The algorithms can extract many relations from texts, but a chain of consequences is difficult to form. They are merely advanced pattern-matching procedures: given a word, they find which words are related to it; however, they are not yet capable of extrapolation or abstract association (i.e., connecting the associations to rules or rule chains). Nonetheless, the results can be quite impressive in some specific tasks, such as coreference resolution of entities, for which some very accurate approaches exist (Dobrovolskii, 2021), yet they are not perfect nor near human performance. The current generation of algorithms is learning to master relatively simple tasks, but for the next generation, a paradigm change is yet to be developed.
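A minimal named-entity-recognition example is sketched below, assuming the spaCy library and its small English model en_core_web_sm (installed separately with `python -m spacy download en_core_web_sm`). It extracts entity mentions but, as discussed above, does not by itself resolve that different mentions refer to the same person; that is the separate problem of coreference resolution.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small pre-trained English pipeline
doc = nlp("Senator Obama met President Obama's staff in Washington.")

# Print each recognized entity span with its predicted label.
for ent in doc.ents:
    print(ent.text, ent.label_)
# Typical output includes "Obama PERSON" and "Washington GPE"; the model
# does not know that the two Obama mentions denote one entity.
```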

Being able to search for entities and for relations between entities can be helpful in many fields, such as chemistry or drug development (contraindications). When performing a literature review, it is equally important to know what the key papers are, what methods were used, how the data were collected, etc. Automatic knowledge extraction could also be used to create surveys on a new task or a specific method. Although creating a database of entities and their different relations is not new, and even constitutes a paradigm in the field of databases (graph databases), it remains very complicated, especially when it comes to resolving conflicts, ambiguities, and evolving relations. Conversely, if a document contains a graph, a text can be created from it automatically (see Benites, Benites, & Anson, “ Automated Text Generation and Summarization for Academic Writing ”).

Still, some information, like cited authors or how certain research objects are dealt with, can be extracted automatically, and this method can be applied to hundreds of papers, which makes the writing of research synthesis papers much easier. We can cluster and find similarities and differences much faster. Extracting entities from unstructured data such as texts is usually performed with neural networks that are trained on news articles. Until recently, this meant that the language model of these algorithms was confined to the so-called “news article” genre. Transformers (Vaswani et al., 2017 ), especially BERT (Devlin et al., 2018 ), changed that since they are trained on a very large corpus using multiple genres, from Wikipedia articles, to books, to news articles, and to scientific articles, but in an unsupervised manner Footnote 7 allowing the language model to learn different facets of the language. After the first training phase, the transformer is fine-tuned in a supervised manner to a specific task (e.g., entity recognition in books, where much less data is needed to achieve satisfying results). In that sense, the first step constitutes a pre-training, allowing the actual training to be performed with low amounts of specific data and without substantial computational effort.

This method, however, is still pattern matching, albeit in a much broader context. As a result, certain manipulations and associative relations are not accounted for (such as the triangle inequality), showing the limitations of these large language models. Some newer approaches try to tackle the problem of semantic relations and logical implications, but many problems must be solved before they can be used; for instance, some language models in summarization can count from two to five but then jump to seven, skipping six (e.g., the number of ships in an article; Zhang et al., 2020). Other approaches use a graph over the documents to infer relations and central entities from the documents, but this is not very reliable, as pointed out earlier.

Thus, although knowledge extraction is a very promising avenue in light of the exploding amount of scientific data being released every day, there is still work to be done before it can be considered a reliable, fully automated solution. At the moment, there is no clear path for injecting the output of knowledge extraction into large text-generation models (see Benites, Benites, & Anson, “ Automated Text Generation and Summarization for Academic Writing ”), which could make many mistakes (false facts) avoidable. The combination of knowledge graphs and language models is one possibility, since the extracted knowledge can be embedded into a graph over which reasoning can be performed. This would allow the content of a sentence to be checked against the facts stored in the knowledge graph while writing, thus contributing to faster writing, better citations, etc.
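To sketch the fact-checking idea in its simplest possible form, the toy code below stores knowledge as (subject, relation, object) triples and accepts a claim only if the exact triple is present. The triples, the generic substance names, and the assumption that claims arrive already parsed are all drastic simplifications for illustration.

```python
# Facts as (subject, relation, object) triples; invented toy examples only.
knowledge_graph = {
    ("substance_a", "contraindicated_with", "substance_b"),
    ("substance_a", "treats", "fever"),
}

def is_supported(subject: str, relation: str, obj: str) -> bool:
    """A claim is accepted only if the exact triple exists in the graph."""
    return (subject, relation, obj) in knowledge_graph

# A writing assistant could flag the second claim as unsupported.
print(is_supported("substance_a", "treats", "fever"))        # True
print(is_supported("substance_a", "treats", "substance_b"))  # False
```

A real system would, of course, need entity linking, synonym resolution, and graph reasoning rather than exact-match lookups.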

Knowing the entities and relations could also help information retrieval systems, since the connection between synonyms becomes clearer and reasoning over the search query could also be performed. This could help researchers find what they are looking for faster and even help gather data for statistics. For example, a Google Scholar search shows the number of hits, but it would be good to know whether they all handle the same use case or a method across disciplines, what the time span is, and whether the subjects concern the same or different topics. Also, a survey of papers could show how many papers use a certain dataset, employ a certain methodology, or refer positively or negatively to a specific term.

3.1 Functional Specifications

Search engines allow us to do a literature review or survey much faster and more precisely than 20–30 years ago. More importantly, they also allow us to scavenge social media, a facet that is becoming more important for science: which papers are discussed in the community and why? Are there critical issues that cannot easily be inferred from the paper itself?

However, finding an interesting paper (because it uses a similar methodology) without knowing the specific words it uses still remains a challenging task. Using the knowledge graph of a certain field allows researchers to find these scattered pieces and put together a more precise and concise image of the state of the art. Although generating such graphs is not trivial either, it could be made much easier with automated procedures. Maintaining a certain degree of scepticism towards the results may nonetheless be a good precaution.

3.2 Main Products

Both information retrieval and knowledge extraction belong to the technologies used by scholarly search engines—and hence used by a wide majority of researchers, scientific writers and students, even when they are not aware of them. This is why a succinct overview of current academic search engines can help establish their relevance for academic writing.

CiteSeer (Giles et al., 1998 ) was an early internet index for research (beginning of 2000s), especially for computer science. It already offered some knowledge extraction in the form of simple parsing such as extraction of headers with title and author, abstract, introduction, citations, citation context and full text. It also had a citation merging function, recognizing when the same article was being cited with a different citation format. For information retrieval, CiteSeer used a combination of TF-IDF, string matching Footnote 8 and citation network.

Most popular databases and search engines for scientific articles do not disclose their relevance-ranking algorithms. We do not know much about the algorithm behind Google Scholar’s search engine, only that it uses citation counts in its ranking (Beel & Gipp, 2009). ResearchGate and Academia.edu are newer social networks for the scientific community; both offer the ability to upload and share scholarly publications, which also enables a search capability and a recommendation system for papers to read. Springer’s SpringerLink is an online service that covers reputable conferences and journals. IEEE Xplore, the ACM Digital Library, and Mendeley/Scopus are similar to SpringerLink for the respective publishers IEEE, ACM, and Elsevier.

Martín-Martín et al. (2021) published a comparison of the various popular search engines for academic papers and documents. The study examined the indexes of the most-used search engines, such as Google Scholar and Elsevier’s Scopus, comparing the databases’ coverage of 3 million citations from 2006. In the discussion, the authors argue that the ranking algorithms are non-transparent and might change the rankings over time. This hinders reproducible results, although, since the popularity of papers changes over time, it might also be difficult to argue against it. The authors point out that while Google Scholar and Microsoft Academic have broad coverage, there are more sophisticated search engines, such as Scopus and Web of Science (WoS), but these mostly cover articles behind paywalls. Further comparisons between Google Scholar, Microsoft Academic, WoS, and Scopus can be found in Rovira et al. (2019), and between Google Scholar and ResearchGate in Thelwall and Kousha (2017). The most relevant finding for academic writing is that Google Scholar attributes great importance to citations, and ResearchGate seems to tap the same data pool as Google Scholar.

3.3 Research on Information Retrieval and Knowledge Extraction

Much research is being conducted in information retrieval and knowledge extraction, especially in light of the recent developments in NLP and big data. New, better-learning language models and broader representations of documents through contrastive learning Footnote 9 will heavily influence the next generation of search engines. One focus of the research is the field of academic paper recommender systems and academic search engine optimization (Rovira et al., 2019), which will become more and more important, especially given the growing awareness of these search engines among academic writers and the distribution of scholarship to wider audiences. As previously mentioned, the amount of research to be reviewed before writing will increase, and methods for automating the selection will prevail over manual evaluation of individual sources. Footnote 10 For writers, optimizing for these search engines would streamline the writing process, since the search engine would display their work more prominently.

Other rapidly developing technologies might heavily influence the way we perform searches in the near future. Automatic summarization is getting better and better, leading the way to automatically summarizing a collection of results provided by a search engine and even grouping the documents by topics. This can help easily create a literature overview and even give an overview of the state of the art, shortening by a large margin the work performed by researchers when writing articles. The most relevant paper for a search can also be highlighted, as well as papers that may contradict its findings.
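As a small, hedged illustration of this direction, the snippet below calls the Hugging Face transformers summarization pipeline on an invented passage; the default model choice and the length limits are assumptions for demonstration only.

```python
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default model

article = (
    "Information retrieval systems rank documents for a query. "
    "Modern systems combine term statistics such as TF-IDF with "
    "contextual language models and citation-based signals like PageRank. "
    "Summarization models can then condense the top-ranked documents."
)
# Generate a short abstractive summary of the passage.
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```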

A further advance is automatic question answering, where an algorithm tries to find the answer to a question within a given text. A question-answering system can further refine the result list by recommending keywords, by filtering irrelevant articles from the list, or even by posing questions to the user, helping the user find relevant terms and aspects of the document collection resulting from the search. Lastly, the results can be better visualized as graphs showing clusters and the influential concepts of each cluster, thus grasping the essence of the search results. This can help not only to refine the research question when writing but also to find good articles, insights, and ideas for writing.
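Extractive question answering of the kind described here can be sketched in a few lines with the same transformers library; again, the default model, the question, and the context passage are illustrative assumptions.

```python
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

context = (
    "CiteSeer was an early search engine for computer science papers. "
    "It combined TF-IDF, string matching, and a citation network."
)
# The model extracts the answer span from the context and scores it.
answer = qa(question="What signals did CiteSeer combine?", context=context)
print(answer["answer"], answer["score"])
```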

3.4 Implications of This Technology for Writing Theory and Practice

The way the results are prioritized makes quite an impact, especially since many researchers will not scroll through the exhaustive number of hits of their query to find appropriate papers. If they do not find relevant matches within the first entries, they will most likely rephrase their query. Papers that are highly cited might be placed more prominently in the list, although they might only be a secondary source (such a case occurred in the field of association rules in data mining, where a concept was reintroduced in the 1990s, although it had been discovered in the 1960s, and the former became the de facto standard citation). Many concepts are coined almost simultaneously by different authors using different terminologies, and generally only one becomes mainstream, making it difficult to obtain a fair overview of a field with search methods based on TF-IDF and citation counts. This might change in the future, as there is progress on structured data and in some subfields such as mathematics (Thayaparan et al., 2020), but understanding that two concepts are similar/related requires cognition and understanding, something that algorithms still cannot perform over scientific natural language.

Google’s PageRank (and thus citation counts) was built for the internet. If a group of people finds an internet page interesting, they will link to it, thus doing the work of marking interesting sites for the algorithm. However, if something new and relevant but less popular or known emerges, this algorithm might take a while to catch up. Finding early citations is very important to stay current and relevant and to have a longer citation span for an article, which impacts the career of a researcher. While Google Scholar seems very good at this (Thelwall & Kousha, 2017), the algorithm still does not know whether the results are truly relevant for your research or not. This shows the limits of ranking paradigms based on non-academic internet popularity for scientific research, since novelty and relevance are usually more important factors than popularity. From the academic writing point of view, search engines can only take you so far; a good scholarly network and dissemination of research at different venues can help you get to new research findings faster.

Notes

1. The h-index measures how many publications with how many citations an author has (e.g., an h-index of 5 means at least 5 publications with at least 5 citations each).

2. Structured data is data that has a data model and can therefore be easily processed by an algorithm or computer.

3. Self-supervised tasks refer to the procedure of taking a training sample and removing parts of it, so that the machine learning model must reconstruct the sample by itself (related to auto-associative memory).

4. Language models can transform the text into a latent space (latent representation), from which simple linear classifiers can perform a specific task.

5. The PageRank algorithm gives a better score to entities (documents, websites, persons in social networks) that are referred to more often by other entities (e.g., websites linked to by others).

6. A citation network is the network formed by papers and the citations between them.

7. Supervised learning refers to machine learning algorithms that need labelled data, e.g., for sentiment classification, whether a tweet is positive or negative. Unsupervised learning algorithms process the data so that groups, similarities, and distinctions in the data become apparent.

8. String matching is the way computers compare two words, simply by comparing them character by character.

9. Contrastive learning refers to the task of learning which samples in a collection are similar; this usually produces better representations of the samples in an abstract latent space. These representations are often used afterwards for classification.

10. Precision, i.e., finding trustworthy and relevant sources, is important; however, researchers will not accept completely missing a very similar study, which might render the write-up of their research redundant and irrelevant. Thus, the bigger the pool of articles, the more certain researchers can be of creating new findings.

Beel, J., & Gipp, B. (2009). Google Scholar’s ranking algorithm: The impact of citation counts (An empirical study). In 2009 Third International Conference on Research Challenges in Information Science (pp. 439–446). https://doi.org/10.1109/RCIS.2009.5089308

Beel, J. (2015). Research-paper recommender systems: A literature survey. International Journal on Digital Libraries, 17 (4), 305–338.


Benites, F. (2017). Multi-label classification with multiple class ontologies (Doctoral dissertation, Ph.D. thesis). University of Konstanz, Konstanz.

Benites, F., Grubenmann, R., von Däniken, P., Von Gruenigen, D., Deriu, J. M., & Cieliebak, M. (2018). Twist bytes: German dialect identification with data mining optimization . 27th International Conference on Computational Linguistics (COLING 2018) (pp. 218–227), Santa Fe, August 20–26. VarDial.

Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding (arXiv preprint). arXiv:1810.04805

Dobrovolskii, V. (2021). Word-level coreference resolution (arXiv preprint). arXiv:2109.04127

Giles, C. L., Bollacker, K. D., & Lawrence, S. (1998, May). CiteSeer: An automatic citation indexing system. In Proceedings of the third ACM conference on Digital libraries (pp. 89–98).

Halevi, G., Moed, H., & Bar-Ilan, J. (2017). Suitability of Google Scholar as a source of scientific information and as a source of data for scientific evaluation—Review of the literature. Journal of informetrics , 11 (3), 823–834.

Manning, C., & Schütze, H. (1999). Foundations of statistical natural language processing . MIT Press.

Martín-Martín, A., Thelwall, M., Orduna-Malea, E., et al. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics, 126 , 871–906. https://doi.org/10.1007/s11192-020-03690-4


Ortega, J. L. (2014). Academic search engines: A quantitative outlook . Elsevier.

Page, L., Brin, S., Motwani, R., & Winograd, T. (1999). The PageRank citation ranking: Bringing order to the web . Stanford InfoLab.

Page, M. J., McKenzie, J. E., Bossuyt, P. M., et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews, 10 , 89. https://doi.org/10.1186/s13643-021-01626-4

Raamkumar, A. S., Foo, S., & Pang, N. (2017). Using author-specified keywords in building an initial reading list of research papers in scientific paper retrieval and recommender systems. Information Processing & Management, 53 (3), 577–594.

Rovira, C., Codina, L., Guerrero-Solé, F., & Lopezosa, C. (2019). Ranking by relevance and citation counts, a comparative study: Google Scholar, Microsoft Academic, WoS and Scopus. Future Internet , 11 (9), 202.

Thelwall, M., & Kousha, K. (2017). ResearchGate versus Google Scholar: Which finds more early citations? Scientometrics, 112 , 1125–1131. https://doi.org/10.1007/s11192-017-2400-4

Thayaparan, M., Valentino, M., & Freitas, A. (2020). Explanationlp: Abductive reasoning for explainable science question answering. arXiv preprint arXiv:2010.13128

Van Noorden, R. (2014). Scientists and the social network. Nature , 512 (7513), 126–129.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems (pp. 5998–6008).

Zhang, J., Zhao, Y., Saleh, M., & Liu, P. (2020, November). Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International conference on machine learning (pp. 11328–11339). PMLR. https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html . Accessed 22 May 2022.


Cognition and Information Processing Research Paper


A simple fact motivates this research paper: The mind makes human communication possible. Obviously, there are many extra-individual elements and processes, including symbol systems (e.g., language), culture, and so on, that play a role in shaping and constituting communication events. Cognitive processes, though, are the absolutely essential and ineluctable foundation of communication—without these processes (which will be examined in this research paper), communication (whether it be interpersonal communication, intercultural communication, mass communication, whatever) simply doesn’t transpire.


“Cognition and information processing” is an umbrella term that encompasses all mental states and activities—those we are conscious of, those that take place outside of consciousness, and even consciousness itself. From the instant we encounter a stimulus to the time (a moment or years later) when we respond, cognitive processes are at work. Cognitive processes allow us to see, comprehend, move, and speak.

A Quick (and Selective) Survey of the Domain

To gain an appreciation of just how deeply intertwined cognitive processes and communication are, it is useful to consider some examples of the phenomena encompassed by “cognition and information processing.” To set the stage for that discussion, let us begin with a rudimentary scheme that partitions the mental realm into three components: the input-processing system, memory systems, and the output system.¹

Basic Frameworks of the Information-Processing System

The Input-Processing System

The input side of the information-processing system includes all those activities involved in taking in and making sense of the stimulus environment. The input-processing system thus includes processes such as attention, perception, and comprehension. The attentional subsystem functions to bring cognitive resources to bear in processing certain inputs, to the relative exclusion of others. In essence, attention is a selection system that serves as the “front door” to the rest of the information-processing system: If you don’t attend to what another person is saying, or a message in the mass media, those inputs are not likely to have an impact.

Perception refers to a set of cognitive operations by which we segment and categorize stimulus inputs into meaningful relationships and kinds. For example, the perceptual subsystem allows us to recognize that some objects in our visual field are farther away than others, to identify the letters printed on this page, to isolate the sound units that make up spoken language, and to recognize facial expressions of emotion. Our perceptual systems allow us to arrive at a coherent, “sensible” grasp of what’s out there. Take away perceptual systems, and the world becomes a morass of unintelligible shapes, colors, movement, and sound.

Comprehension encompasses those processes by which we combine basic perceptual information with knowledge of causal relationships, rules of language, social regularities, and so on to construct a model of unfolding reality.

For example, it is comprehension processes that permit us to move from the perception of linguistic sound units to an understanding of the meaning of a phrase (e.g., “You look so handsome tonight.”); to link successive utterances, even when there is no stated connection between them (“And you must have been drinking.”); and to make sense of those phrases as elements of a larger discourse or story (“That’s it; I’m going home to mother!”).

The Memory System

The memory system can be partitioned in various ways, but doubtless the most fundamental distinction is between long-term memory (LTM) and short-term, or working, memory (STM). As the label suggests, LTM is a repository that preserves information for an extended period—even years or decades. Moreover, the capacity of LTM is virtually unlimited—we don’t have to forget old information in order to make room for new facts. Again, there are different ways of subdividing the LTM system, but one common approach distinguishes between declarative and procedural memory. Declarative memory is memory for facts—the things you know about the world. It is in declarative memory that you have retained your mother’s middle name, the story line of No Country for Old Men, and the lyrics of your favorite songs. Procedural memory, on the other hand, contains information about how to do things: It is the basis of our skills. Because of procedural memory, people are able to drive a car, pronounce the words of their native language, or play a musical instrument.

While LTM preserves a vast store of information, most of that information isn’t available for use or processing at any particular time—cognitive scientists say that it isn’t “activated.” The STM system, then, is the site of information that is activated and available for further processing. You could think of LTM as a blackboard filled with written statements in a darkened room and STM as a small portion of that information illuminated by the beam of a flashlight. As the light beam moves across the board, information is lost from the STM as new facts enter. So, unlike LTM, where the storage capacity is virtually unlimited and where information is preserved perhaps for a lifetime, STM holds a very limited amount of information and for only a brief period. On the other hand, STM allows you to rehearse, manipulate, and elaborate its contents: You can, for example, rehearse a new acquaintance’s telephone number, add its digits, or even mentally traverse them in reverse.

The Output System

The output side of the information-processing system involves all those processes by which we formulate and execute behavioral responses. Essential components include, but aren’t limited to, activities such as those identified in the goals-plans-action (G-P-A) model (Dillard, 2008): goal formation, response planning, and the assembly and implementation of overt behaviors. Scholars differ in their conception of the exact nature of “goals” (e.g., must they be conscious?), but the basic notion of what goals are is one we all intuitively grasp: Goals define what we’re trying to accomplish and constrain how we go about it (e.g., the car salesperson tries to make a sale without appearing to be too aggressive). Goal-formation processes, then, are those that give rise to these objectives and constraints. As the “drivers” of the output system, goals channel mental resources toward particular cognitive activities and thereby shape how and what we think, and what we do and how we go about it.

Planning involves formulating a behavior, or more likely a sequence of behaviors, for accomplishing one’s goal(s), and it entails several distinct subprocesses (see Berger, 1997). For example, one aspect of planning is the identification of subgoals—intermediate steps that must be taken to achieve the overarching objective (e.g., to accomplish the goal of serving dinner, one must procure the necessary ingredients, combine them properly, preheat the oven, etc.). Planning also involves identifying potential ways of accomplishing each goal or subgoal (e.g., given the goal of introducing himself to a stranger, a certain denizen of the jungle might consider saying “Hello, my name is Tarzan, and I am delighted to make your acquaintance” or, alternatively, “Me Tarzan.”). And yet a third component of planning is anticipating the likely outcomes of potential behaviors, that is, “engaging in Behavior X is likely to result in Outcome Y.”

Assembling and implementing behaviors involves those processes by which our plans are actually manifested as overt responses (see Greene, 2007). The content of plans is represented in relatively abstract mental formats—the sorts of symbolically based representational formats of which we are consciously aware and that we can even report or describe to others (so, e.g., you can tell your roommate what you’re planning to do over the weekend). On the other hand, overt behavior consists of the motor commands that allow us to speak and move. There is an intricate system that translates our conscious conceptions of what to do into actual behavior, and without this component of the output system, we might possibly still be thinkers (of a sort), but we couldn’t be doers.

A “Second Tier”: Building on a Basic Framework

The preceding section should make it pretty clear that the input-processing, memory, and output systems play an inescapable role in communication processes. But for communication scientists, the properties of these systems are typically not so much a primary focus as they are an essential foundation for exploring a vast array of phenomena that derive from and are shaped by the nature of these systems.² It is instructive, then, to consider a few examples of these “second-tier” phenomena.

Cognitive Load

Among the most common conceptions in cognitive science is the notion that the mind is a limited-capacity system in the sense that there is a finite pool of “processing resources” that restricts the number of mental activities we can carry out at any given time. If some activity makes heavy demands on our processing resources, we are said to be under a heavy “cognitive load,” and our ability to carry out other activities is curtailed. So, for example, if you are engrossed in text-messaging your roommate, you probably aren’t going to be able to process what your professor is saying about some complex topic. This idea of a limited-capacity system shows up in a variety of communication phenomena. For example, while it is not always true, everything else being equal, lying tends to be more cognitively demanding than telling the truth: The liar has to fabricate an account, keep his or her story straight, control nonverbal behaviors so as not to give himself or herself away, and so on. As a result, liars often exhibit behaviors indicative of heavy cognitive load (e.g., speech errors). A second example, often in the news these days, comes from studies that show that hands-free cell phones are no safer when driving than hand-held models. The reason, of course, is that it’s not having something in your hand that creates the problem; rather, the problem stems from having something other than monitoring the road on your mind.

Communication Skill Acquisition

It will come as no surprise to the readers of this volume that communication skills matter—skillful communicators simply fare better in the workplace and in their interpersonal relationships (e.g., marriages). But people aren’t born with communication skills; they are acquired over time, through practice. The process of skill acquisition is accompanied by a number of behavioral changes; for example, we get faster, we make fewer errors, and we experience less cognitive load. Cognitive science has made considerable strides in illuminating what happens in the mind as we acquire a skill and why these behavioral changes occur (see Greene, 2003). For example, recalling the declarative versus procedural memory distinction described above, one of the things thought to happen in skill acquisition is that one may learn a set of facts about what to do, and through practice, gradually convert this declarative knowledge into procedural form so that it is no longer necessary to keep the rules about what to do in mind.

There is another layer to the advances that have come from studies of skill acquisition. Because research has shed light on what happens in the mind as we acquire a skill, we can take that understanding and use it to design more effective training programs. For example, what are the most effective ways of instructing people about the skill, what conditions of practice are most effective, and what sorts of feedback are best for learning and skill transfer?

Creativity and Pattern

Human behavior is characterized by regularity and pattern. We readily recognize this in the behavior of others (and sometimes in ourselves). Our friends and loved ones have ways of speaking (e.g., favorite topics, vocabulary) and moving (e.g., facial expressions, mannerisms) that are just “them.” And the patterning of human behavior doesn’t just show up in individuals’ idiosyncratic ways of doing things. Members of groups (e.g., sorority members; church congregations), and even entire cultures, exhibit routines in their behavior that are common to all members of the group. We all know, for example, the basics of what to say and do when being introduced to someone.

Because this patterned, repetitive character of human behavior is so universal and so ubiquitous, failing to address it would leave some pretty big gaps in our understanding of communication processes. This is precisely one of the places, though, where cognitive science has made some important inroads. The fact that there is a repetitive element to our behavior implicates LTM. In other words, people must have preserved (in some form) the information used to produce the patterns they exhibit. Guided by this assumption, cognitive scientists have learned a lot about the nature of the LTM system(s), where knowledge of behavioral routines is represented, how that knowledge is acquired, and how it is used in shaping our actions (see Kellermann & Lim, 2008).

What is particularly fascinating in the context of a discussion of the patterned and repetitive nature of human behavior is that it is also simultaneously unique and creative. Sure, we exhibit idiosyncratic and shared ways of doing things, but we never do them in exactly the same way from one time to another. It turns out, for example, that even if you tried to repeat even a simple phrase three times in a row in exactly the same way, there would be variations in the acoustic spectrogram of each repetition. Even more important, we have the capacity to think and say new things—to come up with ideas that we’ve never heard, seen, or thought before. This penchant for creation is the source of much of the best in human communication—our ability to tell a compelling story, to express an idea or feeling in just the right words, and even to come up with a great joke. As you might guess, studying and understanding the creative side of communication behavior is more difficult than coming to grips with the patterned aspects, but even here, theoretical and methodological advances have been made (see Greene, 2008).

Self and Self-Regulation

Like no other species, human beings have the ability to reflect on themselves—we are self-aware; we possess conceptions of who we “are” (and who we wish we were); and we think of ourselves and our actions in relation to others and their perceptions, actions, and purposes. These and related self-relevant phenomena are central to social interaction. Such processes have been shown to be linked to social motivation (including concerns with self-presentation), social anxiety, marital satisfaction, and attitude-behavior relationships, to list just a few out of many.

Because they are mental phenomena, self-concept, self-regulation, and so on fall squarely in the domain of cognitive science, and considerable progress has been made in understanding their nature (see Baumeister, 1998). For example, one property of our experience of self is that it is relatively stable—we have a sense of unity and continuity concerning who we are. When I wake up in the morning, I feel that I am essentially the same person I was the day before. On the other hand, it turns out to be fairly easy to demonstrate that people’s conceptions of their abilities, attributes, and so on are often internally inconsistent and also malleable and subject to change. Models that describe the way self-relevant information is stored in and retrieved from LTM help explain how we can entertain inconsistent views of ourselves; how those perceptions of self can shift, sometimes pretty rapidly; and how even in the face of such inconsistency and change, we are able to maintain a coherent sense of self.

As a second example, while the self very often plays a role in motivating and shaping one’s behavior, this is not always true: There are times when we are not conscious of our selves (our thoughts, behavior, etc.). Models of self-awareness have shed light on those conditions under which aspects of the self are more or less likely to come into play, as well as on the behavioral consequences of self-awareness (see Baumeister, 1998).

Cognitive Changes Over the Life Span

Among the most intensively studied aspects of cognitive functioning are the various changes in mental processes that occur over the course of a person’s life. As we grow from infancy to childhood, adolescence, and beyond, numerous developments take place in the input-processing, memory, and output systems. For example, cognitive processes simply get faster over the course of childhood, we acquire the capacity to think in more abstract ways, and we develop the ability to monitor and regulate our own behavior.

Some of the cognitive changes accompanying development are particularly relevant to communication and social interaction. These include language acquisition, which typically commences with the production of single words around the end of the first year and rapidly progresses to multiple-word strings by about 18 months (see Clark, 2003). A second example of a socially relevant developmental change during childhood concerns what is termed the “theory of mind,” or the understanding that other people possess knowledge, beliefs, goals, and so on and that these may differ from one’s own mental states (see Premack & Premack, 2003). In the same vein, as children develop the ability to take the perspective of others, they also begin to engage in strategic self-presentation in order to manipulate what others think of them (e.g., Aloise-Young, 1993).

The other end of the life course is marked by cognitive changes as well. The efficiency of mental processing peaks as we reach early adulthood but begins a gradual decline shortly thereafter. An overall slowing of information processing begins in the early 20s. This decline becomes more pronounced as we age, ultimately affecting the ability to process language and text. Evidence of this decay is apparent in the communication patterns used by older adults, which can often be characterized as less complex (e.g., shorter, grammatically simpler constructions; fewer personal pronouns) than those exhibited by their younger counterparts (Kemper, 2006). As we age, we also experience deterioration in the efficiency of the attentional subsystem. We become less proficient in our ability to inhibit irrelevant stimuli (e.g., extraneous thoughts), making it more difficult to be attuned to important message features. Older adulthood affects memory as well. Decreases in STM begin in the 20s and become more pronounced with each passing decade—a deficit directly related to difficulty in sentence processing. With respect to the LTM system, declarative memory is negatively affected by age, but procedural memory is relatively impervious to the aging process. So while you may forget the name of your childhood best friend, you won’t forget how to play the piano.

A Second Pass at the “Second Tier”

The phenomena we’ve considered to this point, cognitive load, skill acquisition, creativity, the self, and life span changes, are simply examples, albeit fascinating examples, of what we’ve termed “second-tier” cognitive processes. There are many other such phenomena, and we should at least mention some of those that we could have as easily selected for inclusion here: cultural differences and similarities in information processing, the role of gender in thought and action, aesthetics and the perception of order and beauty, second-language acquisition, verbal and nonverbal message production (including understanding the link between the two), imagined interactions, person perception and impression formation, stereotyping and prejudice, attitudes (and the link between attitudes and behavior), self-deception, reasoning and decision making, consciousness, motivation, and emotion.

The Methods of Cognitive Science

The overview of the input-processing, memory, and output systems in the preceding section should make it obvious that no matter what communication phenomenon one sets out to understand, sooner or later, dedicated pursuit of that phenomenon leads to the realm of the mind. It is possible, of course, to skirt the boundaries of the mental realm, assuming or ascribing properties (warranted or not) to the ultimate seat of message making and message processing (just as the early mapmakers designated the locations of “Atlantis” and “dragons”). Cognitivists, though, tend to be the sort of thinkers who want to know what’s there. And just as explorers during the Age of Discovery developed new tools and techniques for exploring the terrestrial realm (and those of our age, the celestial), cognitive scientists are able to draw on an array of innovative methods for understanding the nature of the mind. These techniques are both varied and numerous, but among the most important are verbal reports, memory assessments, temporal measures, and examination of performance errors.

Verbal Reports

Doubtless the most obvious and straightforward way of gaining insight into what and how people are thinking is to ask them. Under certain conditions, for example, people might reasonably be expected to be able, and willing, to tell you what they are trying to accomplish and how they are planning to go about it. Such verbal reports, however, can take numerous forms, and some are more or less reliable and valid than others (see Ericsson & Simon, 1996). For example, asking people to report on their activities and motivations is often problematic because they may censor or alter their accounts due to concerns associated with social appropriateness or out of considerations about what the investigator “wants to hear.” Even framing a verbal report as an instance of “communication” can shift the content of what is said from “that which one is thinking” to “that which would be sensible to the listener.”

In addition to the various social constraints on the content and quality of verbal reports, cognitive factors also pertain. For example, evidence suggests that people are more likely to give accurate reports of current thoughts and activities as opposed to retrospective accounts. Similarly, when people are asked about whether certain events or stimuli (e.g., an advertisement) may have influenced their behavior, they may quite easily answer the question not by relying on any specific or accurate memory of that influence but rather by inferring a plausible link (e.g., “I must have seen the ad, and I’m almost certain I bought the product, therefore I distinctly remember being influenced by that ad.”).

Finally, as a way of shedding light on cognitive processes, the usefulness of verbal reports is limited by the fact that many mental processes simply aren’t available to conscious awareness. You are aware, for example, of the words on this printed page, but not of the cognitive operations that allow you to perceive them; you apprehend the contents of your own consciousness at this instant, and maybe, if you direct your thoughts deeper, even of the environmental stimuli and remembered events that contribute to the thought(s) in your mind at this moment. But chase as you will, you can only capture the content and residue of your thoughts and not the processes by which they came to be.

Memory Assessments

As noted in the first section of this research paper, the memory system holds a central place in science’s understanding of mental processes. It should be no surprise, then, that a great deal of effort has been devoted to exploring the nature of memory and various memory phenomena (see Tulving & Craik, 2000). In the main, studies of LTM involve either recognition or recall paradigms. Recognition studies typically involve two phases: In the first, people are presented with a series of stimuli (e.g., magazine ads), and in the second phase, the original stimuli are presented along with new stimuli of the same type. Participants, then, are asked to judge whether each item is “old” or “new.” Recall studies, in contrast, simply ask respondents to produce previously encountered information (e.g., “What is the capital of New York?”). The distinction between recognition and recall studies is exemplified in the difference between multiple-choice tests (which involve recognition) and short-answer or fill-in-the-blank exams (which require recall), and as you might expect, people tend to perform better on recognition tasks than on recall tasks. However, what is remarkable is that there are certain conditions under which that tendency is reversed, where people can actually recall information that they cannot recognize.

One of the key understandings to emerge from the research on memory processes is that memory is fundamentally a constructive process. In other words, memory doesn’t work like pulling up an intact document file stored on your computer. Instead (completely out of conscious awareness), multiple (incomplete) memory traces are retrieved, combined with current environmental stimuli, and laid over with sense-making cognitive processes to create a “recollection” of what transpired at some point in the past. Neath and Surprenant (2003) report an interesting example of this sort of memory construction involving a student who had fond childhood memories of a beloved family dog. As real as this memory was for this young woman, it turned out that the dog had died 2 years before she was born! This same sort of memory construction has been shown to apply in cases of eyewitness testimony, which is notoriously inaccurate, even under oath and, literally, with life-or-death decisions at stake (see Loftus, 1996). And that’s not the end of it: So pervasive is the constructive nature of recall that it occurs even for those “crystallized” moments that seem so indelibly etched in our minds that we would never forget them. For example, Talarico and Rubin (2003) found that in less than 1 year, people’s recollections of the events of September 11, 2001, were just as subject to loss and error as their memories of everyday occurrences.

Beyond our (in)ability to remember the events of our lives, other memory phenomena are equally compelling. For example, one line of research has examined people’s ability to remember visual versus textual information (see David, 2008). This work shows that we tend to have better memory for pictures than text, and this effect extends even to printed words that have visual referents (e.g., “mountain”) versus those that are more abstract (e.g., “freedom”). Other studies have examined our ability to remember messages, both from face-to-face interactions and from the media (e.g., news reports, movies). Among the interesting findings of these studies is that we tend to have better memory for “what the other person said” in an interaction than what “we said” and that we are much better at recalling the “gist” of a conversation or story than the specific words that were used in its telling.

One final example of what we’ve learned about memory processes will resonate if you’ve ever observed that your ability to remember class material is worse on the final exam or your recall of jokes is better in bars. It turns out that our ability to retrieve information from memory is better when the conditions at the time of recall are similar to those at the time we originally acquired that information. So if your class meets in one room all semester and then you take the final in a different room, your ability to recall course material is reduced. And the same effect applies to your physiological state: If you study while drinking coffee, you’ll have better recall with some caffeine in your system. Similarly, if you learn all your best jokes while drinking beer in campus bars, you’re more likely to remember those jokes when you’re in that same state and environment. This effect is so strong that people who are given the task of learning word lists underwater in scuba gear have better recall of those words when they’re back underwater than when they’re on dry land (see Neath & Surprenant, 2003).

Temporal Measures

Since the very inception of the scientific study of the mind almost 150 years ago, scholars have relied on measures of time to draw conclusions about the nature of mental processes. There are several, interrelated reasons that time is one of the most important tools of the cognitive scientist. Most simply, cognitive processes, like all other processes (e.g., boiling an egg, a solar eclipse) transpire over time, and for that reason, one of the key elements of understanding how a process works is to know how long it takes. By extension, assessing time allows one to determine whether a process takes longer under some conditions than others: If you have a good grasp of how something is operating, then you should be able to predict what factors will cause it to speed up or slow down (e.g., lowering the temperature will cause a chemical reaction to proceed more slowly). A third consideration is that examining the temporal characteristics of cognitive processes applies even to phenomena that occur outside of conscious awareness and thus are not available for verbal report. A final reason why temporal measures play such an important role in cognitive science stems from the notion of “cognitive load” discussed previously. When mental activities make heavy demands on our finite pool of processing resources, our ability to carry out those activities is often slowed. For this reason, then, measures of response time can be used to draw conclusions about the cognitive demand that a person is experiencing.

Temporal assessments involve a variety of methodologies, depending on the specific aspect of the mental system that is under examination, but they typically involve measuring the time between presentation of some stimulus or task and initiation (or completion) of a subsequent response. The instructions can be as simple as pressing a button when a sound occurs or as complex as solving calculus problems. As a result of their versatility, temporal measures are commonly used to study each of the three major systems of the mind (input processing, memory, and output). In the input-processing realm, for example, studies indicate why some visual images take longer to be recognized than others (perception) and why it sometimes takes a while to comprehend a message (as when it takes us a few seconds to “get a joke”). With respect to memory, any number of experiments have shed light on conditions under which it takes us longer to retrieve information from LTM (the sort of occurrence that will resonate if you’ve ever “forgotten” your own phone number, experienced the “tip-of-the-tongue phenomenon,” or momentarily blanked on the name of a person whom you know well).

One additional group of temporal measures merits special mention because they are directly tied to communication processes. Consider that verbal message production is characterized by various temporal parameters including speech rate (e.g., words per minute), speaker-turn latency (the period between when one person stops talking and another begins), and pause-phonation ratio (the duration of periods of silence divided by the duration of periods of talk during a person’s speaking turn). These sorts of variables are particularly interesting because they lead “dual lives”: On the one hand they have social significance because they are related to perceptions of credibility, social attractiveness, and so on, and on the other, they provide a window on the cognitive processes underlying speech production. Research on these temporal parameters shows that we speak more fluently (i.e., quickly, with less silent pausing) when we are familiar with our topic and when we’ve had an opportunity to plan what we’re going to say in advance. Conversely, multiple-goal messages (e.g., trying to tell a friend that you didn’t think much of her American Idol audition while also trying to be supportive) tend to slow us down.
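To make these parameters concrete, here is a small Python sketch that computes speech rate and the pause-phonation ratio from a list of talk intervals within a single speaking turn; the interval data and word count are invented for illustration.

```python
# Hypothetical computation of the temporal parameters named above from
# (start, end) talk intervals, in seconds, within one speaking turn.
talk = [(0.0, 2.5), (3.5, 6.0), (7.0, 9.0)]  # periods of phonation (invented)
turn_start, turn_end, n_words = 0.0, 9.0, 30  # invented turn bounds and count

phonation = sum(end - start for start, end in talk)  # 7.0 s of talk
silence = (turn_end - turn_start) - phonation        # 2.0 s of pauses

speech_rate = n_words / ((turn_end - turn_start) / 60)  # words per minute
pause_phonation_ratio = silence / phonation             # silence / talk
print(speech_rate, pause_phonation_ratio)               # 200.0, ~0.286
```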

Performance Errors

The final type of assessment in the cognitivist’s toolkit actually goes hand in hand with those we’ve already covered. Studies that involve temporal measures also almost always examine errors in what a person says and does. This is because most tasks involve a speed-accuracy trade-off: The quicker we act, the more errors we tend to make. (So you don’t want to rush through an exam, and you really don’t want your surgeon to be in too big a hurry!) Moreover, as we’ve alluded, in their verbal reports and in their memory performance, people make errors. What is particularly useful, though, is that when we do commit these errors, they are not random glitches—they emerge under certain conditions and not others; they are of certain types and not others. For example, people tend to “recall” events that never happened if those unseen occurrences are part of a “script,” or a familiar sequence of events. Similarly, we tend to “run off” well-practiced behavioral routines, even when they are not appropriate—and you have experienced this if you’ve ever called a new beau by the last one’s name or, less embarrassingly, moved and found yourself dialing your old telephone number or even “driving home” to your old address! The key point is that because performance errors exhibit regularity rather than randomness, they provide an important window on the operation of the mind, not just when it “fails” but also when it is functioning normally.

The Special Allure of Cognitivism

The overarching theme of this research paper has been that cognitive processes lie at the very heart of human communication: The mind is the seat of message making and message comprehension. You can take away any other aspect of human existence (language, relationships, culture, cell phones, iPods, . . . —you fill in the blanks) and still have communication, but without the mind, you’ve got nothing. By extension, whatever other communication phenomenon one seeks to understand, sooner or later, pursuit of that issue will lead you to confront the nature of comprehension and action.

But the central and inescapable role of cognition in communication is only one of the reasons why cognitivism has come, here in the dawning years of the 21st century, to hold the position it does among all the various alternative approaches for studying communication processes. If the two of us, as your authors, have done any sort of creditable job to this point, a second contributing factor should be readily apparent: The phenomena in cognitivism’s wheelhouse are inherently fascinating. As much as humans wonder at the complexity of the galaxies and the intricate nature of Earth’s ecosystems, when God spoke these things into being, He was only getting warmed up, and He saved His best for last. The phenomena of the mind are intrinsically compelling: What is consciousness? How is it possible to think and do something new? What is the nature of dreams (and daydreaming)? How does what I think become manifested as speech and movement? How is it possible to know what to do, and to want to do it, and still do something else? How can I be so certain in my recollections—and so wrong? Why can’t some people dance without looking at their feet?

Beyond the essential place of cognition in communication and the fascinating nature of the phenomena that it encompasses, there is yet a third reason (a whole cluster of reasons, actually) that gives cognitivism its particular appeal, and this is that it allows us to have our cake and eat it too. By this, we mean that people are sometimes led to think in terms of trade-offs and dichotomies (you can have one or the other), but the special nature of cognitivism allows one to work simultaneously at both “ends” of some commonly supposed continua. Let’s consider three examples that illustrate this point.

Science and Aesthetics

One of the peculiar properties of human sense making is that we are so very prone to error and bias in what we perceive and suppose about the world. We see order, regularity, and association where they don’t exist, and conversely, we fail to detect processes that really are at work. As an approach to understanding, science functions to minimize such error (see Haack, 1999). Rather than accept an assertion on faith or because someone in authority says it is so, science ultimately hinges on publicly available and replicable methods and data; it employs rigorous techniques to reduce the chances of illusion and the impact of wishing it were so.

At the same time that cognitivism affords the special advantages of science as a way of knowing, it also resonates with our aesthetic nature and our appreciation of order and beauty. And this is true in two distinct senses that are analogous to the ways in which a work of art can function aesthetically. A still life of a flower arrangement, for example, could reveal the beauty and structure of the blossoms, and at another level, that same painting can be appreciated for the artist’s technique. In much the same way, the data and models of cognitive science reveal an elegance and order in human behavior that we might not apprehend otherwise. And at another level, the theories and models of cognitive science can, themselves, be a source of pleasure and satisfaction.

Theory and Practical Application

As we have just noted, thinking theoretically can be a source of genuine pleasure and excitement. People enjoy working on Sudoku and crossword puzzles, but building scientific theories is like working out newspaper puzzles on steroids. Theory construction is problem solving—finding ways of making sense of patterns of regularities and anomalies, and it requires imagination, intellectual discipline, and courage. And the appeal of thinking theoretically doesn’t pertain just to building one’s own theories; investing the effort to master the theories of others helps us to appreciate the “big picture” of how things fit together and why they work as they do, to understand how someone else went about trying to solve a problem, and even to think about things that person didn’t see.

The sense of insight and satisfaction that comes from thinking theoretically is only half the story here because, even though cognitivism is fundamentally theory driven, the problems addressed by cognitive science are those that have very real applications and implications for people’s lives. Just a few examples should be sufficient to illustrate the point: How can children with learning disabilities best acquire social skills; how can the cognitive changes that occur with advancing age be delayed or accommodated; and how can health campaign messages (e.g., don’t drink during pregnancy) be designed to enhance the likelihood that people will attend, comprehend, and implement their recommendations? The overarching point is that if you want to make a difference in the quality of people’s lives, it helps to understand how, and what, they think.

Universality and Difference

Cognitive science seeks to develop models of mental processes that are general in the sense that they apply to everyone. As an example, consider that people integrate sensory inputs with information in LTM in a way that allows them to understand the dialogue and follow the storyline of a movie. The cognitive theorist, then, sets out to develop an account of how this happens in a way that applies to all people (and all movies), not unlike the way physicists attempt to articulate the laws that govern the motion of all objects.

At the same time that cognitivism seeks to develop powerful, general accounts, it also seeks to understand the source and nature of differences in the way people process information and generate responses. Are there, for example, cultural differences in the content and structure of information in the LTM that are manifested in perception, comprehension, recall, and speech and action? At an individual level, why do experts in a particular domain perceive and interpret domain-relevant stimuli differently than do novices?

This striving after both universality and difference is illustrated in work that your authors have conducted on creativity in thought and behavior over the last half-dozen years. As we noted earlier in this research paper, human action is inherently creative—we all do it, and so part of our project has been to understand how it is possible for us to think and say new things (see Greene, 2008). On the other hand, some people just seem better at it than others: We all know people who just seem to be able to “think on their feet,” and we’ve been exploring what is at the root of this individual difference (see Morgan, Greene, Gill, & McCullough, in press).

Communication’s Place in Cognitive Science

Cognitive science is an enormously broad interdisciplinary enterprise that spans a great many traditional fields of inquiry. Without any effort to formulate a comprehensive list, we can say that cognitive science draws on philosophy, neuroscience, anthropology, sociology, psychology, linguistics, computer science, mathematics, . . . and communication. One of the great freedoms afforded by cognitivism is that of pursuing one’s questions wherever they may lead. Rather than stopping or changing course because what one is doing is “not communication,” the cognitivist can go where he or she will.

At the same time, communication’s emphasis on message behavior, code systems, social relationships, channel effects, and so on puts scholars in the field in a position to make unique contributions to cognitive science. A particularly interesting example involves the study of mutual influence processes—the ways in which the behaviors of interactants unfold in interdependent ways (see Burgoon, Stern, & Dillman, 1995). While much of the history of cognitivism has focused on studying the mind of the individual engaged in various tasks, there is a growing emphasis on exploring the interplay of minds (whether it be face to face, online, etc.). Communication scholars have much to contribute to that conversation.

Notes

1. You probably wouldn’t want to push any general scheme for segmenting the realm of cognitive processes too far because, in point of fact, the mind functions as a system, and no matter where you draw lines for describing various subsystems and processes, those processes almost inevitably crop up as components of phenomena that have been partitioned into other subsystems. For example, even with the simple scheme introduced here (i.e., “input-processing system,” “memory systems,” “output system”), it should be readily apparent that memory plays a role in perception and comprehension and in behavioral production.
2. There are, of course, obvious exceptions such as attention processes to mass media and message memory.

Bibliography:

  • Aloise-Young, P. A. (1993). The development of self-presentation: Self-promotion in 6- to 10-year-old children. Social Cognition, 11, 201–222.
  • Baumeister, R. F. (1998). The self. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (4th ed., pp. 680–740). Boston: McGraw-Hill.
  • Berger, C. R. (1997). Planning strategic interaction: Attaining goals through communicative action. Mahwah, NJ: Lawrence Erlbaum.
  • Burgoon, J. K., Stern, L. A., & Dillman, L. (1995). Interpersonal adaptation: Dyadic interaction patterns. New York: Cambridge University Press.
  • Clark, E. V. (2003). First language acquisition. Cambridge, UK: Cambridge University Press.
  • David, P. (2008). Dual coding theory. In W. Donsbach (Ed.), International encyclopedia of communication. Malden, MA: Blackwell.
  • Dillard, J. P. (2008). Goals-plans-action theory of message production. In L. A. Baxter & D. O. Braithwaite (Eds.), Engaging theories in interpersonal communication: Multiple perspectives (pp. 65–76). Thousand Oaks, CA: Sage.
  • Ericsson, K. A., & Simon, H. A. (1996). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: Bradford.
  • Gardner, H. (1985). The mind’s new science: A history of the cognitive revolution. New York: Basic Books.
  • Greene, J. O. (2003). Models of adult communication skill acquisition: Practice and the course of performance improvement. In J. O. Greene & B. R. Burleson (Eds.), Handbook of communication and social interaction skills (pp. 51–91). Mahwah, NJ: Lawrence Erlbaum.
  • Greene, J. O. (2007). Formulating and producing verbal and nonverbal messages: An action assembly theory. In B. B. Whaley & W. Samter (Eds.), Explaining communication: Contemporary theories and exemplars (pp. 165–180). Mahwah, NJ: Lawrence Erlbaum.
  • Greene, J. O. (2008). Action assembly theory: Forces of creation. In L. A. Baxter & D. O. Braithwaite (Eds.), Engaging theories in interpersonal communication (pp. 23–35). Thousand Oaks, CA: Sage.
  • Haack, S. (1999). A fallibilist among the cynics. Skeptical Inquirer, 23 (1), 47–50.
  • Kellermann, K., & Lim, T. (2008). Scripts. In W. Donsbach (Ed.), International encyclopedia of communication (pp. 4517–4521). Malden, MA: Blackwell.
  • Kemper, S. (2006). Language in adulthood. In E. Bialystok & F. I. M. Craik (Eds.), Lifespan cognition: Mechanisms of change (pp. 223–238). Oxford, UK: Oxford University Press.
  • Loftus, E. F. (1996). Eyewitness testimony. Cambridge, MA: Harvard University Press.
  • Morgan, M., Greene, J. O., Gill, E. A., & McCullough, J. (in press). The creative character of talk: Individual differences in narrative production ability. Communication Studies.
  • Neath, I., & Surprenant, A. M. (2003). Human memory: An introduction to research, data, and theory (2nd ed.). Belmont, CA: Wadsworth.
  • Oates, J., & Grayson, A. (2004). Cognitive and language development in children. Oxford, UK: Blackwell.
  • Premack, D., & Premack, A. (2003). Original intelligence: Unlocking the mystery of who we are. New York: McGraw-Hill.
  • Talarico, J. M., & Rubin, D. C. (2003). Confidence, not consistency, characterizes flashbulb memories. Psychological Science, 14, 455–461.
  • Tulving, E., & Craik, F. I. M. (Eds.). (2000). The Oxford handbook of memory. New York: Oxford University Press.

More Communication Research Paper Examples:

Communication Research Paper

  • Bias in Communication Research Paper
  • Communication and Friendship Research Paper
  • Speech Communication Research Paper
  • Philosophy and Theory of Communication Research Paper
  • Rhetorical Criticism in Communication Research Paper
  • Quantitative Approaches to Communication Research Paper
  • Qualitative Approaches to Communication Research Paper
  • Critical Cultural Communication Research Paper
  • Feminist Approaches to Communication Research Paper
  • Queer Approaches to Communication Research Paper
  • Message Construction and Editing Research Paper
  • Perspective Taking Research Paper
  • Social Construction Research Paper
  • Listening, Understanding, and Misunderstanding Research Paper
  • Performance and Storytelling Research Paper
  • Persuasion and Compliance Gaining Research Paper
  • Identity in Communication Research Paper
  • Conversation, Dialogue, and Discourse Research Paper
  • Interviewing Research Paper
  • Public Speaking Research Paper
  • Deliberation, Debate, and Decision Making Research Paper
  • Conflict Management Research Paper
  • Visual Rhetoric Research Paper
  • Memorials and Collective Memory Research Paper
  • Verbal and Nonverbal Communication Research Paper
  • Rhetorical Style Research Paper
  • Drama and Dramatic Elements Research Paper
  • Rhetorical Exigency and Argumentation Research Paper
  • Supportive Communication Research Paper
  • Communication in Relationships Research Paper
  • Intergenerational Communication Research Paper
  • Romantic Relationships and Communication Research Paper
  • Workplace Communication Research Paper
  • Group Communication Research Paper
  • Instructional Communication Research Paper
  • Patient-Provider Communication Research Paper
  • Gender and Communication Research Paper
  • Sexual Orientation and Communication Research Paper
  • Culture and Communication Research Paper
  • Risk Communication Research Paper
  • Freedom of Expression Research Paper
  • Globalization and Communication Research Paper
  • Ethical and Unethical Communication Research Paper
  • Competent and Incompetent Communication Research Paper
  • Unwanted Communication, Aggression, and Abuse Research Paper
  • Sexual Harassment and Communication Research Paper
  • Deception and Communication Research Paper
  • Professional Communication Research Paper

ORDER HIGH QUALITY CUSTOM PAPER
